Entanglement monogamy via multivariate trace inequalities
Entropy is a fundamental concept in quantum information theory that allows us to quantify entanglement and investigate its properties, for example its monogamy over multipartite systems. Here, we derive variational formulas for relative entropies based on restricted measurements of multipartite quantum systems. By combining these with multivariate matrix trace inequalities, we recover and sometimes strengthen various existing entanglement monogamy inequalities. In particular, we give direct, matrix-analysis-based proofs for the faithfulness of squashed entanglement by relating it to the relative entropy of entanglement measured with one-way local operations and classical communication, as well as for the faithfulness of conditional entanglement of mutual information by relating it to the separably measured relative entropy of entanglement. We discuss variations of these results using the relative entropy to states with positive partial transpose, and multipartite setups. Our results simplify and generalize previous derivations in the literature that employed operational arguments about the asymptotic achievability of information-theoretic tasks.
Introduction
For tripartite discrete probability distributions P_ABC, the mutual information of A and B conditioned on C can be written as the relative entropy distance to either the closest Markov chain A − C − B or to the closest state that can be recovered from the marginal P_AC by acting only on C. More precisely, we can rewrite the mutual information into the following variational forms (see, e.g., [31])

I(A : B|C)_P = H(AC)_P + H(BC)_P − H(C)_P − H(ABC)_P (1)
             = min_{Q_{B|C}} D(P_ABC ‖ Q_{B|C} P_AC) (2)
             = min_{Q_{A−C−B}} D(P_ABC ‖ Q_{A−C−B}), (3)

where D(P‖Q) = Σ_x P(x)(log P(x) − log Q(x)) is the Kullback-Leibler divergence (or relative entropy) and H(A)_P = −Σ_x P_A(x) log P_A(x) is the Shannon entropy. Here, in the expression (2), the joint distribution Q_{B|C} P_AC can be interpreted as the output of a recovery channel Q_{B|C} with access to C (but not A); the expression is minimized when Q_{B|C} = P_{B|C}. The minimization in the expression (3) is over all distributions with a Markov chain structure A − C − B; the expression is minimized when Q_{A−C−B} = P_{B|C} P_AC. As a consequence, using the non-negativity of the Kullback-Leibler divergence, one finds I(A : B|C)_P ≥ 0, which is equivalent to strong sub-additivity (SSA) of entropy.
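As a quick numerical sanity check of these classical identities (not part of the original derivation), the following sketch builds a small random joint distribution and verifies that the entropic expression for I(A : B|C)_P coincides with the relative entropy to the recovered Markov chain P_{B|C} P_AC and is non-negative; all variable names are ours.

```python
import numpy as np

# Toy joint distribution P_{ABC} over binary A, B, C (array indices [a, b, c]).
rng = np.random.default_rng(0)
P = rng.random((2, 2, 2))
P /= P.sum()

def H(Q):
    """Shannon entropy (in nats) of a joint distribution given as an array."""
    q = Q.ravel()
    q = q[q > 0]
    return -np.sum(q * np.log(q))

# Entropic form of the conditional mutual information, Eq. (1).
P_AC = P.sum(axis=1)          # marginal on (A, C)
P_BC = P.sum(axis=0)          # marginal on (B, C)
P_C  = P.sum(axis=(0, 1))     # marginal on C
I_cond = H(P_AC) + H(P_BC) - H(P_C) - H(P)

# Relative entropy to the recovered distribution Q_{B|C} P_{AC} with the optimal
# choice Q_{B|C} = P_{B|C}, i.e. the Markov chain P_{B|C}(b|c) * P_{AC}(a,c).
Q = np.einsum('ac,bc->abc', P_AC, P_BC / P_C)
D = np.sum(P * (np.log(P) - np.log(Q)))

print(I_cond, D)                                     # the two values agree
assert abs(I_cond - D) < 1e-12 and I_cond >= -1e-12  # SSA for distributions
```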
More generally, for tripartite quantum states ρ_ABC, one defines the quantum conditional mutual information as

I(A : B|C)_ρ = H(AC)_ρ + H(BC)_ρ − H(C)_ρ − H(ABC)_ρ, (4)

with the von Neumann entropy H(A)_ρ = −tr[ρ_A log ρ_A]. A highly non-trivial argument by Lieb and Ruskai from the seventies [41,40] then shows that, due to entanglement monogamy, the SSA inequality I(A : B|C)_ρ ≥ 0 still holds in the quantum case.
In recent years, the quantum information community has seen a lot of progress on understanding potential refinements of SSA for quantum states, with the goal of mimicking the classical version of Eqs. (3) and (2) for quantum states and quantum channels. Firstly, one can simply rewrite [6]

I(A : B|C)_ρ = min_{σ_AC, ω_BC} max_{τ_C} D(ρ_ABC ‖ exp(log σ_AC + log ω_BC − log τ_C)) (5)
             = D(ρ_ABC ‖ exp(log ρ_AC + log ρ_BC − log ρ_C)) (6)

in terms of Umegaki's quantum relative entropy D(ρ‖σ) = tr[ρ(log ρ − log σ)], but due to non-commutativity any interpretation in terms of quantum Markov chains remains largely unclear [14].
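The following sketch, our own illustration rather than anything from the references, numerically verifies both SSA and the identity in Eq. (6) for a random three-qubit state; the partial-trace and matrix-logarithm helpers are hypothetical utility code.

```python
import numpy as np

rng = np.random.default_rng(1)
dA = dB = dC = 2
dims = [dA, dB, dC]

def random_state(d):
    """Random full-rank density matrix of dimension d."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def ptrace(rho, dims, keep):
    """Partial trace over the tensor factors not listed in `keep` (ascending)."""
    n = len(dims)
    T = rho.reshape(dims + dims)
    cur = list(range(n))
    for k in reversed(range(n)):
        if k in keep:
            continue
        pos = cur.index(k)
        T = np.trace(T, axis1=pos, axis2=pos + len(cur))
        cur.remove(k)
    d = int(np.prod([dims[k] for k in cur]))
    return T.reshape(d, d)

def logm_h(X):
    """Matrix logarithm of a positive definite Hermitian matrix."""
    ev, U = np.linalg.eigh(X)
    return (U * np.log(ev)) @ U.conj().T

def H(rho):
    """Von Neumann entropy -tr[rho log rho] in nats."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

rho_ABC = random_state(dA * dB * dC)
rho_AC = ptrace(rho_ABC, dims, keep=[0, 2])
rho_BC = ptrace(rho_ABC, dims, keep=[1, 2])
rho_C  = ptrace(rho_ABC, dims, keep=[2])

# CQMI via Eq. (4); strong sub-additivity says it is non-negative.
I = H(rho_AC) + H(rho_BC) - H(rho_C) - H(rho_ABC)
assert I >= -1e-10

# Identity (6): I(A:B|C) = D(rho_ABC || exp(log rho_AC + log rho_BC - log rho_C)),
# with the marginal logarithms embedded back into A x B x C via identities.
IA, IB = np.eye(dA), np.eye(dB)
M = logm_h(rho_AC).reshape(dA, dC, dA, dC)
log_AC = np.einsum('acxy,bd->abcxdy', M, IB).reshape(dA * dB * dC, dA * dB * dC)
log_BC = np.kron(IA, logm_h(rho_BC))
log_C = np.kron(np.kron(IA, IB), logm_h(rho_C))
X = log_AC + log_BC - log_C
D = np.trace(rho_ABC @ (logm_h(rho_ABC) - X)).real   # D(rho || exp(X))
assert abs(I - D) < 1e-8
print(I, D)
```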
Secondly, for the alternative local recovery map form one would hope for

I(A : B|C)_ρ ≥ min_{R_{C→BC}} D(ρ_ABC ‖ (I_A ⊗ R_{C→BC})(ρ_AC)), (7)

where R_{C→BC} denotes quantum channels, but this does not hold in general [22]. However, a series of results, first by Fawzi & Renner [23] and then in [12,8,50,47,32,45,4], revealed that weaker forms of Eq. (7) still hold, e.g., Eq. (8) in terms of Donald's measured relative entropy [20], with the maximum over positive operator-valued measure (POVM) measurement channels M. A regularized version in terms of the quantum relative entropy distance then also follows from the asymptotic achievability of the measured relative entropy [28,4]. Compared to the bound in Eq. (6), the bound in Eq. (8) lifts the classical Markov picture of approximately recovering the state with a local recovery map P_{B|C} applied to the marginal P_AC to the quantum setting via (I_A ⊗ R_{C→BC})(ρ_AC) (see [44] and references therein).
Thirdly, a suitable generalization of an exact quantum Markov chain was established via the SSA equality condition [26] with respect to some induced direct sum decomposition (Eqs. (9)–(10)). Unfortunately, lower bounding the quantum conditional mutual information in terms of the distance to exact quantum Markov chains neither works for the relative entropy distance [31], nor for regularized relative entropy distances, nor for the measured relative entropy distance [16]. Now, in the context of the quantum conditional mutual information based entanglement measure squashed entanglement [17], it is of importance that for an exact quantum Markov state, the reduced state σ_AB = Σ_k p_k σ^k_A ⊗ σ^k_B is separable, as can be easily checked using Eq. (10). Then, even though the quantum relative entropy is monotone under the partial trace over C, in general the corresponding lower bound on I(A : B|C)_ρ in terms of D_ALL(ρ_AB ‖ σ_AB) still fails (11), and the same holds for regularized versions thereof [16]. Only relaxing even further and employing locally measured quantum distance measures [42], and in particular locally measured quantum relative entropies [43], one finds that [10,38,39]

I(A : B|C)_ρ ≥ min_{σ_AB ∈ Sep(A:B)} D_{LOCC1(A→B)}(ρ_AB ‖ σ_AB), (12)

where LOCC1(A → B) denotes measurements that use a single round of communication: they first measure out A and then perform a conditional measurement on the system B depending on the measurement outcome on A. Even though such measurements have a reduced distinguishing power [42,34,35], crucially, they are still tomographically complete, and thus the right-hand side is zero if and only if ρ_AB is separable.
Going back to the bigger picture, the two types of refined SSA bounds as in Eqs. (8) and (12) seem in general incompatible, but both are entanglement monogamy inequalities with widespread applications in quantum information science (see the aforementioned references and references therein). Moreover, for the former type, a unified matrix analysis based proof approach has emerged. Namely, extending Lieb and Ruskai's original argument for the proof of SSA [40,41], the first step is to employ the multivariate Golden-Thompson inequalities from [45,27,46]: For any n ∈ N, Hermitian matrices {H_k}_{k=1}^n, and any p ≥ 1, one has

log ‖exp(Σ_{k=1}^n H_k)‖_p ≤ ∫_{-∞}^{∞} dt β_0(t) log ‖Π_{k=1}^n exp((1 + it) H_k)‖_p, (13)

where ‖·‖_p denotes the Schatten p-norm and β_0(t) = (π/2)(cosh(πt) + 1)^{−1} is a fixed probability density on ℝ. The second step is then to combine this with dual variational representations of quantum entropy in terms of matrix exponentials [5,7].
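For orientation, the n = 2, p = 1, t = 0 instance of this family is the classical Golden-Thompson inequality tr e^{H_1+H_2} ≤ tr(e^{H_1} e^{H_2}). The sketch below checks that special case numerically and confirms that β_0 integrates to one; it is a sanity check of ours, not the proof technique of [45,27,46].

```python
import numpy as np
from scipy import integrate

rng = np.random.default_rng(2)

def rand_herm(d):
    """Random Hermitian matrix."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (G + G.conj().T) / 2

def expm_h(H):
    """Matrix exponential of a Hermitian matrix via its eigendecomposition."""
    ev, U = np.linalg.eigh(H)
    return (U * np.exp(ev)) @ U.conj().T

H1, H2 = rand_herm(5), rand_herm(5)

# Two-matrix Golden-Thompson: tr e^{H1+H2} <= tr(e^{H1} e^{H2}).
lhs = np.trace(expm_h(H1 + H2)).real
rhs = np.trace(expm_h(H1) @ expm_h(H2)).real
assert lhs <= rhs + 1e-9
print(lhs, rhs)

# beta_0(t) = (pi/2) (cosh(pi t) + 1)^{-1} is a normalized probability density.
beta0 = lambda t: (np.pi / 2) / (np.cosh(np.pi * t) + 1)
total, _ = integrate.quad(beta0, -np.inf, np.inf)
print(total)   # ~1.0
```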
In contrast, the previously known proofs of the refined SSA bound from Eq. (12) are based on involved operational arguments about the asymptotic achievability of information-theoretic tasks [10,38], including the asymptotic achievability of quantum state redistribution [19,53], partial state merging [52], and Stein's lemma in hypothesis testing [38,11].2 Here, we seek a unified matrix analysis based proof for Eq. (12) and other entanglement monogamy inequalities of similar type. For this, we derive novel variational formulas for quantum relative entropies based on restricted measurements, which then, indeed, enable us to employ a similar matrix analysis approach in terms of multivariate Golden-Thompson inequalities. Namely, the core step in our derivations is to employ the multivariate Eq. (13) for n = 3, 4, 5, 6 and p = 1, 2. Importantly, this allows us to fully bypass the previously employed operational arguments from quantum information theory. Consequently, we give concise proofs that lead to tight SSA separability refinements and other new entanglement monogamy inequalities, including positive partial transpose bounds and multipartite extensions. On the way we further derive various strengthened recoverability bounds, such as for the conditional entanglement of mutual information and the multipartite squashed entanglement. In turn, the explicit form of our novel entanglement monogamy inequalities also features recovery maps, revealing a deeper connection between SSA separability refinements and SSA recoverability bounds.
The rest of the manuscript is structured as follows. In Section 2, we derive new variational formulas for locally measured quantum relative entropies. In Section 3 we present the derivations of our entanglement monogamy inequalities around the SSA separability refinements from Eq. (12). This is in terms of squashed entanglement (Section 3.1), relative entropy of entanglement (Sections 3.2 and 3.4), conditional entanglement of mutual information (Section 3.3), as well as multipartite extensions thereof (Section 3.5). In Section 4 we then conclude with some outlook on open questions.
On measured divergences and entanglement measures
We start by introducing some notational conventions used in this work. Throughout we assume that Hilbert spaces, denoted A, B, C, etc., are finite-dimensional and quantum states are positive semi-definite operators with unit trace acting on such spaces, or tensor product spaces of them. We use subscripts to indicate what spaces an operator acts on and by convention when we introduce an operator X_AB acting on A ⊗ B we implicitly also introduce its marginals X_A and X_B, defined via the partial traces of X_AB over B and A, respectively. We often omit identity operators, e.g., X_A Y_AB should be understood as the matrix product (X_A ⊗ 1_B) Y_AB. Functions are applied on the spectrum of an operator coinciding with the domain of the function, which means that X_A^{−1} is the generalized inverse and log(X_A) is always bounded. At various points we employ indices x, y or z that are meant to be taken from discrete index sets X, Y and Z that are understood to be defined implicitly. We use ≥ and > to denote the Löwner order on operators, e.g., an operator L is positive semi-definite if and only if L ≥ 0, and a positive semi-definite operator L has full support if and only if L > 0.
2.1. Definitions and some properties. Consider a quantum state ρ > 0 and an operator σ > 0. We recall the definition of the Umegaki relative entropy between ρ and σ, D(ρ‖σ) = tr[ρ(log ρ − log σ)], together with its variational formula

D(ρ‖σ) = sup_{ω>0} tr[ρ log ω] + 1 − tr[exp(log σ + log ω)].

Footnote 2: The conceptually different work [39] gives extendability refinements of SSA based on iterating Markov refinements of SSA and then combining these bounds with finite quantum de Finetti theorems with quantum side information [15] to make the connection with separability.
Here the optimization is over all operators ω with full support, a set that is clearly not closed. Nonetheless, the supremum is attained at ω = exp(log ρ − log σ). We can extend the definition to general states (without full support) by taking an appropriate continuous extension, namely via Eq. (16), where π is the completely mixed state. We note that the above quantity is finite if and only if ρ ≪ σ, i.e., if the support of ρ is contained in the support of σ. In the following we will always assume full support in our definitions and use Eq. (16) to extend to the general case where needed.
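Assuming the variational expression recalled above (with the supremum attained at ω = exp(log ρ − log σ)), the following sketch checks it numerically on random full-rank states: the candidate optimizer reproduces D(ρ‖σ), while generic ω only give smaller values. The helper functions are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

def rand_state(d):
    """Random full-rank density matrix of dimension d."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = G @ G.conj().T
    return r / np.trace(r).real

def logm_h(X):
    ev, U = np.linalg.eigh(X)
    return (U * np.log(ev)) @ U.conj().T

def expm_h(X):
    ev, U = np.linalg.eigh(X)
    return (U * np.exp(ev)) @ U.conj().T

rho, sigma = rand_state(d), rand_state(d)
D = np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real   # Umegaki relative entropy

def objective(omega):
    """tr[rho log omega] + 1 - tr exp(log sigma + log omega)."""
    return (np.trace(rho @ logm_h(omega)) + 1
            - np.trace(expm_h(logm_h(sigma) + logm_h(omega)))).real

# The optimizer omega* = exp(log rho - log sigma) attains D(rho||sigma) ...
omega_star = expm_h(logm_h(rho) - logm_h(sigma))
assert abs(objective(omega_star) - D) < 1e-8

# ... and generic positive definite omega only give smaller values.
for _ in range(100):
    omega = rand_state(d) * rng.uniform(0.1, 10)
    assert objective(omega) <= D + 1e-8
print(D)
```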
Based on this we arrive at the definition of the relative entropy of entanglement for a bipartite quantum state ρ_AB and the bipartition A : B, which is given by

E(A : B)_ρ := min_{σ_AB ∈ Sep(A:B)} D(ρ_AB ‖ σ_AB),

where Sep(A : B) denotes the set of separable states on the bipartition A : B, i.e., quantum states that decompose as σ_AB = Σ_k p_k σ^k_A ⊗ σ^k_B for a probability distribution {p_k} and states σ^k_A, σ^k_B. Here, the minimum is always attained since D(·‖·) is jointly convex and continuous in σ_AB as long as we stay away from the (uninteresting, as we are seeking a minimum) boundary where the support of ρ_AB is no longer contained in the support of σ_AB.
We will also use various notions of measured relative entropy. In the following M is a set of POVMs, and its elements M = {M_z}_z are sets of positive semi-definite operators satisfying Σ_z M_z = 1.
For example, ALL denotes the set of all POVMs. If the states are bipartite on A and B, we consider various specialized sets. On the one hand, the sets SEP(A : B) and PPT(A : B) contain POVMs whose elements are separable (SEP) or have positive partial transpose (PPT), respectively. On the other hand, elements of LOCC(A : B) are operationally defined as POVMs that can be implemented by local operations and finite classical communication (LOCC). Elements of LOCC1(A → B) are POVMs that only use a single round of communication: they first measure out A and then perform a conditional measurement on the system B depending on the measurement outcome on A. Without loss of generality, such measurements can be written in the form (18), built from a POVM {Q^x_A}_x on A followed, for each outcome x, by a conditional POVM on B. Here x labels the data sent from Alice to Bob whereas z is the final output after Bob's measurement. Finally, the set LO(A : B) allows only local measurements without communication, which are of the form Q^x_A ⊗ R^y_B, where z = (x, y) collects the local outputs.
With this in hand, let us define a measured relative entropy and an entanglement measure for each M described above:

D_M(ρ‖σ) := sup_{M ∈ M} D(P_{ρ,M} ‖ P_{σ,M})  and  E_M(A : B)_ρ := min_{σ_AB ∈ Sep(A:B)} D_M(ρ_AB ‖ σ_AB).

Here, P_{ρ,M}(z) = tr(ρ M_z) is the probability mass function emanating from Born's rule. We note that the minimum is achieved since D_M, a supremum of jointly convex functions, is itself jointly convex and thus, as argued above, the minimum is attained.
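A minimal illustration of these definitions, with a hand-picked local measurement (so it only certifies a lower bound on D_LO and does not perform the supremum over M): Born's rule turns ρ and σ into outcome distributions whose Kullback-Leibler divergence can never exceed D(ρ‖σ). All names are ours.

```python
import numpy as np

rng = np.random.default_rng(4)
dA = dB = 2

def rand_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = G @ G.conj().T
    return r / np.trace(r).real

def logm_h(X):
    ev, U = np.linalg.eigh(X)
    return (U * np.log(ev)) @ U.conj().T

def rand_basis(d):
    """Random orthonormal basis (columns of a unitary)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(G)
    return Q

rho, sigma = rand_state(dA * dB), rand_state(dA * dB)

# A local (LO) POVM: projective measurements in random bases on A and on B,
# with outcomes z = (x, y); elements are |x><x|_A tensor |y><y|_B.
UA, UB = rand_basis(dA), rand_basis(dB)
povm = [np.kron(np.outer(UA[:, x], UA[:, x].conj()),
                np.outer(UB[:, y], UB[:, y].conj()))
        for x in range(dA) for y in range(dB)]
assert np.allclose(sum(povm), np.eye(dA * dB))      # completeness

# Born's rule distributions and their Kullback-Leibler divergence, i.e. the
# quantity whose supremum over measurements in M defines D_M(rho||sigma).
p = np.array([np.trace(rho @ M).real for M in povm])
q = np.array([np.trace(sigma @ M).real for M in povm])
D_meas = np.sum(p * (np.log(p) - np.log(q)))

D = np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real
assert D_meas <= D + 1e-10    # measuring can only decrease the relative entropy
print(D_meas, D)
```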
From the inclusions ALL ⊇ PPT(A : B) ⊇ SEP(A : B) ⊇ LOCC(A : B) ⊇ LOCC1(A → B) ⊇ LO(A : B) we obtain the corresponding ordering of the measured relative entropies and of the induced entanglement measures, with the shorthand E_LOCC1(A → B)_ρ := E_{LOCC1(A→B)}(A : B)_ρ. We further introduce PPT variants, defined by minimizing instead over ppt(A : B), where ppt(A : B) denotes the set of states that have positive partial transpose with respect to the bipartition A : B, which we study in particular in combination with measurements M = PPT.
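As a small self-contained illustration of the PPT notions used here (not taken from the paper), the following sketch computes the partial transpose on B and shows that a maximally entangled state fails the PPT test while a product state passes it.

```python
import numpy as np

dA = dB = 2

def partial_transpose_B(X, dA, dB):
    """Partial transpose on the B factor of an operator on A tensor B."""
    T = X.reshape(dA, dB, dA, dB)            # indices (a, b, a', b')
    return T.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap b <-> b'

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2): NPT, hence entangled.
phi = np.zeros(dA * dB)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_ent = np.outer(phi, phi.conj())

# A product (hence separable, hence PPT) state.
rho_sep = np.kron(np.diag([0.7, 0.3]), np.diag([0.6, 0.4]))

for name, rho in [("entangled", rho_ent), ("separable", rho_sep)]:
    ev = np.linalg.eigvalsh(partial_transpose_B(rho, dA, dB))
    print(name, "min eigenvalue of partial transpose:", ev.min())
# The Bell state has a negative eigenvalue (-1/2); the product state does not.
```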
We note that all of the above quantities are faithful since LO(A : B) is already tomographically complete. Further, there are minimax statements available that interchange the supremum over the set of measurements with the infimum over the set of states [11, Lemma 13].
The above quantities are in general not additive on tensor product states and one can then write down the regularizations E^∞_M(A : B)_ρ := lim_{n→∞} (1/n) E_M(A : B)_{ρ^{⊗n}}, which are well-defined, with operational interpretations in terms of optimal asymptotic quantum Stein's error exponents for the corresponding restricted class of measurements [11, Theorem 16]. In general, it is unclear how to make quantitative statements about the regularization, but for the class ALL we have the following [4, Lemma 2.4].
Lemma 1. For any n-partite quantum state ρ_{A^n} and any permutation-invariant σ_{A^n} ≥ 0, the quantum relative entropy D(ρ_{A^n} ‖ σ_{A^n}) exceeds the measured relative entropy D_ALL(ρ_{A^n} ‖ σ_{A^n}) by at most an additive term logarithmic in the number of distinct eigenvalues of σ_{A^n}, and hence of order log(n). This is an extension of the asymptotic achievability of the measured relative entropy [28] and follows from the pinching inequality [25] together with Schur-Weyl duality showing that the number of distinct eigenvalues of σ_{A^n} only grows polynomially in n (see, e.g., [24, Lemma 4.4]).
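To illustrate the Schur-Weyl ingredient, the following toy check (ours) confirms for a single-qubit σ that σ^{⊗n} has only n + 1 distinct eigenvalues even though its dimension is 2^n.

```python
import numpy as np
from itertools import product

# Eigenvalues of a qubit state sigma.
p = 0.3
evals = np.array([p, 1 - p])

for n in (2, 4, 6, 8):
    # Eigenvalues of sigma^{tensor n} are n-fold products p^k (1-p)^(n-k):
    # only n+1 distinct values, although the dimension is 2^n.
    spec = {round(float(np.prod(tpl)), 12) for tpl in product(evals, repeat=n)}
    print(n, len(spec), 2 ** n)
    assert len(spec) == n + 1
```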
Finally, one can also define multipartite extensions of the above quantities. For example, we have the tripartite separably measured relative entropy of entanglement E_SEP(A : B : C)_ρ and its regularization E^∞_SEP(A : B : C)_ρ. We will not directly use multipartite versions of LOCC1(A → B) and hence we do not discuss their different variations [11,37].
We should verify that all these entanglement measures are indeed entanglement monotones, i.e., monotone under application of LOCC(A : B) completely positive and trace preserving (cptp) maps. It is easy to see, and well-known, that E_M with M ∈ {ALL, SEP(A : B), PPT(A : B), LOCC(A : B)} are entanglement monotones. This is no longer true for M = LOCC1(A → B). Instead, we show the following, weaker, statement.

Lemma 2. Both D_{LOCC1(A→B)}(·‖·) and E_LOCC1(A → B) are monotone under LOCC1(A → B) operations, i.e., under local operations supported by one-way communication from A to B.

Proof. Without loss of generality a measurement in LOCC1(A → B) is of the form Eq. (18). To show the monotonicity under an LOCC1(A → B) operation we only need to show that the above structure of the measurement is preserved under the adjoint operation. Again, without loss of generality, we can write an LOCC1(A → B) operation in the form G = Σ_k E_k ⊗ F_k, where F_k : B → B′ are cptp maps and E_k : A → A′ are completely positive trace non-increasing (cptni) maps forming an instrument, such that Σ_k E_k is cptp again. Given a measurement in LOCC1(A′ → B′), we can now construct a measurement in LOCC1(A → B) with matrices that have the same structure as in Eq. (18). Namely, one verifies that the adjoint of G maps the POVM elements to elements of the same form, and since this holds for all separable states σ_AB and G preserves this structure, the desired result for E_LOCC1(A → B)_ρ also follows.
Moreover, using similar arguments, one can verify that D LO(A:B) (ρ AB σ AB ) and E LO (A : B) ρ are monotone under local operations.
2.2. General variational formulas. Our approach is to employ dual representations of quantum entropy as in [7,45]. For that, we explore variational expressions for measured relative entropies. For unrestricted measurements, we have the well-known expression

D_ALL(ρ‖σ) = sup_{ω>0} tr[ρ log ω] + 1 − tr[σ ω], (27)

which is in fact consistent with Eq. (16) without assumptions on the support of ρ or σ, and will be finite if and only if ρ ≪ σ. For other classes of measurements we can show the following generic bound.

Lemma 3. Define C_M as the union of the cones spanned by the POVM elements of measurements in M, i.e., C_M := ∪_{M∈M} cone({M_z}_z). Then, for a quantum state ρ and any σ ≥ 0, we have

D_M(ρ‖σ) ≤ sup_{ω ∈ C_M, ω > 0} tr[ρ log ω] + 1 − tr[σ ω].

The proof is an adaptation of the argument in [5].
Proof. We first treat the case where both ρ and σ have full support. Using the operator Jensen inequality, we can bound the measured relative entropy by the right-hand side of the claimed inequality, where, in order to establish the last step, we used that the relevant operator ω lies in C_M by definition of the cone, and that ω > 0. For the general case we simply note that the right-hand side of Eq. (32) is jointly convex in (ρ, σ) and vanishes for (π, π), from which the result immediately follows.
We note that C_ALL is the cone of positive semi-definite operators, and from Eq. (27) we know that equality in the above lemma holds. For other sets of measurements we do not always have a good characterization of the respective set (which might not even be convex in general), but C_SEP(A:B) and C_PPT(A:B) are comprised of separable positive semi-definite operators and positive semi-definite operators with positive partial transpose, respectively. We do not know if equality in Lemma 3 holds for either SEP(A : B) or PPT(A : B).
2.3. Cone for local measurements and constrained communication. On first look, note that the set C_LOCC1(A→B) is comprised of positive semi-definite operators of the form Σ_x Q^x_A ⊗ ω^x_B, where ω^x_B ≥ 0 and Q^x_A ≥ 0 such that Σ_x Q^x_A = 1_A, and x goes over some finite alphabet. However, the upper bound we get using this in Lemma 3 does not appear to be tight. We can, however, show the following exact variational formula for the LOCC1(A → B) measured relative entropy.

Lemma 4. Let A′ be isomorphic to A ⊗ A and consider the set C*_{A′B} of operators of the form Σ_x P^x_{A′} ⊗ ω^x_B with {P^x_{A′}}_x orthonormal rank-1 projectors and ω^x_B ≥ 0. (These are operators that are classical-quantum in some basis on A′.) Then, with ρ_{A′B} and σ_{A′B} consistent embeddings of ρ_AB > 0 and σ_AB > 0, respectively, we have the analogue of the variational bound of Lemma 3 with equality and with C_M replaced by C*_{A′B}. Moreover, the optimal measurement is comprised of a (rank-1) POVM on A with at most d² outcomes, followed by a conditional projective measurement on B.
Proof. We first note that due to the joint convexity of D(·‖·) the optimal measurement on A is extremal. From [29, Theorem 2.21] it follows that extremal POVMs have at most d² rank-1 elements, where d is the dimension of A. In particular, via Naimark's dilation, there exists a rank-1 projective measurement on A′ that produces the same statistics. Since the two measured quantities agree due to the data-processing inequality for local operations in Lemma 2, we can restrict the optimization over measurements for the latter quantity to POVMs with elements of the corresponding projective form. Applying the series of steps in the proof of Lemma 3 we arrive at the claimed upper bound. Using the eigenvalue decomposition of the optimizing operator, the measurement M defined by a projective measurement on A′ using P^x_{A′} followed by a conditional projective measurement on B using P^{y|x}_B, and optimizing the resulting expression over the eigenvalues λ^{y|x}, we can conclude that the supremum is attained, where the form of the measurement M in the supremum can be restricted as prescribed in the statement of the lemma.
Next, we discuss the case of LO measurements. For this, let A′ be isomorphic to A ⊗ A and B′ be isomorphic to B ⊗ B and consider operators that are classical in some basis on A′ and B′, i.e., operators of the form Σ_{x,y} λ_{x,y} P^x_{A′} ⊗ P^y_{B′} with orthonormal rank-1 projectors {P^x_{A′}}_x, {P^y_{B′}}_y and coefficients λ_{x,y} ≥ 0. Then, with ρ_{A′B′} and σ_{A′B′} consistent embeddings of ρ_AB and σ_AB as above, the analogue of Lemma 4 holds. This characterization, as for the case of one-way communication in Lemma 4, essentially comes from the fact that the optimal local measurements can be assumed to be (rank-1) POVMs with at most d² outcomes on A′ and B′.
Variations of the above arguments are also possible for more complex multi-partite measurement structures, but we leave this as an exercise for the reader who has applications of those in mind.
2.4. Comparison with restricted Schatten one-norms. Restricted Schatten one-norms ‖ρ_AB − σ_AB‖_M leading to metrics have been considered in the literature [42,34]. Similar versions can be defined for the fidelity as well. A couple of properties are noteworthy: • Two-outcome POVMs are optimal for ‖ρ_AB − σ_AB‖_M.
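For the unrestricted class ALL, the optimal two-outcome POVM is the Helstrom projector onto the positive part of ρ_AB − σ_AB; the sketch below (our own illustration, not an implementation of the restricted LOCC norms) verifies the resulting identity ‖ρ − σ‖_1 = 2 max_{0≤E≤1} tr[E(ρ − σ)] numerically.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 4

def rand_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = G @ G.conj().T
    return r / np.trace(r).real

rho, sigma = rand_state(d), rand_state(d)
delta = rho - sigma

# Trace norm directly from the spectrum of rho - sigma.
trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()

# The same value from the optimal two-outcome POVM {E, 1-E}: E is the projector
# onto the positive eigenspace of rho - sigma, and since tr[delta] = 0 we have
# ||rho - sigma||_1 = 2 tr[E (rho - sigma)].
ev, U = np.linalg.eigh(delta)
E = U[:, ev > 0] @ U[:, ev > 0].conj().T
assert abs(2 * np.trace(E @ delta).real - trace_norm) < 1e-10
print(trace_norm)
```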
Entropic entanglement inequalities
3.1. Squashed entanglement. Based on the conditional quantum mutual information (CQMI) one defines squashed entanglement as the infimum of (1/2) I(A : B|C)_ρ [17], where the infimum is over all tripartite quantum state extensions ρ_ABC of ρ_AB on any system C (with no bound on the dimension of C). The following theorem implies that squashed entanglement is non-zero on entangled states [10,38,39].

Theorem 5. Let ρ_ABC > 0 be any tripartite state. We have the single-copy lower bound (55) on I(A : B|C)_ρ and consequently the bound (56) on squashed entanglement. Moreover, the same lower bounds hold for A ↔ B as I(A : B|C)_ρ is symmetric under this exchange.
Note that strong sub-additivity (SSA) of quantum entropy corresponds to I(A : B|C)_ρ ≥ 0 and hence Theorem 5 corresponds to a strengthening of SSA. The stronger single-copy version in Eq. (55) is new. The consequence in Eq. (56) corresponds to [38, Theorem 2], which is itself a strengthening of [10, Theorem 1] (see also [11]). One advantage of our formulation in Theorem 5 is that we have some information on the structure of the optimizer in the lower bound E^∞_LOCC1(B → A)_ρ, as in fact (see the proof of Theorem 5) for any separable state optimizer σ the lower bound can be expressed in terms of a rotated Petz recovery map applied to σ, weighted by the probability density β_0(t) = (π/2)(cosh(πt) + 1)^{−1}. This features a recovery map and thus points to further connections between entanglement monogamy and recovery refinements of SSA. However, unfortunately this structure does not seem to further translate to the single-copy lower bound E_LOCC1(B → A)_ρ. We refer to the discussion around [47, Lemma 3.11] and related results on composite hypothesis testing [4].
If wanted, further standard estimates can be made on the single-copy lower bound from Eq. (56), as done in [33, Corollary 3.13], following the considerations from Section 2.4. We state these bounds from [33, Corollary 3.13] here because, since the original proofs of similar statements [11,38,39], the dimension dependent factors in the above chain of inequalities have been improved to their optimal values as stated above [34]. As such, our work also supersedes the bounds from [39, Corollary 1]. Finally, as discussed in [16], the dimension dependent factor in Eq. (60) is necessary due to the anti-symmetric state example.
Proof of Theorem 5. Let us fix some slack parameter ν > 0. We first prove the bound in Eq. (55) up to this slack.We start by constructing some states and operators that we will be using in the proof.First, let us introduce which is a minimizer for the entanglement entropy and is separable on the partition A : C, as indicated in the second equality.We now introduce the space B ′ isomorphic to B ⊗ B and an (arbitrary) embedding ρ AB ′ C of ρ ABC into this larger space.Next we apply a rotated Petz recovery map to the state σ A:C and introduce the recovered states for t ∈ R. One notes that these states are separable in the bipartition A : B ′ C by construction.We now use Lemma 4 as well as the definition of the supremum to write where X AB ′ ∈ C * AB ′ is some operator with full support that is classical on B ′ , i.e. it has the form where {P x B ′ } x are orthonormal rank-1 projectors decomposing the identity on B ′ and F x A ≥ 0 are arbitrary positive semi-definite matrices.Finally, we construct the state γA: which inherits separability in the partition A : C since where we used that P x B ′ P x ′ B ′ = δ xx ′ P x B ′ and cyclicity under tr B to simplify the expression.In essence, the structure of LOCC 1 measurements and the respective operator X AB ′ ∈ C * AB ′ as in Eq. ( 64) is needed here to ensure that separability is preserved and no entanglement is created in this multiplication.Finally, we introduce an operator Y AC > 0 satisfying where we used the variational formula for measured relative entropy.Now we have everything in place, and the proof proceeds straightforwardly.First, we write where in the last step we employed the variational formula for the relative entropy.At this point we simply choose ω = exp(log X AB ′ + log Y AC ) using the two operators defined above.This, and the five matrix Golden-Thompson inequality for the Schatten two-norm from [45, Corollary 3.3] allow us to further bound where the equality simply follows by substitution of (65) and the ultimate inequality follows from the definition of X AB ′ and Y AC .This concludes the proof of Eq. (55) once we leverage the fact that ν > 0 can be chosen arbitrarily small.
Next, the first step in Eq. (56) follows from the additivity of the CQMI together with the asymptotic achievability of the measured relative entropy in Lemma 1, realizing that for tensor product inputs the optimization over separable states in the definition of E M can be restricted to permutation invariant states (due to the unitary invariance and joint convexity of the relative entropy).
Finally, the second step in Eq. (56) can be deduced from the super-additivity [43, Theorem 1], noting that, in the notation of [43], the set of measurements LOCC1(B → A) is compatible with the set of states Sep(A : B).
3.2. Relative entropy of entanglement. Previously known lower bound proofs on the CQMI proceeded via two steps of multipartite monogamy inequalities, going through the relative entropy of entanglement [10,38]. As the intermediate steps are of independent interest, we now give simple and direct proofs for strengthened single-copy versions of these bounds.

Proposition 6. Let ρ_ABC be any tripartite state. We have the single-copy lower bound (76) and consequently the regularized bound (77), where the regularized relative entropy of entanglement terms on the right-hand side are defined as E^∞(A : B)_ρ := lim_{n→∞} (1/n) E(A : B)_{ρ^{⊗n}}. Moreover, the same lower bounds hold for A ↔ B as I(A : B|C)_ρ is symmetric under this exchange.
We note that the stronger single-copy version in Eq. (76) is novel. The consequence in Eq. (77) is [10, Lemma 1], which was based on the asymptotic achievability of quantum state redistribution [19,53] together with the asymptotic continuity [21,48] and non-lockability [30] of the relative entropy of entanglement. We emphasize that Eq. (77) was also invoked in the later proof in [38]. In contrast, our proof is elementary via multivariate matrix trace inequalities.
Proof of Proposition 6. For the proof of the first bound, we use similar, but simpler, arguments as in the proof of the first bound in Theorem 5. Namely, we employ the three matrix Golden-Thompson inequality for the Schatten two-norm in the form of [45, Eq. 39]. With a separable state optimizer we find the claimed bound, which features the probability density β_0(t) = (π/2)(cosh(πt) + 1)^{−1}. The second bound follows from additivity of the quantum mutual information on tensor product states together with the asymptotic achievability of the measured relative entropy from Lemma 1, in the same way as we derived the second bound in Theorem 5.
The next relative entropy of entanglement bound is as follows.
Proposition 7. Let ρ_ABC be any tripartite state. We have the single-copy lower bound (83) and, consequently, the bounds in Eq. (84). We note that the stronger single-copy version in Eq. (83) is novel. We were not able to directly replace the E_ALL(A : C)_ρ term in the lower bound with the larger E(A : C)_ρ. The first consequence in Eq. (84) is [38, Theorem 1], whereas the second consequence can now be combined with the regularized Eq. (84), leading to the bound proven directly in Theorem 5.
Proof of Proposition 7. We first prove Eq. ( 83), which is almost analogous to the proof of Theorem 5, up to some simplifications.We use the same embedding of ρ ABC to ρ AB ′ C .Let be a separable state optimizer.We may express the relative entropy of entanglement using the variational formula for relative entropy where ω AB ′ C is an arbitrary positive definite matrix.We will now choose it to be of the form where Y AC > 0 is general and X AB ′ > 0 is of the LOCC 1 (A → B) form in Eq. ( 64), both still to be optimized over.We can then bound E(A : BC) ρ using the three-matrix Golden-Thompson inequality as follows: Due to the LOCC 1 (A → B) structure of X AB ′ in Eq. ( 64) and σ A: and, thus, σ A:C inherits the separable structure on the bipartition A : C from σ A:B ′ C .Using this definition we can now further bound Eq. ( 93) to arrive at Finally, Eq. (84) then follows by the additivity of the quantum relative entropy on product states together with the asymptotic achievability of the measured relative entropy from Lemma 1.
3.3. Conditional entanglement of mutual information. The conditional entanglement of mutual information (CEMI) I_CEMI(A : B)_ρ is defined via an infimum over all bipartite extensions ρ_{AĀBB̄} of ρ_AB on systems Ā B̄ (with no bound on the dimensions of Ā and B̄). By definition we have I_SQ(A : B)_ρ ≤ I_CEMI(A : B)_ρ, and CEMI shares similarly complete axiomatic entanglement measure properties as squashed entanglement [52]. However, whereas no separation between I_CEMI and I_SQ is known, CEMI often gives more structure. For example, one finds the following recoverability lower bounds (see [49] for related bounds).

Proposition 8. Let ρ_{AĀBB̄} be any four-party state. We have a recoverability lower bound on I(A|Ā : B|B̄)_ρ with local quantum channels and the probability density β_0(t) = (π/2)(cosh(πt) + 1)^{−1}. The proof is as in [45,46] via multivariate trace inequalities and is given in Appendix A. Additionally, the corresponding regularized lower bound in terms of a lim sup then also follows from the asymptotic achievability of the measured relative entropy (Lemma 1). As for squashed entanglement, it is unclear how these recoverability lower bounds would directly imply faithfulness bounds.
Nevertheless, using again multivariate trace inequalities, a strengthened lower bound in terms of the measurement set SEP(A : B) can be shown -compared to LOCC 1 (B ; A) for squashed entanglement.Theorem 9. Let ρ A ĀB B be any four-party state.We have and consequently The stronger single-copy version in Eq. ( 106) is novel.The consequence in Eq. ( 106) corresponds to a strengthening of [38,Equation 41] that stated the (a priori weaker) lower bound with respect to LOCC(A : B).One further advantage of our formulation in Theorem 9 is that we have some information on the structure of the optimizer in lower bound E ∞ SEP (A : B) ρ , as in fact (see the proof of Theorem 9) for any separable state optimizer σ Ān : Bn ∈ arg min σ Ān Bn ∈Sep( Ān : Bn ) D(ρ ⊗n Ā B σ Ān Bn ), with local quantum channels and the probability density β 0 (t) = π 2 (cosh(πt) + 1) −1 .However, similarly as for squashed entanglement, this structure does not seem to further translate to the single-copy lower bound E SEP (A : B) ρ .Further lower bounds in terms of restricted fidelity and Schatten one-norm as leading to Eq. (60) are possible [42,34].
Proof of Theorem 9.The idea of the proof is similar as for Theorem 5 and we first prove the bound in Eq. (106).Namely, for a separable state optimizer using the four matrix Golden-Thompson inequality for the Schatten two-norm from [45, Corollary 3.3] and the variational characterization from Lemma 3 with the choice ω A ĀB B = X A:B ⊗ Y Ā B with general Y AB > 0 and X A:B ∈ SEP(A : B) to be optimized over, we find = sup where we set and used that γ A:B ∈ SEP(A : B) as well as γ Ā: B ∈ SEP( Ā : B) inherit the separability structure from the choice X A:B ∈ SEP(A : B).
Next, the first step in Eq. (107) follows from the additivity of I(A|Ā : B|B̄) on tensor product states together with the asymptotic achievability of the measured relative entropy in Lemma 1.
Finally, the second step in Eq. (107) can be deduced from the super-additivity result [43, Theorem 1], noting that, in the notation of [43], the set of measurements SEP(A : B) is compatible with the set of states Sep(A : B).
Alternatively, we can derive PPT bounds, where the set of measurements and the set of states are both in terms of PPT.We are not aware of any previous such bounds in the literature.
Note that the lower bounds in Proposition 10 are in general not directly comparable to the bounds from Theorem 9, as both the set of measurements and the set of states are enlarged. Moreover, the same form as in Eq. (108) is available, and lower bounds in terms of restricted fidelity and Schatten one-norm as leading to Eq. (60) are possible as well [42,34].
Proof of Proposition 10.The first part of the proof is similar as that of Theorem 9. Namely, for a PPT state optimizer we find where we set for the choice X AB ∈ PPT(A : B). (124) Eq. ( 123) is then further lower bounded to the claimed inequality once it is realized that both γ AB ∈ ppt(A : B) and γ Ā B ∈ ppt( Ā : B) inherit the PPT structure.This follows as by inspection and hence γ A ĀB B ∈ ppt(A Ā : B B), and further Finally, Eq.(120) follows as in the proof of Theorem 9, except now using [43, Theorem 1] noting that -in the notation of [43] -the set of measurements PPT(A : B) is compatible with the set of states ppt(A : B).
3.4. Piani based relative entropy of entanglement. The previously known CEMI lower bound proof proceeded via two steps of multipartite monogamy inequalities [38] (see also the alternative [49]), going through the relative entropy of entanglement and prominently making use of Piani's results [43]. As the intermediate steps of these proofs are of independent interest, we now give simple and direct proofs for strengthened single-copy versions of these steps. The first bound is as follows.
Proposition 11. Let ρ_{ABĀB̄} be any four-party state. We have the single-copy bound (131) and consequently the regularized bound (132). We note that the stronger single-copy version in Eq. (131) is novel. The consequence in Eq. (132) is [38, Equation 40], which was based on the asymptotic achievability of partial state merging [52] together with the asymptotic continuity [21,48] and non-lockability [30] of the relative entropy of entanglement. In contrast, our proof is elementary via matrix trace inequalities.
Proof. The proof is a simplified version of the arguments leading to Theorem 9. We only sketch the steps: for a separable state optimizer we estimate for Eq. (131) a bound featuring local quantum channels, with R^{[t]}_{B̄→BB̄}(·) and its counterpart on the A systems (137), and the probability density β_0(t) = (π/2)(cosh(πt) + 1)^{−1}. Eq. (132) then follows by the additivity of I(A|Ā : B|B̄) on tensor product states together with the asymptotic achievability of the measured relative entropy from Lemma 1.
Having Proposition 11 at hand, we can employ [43, Theorem 1] to again conclude the corresponding regularized bound.

3.5. Multipartite extensions. The definition of CEMI gives rise to the tripartite CEMI as [52] I_CEMI(A : B : C)_ρ, where the infimum goes over all tripartite extensions ρ_{AĀBB̄CC̄} of ρ_ABC on systems Ā B̄ C̄ (with no bound on the dimensions of Ā, B̄, C̄). We first note the following recoverability lower bounds that resolve a conjecture from [49].
As the additional, third recovery map R^{[t]}_{C̄→CC̄} commutes with the other tensor product recovery maps, the proof is exactly the same as the proof in the bipartite case (Proposition 8). Additionally, the corresponding regularized lower bound in terms of a lim sup then also follows from the asymptotic achievability of the measured relative entropy (Lemma 1).
We find the following faithfulness bound in terms of tripartite separability.
Proposition 13. Let ρ_{AĀBB̄CC̄} be any six-party state. We have a single-copy lower bound in terms of tripartite separability and, consequently, a regularized version thereof. This strengthens the conceptually different multipartite CEMI faithfulness bounds from [49]. Further lower bounds in terms of restricted fidelity and Schatten one-norm as leading to Eq. (60) are possible [36]. The proof is similar as in the respective bipartite cases, Theorem 9 and Proposition 10, and is given in Appendix A.
The tripartite squashed entanglement is defined as [51,1] an infimum of the tripartite CQMI I(A_1 : A_2 : A_3|C)_ρ, where the infimum is over all four-party quantum state extensions ρ_{A_1A_2A_3C} on any system C (with no bound on the dimension of C).7 We note the following recoverability lower bounds.

Proposition 14. Let ρ_{A_1A_2A_3C} be any four-party state. We have recoverability lower bounds (Eqs. (155) and (156)) with quantum channels, where R^{[t]}_{C→A_3C}(·) is defined similarly (157), and the probability density β_0(t) = (π/2)(cosh(πt) + 1)^{−1}. By the symmetry of I(A_1 : A_2 : A_3|C)_ρ under A_1 ↔ A_2 ↔ A_3 other orderings are possible as well.

Footnote 7: In reference [51] other definitions of multipartite squashed entanglement are explored as well, which we do not discuss here, but they should be amenable to similar considerations.
The proof is as in [45,46] via multivariate trace inequalities and is given in Appendix A. Additionally, the corresponding regularized lower bound in terms of a lim sup then also follows from the asymptotic achievability of the measured relative entropy (Lemma 1).
However, we do not know how to show faithfulness lower bounds of I_SQ(A_1 : A_2 : A_3) with respect to global separability SEP(A_1 : A_2 : A_3). As also noted in [39,49], this difficulty arises because, compared to the multipartite CEMI case, there is now only one extension system C that all operators act on.
Outlook
In addition to exploring applications of our variational formulas for quantum relative entropy under restricted measurements, there are two immediate questions that remain open around entanglement monogamy inequalities in the spirit of this manuscript. First, is multipartite squashed entanglement faithful? Second, and as an extension of the separability refinements of SSA, is there a connection between the quantum conditional mutual information and exact quantum Markov chains [26,31]? We hope that our direct matrix analysis approach can shine some further light on these questions. Lastly, it would also be interesting to explore applications of the CEMI entanglement measure and its characterizations.
Choosing G 1 = log ρ A ĀB B and G 2 = and used that γ ABC ∈ ppt(A : B : C) as well as γ Ā B C ∈ ppt( Ā : B : C) inherit the relevant PPT structure from the choice X ABC ∈ PPT(A : B : C), similarly as in the bipartite case.Again, the crucial point is that the recovery maps all commute.Eq. ( 152) then follows from multipartite version of [43,Theorem 1] in the form Proof of Proposition 14.We first prove Eq. (155) by writing where we employed the six matrix Golden-Thompson inequality for the Schatten two-norm from [45,Corollary 3.3].Next, Eq. (158) directly follows from Eq. (183) via the additivity of CEMI on tensor product states, together with the asymptotic achievability of the measured relative entropy in Lemma 1. Finally, for the proof of Eq. (156), we follow the same consideration as in [45,Appendix F].Namely, for Peierls-Bogoliubov inequality as in Eq. ( 163), we choose G 1 = ρ A 1 A 2 A 3 C and
"year": 2023,
"sha1": "abd920bfd6176cd674645e878770ca5b64e0b124",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00220-023-04920-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "837bd6ab9035ca2558aea4b2ba82e556f5318ce5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
Teaching Research and Reform of Higher Vocational Medical Education in Guizhou Province of China
Article history: Received 3 January 2019; Revised 12 February 2019; Accepted 25 March 2019; Published Online 1 April 2019.

With the development of Guizhou's economy and society, higher vocational medical education in Guizhou has developed rapidly, taking as its mission the cultivation of practical and skilled talents oriented to the grassroots and serving the frontline. However, due to the social environment, the policy environment and insufficient funding, it faces many difficulties and problems. It is necessary to have unified management throughout the province, rationally lay out higher vocational colleges and specialties, and promote the healthy and rapid development of medical higher vocational education in Guizhou with advanced concepts, proper policies, and sufficient funds in place, so that higher vocational medical education in Guizhou enters a benign development period.
With the rapid development of the country, a large number of application-oriented and operation-oriented talents are needed. Higher vocational medical education is part of higher education. In the process of massification and popularization of higher education in China, higher vocational medical education plays a pivotal role: it cultivates a large number of high-quality application-oriented talents for the country, with the aim of cultivating practical and skilled talents oriented to the grassroots and serving the frontline. In recent years, the school size and student enrollment of higher vocational colleges have expanded rapidly. However, at present, higher vocational medical education has certain problems in terms of school regulations, school characteristics, school management, school hardware and software conditions, teaching quality, specialty settings and curriculum arrangements. In order to better grasp the development opportunities and help higher vocational medical education in Guizhou enter a benign development period, this paper starts from the development status of higher vocational education in Guizhou to explore and discuss countermeasures to promote the reform and development of higher vocational medical education in Guizhou.
The Development Status and Training Modes of Higher Vocational Education in Guizhou

2.1 Lack of Connotation Development
Higher vocational education in Guizhou Province started in 1998, growing out of programs organized and established by general universities and colleges, the restructuring of adult colleges, the merger of secondary schools in various regions, and the upgrading of secondary schools or secondary vocational schools, and has developed to where it is today. It can be said that the platform for higher vocational education in Guizhou has been basically completed. However, the situation is not optimistic: there are not many large-scale medical vocational colleges. Although the scale of higher vocational education has grown, the corresponding connotation shows obvious deficiencies. The main manifestations are: the scale of higher vocational colleges is small, the number of higher vocational students is small, and there is still a big gap compared with ordinary higher education. Compared with the situation in which higher vocational education accounts for half of higher education across China, Guizhou is far from that level. Moreover, many higher vocational colleges also include general higher education, general secondary education and secondary vocational education, so pure higher vocational education does not have an absolute advantage.
Since many higher vocational colleges were reformed or upgraded from other school forms, they are inherently inadequate in terms of higher vocational education thinking, infrastructure, practical training conditions, and faculty levels. After establishment, capital investment is seriously insufficient and the change of concepts is not in place, so that education and teaching cannot reach the goal of cultivating high-quality application-oriented talents. Higher vocational education organized by vocational and technical colleges subordinate to some general colleges and universities has become a compressed version of general undergraduate education. The proportion of practical training in teaching plans is too small, and skills are cultivated in a single way. Teaching is still organized around the related disciplines with theoretical teaching as the main body, ignoring the cultivation of vocational skills, which cannot reflect the characteristics of higher vocational education; some of the upgraded higher vocational colleges have turned higher vocational education into a magnified and even repetitive version of secondary vocational education. In the same school's teaching plan, the difference between secondary vocational education and higher vocational education is only reflected in the number of courses, while the differences in knowledge structure and skill structure are not reflected.
The products of higher vocational colleges are students. The ability of students to serve society is an embodiment of the school's capability and an important indicator of the school's comprehensive strength, which can indirectly reflect the maturity of the school's specialty construction and its closeness to the related industries and society. The reality shows that the level of higher vocational education in Guizhou is not high, the characteristics of vocational education are not obvious, and the quality needs to be improved.
The Characteristics of Vocational Education Need to Be Strengthened Urgently
There is a lack of overall planning in specialty setting and development; although specialties are developing rapidly, repetition is serious. Many schools have computer majors, accounting majors, and tourism majors, but some specialties are still seriously lacking. The characteristic specialties and specialty advantages of schools are not strong, and their market competitiveness is weak. With the full implementation of the Poverty Alleviation Program and the adjustment of national industrial structures, the forefront of production requires a large number of application-oriented talents. Obviously, Guizhou lacks specific overall planning guidance for specialty setting and construction to meet the needs for higher vocational talents in Guizhou's social and economic development. There is still a gap between the training of application-oriented talents in higher vocational colleges and the needs of social development in Guizhou.
The Education Concepts Need to Be Strengthened Urgently
Due to backward education concepts in the education system and school management, many colleges have unreasonable curriculum settings. This manifests as follows: the course structure is monotonous, the content is outdated, the order of the courses is not reasonable, and the links between courses are not close; the teaching channels are limited, and most still follow the single traditional mode of classroom teaching. The individual differences of students are difficult to take care of, which limits the development of students' personality; the teaching methods lack innovation, and "teaching plus examination" constitutes the main body of teaching activities, which ignores the improvement of students' quality and the cultivation of innovation ability. [1] In addition, the teaching evaluation system is simplistic. The evaluation of students over-emphasizes test scores, and examinations and tests are still the only magic weapon for teachers to evaluate students, which ignores the development and evaluation of students' innovative ability. This single form is not conducive to examining the overall quality of students, nor to the cultivation of students' independence and creativity. Besides, the evaluation method for teachers is also one-dimensional and its content is not reasonable. It is difficult to truly judge the level and achievements of teachers.
Traditional Higher Vocational Medical Education Focuses on Classroom Teaching while Operating Ability Is Poor
After students graduate and start work, much knowledge has to be re-learned, the practical training is not targeted, learning is out of touch with practice, and some students' employability is poor, which is far from the requirements of modern vocational education; their ability to adapt to job needs is weak.
New Policy and Talent Requirements of the Country for Modern Vocational Education and Teaching
The strategic deployment of the "Deepen Comprehensive Reform in the Field of Education" adopted by the 19th National Congress of the Communist Party of China puts forward new requirements for the continuous promotion of the scientific development of higher education and the overall improvement of the quality of talent cultivation. To implement the spirit of the 19th National Congress of the Communist Party of China, we must intensify efforts to deepen the reform of education and teaching, innovate the training mechanism for talents in universities and colleges, and comprehensively improve the quality of talent cultivation.
Innovate System and Mechanism, Establish a "Comprehensive Education Concept", and Explore Diverse Talent Cultivation Modes
It is necessary to focus on the core elements of talent cultivation objectives, curriculum system, teaching methods, teaching evaluation, and teaching environment. Vigorously promote the integration of the curriculum system and teaching contents, implement teaching method reform oriented toward students' independent learning and active practice, and establish a diverse, individualized and open education system. Construct an educational and teaching environment that is conducive to the development of students' comprehensive ability and individuality. It is necessary to follow the law of talent growth and, in light of the actual situation of the school, constantly explore and improve various types of talent cultivation modes, and take the connotative development path with quality as the core. [2]

Optimize the Curriculum System, Promote the Reform of Teaching Methods, and Construct a Reasonable Curriculum and Practical Teaching System

It is necessary to rationally design the curriculum structure according to the talent cultivation objectives, and combine the specialty training standards to construct a curriculum system with specialty core courses as the mainstay, and to do a good job on the "Double Basic" (basic knowledge points and basic skills) of specialty knowledge. According to specialty needs, the curriculum system should be integrated and adjusted, the teaching content should be reformed, and the characteristics of professional talents should be highlighted. Teachers should be encouraged to adopt heuristic, inquiry, discussion, and participatory teaching methods, focusing on cultivating students' independent ability to raise, analyze, and solve problems, and inspiring students' thinking potential. Strengthen the construction of teaching resources, establish a platform for sharing teaching resources, and promote the opening of quality educational resources, such as quality online open courses. Support teachers in using the network education platform and other modern educational technologies and means to carry out teaching activities and improve the quality of multimedia teaching. Actively promote the reform of the curriculum evaluation method, combining learning process examination with student ability evaluation, which not only comprehensively evaluates knowledge acquisition, exploration and research, innovative thinking and other aspects of the learning process, but also evaluates the basic knowledge and key content requirements of the syllabus. For classroom teaching, the Confucius-type classroom (knowledge-driven, teacher-initiative, students-active), the Socrates-type classroom (problem-driven, student-initiative, teacher-student interaction), the flipped classroom (MOOC online learning, classroom questioning), the independence classroom (interesting target-driven, autonomous and conscious-active, curricular and extracurricular linkage) and other forms of classroom teaching should be used flexibly according to different teaching contents. [1] As the distribution center of ideas, models and mechanisms, the focus of reform is to start from the classroom and from curriculum reform. The classroom is the home of all education reforms. All reform ideas and measures must go to the classroom to reflect their efficiency and results.
[3] There are methods of teaching, but no fixed rule of teaching. The methods and means to improve the quality of teaching are diverse and systematic. It is necessary to teach according to different disciplines, different courses, different contents, and different objects. As long as it is conducive to enhancing the teaching effect and improving the quality of teaching, it is a good method; any classroom that imparts knowledge, enlightens wisdom, and develops abilities is a good classroom. [4]
Focus on the Mutual Promotion of Scientific Research and Teaching
Make full use of the faculty and experimental conditions to strengthen the construction of courses and teaching materials. Focus on timely integration of the latest scientific research results into classroom teaching contents, so that students can learn about the academic frontiers. Promote improvement in teaching conditions through scientific research. Introduce high-quality educational resources, and grasp and follow the general direction of higher education development.
The Renewal and Development of Modern Vocational Education Thoughts and the Transformation of Modern Medical Models
With the renewal and development of modern vocational education thoughts and the transformation of modern medical models, their influence is gradually infiltrating into education and teaching, which requires all educators to pay more attention to the combination of society and individuals; pay more attention to the penetration of the humanistic spirit in science and technology education; pay more attention to quality education; pay more attention to the main role of students and the ability to analyze and solve problems; pay more attention to specialty technology and skills education, and better serve society. Do a good job on the "Six Dockings", that is, "the docking between occupational ethics education and comprehensive quality standards; the docking between specialties and related industries, enterprises and posts; the docking between specialized course contents and occupational standards; the docking between the teaching process and the productive process; the docking between academic certificates and occupational qualifications; the docking between vocational education and lifelong education". [5]
Favorable Conditions and Restrictive Factors for the Development of Higher Vocational Medical Education in Guizhou
With the gradual improvement of the policy environment and social environment, the introduction of the "New Medical System Reform Scheme", and the vigorous development of general medicine and community health services, the demand for talents in society and the market has steadily increased, and a large number of health technicians are required at the grassroots level. There is large space for the development of higher vocational medical education in Guizhou. Among the 8 specialties or specialty demonstration groups that have been established, the degree of development varies. For example, the two specialties of nursing and clinical medicine are relatively mature, while the two specialties of medical imaging technology and medical beauty technology have a relatively short development history, their specialty core construction workload is large, and they are still in the exploration stage. The social services of medical imaging technology should be oriented to the industry, and the development of service projects is difficult. The potential social training demand for the specialty of medical beauty technology is relatively large, but there are many social training institutions. Therefore, the development of school projects must create brands and highlight features.
The development of the local economy and the intrinsic advantages of medical vocational education have significantly improved the students' admission rate, registration rate and employment rate.As far as our school is concerned, the registration rate in these years has reached 85%, and other peer colleges are similar, which shows that the social recognition rate of higher vocational medical education has improved.In particular, with the implementation of a series of policies (such as the implementation of the occupational qualification access system), it will bring good development opportunities and favorable conditions for higher vocational medical education.
On the other hand, the restrictive factors for the development of higher vocational medical education in Guizhou are: late start, low starting point, insufficient capital investment, low faculty levels, poor training conditions, outdated education concepts, backward education and teaching level, and further improvement of talent quality is needed.
Due to historical and practical reasons, the funds for higher vocational education in Guizhou are insufficient. The normal operation of higher vocational colleges relies mainly on the original foundation, limited social financing, and tuition income below the cost of education, and is stretched in terms of teacher training and training base construction. The faculty members engaged in higher vocational education have low academic qualifications and low professional titles, and there are few "double-certificated teachers", so the available teachers still have a single knowledge structure and weak practical ability, and cannot meet the requirements of the new situation in cultivating practical, innovative talents. [6]
Thoughts and Suggestions on Teaching Reform of Higher Vocational Medical Education in Guizhou
Teaching work is always the central task of the school, and the quality of teaching is always its lifeline. Cultivating innovative, qualified talents is the eternal theme of the school. All reforms must be carried out, with innovation, around the above three aspects.
Practically Change Ideas
Clarify the nature and essence of education, which is to establish high moral values and cultivate people. Students are trained to have "independent personality, the spirit of exploration, the ability to learn, and the capability to practice", so that they can form correct values, an outlook on life, and a worldview. Therefore, every vocational education worker should be a conscious and clear-headed educator.
Effectively Optimize the Operating Environment of Higher Vocational Education, Increase Propaganda Work, and Promote Educational Informationization
As a kind of education, higher vocational education started late and operates under limited school-running conditions; people generally attach importance to general higher education and look down on vocational and technical education. There are even attitudes of despising vocational education in society and even in education circles. Some local governments and functional departments also have an insufficient understanding of the status and role of higher vocational education. Due to the low social recognition and insufficient social support, there are "Three Lows" phenomena, namely a low starting point of students, a low registration rate and a low employment rate, which have seriously affected the normal development of higher vocational education. Therefore, we should increase the publicity for higher vocational education in terms of school-running level, school-running characteristics, training objectives and employment prospects; in particular, some good vocational colleges should be established as models so that people change their concepts about higher vocational education, fully understand its role and status in the country's economic and social development, and thus establish a good image for it. At the same time, the enrollment and employment information channels for higher vocational students should be opened up, so that higher vocational education can find a broader development space in a social environment with open information.
Strengthen and Improve Students' Understanding of the National Vocational Qualification Certification System and the Labor Employment Access System
The implementation of the vocational qualification certification system has created a good employment environment for vocational students. If a strict labor employment access system is implemented, no one can be employed without training or without the related qualification certificates, which will be greatly beneficial to higher vocational education, so that its advantages can be fully reflected. As long as employment position access policies are actively developed and implemented, a virtuous circle will be established among vocational and technical education, position qualification training, and employment.
Build High-quality Teachers and Management Teams
Adjust the policy of teacher title evaluation in higher vocational colleges and establish a high-quality team of teachers with higher vocational characteristics. Because higher vocational education and teaching have distinctive characteristics, it is necessary to shift the evaluation scale from over-emphasizing teachers' academic theoretical level and research ability toward their knowledge transfer and knowledge conversion abilities. Therefore, a set of policies should be established to define teacher qualifications with higher vocational education characteristics, thus paving the way for building a "double-certificated teachers" team and providing a system guarantee for the implementation of characteristic higher vocational education.
From the Aspect of Government
Strengthen government's policy management of higher vocational education, and do well in top-level design as follows:
Do Well in the Scale and Enrollment Control of Higher Vocational Colleges
Avoid the vicious competition brought about by blind expansion and the resulting decline in the level of school-running. Reduce the battle between schools for student enrollment, so that higher vocational education in Guizhou develops moderately.
Support Higher Vocational Colleges with Good Conditions to Improve the Status of Higher Vocational Education
Build the brand of higher vocational colleges to fully demonstrate the connotation and advantages of higher vocational education, thereby improving the status and reputation of higher vocational education, and gradually promote the overall level of school-running through the construction of model higher vocational colleges.
Improve the Education Level of Vocational Colleges, Consummate the Vocational Education System, and Do well in Lifelong Education
In the vocational education system, there are only vocational high schools, vocational technical schools or secondary technical schools, and higher vocational colleges. Beyond that, it is difficult for students to find opportunities for further study. This capping of vocational education is not conducive to the further study and development of higher vocational students, nor to cultivating high-level application-oriented talents. At present in Guizhou, the proportion of secondary vocational graduates entering higher vocational education is about 50%, while the proportion of higher vocational graduates entering general higher education is less than 10%. This proportion is too small to meet students' needs for study, so higher vocational education becomes a dead end of education. Therefore, vocational and technical undergraduate programs should be set up in qualified vocational and technical colleges to enroll four-year undergraduate students or higher vocational students for further study, increase the proportion of students moving from higher vocational education into general higher education, and truly build a vocational education system that runs from primary to intermediate to advanced levels.
Build Open-type Practical Teaching Bases through Co-construction by Both the Government and Colleges on the Stage Offered by the Government
For higher vocational colleges, practical teaching conditions are the fundamental guarantee of education quality and characteristic teaching. For a long time, schools' funds have been tight, equipment has been rudimentary and not updated in a timely manner, and the practical teaching needs of higher vocational education have not been met. How should a practical teaching base be built? What kind should it be? How should it be used? These questions have long plagued vocational and technical colleges. We believe that the government should invest guiding funds, with enterprises and schools then investing in multiple fields, and that several high-level practical teaching bases should be established according to the industry characteristics and specialty advantages of different colleges. In addition to meeting the teaching needs of the college, each base should be managed by the college and opened to the outside on the basis of mutual benefit. There are many experiences in other provinces that are worth learning from.
Strengthen the Government's Macro-control, Use System to Monitor the Teaching Quality and Use Policies to Guide the School-running Characteristics
The government functional department should establish a scientific evaluation system to evaluate the enrollment system, teaching organization, curriculum system, practical teaching, evaluation system and student employment; organize experts to review the teaching plans of higher vocational colleges, and comprehensively monitor the scale, structure, quality and effectiveness of higher vocational education; develop quantitative standards for the characteristic education of higher vocational education, so as to promote the improvement of teaching quality and the substantial development of higher vocational education characteristics.
Use Policy and Financial Advantages to Encourage the Development of Specialties in Short Supply
It is necessary to adopt a concept of unified management throughout the province in the construction of higher vocational specialties: starting from the overall situation of provincial economic development, formulate preferential policies, use government subsidies to guide and encourage, and develop the specialties that are "unpopular" in society, especially the technological specialties that are in short supply.
Conclusion
In summary, comprehensively deepening the reform of education and teaching is the key to improving the quality of talent cultivation. This requires us to focus closely on the needs of society and of students' comprehensive development, to be guided by advanced education concepts and grounded in the construction of the teaching staff, to deepen the reform of education and teaching management, and to strengthen the cultivation of students' abilities, thereby comprehensively deepening the reform of education and teaching. | 2019-05-21T13:05:53.945Z | 2019-04-11T00:00:00.000 | {
"year": 2019,
"sha1": "392d0e2a7d1e26d84545b1eee78da90c84823311",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.30564/jams.v2i2.386",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "392d0e2a7d1e26d84545b1eee78da90c84823311",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
22880721 | pes2o/s2orc | v3-fos-license | Combined KIT and FGFR2b Signaling Regulates Epithelial Progenitor Expansion during Organogenesis
Summary Organ formation and regeneration require epithelial progenitor expansion to engineer, maintain, and repair the branched tissue architecture. Identifying the mechanisms that control progenitor expansion will inform therapeutic organ (re)generation. Here, we discover that combined KIT and fibroblast growth factor receptor 2b (FGFR2b) signaling specifically increases distal progenitor expansion during salivary gland organogenesis. FGFR2b signaling upregulates the epithelial KIT pathway so that combined KIT/FGFR2b signaling, via separate AKT and mitogen-activated protein kinase (MAPK) pathways, amplifies FGFR2b-dependent transcription. Combined KIT/FGFR2b signaling selectively expands the number of KIT+K14+SOX10+ distal progenitors, and a genetic loss of KIT signaling depletes the distal progenitors but also unexpectedly depletes the K5+ proximal progenitors. This occurs because the distal progenitors produce neurotrophic factors that support gland innervation, which maintains the proximal progenitors. Furthermore, a rare population of KIT+FGFR2b+ cells is present in adult glands, in which KIT signaling also regulates epithelial-neuronal communication during homeostasis. Our findings provide a framework to direct regeneration of branched epithelial organs.
INTRODUCTION
During organogenesis, epithelial progenitor cells generate the branched architecture of the tissue. These progenitors must increase in number while retaining their progenitor qualities, in a process known as expansion. Organogenesis further involves communication between expanding progenitors and other cell types located in the niche or local microenvironment (Wagers, 2012). Stromal, endothelial, and neuronal cells provide external cues that control the number of progenitors and their survival, maintenance, and differentiation (Kiger et al., 2000;Knox et al., 2010;Shen et al., 2004). Thus, it is imperative to understand the mechanisms by which progenitors expand and how they communicate with other cell types in order to regenerate or reengineer the branched architecture of epithelial organs.
KIT (C-KIT, CD117), a receptor tyrosine kinase (RTK), has been studied extensively in hematopoietic progenitors (Kent et al., 2008), but less is known about its function in epithelial progenitors. The ligand for KIT is stem cell factor (SCF), the gene product of Kitl. KIT signals via numerous pathways, including phosphatidylinositol 3-kinase (PI3K), phospholipase Cγ (PLCγ), mitogen-activated protein kinase (MAPK), and Janus kinase/Signal Transducer and Activator of Transcription (JAK/STAT) (Lemmon and Schlessinger, 2010), and can transactivate other receptors (Jahn et al., 2007;Wu et al., 1995). Importantly, KIT-expressing (KIT+) progenitors form and regenerate various epithelial organs. Prostate tissue can be generated from a single KIT+ cell (Leong et al., 2008), epithelial-specific KIT+ progenitors functionally regenerate irradiated salivary glands (Lombaert et al., 2008;Nanduri et al., 2013), and KIT+ cells repair lungs post-thoracotomy (Kajstura et al., 2011). These findings suggest that epithelial KIT+ progenitors somehow lay the foundation for branching organ architecture. Importantly, the loss of KIT signaling due to a homozygous SNP (Chabot et al., 1988), Kit W/W, is lethal by embryonic day 14 (E14) due to hematopoietic defects, but the effects of this mutation on epithelial progenitors and organogenesis are unclear.
Severe defects in epithelial organogenesis occur in mice lacking Fgf10 or its receptor, Fgfr2b, and provide valuable insight into epithelial progenitor cell biology. Many organs, such as the salivary glands and lungs, do not form or are hypoplastic (De Moerlooze et al., 2000;Ohuchi et al., 2000). These phenotypes suggest defects in the survival, maintenance, and/or expansion of epithelial progenitors. In addition, mutations in fibroblast growth factor receptor 2 (FGFR2) and KIT occur in many epithelial tumors, and both receptors are being targeted with specific RTK inhibitors in breast, lung, liver, salivary gland, skin, renal, gastrointestinal, colorectal, ovarian, and uterine cancers (Casaletto and McClatchey, 2012;Hanahan and Weinberg, 2011;Lemmon and Schlessinger, 2010;Takeuchi and Ito, 2011). We thus hypothesized that an interaction between FGFR2b and KIT signaling could regulate epithelial progenitor expansion during organogenesis.
To investigate this hypothesis, we studied mouse submandibular glands (SMGs), which develop by reiterative rounds of distal endbud and proximal duct formation, and require communication with the neuronal niche (Knox et al., 2010). We discovered that FGFR2b signaling upregulates the epithelial KIT pathway so that combined KIT/FGFR2b signaling, via separate AKT and MAPK pathways, amplifies FGFR2b-dependent transcription. The combined KIT and FGFR2b signaling increases the number of KIT+FGFR2b+ distal progenitors, but loss of KIT signaling depletes these progenitors. This KIT/FGFR2b-dependent mechanism is conserved during adult tissue homeostasis and in other branching organs.
KIT+ Progenitor Expansion Occurs in Endbuds during SMG Branching Morphogenesis
We first investigated the developmental expression and localization of Kit and Kitl mRNA by quantitative PCR (qPCR; Figure 1A), in situ hybridization (Figure 1B), and microarray during development (Figure S1A available online). mRNA products of both Kit and Kitl were detectable during gland initiation at E11.5, when the initial endbud forms distal to a primary duct, and expression of both peaked at ~E15 (Figure 1A). From E12 to E15, branching morphogenesis occurred with reiterative rounds of distal endbud expansion and proximal duct formation. Whereas Kit mRNA was localized to endbuds, Kitl mRNA was found mainly in the mesenchyme around the endbuds, but was also detected within endbuds (Figure 1B), as confirmed by qPCR analysis of isolated E13 endbuds, ducts, and mesenchyme (Figure S1B). During branching morphogenesis, KIT protein was localized to E-cadherin+ (ECAD+) endbud cells (Figure 1C, E16, arrows), but was not detected in ducts (KIT−) (Figures 1B and 1C). Fluorescence-activated cell sorting (FACS) analysis confirmed that during the rapid branching phase, the number of epithelial KIT+ cells (ECAD+KIT+) increased from 10% to 20% of total cells in the intact SMG (Figures 1D and S1C). Furthermore, FACS analysis and Ki67 staining showed that E13 ECAD+KIT+ cells were highly proliferative (Figure 1E), since ~70% of cycling SMG cells (Ki67+) were KIT+. This highly proliferative state occurred up to E16 (Figure 1E). By the time secretory differentiation began after E16, both Kit and Kitl mRNA expression decreased (Figures 1A and S1A). KIT+ cells accounted for only ~3% of total cells at postnatal day 1 (P1; Figure 1D), which is comparable to levels in adult SMGs (Lombaert et al., 2008). Since the number of KIT+ endbud cells increases during branching morphogenesis, the data suggest that KIT+ progenitor expansion occurs in endbuds.
FGFR2b and KIT Signal via Separate MAPK and AKT Pathways
Because FGFR2b signaling may upregulate an autocrine epithelial KIT pathway, we investigated the molecular interactions of these pathways and paracrine KIT signaling using cell lines expressing KIT and FGFR2b, and isolated SMG epithelium. We were unable to coimmunoprecipitate FGFR2b and KIT from SMG epithelium (not shown). Since we could not show a direct interaction, we investigated possible transactivation or signaling crosstalk on two levels. First, at the ligand level, we used a rat myoblast cell line (L6) expressing either Flag-tagged FGFR2b (FGFR2b-FL) or hemagglutinin (HA)-tagged KIT (KIT-HA) to show that the ligands are specific and do not phosphorylate (Anti-PY) the other receptor in these cells (Figure 2D). Second, at the receptor level, we used L6 cells expressing both receptors (KIT+FGFR2b+L6) to show that KIT and FGFR2b do not transactivate (Anti-PY) each other (Figure 2E). We then used FGFR2b+L6 or KIT+L6 cells to show that pERK1/2 is downstream of both KIT and FGFR2b, and that the KIT inhibitor ISCK03 (ISCK) specifically reduced SCF/KIT-dependent pERK1/2, whereas SU specifically reduced pERK1/2 downstream of FGF10/FGFR2b (Figure 2F). We confirmed that ISCK specifically inhibited KIT, but not FGFR2b, phosphorylation (Figure S2A). Further, we used a KIT+ human leukemic cell line, Mo7e, which does not express FGFR2b, to confirm that SU did not inhibit SCF-dependent pKIT Y721 and downstream pAKT, whereas ISCK did (Figure S2B). Neither inhibitor reduced pERK1/2, which is downstream of other RTKs in Mo7e cells. Taken together, these data suggest that FGFR2b and KIT do not transactivate each other, and that separate MAPK and AKT signaling pathways occur downstream of each receptor.
We also measured phosphorylation of KIT Y721 in isolated SMG epithelia cultured for 24 hr with FGF10 and then treated for 15 min with additional SCF or FGF10 and/or SU and ISCK. There was a robust baseline level of pKIT Y721 , and SCF further increased pKIT Y721 and pAKT, whereas FGF10 increased pERK1/2 ( Figure 2G). As expected, SU reduced pERK1/2 downstream of FGF10, and ISCK reduced pAKT downstream of SCF. Exogenous FGF10 also increased pKIT Y721 , which is likely due to endogenous epithelial SCF (i.e., autocrine KIT signaling in the epithelium) being enhanced by FGF10. This FGF10-dependent pKIT Y721 and pAKT were reduced by SU or SU+ISCK, suggesting that in SMG epithelia, FGF10 may transactivate KIT or that SU inhibits another receptor that transactivates KIT. We also show that ISCK reduced pKIT Y721 and pAKT to near control levels after SCF treatment and partially reduced pKIT Y721 after FGF10 treatment. In other experiments, ISCK was specific and did not reduce pERK1/2 or pAKT downstream of FGFR2b (with FGF10), epidermal growth factor receptor (EGFR; with HBEGF), or FGFR1 receptors (with FGF2) after 1 hr ( Figures S2C and S2D). These data suggest that separate KIT and FGFR2b signaling occurs in SMG epithelia KIT+ progenitors.
Combined FGFR2b and KIT Signaling Amplifies FGFR2b-Dependent Transcription
To investigate how separate KIT and FGFR2b signaling pathways regulate progenitor expansion, we treated isolated SMG epithelia with SCF and FGF10 alone or in combination. The downstream phosphorylation with SCF and FGF10 was additive and led to enhanced and sustained phosphorylation of SHP2, AKT, and ERK1/2 compared with either SCF or FGF10 alone (Figure 3A). We predicted that this might increase downstream gene transcription. Thus, we measured the expression of a cassette of transcription factors (TFs), Sox10, Myc, Etv4, Etv5, and ΔNp63, all of which are expressed in endbuds and potentially involved in SMG progenitor expansion (Lombaert et al., 2011).
Costimulation with FGF10+SCF for 3 hr increased expression of Sox10, Myc, Etv4, and Etv5, but not ΔNp63, above the level observed with FGF10 alone (Figure 3B). We show that KIT signaling was essential for this increased expression, since ISCK reduced expression to a level similar to that obtained with FGF10 alone. The PI3K inhibitor LY294002 (which reduces pAKT), but not the PLCγ inhibitor U73122, mimicked this effect, suggesting that the positive regulation by KIT occurs via increased PI3K/AKT signaling. We confirmed that activation of FGFR2b signaling upregulated TFs in a MAPK-dependent manner (Figure S3). Importantly, SCF alone did not have direct effects on TF gene expression. Furthermore, KIT and FGFR2b regulate these conserved TF targets in cells that do not normally express them. [Figure 2 legend, panels A-G: qPCR analyses of E13 epithelia after growth factor addition (data are mean ± SEM, normalized to Rps29 and to control; one-way ANOVA with post-hoc Dunnett's test or unpaired t test; n = 3 biological samples) and immunoblots of FGFR2b-FL- or KIT-HA-tagged proteins, pERK1/2, pKIT Y721, pAKT, and total proteins in FGFR2b+L6, KIT+L6, and KIT+FGFR2b+L6 cells and in isolated epithelia treated with FGF10, SCF, SU, and/or ISCK.]
KIT and FGFR2b Signaling Selectively Expands Distal Progenitors in the Endbud
Next, we investigated whether KIT and FGFR2b regulate the expansion of both distal KIT+K14+ and proximal KIT+K5+ progenitors. Combined FGF10+SCF treatment increased K14 expression in isolated epithelial endbuds within 24 hr (Figure 4D). Staining for KIT, SOX10, and p63 confirmed that multiple cell layers of K14+ progenitors were present in the endbud. Conversely, central K19+ ductal cells were reduced as compared with FGF10 treatment alone. Consistent with this, Krt14 and Sox10 increased, whereas Krt19 decreased, with FGF10+SCF versus FGF10 treatment (Figure S4D). To confirm that KIT and FGFR2b signaling selectively amplify KIT+K14+ distal progenitors, we cultured E13 SMGs with FGF10 −/+ SCF and did a FACS analysis. Similar to the case with isolated epithelium, combined KIT and FGFR2b signaling expanded the number of K14+, KIT+, and SOX10+ cells (Figures 4E and S4E). The KIT+K14+ distal progenitors increased in number at the expense of KIT+K14− cells (Figure 4F). This is reflected in the doubling in number of all KIT+K14+ cells, including those coexpressing K5 and/or K19: K14+ (K14+K5−K19−), K14+K5+ (K14+K5+K19−), K14+K19+ (K14+K5−K19+), and K14+K5+K19+. Immunostaining validated the FACS data, showing that endbuds contained more cells expressing K14, SOX10, and KIT (Figure 4G). Thus, KIT and FGFR2b selectively expand distal KIT+K14+ progenitors, but not proximal KIT+K5+ progenitors or their K19+ progeny. We also measured proliferation in intact SMGs and isolated epithelia after FGF10 −/+ SCF stimulation. FACS analyses showed that the number of proliferating (Ki67+) and epithelial (ECAD+) SMG cells did not change with FGF10+SCF treatments (Figure 4E), and there was no difference in gland morphology or endbud number (Figure S4F). In isolated epithelia, SCF alone did not support survival or proliferation (Figures S4G and S4H). Since KIT signaling alone did not affect proliferation, we conclude that selective expansion of KIT+K14+ progenitors involves combined KIT and FGFR2b signaling for proliferation and doubling of the cell number.
Expansion of Distal Progenitors Is Essential for Branching Morphogenesis
We then asked whether distal progenitor cell expansion is required for branching morphogenesis. We used two lossof-function approaches: Kit W/W SMGs, which lack KIT signaling due to a SNP in the W locus (Chabot et al., 1988), and ISCK treatment of intact SMGs (Figures 5A-5H). In both E14 Kit W/W and ISCK-treated SMGs, we found a significant reduction in the number of cells expressing KIT, K14, SOX10, and p63 as determined by protein staining, FACS analysis, and mRNA expression ( Figures 5A, 5B, 5E, S5A, and S5B). Importantly, loss of KIT signaling reduced branching in cultured ISCK-treated and E14 Kit W/W SMGs, which were smaller than Kit W/+ and Kit +/+ glands ( Figures 5C, 5D, and S5C). Even though E14 Kit W/W SMGs were similar in size to E13 Kit +/+ SMGs, they exhibited reduced branching in culture ( Figures 5C, 5D, and S5C). Notably, this reduction in size was associated with differentiated ducts with limited proliferative capacity. Consequently, Kit W/W and ISCK-treated SMGs had reduced numbers of proximal K5+ progenitors and a concomitant increase in the number of K19+ duct cells (Figures 5A, 5B, and 5E). Increased expression of Hbegf and Egfr ( Figure 5E), which drive ductal differentiation (Knox et al., 2010) and reduce proliferation, further supported this finding ( Figures 5E and S5B). In sum, loss of KIT signaling drives ductal differentiation and reduces branching morphogenesis.
To confirm that the primary defect was due to loss of KIT function in the epithelium, and not in the mesenchyme or blood vessels, we cultured both Kit W/W and ISCK-treated wild-type epithelia. Both exhibited reduced growth compared with the Kit +/+ and DMSO-treated controls (Figure 5F). As expected, they showed reduced staining for KIT, CCND1, and K14; increased staining for K19; and corresponding changes in gene expression (Figures 5G and 5H). We conclude that reduced epithelial branching in Kit W/W and ISCK-treated glands was primarily due to reduced KIT signaling in the epithelium.
We further evaluated whether combined KIT and FGFR2b signaling has a conserved role in other organs that require FGFR2b, such as the lung. The lung, limb, kidney, liver, and pancreas express transcripts for Kit and Kitl (Figure S5D). At early stages (E10-E12), both lung epithelia and mesenchyme expressed Kit mRNA and protein as determined by qPCR, in situ analysis, and immunostaining (Figures S5E-S5G). Lung KIT+ cells are proliferative (CCND1+), express the progenitor marker SOX9 (Lu et al., 2008), and are located in FGFR2b+ and inhibitor of DNA binding 2 (ID2)+ endbuds. However, in E14 lung epithelia, KIT protein was not detectable and there was reduced Kit expression compared with earlier stages (Figures S5E and S5G). Similar to the case with SMGs, the loss of KIT signaling in E13 Kit W/W lungs resulted in ~40% smaller lungs, and there was reduced branching morphogenesis in E11 lungs cultured with ISCK for 48 hr (Figures S5H and S5I). This correlated with a reduction in expression of TFs (Id2, Sox9, Myc, Etv4, and Etv5) in lung endbuds (Figure S5J). Furthermore, there was reduced expression of proliferation (Ccnd1) and neuroendocrine cell markers (Sv2b and Calca), suggesting that KIT signaling may also impact these cell types. The primary defect in lung branching was also due to reduced KIT signaling in the epithelia, as shown by treating E10 lung epithelia with ISCK (Figure S5K). In gain-of-function experiments with lung epithelia, SCF alone did not affect proliferation (Figure S5L), but altered TF gene expression when combined with FGF10. These data suggest that KIT and FGFR2b signaling expands progenitors by regulating a cassette of TFs in both SMGs and lungs.
Distal KIT+ Progenitors Communicate with the Neuronal Niche to Maintain Proximal Progenitors and Coordinate Ductal Architecture
Since a loss of KIT signaling depletes both distal and proximal progenitors, and proximal progenitors are maintained by innervation (Knox et al., 2010), we hypothesized that communication occurs between these distinct progenitors via the neuronal niche. To test this, we analyzed the parasympathetic innervation in Kit W/W and ISCK-treated SMGs. In both situations, we observed defects in epithelial innervation and reduced nerve function ( Figure 6A). There was reduced defasciculation (arrowhead), such that fewer but wider nerve bundles extended toward the endbuds. Nerves also contained fewer varicosities ( Figure 6A, middle panels), which release neurotransmitters such as acetylcholine. Loss of neuronal function in Kit W/W and ISCK-treated SMGs was confirmed by measuring reduced expression of Tubb3 (an axonal marker), Gfra2 (a parasympathetic marker), and markers of neuronal acetylcholine function (vesicular acetylcholine transporter [Slc18a3 or Vacht] and choline acetyltransferase [Chat]; Figure 6B). Similar reductions of innervation and similar trends in the reduction of neuronal function were observed when KIT signaling was inhibited with ISCK at later stages of development after branching morphogenesis had occurred ( Figures S6A and S6B). Washout of ISCK resulted in reinnervation, continued branching morphogenesis, and increased expression of genes associated with neuronal function (Figures S6B and S6C). These data suggest that maintenance of epithelial-neuronal communication requires KIT signaling.
To confirm that KIT signaling in KIT+ mesenchymal or endothelial cells did not influence nerve defasciculation and proximal K5+ progenitors, we recombined isolated epithelia from either Kit W/W or Kit W/W ;K5Venus mice with control (Kit +/+ ) mesenchyme, which contains endothelial and neuronal cells. Recombined SMGs had reduced epithelial endbud growth and innervation (Figures 6C and S6D), and depletion of K5+ progenitors occurred in the Kit W/W ;K5Venus SMGs (Figure S6E) as compared with control. Taken together, these data confirm that KIT signaling in distal epithelial progenitors is essential for functional innervation and maintenance of proximal K5+ progenitors.
Since loss of KIT signaling affects K5+ proximal progenitors indirectly via the nerves, we predicted that we could rescue the K5+ cells from ISCK treatment with CCh and
HBEGF, which maintain K5+ progenitors (Knox et al., 2010). The addition of CCh and HBEGF to ISCK-treated SMGs maintained K5+ cells, but K14+ distal progenitors were not rescued (Figures 6D and S6F). There was also no increase in branching morphogenesis or expression of Krt14, Kit, and Sox10 (Figure S6F). These data suggest that coordination of the organ's architecture requires KIT-mediated expansion of K14+ distal progenitors.
We then hypothesized that KIT+ cells secrete neurotrophic factors that promote innervation and neuronal function. We recently identified that neurturin (NRTN) is produced by SMG epithelium and is important for neuronal survival (Knox et al., 2013). Kit W/W and ISCK-treated SMGs showed reduced Nrtn expression compared with controls (Figure 6B). Furthermore, FACS-sorted epithelial KIT+FGFR2b+ cells from E13 SMGs had 3-fold more Nrtn expression than KIT− cells (Figure S6G). In addition, we confirmed that this occurred both in Kit W/W lungs and in control E11 lungs treated with ISCK, in which we also observed reduced innervation and varicosities (Figure S6H). Importantly, in both situations there was reduced Nrtn, Tubb3, and Vacht expression (Figure S6H). There was also reduced Gfra2 in Kit W/W lungs and reduced Chat expression with ISCK treatment. Thus, the reduction in neuronal gene expression is likely secondary to the reduction in NRTN. We conclude that the production of neurotrophic factors by distal KIT+ progenitors is a conserved mechanism to support the neuronal niche during organogenesis.
KIT Signaling in a Rare Population of Adult KIT+FGFR2b+ Progenitors Maintains Epithelial-Neuronal Communication during Homeostasis
Since adult SMG epithelial KIT+ cells (Lombaert et al., 2008) and lung KIT+ cells (Kajstura et al., 2011) were used for regeneration, we sought to determine whether our findings from organogenesis were conserved during adult tissue homeostasis. Immunostaining of the adult SMG revealed that KIT+ cells were localized in intercalated ducts (IDs; Figure 7A), which harbor progenitor cells based on label-retaining assays (Man et al., 2001) and are in close proximity to differentiated Aquaporin5+ acinar cells (Figure 7B). Adult mouse lungs exhibited abundant KIT+Vimentin+ or KIT+CD31+ blood vessels, with only rare epithelial KIT+ECAD+ or K18+ cells detected (Figure S7A). Similar to what was observed during organogenesis, adult SMG KIT+ progenitors were proliferative (CCND1+) (Figures 7C and 7F) and expressed FGFR2b on their cell membrane (Figures 7D and 7F). To compare their expression with that of embryonic KIT+ cells, we FACS-sorted adult SMG epithelial LIN−EPCAM+KIT+FGFR2b+ cells, termed KIT+FGFR2b+. FGFR2b was expressed on fewer KIT+ cells in the adult SMG (23% ± 3%, green peak; Figure 7E) compared with KIT+ cells in E13 SMGs (90% ± 4%, green peak; Figure 7E). Since KIT+ cells account for ~3% of epithelial cells in the adult gland (Lombaert et al., 2008), adult KIT+FGFR2b+ cells are a rare population of 0.6% ± 0.2%. Intriguingly, qPCR analysis of adult KIT+FGFR2b+ sorted cells revealed a striking similarity to embryonic KIT+ progenitors, with Etv4, Sox10, Ccnd1, and Nrtn being highly expressed compared with KIT+FGFR2b− cells (Figure 7F). Therefore, we predicted that KIT and FGFR2b signaling in adult ID epithelial cells might also maintain communication with nerves. Since Kit W/W is embryonic lethal, we cultured adult wild-type SMG explants for 3 days in the presence of ISCK. As predicted, neuronal innervation was affected (Figure 7G) and K5 and K14 staining was reduced along with an increase in ductal K19+ cells (Figure 7H). Accordingly, ISCK reduced expression of Tubb3, Krt5, Krt14, Kit, and Ccnd1 (Figure 7J). Conversely, adding FGF10 and SCF increased cell proliferation and Ccnd1 (Figures 7I and 7J). Taken together, these data indicate that during organ homeostasis, KIT signaling also maintains the ductal architecture via communication among multiple epithelial progenitors and their niches.
Finally, we confirmed that a similar KIT+FGFR2b+ cell population occurs in human salivary glands (Figure 7K). Human SMG KIT+ cells are located in IDs and excretory ducts (EDs). Although many KIT+ cells expressed SOX10 (Figure S7B), only a rare population of KIT+ cells coexpressed FGFR2 (Figure 7K). Surprisingly, we also detected KIT expression on acinar cells, which may make them potentially responsive to KIT signaling as well, although they were not FGFR2+. The working model presented in Figure 7L shows that mesenchymal-derived SCF and FGF10 expand KIT+FGFR2b+ distal progenitors that produce neurotrophic factors to support gland innervation, which in turn maintains the proximal progenitors. [Figure 5 legend, panels E-H: qPCR analysis of E14 Kit +/+ and Kit W/W SMGs and of SMGs cultured for 72 hr with DMSO (Control) or ISCK, normalized to E14 Kit +/+ or Control and to Rps29 (unpaired t test, *p < 0.05, **p < 0.01, ***p < 0.001; n = 3 biological samples); isolated Kit W/W epithelia and ISCK-treated E13 epithelia undergo less epithelial morphogenesis than controls and show reduced staining for KIT, CCND1, and K14, increased K19 staining, and altered gene-expression profiles. See also Figure S5.]
DISCUSSION
Here, we propose that combined KIT and FGFR2b signaling regulates epithelial progenitor expansion. We identify an interaction between KIT and FGFR2b signaling pathways that converges to upregulate a conserved group of FGFR2b-dependent TFs and expand distal progenitors. These progenitors further communicate with the neuronal niche to direct proximal progenitors to form ducts. These interactions are maintained during adult organ homeostasis, and exist in human organs. Our findings may have implications for regenerative medicine because they demonstrate that both KIT and FGFR2b signaling and communication among multiple cell types are necessary for organogenesis.
We propose that a similar conserved mechanism occurs in other organs, because FGFR2b signaling is required to initiate development of mammary, pituitary, and thyroid glands, as well as the kidney, pancreas, and prostate (Lin et al., 2007;Mailleux et al., 2002;Ohuchi et al., 2000). Similarly, KIT is expressed in these organs and is essential for kidney and prostate development (La Rosa et al., 2008;Leong et al., 2008;Li et al., 2007;Schmidt-Ott et al., 2006;Ulivi et al., 2004). Although previous studies have shown that FGFR2b signaling induces progenitor survival and proliferation (Bhushan et al., 2001;Ohuchi et al., 2000), we now demonstrate that it is a key upstream driver that induces distal KIT+ progenitor expansion. FGFR2b signaling achieves this by upregulating the KIT pathway to maintain KIT+-responsive progenitors. This is a critical event because combined KIT and FGFR2b signaling pathways converge at the transcriptional level to upregulate the expression of a conserved cassette of FGFR2b-dependent TFs in KIT+ progenitors. Distal progenitors of other organs, such as the pancreas (Kobberup et al., 2007) and limb (Zhang et al., 2009), express ETV4 and ETV5, which are part of this conserved TF cassette. Our experiments with L6 rat myoblasts also suggest that conserved TFs are upregulated when KIT and FGFR2b are overexpressed in cells where they are not normally present, and that for each cell type additional TFs may provide a cell-type-specific response.
We also propose that the KIT/FGFR2b pathway likely integrates with other signaling pathways to direct progenitor expansion. For example, WNT signaling has been proposed to be a master regulator of lung development (Rajagopal et al., 2008). Yet, our data suggest that WNT and KIT/FGFR2b can affect distal progenitors in different ways. The specific loss of WNT7b did not alter Sox9 expression or the number of SOX9+ cells in lungs, whereas loss of KIT signaling reduced Sox9 in lung endbuds. In contrast to WNT signaling, KIT does not directly influence CCND1-mediated proliferation, and although lung development is arrested in FGFR2b −/− mice, it is only reduced in β-Catenin −/− mice (Mucenski et al., 2003;Shu et al., 2005). It will be interesting to determine whether the neuronal niche is affected in WNT-depleted mice.
Prior to this work, we did not fully understand how epithelial progenitors communicate with their surrounding niche during organogenesis (Bryant and Mostov, 2008;Hogan, 1999). Here, we propose that a key mechanism is distal progenitor expansion, since these cells produce critical neurotrophic signals to communicate with the neuronal niche to regulate proximal progenitors. Neurturin is also produced by distal progenitors in the kidney (Davies et al., 1999) and is found in the prostate, pituitary gland, and pancreas (Golden et al., 1999). It is not known whether KIT+ cells secrete neurotrophic factors in these organs.
Previous studies have shown that paracrine and autocrine morphogenic gradients control branching morphogenesis, and a number of models propose the existence of iterative positive and negative unidirectional cues between the epithelium and mesenchyme or within the epithelial lineage (Bryant and Mostov, 2008;Gjorevski and Nelson, 2011;Hogan, 1999;Hsu and Fuchs, 2012). These studies provided valuable insights into organogenesis, but focused on communication from a single cell type. Progenitors in vivo must interpret signals from multiple cell types in their surrounding microenvironment during organogenesis. Our work has implications for regenerative medicine and the bioengineering of tissues, which may require multiple progenitor cell types to generate branching organs. Also, our findings will inform efforts to expand epithelial progenitors in vitro, which will be critical in clinical settings where small numbers of progenitors from biopsies could be expanded with reagents that target FGFR2b, KIT, and the neuronal niche. We propose that using progenitor transplants with factors that preserve nerves will improve regeneration of damaged organs. For instance, nerves may facilitate the production of insulin (Rossi et al., 2005) after grafting of beta-progenitors to treat diabetic neuropathy and pancreatitis (Melton, 2011), or enhance lung repair after progenitor transplants (Rock and Hogan, 2011).
It has been proposed that tumors and organs are similar in terms of the complexity of cell types, microenvironments, and signaling networks involved in these components (Hanahan and Weinberg, 2011). The clonal nature of cancer means that targeting a single RTK may provide selective pressure for resistant tumor clones (Nik-Zainal et al., 2012). It is also possible that an activating KIT or FGFR2 mutation in an FGFR2b-or KIT-expressing tumor, respectively, may expand a dominant tumor clone and amplify downstream responses. Thus, by targeting KIT and FGFR2b, and/or the signals between the progenitors and their niches, we may be able to target tumors more efficiently. In conclusion, a clear resolution of the signaling pathways and communication among multiple cell types that are representative of the endogenous microenvironment during organogenesis and homeostasis provides insight into tumor biology and a framework to direct therapeutic organ regeneration.
EXPERIMENTAL PROCEDURES
Mouse Lines, Breeding, Genotyping, and Lineage Tracing
All protocols involving mice (Supplemental Experimental Procedures) were approved by the NIH ACUC. Culture of Mo7e and L6 cells, and staining of human biopsies are described in the Supplemental Experimental Procedures.
Ex Vivo Organ Culture
We performed fetal and adult intact organ and mesenchyme-free epithelia culture of SMG or lung in the presence of different growth factors and/or inhibitors as described in the Supplemental Experimental Procedures.
qPCR
Real-time qPCR was performed as previously described (Knox et al., 2013). cDNAs were generated and amplified to determine fold changes in expression by normalizing to the housekeeping gene Rps29. The generation of single amplicon products was confirmed by melt curve analysis.
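The text does not spell out the calculation, but fold changes normalized to a housekeeping gene are conventionally obtained with the 2^(-ΔΔCt) method. The sketch below assumes that method; the function name and the example Ct values are purely illustrative and are not taken from the study.

```python
# Minimal sketch of relative qPCR quantification by the 2^(-ddCt) method,
# normalizing a target gene to the housekeeping gene Rps29 and to a control
# condition. All names and example Ct values are hypothetical.

def fold_change(ct_gene_treated, ct_rps29_treated, ct_gene_control, ct_rps29_control):
    """Return the fold change of a target gene (treated vs. control), Rps29-normalized."""
    d_ct_treated = ct_gene_treated - ct_rps29_treated    # normalize to Rps29 (treated)
    d_ct_control = ct_gene_control - ct_rps29_control    # normalize to Rps29 (control)
    dd_ct = d_ct_treated - d_ct_control                  # normalize to the control condition
    return 2 ** (-dd_ct)

# Example with hypothetical Ct values (e.g., a target gene after FGF10+SCF vs. FGF10 alone)
print(fold_change(22.0, 18.0, 23.5, 18.2))  # ~2.5-fold increase
```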
Immunofluorescence Analysis and FACS
We performed whole-mount immunofluorescence as described in the Supplemental Experimental Procedures. For FACS, single-cell suspensions of SMGs were analyzed on either a BD Calibur or an LSRII, and sorted on a BD Aria sorter (see Supplemental Experimental Procedures). Negative cell populations were used as controls.
In Situ Hybridization
Digoxigenin-11-UTP-labeled single-stranded RNA probes were prepared using the DIG RNA labeling kit (Roche Applied Science) according to the manufacturer's instructions.
Western Blot Analysis
Protein lysates were resolved on Tris gels, transferred to membranes, probed with antibodies, and visualized with West Dura reagent as described in detail in the Supplemental Experimental Procedures.
Statistical Analysis
Experiments were performed with at least three biological replicates. To determine significance between two groups, comparisons were made using Student's t test. Analysis of multiple groups was performed using one-way ANOVA with a post-hoc Dunnett's test. [Figure 7 legend, panels I-J: adult SMG explants cultured for 3 days ± FGF10+SCF were stained for CCND1 and DAPI (images are 1 μm LSCM sections; scale bar, 20 μm); qPCR analysis of adult SMG explants cultured in basal media (Control), FGF10+SCF, DMSO, or ISCK for 72 hr, normalized to Control or DMSO and to Rps29 (unpaired t test, *p < 0.05, **p < 0.01, ***p < 0.001; n = 3 biological samples).] | 2018-04-03T06:13:44.229Z | 2013-12-12T00:00:00.000 | {
"year": 2013,
"sha1": "214d251cada3105cc00253e050da9ba5e78964b1",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.stemcr.2013.10.013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "214d251cada3105cc00253e050da9ba5e78964b1",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
174805392 | pes2o/s2orc | v3-fos-license | Use of Secukinumab in a Cohort of Erythrodermic Psoriatic Patients: A Pilot Study
Erythrodermic psoriasis (EP) is a dermatological emergency and its treatment with secukinumab is still controversial. Furthermore, no data exist regarding the prognostic value of drug abuse in such a condition. We performed a multi-center, international, retrospective study, enrolling a sample of EP patients (body surface area > 90%) who were treated with secukinumab (300 mg) during the study period from December 2015 to December 2018. Demographics and clinical data were collected. Drug abuse was screened for; specifically, smoking status (packs/year), cannabis use (applications/week) and alcoholism, tested with the Alcohol Use Disorders Identification Test (AUDIT), were assessed. All patients were followed for up to 52 weeks. We enrolled 13 EP patients, nine males, and four females, with a median age of 40 (28–52) years. Patients naïve to biologic therapy were 3/13. Regarding drug use, seven patients had a medium-high risk of alcohol addiction, three used cannabis weekly, and seven were smokers with a pack/year index of 295 (190–365). The response rate to secukinumab was 10/13 patients with a median time to clearance of three weeks (1.5–3). No recurrences were registered in the 52-week follow-up and a Psoriasis Area Severity Index (PASI) 90 response was achieved. The entire cohort of non-responders (n = 3) consumed at least two drugs of abuse (alcohol, smoking or cannabis). Non-responders were switched to ustekinumab and obtained a PASI 100 in 24 weeks. However, given our observed number of patients using various drugs in combination with secukinumab in EP, further studies are needed to ascertain drug abuse prevalence in a larger EP cohort. Secukinumab remains a valid, effective and safe therapeutic option for EP.
Introduction
Erythroderma is an uncommon and severe dermatological manifestation of a variety of diseases. The most common form of erythroderma is erythrodermic psoriasis (EP), which accounts for 1-2.25% of all psoriatic patients, with a male predominance as demonstrated by a male to female ratio of 3:1 [1]. EP clinically manifests with diffuse erythema (body surface area (BSA) > 75%) that also involves the skin folds, with or without exfoliative dermatitis. Several triggers have been described to elicit EP in predisposed subjects, such as environmental factors (sunburn, alcoholism, and infections), drugs (lithium, anti-malarial drugs), and the rebound phenomenon following discontinuation of anti-psoriatic treatments (oral steroids or methotrexate) [1]. However, the pathogenesis of EP remains elusive, which can limit a physician's capability to deliver safe and effective therapy. In 2010, the National Psoriasis Foundation (NPF) published a guideline describing the current evidence regarding EP treatment, stating that cyclosporine and infliximab should be the first-line treatment in acute and unstable patients, whilst methotrexate and acitretin are recommended in more stable patients [2].
Despite this clear advice, a prominent limitation is that few high-quality studies assessing EP treatment are available in the literature [2]. In the clinical setting, EP treatment faces two other prominent challenges, namely the difficulty of differential diagnosis and of implementing a biological approach that rules out non-inflammatory conditions. Although histological confirmation is mandatory in suspected EP cases, it is sometimes challenging due to the potential lack of histological parameters resembling classical psoriasis, such as parakeratosis or acanthosis [1].
The exclusion of neoplastic causes (Sézary syndrome) is mandatory if biologics are the selected approach. In fact, in the last 5 years, neoplasia has been a relative contraindication [3]. The NPF guidelines did not include IL-17 inhibitors [2], such as secukinumab, and recently two case series studies described the use of secukinumab in EP patients [4,5]. Current evidence seems to support the use of secukinumab in EP patients, even though there is a dearth of data concerning potential predictors of responsiveness in these patients.
Remarkably, among psoriatic patients, alcohol use/abuse and smoking have been described and linked to both psoriasis development and exacerbations, but they have not been studied in EP [6][7][8][9][10]. Conversely, data on the prevalence of cannabis users among psoriatic patients and on the effect of cannabis use on psoriasis are still missing. Furthermore, in vitro and murine studies have explored keratinocyte changes only in response to single cannabis compounds [11]. Thus, given the increasing prevalence of cannabis users in the general population and the growing role of cannabis in medicine [11], reports focusing on its effect in psoriasis are needed.
The current study aimed (i) to evaluate the efficacy and safety of secukinumab in erythrodermic psoriasis and (ii) to describe the prevalence of drug abuse, namely alcohol, tobacco, and cannabis use, in EP patients.
Experimental Section
This multi-center, international, retrospective, pilot study enrolled a sample of EP patients (BSA > 90%) treated with a loading dose of 300 mg subcutaneous secukinumab at weeks 0, 1, 2, 3 and 4, followed by 300 mg every 4 weeks, in the period from December 2015 to December 2018.
All erythrodermic patients were biopsied and malignancies were ruled out by complete blood count, blood smear, transaminases, lactate dehydrogenase (LDH), gamma-glutamyl transferase (GGT), anion gap, Sézary cell search, and total body computed tomography. Smoking history (pack/years), cannabis use (smoking episodes/week) and alcohol use (Alcohol Use Disorders Identification Test (AUDIT)) status were assessed.
AUDIT is a 10-question screening tool (0-40 points) developed by the World Health Organization (WHO) in order to evaluate alcohol consumption, drinking behavior, and alcohol-related complications. According to AUDIT, patients are stratified as follows: 0-7 points indicate a low risk, 8-15 points a medium risk, 16-19 points a high risk, and 20-40 points a probable addiction.
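As a concrete reading of the stratification above, a minimal sketch of the AUDIT-based risk categorization is given below. The cut-offs are exactly those listed in the text; the helper function itself is ours and only illustrative.

```python
# Sketch of AUDIT-based risk stratification using the WHO cut-offs cited above.
# The function name and the example call are illustrative, not part of the study protocol.

def audit_risk_category(score: int) -> str:
    """Map a total AUDIT score (0-40) to the risk category used in this study."""
    if not 0 <= score <= 40:
        raise ValueError("AUDIT total score must be between 0 and 40")
    if score <= 7:
        return "low risk"
    if score <= 15:
        return "medium risk"
    if score <= 19:
        return "high risk"
    return "probable addiction"

# Example: a patient scoring 17 points would be classified as high risk
print(audit_risk_category(17))  # -> "high risk"
```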
Drug History
Drug history is summarized in Table 1. Only 3/13 patients were naïve to biologic therapy. Among patients previously treated with biologics, eight had switched more than two biologics. Furthermore, 8/13 had experienced a previous episode of erythroderma and six patients had experienced more than two episodes. Drug history also indicated that some of the EP patients had previously received therapeutic agents that can potentially trigger psoriasis: four had received beta blockers, three angiotensin II receptor blockers (ARBs), two angiotensin-converting enzyme (ACE) inhibitors, and one thiazide diuretics.
Drug Abuses and Comorbidities
Drug abuse screening revealed that seven patients had a medium-high risk of alcohol abuse, three patients used cannabis on a weekly basis, and seven patients were smokers with a pack/year index of 295 (190-365). The comorbidities represented in our cohort included dyslipidemia (five patients), hypertension (three patients), osteoporosis (two patients), atrial fibrillation (one patient) and pulmonary tuberculosis (one patient) (Table 2).
Table 2. Prevalence of drug abuses in our cohort.
Clinical Response to Secukinumab
Clinical and therapeutic data are summarized in Table 3. The median value of the last recorded PASI was 10 (7-15). Responders to secukinumab were 10/13 (Figure 1a,b) and the median clearing time was three (1.5-3) weeks.
Table 3. Clinical and therapeutic records in our cohort.
After recovering from erythroderma at week eight, four patients achieved PASI 75, while none achieved PASI 90 or PASI 100. At week 52, five patients achieved PASI 90 and five achieved PASI 100. Interestingly, looking at the PASI trends of this cohort (Figure 2), all three non-responders used two out of three of the aforementioned drugs (alcohol, cannabis, and smoking) and had no recorded comorbidities. Non-responders were switched to ustekinumab (90 mg) and obtained a PASI 100 in 24 weeks.
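For readers unfamiliar with the notation, PASI 75, PASI 90 and PASI 100 conventionally denote a reduction of the PASI score by at least 75%, 90% or 100% relative to baseline. The sketch below encodes that standard convention; the helper function and the example values are ours and purely illustrative, not data from this cohort.

```python
# Sketch of PASI response classification relative to a baseline PASI score.
# A PASI-n response means an improvement of at least n% over baseline.
# Function name and example values are illustrative only.

def pasi_response(baseline: float, current: float) -> str:
    """Return the highest PASI response level reached, or 'less than PASI 75'."""
    if baseline <= 0:
        raise ValueError("baseline PASI must be positive")
    improvement = 100.0 * (baseline - current) / baseline
    if improvement >= 100:
        return "PASI 100"
    if improvement >= 90:
        return "PASI 90"
    if improvement >= 75:
        return "PASI 75"
    return "less than PASI 75"

# Example: a baseline PASI of 10 dropping to 0.8 corresponds to a PASI 90 response
print(pasi_response(10.0, 0.8))  # -> "PASI 90"
```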
Discussion
Our study further supports that secukinumab is an effective therapy in EP and suggests that the use of recreational and socially accepted drugs (alcohol, cannabis, and tobacco) is prevalent in EP patients.
Furthermore, in the literature, EP patients treated with secukinumab had 16 [4] or 24 [5] months of follow-up, without an assessment of long-term Dermatology Life Quality Index (DLQI). We therefore assessed DLQI at 8, 12, 16, 24, and 52 weeks and found that secukinumab contributed to an improvement not only in skin disease but also in the long-term quality of life of EP patients. In our cohort, EP patients who responded to secukinumab did not exhibit recurrences and maintained long-term responsiveness to the drug. This study further supports the results described in a retrospective 52-week, observational, multicenter study by Galluzzo et al., which suggested long-term efficacy of secukinumab in plaque psoriasis [12].
Focusing on EP patients, we assessed for the first time in detail the time to clearance of erythroderma and, after that, how secukinumab cleared the residual plaque psoriasis during the 52-week follow-up period. These two parameters together are of pivotal importance in the clinical setting to guide therapeutic decisions made by dermatologists. In addition, the 52-week follow-up data highlighted that the EP responders to secukinumab achieved at least PASI 90 after clearing EP.
Among the three patients who did not respond to secukinumab therapy, one patient developed generalized urticaria at week three, the second patient experienced recurrent oral candidiasis and stopped the drug at week 12, and the third patient lost response at week 16. Remarkably, the second non-responder also smoked cannabis. Non-responders had not previously been treated with ustekinumab and, in accord with recent real-life data on secukinumab non-responders, they were switched to ustekinumab and achieved complete remission [13]. Ustekinumab is an IL-12/IL-23 blocker that targets the p40 subunit shared by these two cytokines. Furthermore, IL-12 plays a pivotal role in T helper cell type 1 (Th-1) polarization, as does IL-23 in Th-17 polarization [14]. We interpret the non-responsiveness of our patients as potentially due to the development of anti-secukinumab antibodies or up-regulation of Th-1-related pro-inflammatory cytokines, as previously demonstrated by Zaba et al. [15].
Evaluation of clinical characteristics in secukinumab non-responders indicated that all three had a familial history of EP and had used more than one of the substances considered (tobacco smoking, alcohol, or cannabis). However, none of them had been treated with any drug known to trigger psoriasis.
In the literature, the prevalence of drug abuse in the rare subset of EP is not reported; conversely, several authors have addressed the prevalence of drug abuse (alcohol and tobacco smoking) in plaque psoriasis patients and its impact on anti-psoriatic therapies [7,8,16,17].
Alcohol intake and, consequently, also the abuse, may favor psoriasis-related systemic inflammation by promoting lipopolysaccharide (LPS) translocation from intestine to blood flow, increasing the pro-inflammatory activation of several immune cells, including lymphocytes (producing TNF-α and IFN-γ) and monocytes/macrophages (producing TNF-α), and directly by triggering keratinocytes pro-inflammatory activation via keratinocyte growth factor receptor (KGFR) [9]. These observations are further supported by Brenaut and colleagues, who conducted a meta-analysis on the epidemiological link between psoriasis and alcohol intake, and found that alcohol is a risk factor in developing psoriasis [17]. Furthermore, Qureshi et al. described a correlation between heavy beer intake and psoriasis severity during exacerbation [8]. This concept is translatable to EP patients, where erythroderma is an acute and very severe exacerbation of pre-existent psoriasis. Thus, alcohol abuse seems to increase TNF-α levels and may theoretically explain a possible lack or loss of response to IL-17 blockers, as with secukinumab in our EP patients.
The link between tobacco smoking and psoriasis was assessed by Armstrong and colleagues in a large meta-analysis involving 28 studies. They found an odds ratio (OR) of 1.78 (95% confidence interval = 1.52-2.06) and a higher PASI in psoriatic smokers compared to non-smokers [16]. Remarkably, psoriasis severity gradually increases with the number of cigarettes smoked per day [17], but may improve after smoking cessation [18,19]. The nicotine contained in cigarettes activates nicotinic acetylcholine receptors (nAChRs) on the surface of dendritic cells, macrophages, endotheliocytes and keratinocytes, leading to increased Th-1/Th-17 polarization of naïve T cells and to increased production of pro-inflammatory cytokines, such as TNF-α, IL-12, IL-17, IL-23, IL-1β and IFN-γ [20]. These are all capable of decreasing the therapeutic effects of both TNF-α [7] and IL-17 blockers.
Conversely, only fragmentary data exist regarding the effects of cannabis on the immune system and skin [21,22], and no data have been published about cannabis smoking in psoriatic patients or in murine models of psoriasis. However, some purified extracts derived from cannabis may inhibit in vitro keratinocyte proliferation [21] and Th-17 cell-related cytokine production in a dose-dependent manner [22]. Cannabinoids mainly interact with two receptors, the cannabinoid-1 receptor (CB1R) and the cannabinoid-2 receptor (CB2R), both of which inhibit adenylate cyclase and activate mitogen-activated protein kinase (MAPK) [11]. This theoretically counteracts the anti-psoriatic action of apremilast, which relies on the increase in intracellular cyclic adenosine monophosphate (cAMP) due to phosphodiesterase-4 inhibition. CB1R is prevalently present in keratinocytes, whilst CB2R is prevalent in immune cells, such as T cells and monocytes/macrophages [11]. Upon stimulation in the presence of purified cannabis extracts, namely cannabidiol (CBD) and tetrahydrocannabinol (THC), Th-17 cells massively decrease both transcription and release of IL-17A [22], which may theoretically act synergistically with IL-17 blockers. This may also be supported by reports that list candidiasis as a side effect of both IL-17 blockers and chronic cannabis use [23]. Consequently, patients under IL-17 blockers who use cannabis may be exposed to a higher risk of candidiasis. Remarkably, Russo and colleagues pointed out that, in order to evaluate the global effect of cannabis, it is necessary to take into consideration the synergism existing among different cannabis compounds, which together determine the final so-called entourage effect, capable of enhancing or even obscuring the properties of a single compound [24]. Furthermore, no studies have evaluated how smoking cannabis can modify these compounds and their biological effects. Thus, this is the first report to evaluate the use of such drugs in a cohort of patients affected by EP, a chronic systemic inflammatory disease.
Moreover, both smoking and alcohol consumption were found to increase IL-17 and TNF-α production [9,12,16], corroborating our hypothesis that drug use may promote systemic inflammation, contributing to less favorable results from anti-psoriatic therapies.
The main limitation of the present study remains the small sample of enrolled patients, which was due to the rarity of EP and to the fact that secukinumab is still off-label in treating EP. Therefore, we cannot conclude that drug use in the non-responding patient group was causal. Another plausible reason for the failed response in the small number of patients with addiction problems in the present cohort might well be insufficient compliance, even though all of our patients regularly attended dermatological appointments and reported having self-injected secukinumab.
Conclusions
Although not conclusive, our preliminary results in EP patients treated with secukinumab highlight two presently unmet needs: (i) the need for therapy-specific biomarkers/prognostic factors and (ii) a better characterization of the prevalence of drug use in EP.
In conclusion, secukinumab may be a safe and effective treatment in EP; however, larger studies are needed to validate our results. | 2019-06-05T13:13:25.620Z | 2019-05-31T00:00:00.000 | {
"year": 2019,
"sha1": "5ed00e9df7c362f04dc41cdd288776e58ecbb8bd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/8/6/770/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ed00e9df7c362f04dc41cdd288776e58ecbb8bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228904139 | pes2o/s2orc | v3-fos-license | Filtering precipitation through soil
The article studies the process of filtering precipitation through soil. Two mathematical models are developed and used, a deterministic and a stochastic one. Both are based on a parabolic diffusion partial differential equation supplemented by initial and boundary conditions. The results of the deterministic problem are then used in solving the stochastic one. Using the finite difference method, the analytical formulation of the problem is reduced to an inhomogeneous system of linear algebraic equations, which is implemented in the Matlab computing environment. Exploiting the linearity of the problem operator and the superposition principle significantly reduced the amount of computation. A number of conclusions are drawn on the basis of the solutions obtained.
Introduction
The increasing loads on the hydrosphere intensify the large-scale processes occurring in it. Catastrophic changes in the natural water cycle, whose consequences are difficult to predict and evaluate, are being intensively discussed today. Requirements for environmental protection and environmental management are constantly being tightened. In this regard, one of the priority scientific tasks, attracting ever more attention, is the rational and environmentally safe use of natural resources, diagnostics of the state of the hydrosphere, and forecasting and managing the development of technogenic processes. These and other reasons make it necessary to refine the models of practical problems and the methods for solving them.
The interconnected issues of filtering water through soil and predicting groundwater levels are currently becoming critical. The reason is global climate change and, in particular, the widespread increase in precipitation intensity. Studying the pattern of moisture distribution in the upper soil layers is relevant for successful land use in agriculture and in the construction industry.
Groundwater is formed mainly by the infiltration of precipitation and the seepage of water from rivers, lakes and reservoirs. Groundwater, which causes catastrophic floods everywhere, is very sensitive to all atmospheric changes. These and related issues were studied in [1][2][3][4].
Mathematical models of precipitation filtration and groundwater level contain many parameters of statistical origin, which makes it difficult to accurately predict them. In such cases, one has to invoke stochastic models [5].
Depending on the amount of precipitation and the depth of groundwater, the groundwater surface experiences seasonal and long-term fluctuations. The amplitudes of seasonal and perennial fluctuations in groundwater levels can reach 20 meters or more, which must be taken into account.
Deterministic task
The one-dimensional advancement of the wetting boundary in a distributed system such as soil under precipitation is described, in a linearized form, by a parabolic diffusion equation in partial derivatives, equation (1). Here, L is a linear deterministic operator, H(x,t) is the boundary of the moistened soil, a is the moisture conductivity coefficient, b is a coefficient that accounts for the loss of soil permeability, for example due to an increase in soil density in proportion to depth, G is the domain of problem definition, and f1(x,t) is the precipitation intensity. Boundary and initial conditions must be added to equation (1), and the matching conditions between the initial and boundary conditions must also be met.
Often at the beginning of the process, there is already a certain layer on the surface of the soil saturated with liquid. Its lower boundary can be set by the function f4(x).
The boundary conditions at all points are equal to zero. This means that the boundaries of the problem definition domain coincide with the boundary of a thundercloud, which we take to be a rectangle. In this case, all boundary conditions (2)-(4) are satisfied automatically with a suitable choice of the function f1(x,t); an illustrative example is given below. It is worth noting that the actually observed precipitation process approximately corresponds to such a description: at the beginning and end of the precipitation event in time, and at the edges of the rectangle in space, the rain intensity is equal to zero, while in the center it is highest.
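One simple choice consistent with this description, given here as our illustration and not necessarily the exact form used in the original, is a separable product of half-sine profiles on the space-time rectangle 0 <= x <= l, 0 <= t <= T, which vanishes on its boundary and peaks at its centre: f1(x,t) = Imax * sin(pi*x/l) * sin(pi*t/T), where Imax is the peak rain intensity.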
The initial boundary value problem (1)-(5) is a mathematical model for determining the groundwater level H(x,t). Solving it with realistic components of the vector-function f = (f1, f2, f3, f4) presents significant complications for analytical methods. Therefore, we use numerical methods [6,7], in particular the finite difference algorithm. To apply it, instead of the two-dimensional continuum G we introduce a grid domain of discrete points and take a four-point implicit scheme of the finite difference method. Following [6,7], instead of equations (1)-(3) we obtain a finite-difference scheme of the problem. Since, in contrast to analytical methods, only discrete coordinates are used and approximate values of the groundwater level are obtained, an appropriate grid notation is introduced, and the main equation (1) takes its finite-difference form (6). Note that the analytic derivatives are replaced by finite differences with accuracy O(h^2).
Instead of conditions (2)-(4), their discrete counterparts (7)-(9) are used. Equations (6) and (7)-(9) form a programmable algorithm for computer calculations. As a result, the initial-boundary value problem for a partial differential equation of parabolic type is reduced to the solution of an inhomogeneous system of linear algebraic equations By = d (10) on each time layer of the grid region. Here y and d are vectors of dimension n, and B is a square matrix of order n. Such a system is assembled and solved while gradually moving the two-layer stencil one step up in time.
The matrix B and the vector d have the form dictated by the implicit scheme (6)-(9). Figure 1 shows the computed groundwater level H(x,t); the blue area is discussed in Section 2.2 below.
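As an illustration of the scheme just described, the following minimal sketch (in Python rather than the authors' Matlab implementation) assembles the tridiagonal matrix B once and solves By = d on every time layer. It assumes a plausible linearized form of equation (1), dH/dt = a*d2H/dx2 - b*H + f1(x,t), with zero initial and boundary conditions; the coefficient values, grid sizes and the default source f1 are illustrative placeholders, not values from the paper.

```python
import numpy as np

def solve_wetting_boundary(a=1e-3, b=1e-4, xmax=10.0, tmax=24.0, nx=101, nt=240, f1=None):
    """Implicit (backward Euler) finite-difference scheme for
    dH/dt = a*d2H/dx2 - b*H + f1(x,t) on [0, xmax] x [0, tmax], H = 0 on the boundary."""
    if f1 is None:
        # rain intensity: zero at the edges of the space-time rectangle, maximal at its centre
        f1 = lambda x, t: 0.15 * np.sin(np.pi * x / xmax) * np.sin(np.pi * t / tmax)
    h, tau = xmax / (nx - 1), tmax / nt
    x = np.linspace(0.0, xmax, nx)
    r = a * tau / h**2
    n = nx - 2                                   # interior nodes (zero Dirichlet boundary conditions)
    # matrix B of the system B y = d; it is the same on every time layer
    B = (np.diag((1.0 + 2.0 * r + b * tau) * np.ones(n))
         + np.diag(-r * np.ones(n - 1), 1)
         + np.diag(-r * np.ones(n - 1), -1))
    H = np.zeros(nx)                             # zero initial condition
    history = [H.copy()]
    for k in range(1, nt + 1):
        t = k * tau
        d = H[1:-1] + tau * f1(x[1:-1], t)       # right-hand side assembled from the previous layer
        H = H.copy()
        H[1:-1] = np.linalg.solve(B, d)          # move the two-layer stencil one step up in time
        history.append(H.copy())
    return x, np.array(history)

x, H = solve_wetting_boundary()
print(H.max())                                   # maximal advance of the wetting boundary
```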
Stochastic task
The variability and stochasticity of soil moisture have many causes, specifically the intensity of rainfall, its duration and repeatability, the season, the geographical location, the type of rain (drizzle, steady rain, downpour), the heterogeneity of the soil, etc. Simultaneous consideration of all random factors in the mathematical models is almost impossible, since the necessary statistical information is missing.
In this case, we will be interested in the random nature of the rain intensity, which has the greatest effect on soil moisture saturation. Intensity is defined as the layer or amount of rainfall that falls per unit of time. It usually ranges from 0.25 mm/h to 100 mm/h. Its registration and calculation of indicators are necessary in the construction of many economic systems and structures. The monthly average rainfall is taken into account when designing sewer systems, engineering structures, and drainage of farmland.
The data published in the specialized literature and design standards allow a wide range of parameter values, which clearly shows the relevance of stochastic models for an adequate description of soil moisture.
The input function of the problem, the precipitation intensity, is actually a spatio-temporal non-stationary random process. Consequently, the output process H(x,t) is random as well. In what follows we are interested in its most important parameters, the mathematical expectation mH(x,t) and the standard deviation σH(x,t), calculated using the methods of probability theory and mathematical statistics [8,9]. Here mF = 0.15 m/h is the expectation of the random intensity, σF = 0.1·mF m/h is its mean-square (standard) deviation, f2 = 0.03 m, and f3 = 0.05 m.
The choice of such parameter values and of the function F significantly reduces the amount of computation. We take into account that this is a linear distributed system, i.e., it satisfies the superposition principle [11]. The superposition principle is used twice: to determine the mathematical expectation mH(x,t) and the mean-square deviation σH(x,t). Here the operator A includes the main equation and all initial and boundary conditions. The first task, determining the mathematical expectation, takes the form A mH = mf. The expectation function coincides with the precipitation intensity function of Example 1, so the values yi(xi) calculated in Example 1 coincide with the expectations mH(xi); therefore, the two graphs are combined in Fig. 1. A similar procedure is repeated for the standard deviation σH, with the results shown in Fig. 2. Comparison of Fig. 1 and Fig. 2 confirms the validity of applying the superposition principle: the ordinates of Fig. 2 are exactly 10 times smaller.
Figure 2. Standard deviations of the wetting boundary
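A sketch of how the superposition argument reduces the stochastic computation, assuming (as in the example above) that the randomness enters only through a single multiplicative rain amplitude F, so that A H = F f1 implies H = F * H_unit. The unit-response values below are placeholders standing in for the output of a deterministic solver such as the sketch in the previous section.

```python
import numpy as np

m_F, sigma_F = 0.15, 0.015                 # expectation and std of the random amplitude F, m/h

# response of the linear system to a unit-amplitude rain profile (placeholder values)
H_unit = np.array([0.00, 0.12, 0.31, 0.12, 0.00])

m_H = m_F * H_unit                         # expectation of the wetting boundary
sigma_H = sigma_F * H_unit                 # standard deviation: ordinates exactly 10 times smaller
```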
In all cases, the form of the distribution function of random variables does not affect the calculation results. However, if the boundary of the movement of the moistened soil zone H(x,t), as a random variable, has a Gaussian distribution, we can calculate the probability of a given deviation from the mathematical expectations shown in Fig. 2. It can be used to predict soil moisture.
For this purpose, we use the widespread "three sigma rule" [9]. The task is to find the probability that the deviation of the random variable H(x,t) from its expectation is less than a given positive number δ in absolute value, that is, the probability of the inequality |H - mH| < δ. In this case δ = 3σH, and the desired probability equals P(|H - mH| < 3σH) = 2Φ(3) = 0.9973, where Φ is the tabulated Laplace function. The obtained result means that with probability 0.9973 the random deviation of H(x,t) from the expectation is less than three sigma in absolute value. This area is shaded in blue in Fig. 1.
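The quoted probability can be checked directly from the Gaussian assumption; the snippet below is only a numerical verification of the tabulated value.

```python
from math import erf, sqrt

p = erf(3 / sqrt(2))        # P(|H - mH| < 3*sigma) for a Gaussian deviation, equal to 2*Phi(3)
print(round(p, 4))          # 0.9973
```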
The probability of the random value falling outside the specified boundaries is 0.0027, i.e. 0.27%. According to the principle of the practical impossibility of unlikely events, such an event is considered almost impossible. | 2020-11-12T09:08:55.246Z | 2020-11-05T00:00:00.000 | {
"year": 2020,
"sha1": "6b195f6fcd010c226529a9d6f946a8b77a6bcd0c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/579/1/012089",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7f5d064227627b911cd5307c02554f9f457c064d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
11603566 | pes2o/s2orc | v3-fos-license | Efficacy of the Essential Amino Acids and Keto-Analogues on the CKD progression rate in real practice in Russia - city nephrology registry data for outpatient clinic
Background Renal replacement therapy (RRT) is growing by 10 % per year in Russia, but pre-dialysis care, which can retard CKD progression and delay the start of RRT, remains limited. We evaluated the effect of Essential Amino Acids and Keto-analogues (EAA/KA) on CKD progression. Methods The effect of a low protein diet (LPD), supplemented by EAA/KA, on GFR slope changes between the first and second treatment period (five sequential visits per period) in 96 patients with CKD Stage 3B-5 was compared to GFR slope changes in a control group of 96 patients, randomly selected from a matched (by gender, age, diagnosis and CKD Stage) cohort of 320 patients from the city Registry. The mean baseline eGFR was 23 ± 9 ml/min/1.73 m2; 29 % had CKD3B, 45 % CKD4, and 26 % CKD5. Results The rate of eGFR decline changed from −2.71 ± 2.38 to −2.01 ± 2.26 ml/min/1.73 m2 per year in the treatment group and from −2.18 ± 2.01 to −2.04 ± 2.18 ml/min/1.73 m2 per year in the control group. The difference was significant only in the treatment group (p = 0.04 and p = 0.6). The standardized effect size of the intervention was significant in the treatment group, −0.3 (of pooled SD), 95 % CI −0.58 to −0.02, and non-significant in the control group, −0.07 (−0.35 to +0.22). Univariate and multivariate analysis of the EAA/KA therapy effect demonstrated that it was probably more effective in patients of older age, with higher time-averaged proteinuria (PU), lower phosphate level, in patients with glomerular versus interstitial diseases, and in females. Only the latter factor was significant at the pre-specified level (<0.05). Conclusions LPD combined with EAA/KA supplementation leads to a decrease in CKD progression both in well-designed clinical studies and in real nephrology practice across a wide variety of diseases and settings. Registry data can be helpful to reveal patients with optimal chances of a beneficial effect of LPD supplemented by EAA/KA. Trial registration ISRCTN28190556 06/05/2016.
Background
Renal replacement therapy (RRT) is growing rapidly in Russia, with an average 10 % increase per year [1]. That reflects the increasing number of new dialysis centers (mainly private, erected by international dialysis networks) under substantially changing conditions, including implementation of the diagnosis-related groups (DRG) system for reimbursement [2]. The prevalence of RRT remains rather low; however, it grew from 56 per million population (pmp) in 1998 to 246 pmp in 2013 (Russian Dialysis Society Registry data [1]) and 299 pmp in 2015 (marketing data). It means that dialysis treatment is now becoming available to the majority of patients in most regions of Russia, the more so as the average incidence rate remains rather low (51 pmp), presumably due to a younger population, a lower prevalence of diabetes mellitus, and lower and later CKD detection. Nevertheless, some regions, like Saint Petersburg [3] (70 pmp), look similar to neighboring countries like Finland (89 pmp) and Estonia (63 pmp). The proportion of patients with a functioning graft among those on RRT decreased slowly from 25 % in 1998 to 19 % in 2013, while the proportion of HD patients increased from 70 % in 1998 to 75 % in 2013. The proportion of PD patients remains stable and low, 5-7 %. Thus the low-protein diet (LPD) is actually no longer considered a method to ensure survival while waiting for RRT, but rather a matter of quality of life, retardation of CKD progression and delaying the "healthy start" of dialysis.
Requirements for dietary protein intake in non-nephrotic CKD patients without inflammatory conditions or other catabolic states appear to be equal to the requirements of healthy individuals: on average 0.6 g/kg/day of unselected or mixed biological value [4].
Regimens with less than 0.6 g/kg/day protein intake are often difficult to apply unless 'non-proteic' (commercially available) carbohydrates are used [5] to maintain sufficient calorie consumption; however, these products are not easily available everywhere for every patient, and different approaches are acceptable [6].
The rate of CKD progression, evaluated by the decrease of estimated GFR (CKD-EPI equation), depends on numerous factors and can be slowed by nephroprotective therapy, in particular by nutrition therapy. We assessed the effect of LPD supplemented by keto-analogue (EAA/KA) therapy on the rate of eGFR decline in real practice, using prospectively collected data obtained from the St.-Petersburg city nephrology Registry.
Methods
All patients with CKD Stage 3-5, referred to city nephrology center, are offered dietary counseling by experienced nephrologist. LPD is routinely recommended to all patients at high risk for CKD progression after evaluation the CKD stage, GFR decline rate and excluding the symptoms and signs of protein-energy wasting based on physician's judgment and labs: albumin <3.8 or <3.5 g/dl for diabetics, phosphate < 0.8 mmol/l; in some cases anthropometric data (skinfold, calculated muscle section area) and additional criteria (absolute lymphocyte number, transferrin level) are used. Patients with very low life expectancy are excluded from low protein diet interventions. Detailed nutritional manual for patients with CKD is available for patients [7]. Several examples of daily LPD (0.6 g/kg/day) and VLPD (0.3 g/kg/day) adopted for Russian traditions and food habits are presented in the Appendix. Patients can get information about exchangeability of some products, about forbidden and allowed products.
Patients were initially prescribed a standard LPD (0.60 g protein/kg body weight/day). For patients who demonstrated compliance with the LPD, a low-phosphate diet, and nephroprotective therapy (ACEi or ARB), and who had moderate to severe proteinuria (PU), defined as PU >1.0 g/day, additional restriction of dietary protein, supplemented by EAA/KA, may be considered. Patients are provided with EAA/KA (prescribed dose: one pill per 5 kg body weight) by a special social drugstore, supplied from a budgetary funded source, the so-called "additional medicinal providing". About 200 patients simultaneously receive EAA/KA in St.-Petersburg from this source. For logistical, clinical and personal reasons, not all patients received EAA/KA without substantial interruptions and for long enough to evaluate the changes in GFR slope.
Patients comprising the study group were selected from the St.-Petersburg CKD registry (which includes more than 6000 patients with CKD3+, as we described in detail earlier [3]) according to the following criteria: confirmed moderate GFR decline rate (see below); patient compliance with diet and pharmacological therapy; and a prolonged history of regular EAA/KA therapy according to the data from a special database recording patients' visits to the drugstore for EAA/KA supplied from the budgetary funded source (≥10 consecutive visits with the frequency pre-defined for each CKD stage). Patients who received commercially available EAA/KA were not included in the study, as we could not ensure adequate treatment regimens. In 96 patients with CKD stages 3B-5 the progressive decrease of eGFR was confirmed by previous monitoring during > 6 months. The GFR decrease rate was evaluated as the linear regression coefficient of CKD-EPI GFR versus visit date, with the number of visits ≥ 4 (median 5; IR 4-7). Patients with rapid CKD progression (more than 10 ml/min/1.73 m2 per year) were excluded from the analysis. The mean baseline eGFR was 23 ± 9 ml/min/1.73 m2; 29 % of patients had CKD3B, 45 % CKD4, and 26 % CKD5. We intended to maintain all patients on an LPD with protein intake of 0.6-0.8 g/kg/day. Patients receiving EAA/KA therapy were advised to keep to an LPD with protein intake < 0.6 g/kg/day (evaluated by 3-day diaries). Protein-free products are included to supply energy with a negligible load of phosphorus, sodium, potassium and nitrogen [8].
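The eGFR values underlying the slope estimates above come from the CKD-EPI creatinine equation. As a reference, here is a minimal sketch of the commonly used 2009 form of that equation (our illustration; the registry's exact implementation is not reproduced in the paper), with an illustrative example value.

```python
def ckd_epi_2009(scr_mg_dl, age_years, female, black=False):
    """Estimated GFR, ml/min/1.73 m2, by the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# e.g. a 60-year-old woman with serum creatinine 2.5 mg/dl: ~20 ml/min/1.73 m2 (CKD4)
print(round(ckd_epi_2009(2.5, 60, female=True), 1))
```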
Evaluation of treatment compliance for the diet is more qualitative than quantitative. We request our patients to provide food diaries at least for three days between visits at every visit. The most concerned patients provide diaries for all period between visits. During the visit nephrologist can discuss the patient's menu while labs are processed, as unfortunately patients have no regular access to dietician consult. LPD have to provide at least 30 kcal/kg/day of energy, lower intake of calories is considered in those with body mass index (BMI) >28 kg/m 2 . Phosphate binders (calcium carbonate or calcium acetate or aluminum hydroxide) are prescribed according to the guidelines in order to maintain serum phosphate within the normal range (0.81-1.45 mmol/l) [9]. As a nephroprotective and antihypertensive therapy 64 % of patients received ACEi, 16 % -ARB, 24 % -CCB, 11 % -beta-blockers.
Patients keeping diet undergo full clinical evaluation, including dietary counseling with special emphasis to diet adherence at least quarterly for CKD3, every 2 months for CKD4, and monthly for CKD5. At each visit, blood pressure (BP), body weight, serum urea, creatinine, sodium, potassium, phosphate, calcium, total protein, albumin, and hemoglobin are measured; transferrin saturation and ferritin are measured quarterly, total cholesterol, triglyceridestwice a year. In patients with more often visits, additional visit data was averaged with those of the closest visit.
Statistical analysis
Continuous variables were presented as the mean ± standard deviation (SD) for normally distributed variables, or as the median (25th-75th percentiles) for others. Categorical data were presented as percent frequency. Comparisons between normally distributed variables were performed by Student's t-test, between other continuous variables by the Mann-Whitney test, and between categorical parameters by the χ2 test, with P-values <0.05 considered significant. The relationship between continuous variables was investigated by Pearson correlation coefficients and P-values. The effect size was evaluated as the raw paired difference and as the paired difference standardized by the standard deviation, with the corresponding 95 % confidence interval.
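A minimal sketch of the per-patient slope and effect-size calculations described above (our illustration; variable names and the example data are assumptions, and the confidence-interval formula is a rough large-sample approximation):

```python
import numpy as np

def egfr_slope(visit_dates_years, egfr_values):
    """eGFR decline rate (ml/min/1.73 m2 per year) as the linear regression slope over visits."""
    slope, _intercept = np.polyfit(visit_dates_years, egfr_values, 1)
    return slope

def paired_effect(slopes_period1, slopes_period2):
    """Raw paired difference of slopes and the difference standardized by the pooled SD,
    with an approximate 95% confidence interval."""
    s1, s2 = np.asarray(slopes_period1, float), np.asarray(slopes_period2, float)
    diff = s2 - s1
    n = len(diff)
    sd_pooled = np.sqrt((s1.var(ddof=1) + s2.var(ddof=1)) / 2)
    d = diff.mean() / sd_pooled
    se = np.sqrt(1.0 / n + d ** 2 / (2 * n))     # rough approximation for a standardized difference
    return diff.mean(), d, (d - 1.96 * se, d + 1.96 * se)

# e.g. one patient: five visits spanning ~8.6 months
print(egfr_slope([0.0, 0.18, 0.36, 0.54, 0.72], [24.0, 23.6, 23.1, 22.7, 22.2]))
```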
Evaluation of effect in treatment group
The effects of LPD supplemented by EAA/KA were evaluated over a period of 10 routine visits to the nephrologist following the EAA/KA prescription. The mean dose of EAA/KA was 11 ± 2 pills per day (0.8 ± 0.1 pills/5 kg BW, 20 % lower than prescribed). The slope of the eGFR decrease was calculated by a regression model for the first 5 visits and for the following 5 visits. These data were compared to the data for 96 patients, randomly selected from a matched group (by gender, age, diagnosis, proteinuria and CKD stage) of 320 patients from the registry. Table 1 shows the comparison of baseline parameters in the two groups.
The only statistically significant (but not clinically relevant) difference was in the serum sodium level. Table 2 shows that the comorbidity in the treatment and control groups was similar. Table 3 shows the changes of eGFR during the first and second periods of the study in the treatment group vs the control group. The duration of the first and second five-visit periods was 8.6 ± 1.6 and 8.4 ± 1.9 months for the treatment group and 8.9 ± 2.0 and 8.6 ± 2.0 months for the control group (all differences non-significant, p > 0.05).
The search for pro- and con-factors for the EAA/KA effect
Table 4 demonstrates the results of univariate and multivariate analysis of the EAA/KA therapy effect in association with some potential confounding parameters. For categorical (binary) variables, the comparison of the changes (from the first to the second period) of GFR slope is presented for the two levels of the binary variable. Table 4 also shows the results of the multivariate analysis of EAA/KA therapy. This intervention was probably more effective in patients of older age, with greater time-averaged PU, lower phosphate level, in glomerular versus interstitial diseases, and in females (p for exclusion from the regression model 0.10). Only the latter factor was significant at the pre-specified level (<0.05).
Although the beneficial effect of therapy was not significantly linked to PU level in the uni- and multivariate analysis, the slope of eGFR decline was, not surprisingly, directly associated with PU.
As PTH target ranges were determined at different levels in CKD3 (<70 pg/ml), CKD4 (<85 pg/ml) and CKD5 (non-dialysis -<110 pg/ml) in Russian National CKD-MBD guidelines, we evaluated the percentage of patients with PTH above these levels: 23/96 in treatment group and 26/96 in control group (p = 0.6); the mean baseline PTH levels were 69 ± 32 v. 63 ± 37 pg/ml (p = 0.4). The slope of PTH elevation was not significant in both groups and did not differ between the groups (2 ± 11 pg/ ml/year, p = 0.1).
During the study period 26 patients from treatment group entered the more advanced CKD stage, in 62 patients CKD stage did not change, and 8 patients showed slow but stable improvement of kidney function; corresponding subgroups in control group consisted of 32, 57 and 7 patients respectively (for difference in χ 2 -test p > 0.2).
Discussion
Comparing the changes of GFR slope from the first to the second period in the intervention (EAA/KA therapy) group and the control group, we demonstrated that LPD supplemented by EAA/KA could decrease the rate of CKD progression. The five-visit interval for evaluating the eGFR slope was rather long, and different factors could interfere with CKD progression; however, data from five visits are necessary to calculate the slope precisely enough to be reliably compared in various settings. Possibly the first period (as comparator) should be shorter, to exclude the already achieved effect of EAA/KA therapy, which mitigates the difference.
Chang et al. demonstrated similar results, in their study EAA/KA was added for 6 months to previously prescribed LPD (0.6 g/kg/day of proteins for 6 months) in 120 patients with CKD3-4. The decline of GFR slopes during the LPD + EAA/KA period was significantly lower than during the LPD alone period. Multivariate analysis revealed that responsiveness to LPD + EAA/KA was independently related to diabetes (p = 0.006) and high serum albumin levels (p = 0.011) in the LPD alone period [10]. The potentially influencing factors in our study (Table 3) were different, that may reflect the differences in the patient's population. Di Iorio et al. demonstrated the influence of phosphate level in the study of 99 proteinuric CKD patients, transferred from LPD to very low protein diet (VLPD), which resulted in the halving of proteinuria, but the anti-proteinuric effect was attenuated by high level of phosphate [11].
In an earlier (and smaller) prospective randomized study, Teplan et al. demonstrated that LPD (0.6 g/kg/day of protein) supplemented with EAA/KA led to a smaller decrease of kidney function in patients with baseline GFR 22-36 ml/min, compared to LPD alone, although the number of follow-up visits was very small (once every 6 months) [12]. Although VLPDs are usually considered more effective in postponing dialysis in compliant patients, only a minority (as low as 14 %) of patients can follow them properly [13], as demonstrated by the data from the assessment period of a randomized study with a 3-year period of enrollment. After such strict selection, the VLPD group showed a 57 % lower GFR decline rate (−4.9 v. −2.1 ml/min per year). Interestingly, we found very similar results in our supplemented LPD group (−2.01 ± 2.26). In the above-mentioned publication, the standard deviation of the GFR decline rate was not presented, so it is not possible to compare the standardized effect size of the interventions.
Recent meta-analysis (7 randomized controlled trials, one cross-over trial, and one non-randomized concurrent control trial, all of them published before April 2015) results indicated that comparing to normal protein diet, LPD or VLPD supplemented with keto-analogues (SLPD/ SVLPD) were able to significantly prevent the deterioration of kidney function, defined by eGFR (P < 0.001); hyperparathyroidism (P = 0.04); hypertension (P < 0.01); and hyperphosphatemia (P < 0.001)thus could delay the progression of CKD effectively without causing malnutrition [14].
Supplementation with EAA/KA is not strictly needed for vegan diets with protein intake 0.6 g/kg/day, but if applied, EAA/KA ensure the balanced intake of different aminoacids. If EAA/KA are not used, different types of products (e.g. legumes and cereals) should be combined [15], that can be burdensome for some patients. Supplementation permits also a simplified approach, which just exclude animal products, but allows a free choice of vegetables, fruits, legumes and cereals [5,6]. Additional approach to enhance the compliance to LPD -an occasional unrestricted meal (one to three times per week) -is practiced by at least two Italian groups of researchers [5], in elderly patients in particular [16]: Usually we do not practice broadly this approach as we consider that some aspects of Russian mentality will prompt our patients to subsequent diet liberalization. Nevertheless, we believe this approach could be useful for some patients.
Special protein-free food is available in Russia (with one historical precursor, protein-free bread, in the 1980s), but the choice is limited, as some products are very salty and others are rather expensive or contain a substantial amount of phosphate. More or less acceptable are mainly pasta of starch origin and mixtures for homemade bakery. Protein-free food is partly available free of charge to CKD patients only in Italy [17].
The MDRD Study and a secondary analysis of this study did not suggest a clear benefit from SVLPD as compared with the 0.60 g protein/kg/d, although there was a trend towards slower progression of kidney failure with the SVLPD. However, SVLPD used in the MDRD Study probably was not ideal because the keto acid/EAA supplements contained rather large amount of tryptophan, which could generate more nephrotoxic metabolites, particularly indoxyl sulfate [4]. Hence, it is possible that alternative keto acid/EAA supplements will be more effective in slowing of the CKD progression.
Presently, only two mixtures/combinations of amino acids and keto acids, both marketed by the same company, are available: Alfa Kappa (in Italy) and Ketosteril (globally) [6].
Potential drawbacks arise when the vegetarian diet is associated with too restricted a protein intake and/or insufficient energy intake, justifying an early and regular follow-up by a nephrologist to avoid malnutrition [15]. Unfortunately, due to the lack of reimbursement for this service, we presently have no possibility of having our patients counseled by a certified dietitian on a regular basis.
Although we have no rigorous RCT confirmation of the benefit of supplementing LPD with EAA/KA, we have to keep in mind that, since the efficacy of the intervention strongly depends on the patient's active participation, an RCT may be an inappropriate tool, because random allocation per se can reduce the effect of the intervention [17]. The alternative approach is the search for the factors and conditions that could make the intervention more efficient. Controlled observational studies are an important step in this direction. Various LPDs can influence not only CKD progression but also dialysis outcomes, once patients finally reach the CKD5D stage. In this area, RCTs are even less feasible.
Bellizzi et al. showed that the mortality risk during the dialysis period was 0.59 (p < 0.001) after SVLPD pre-dialysis treatment, compared to a propensity score-matched group of unselected controls, while the risk for patients previously treated with LPD was not different from the SVLPD group (RR = 0.84; p = 0.496). In subgroup analysis, females aged <70, without diabetes and cardiovascular diseases, had a more pronounced treatment effect [18]. In a recent study from Taiwan, a Markov model showed that the group with early EAA/KA initiation gained higher QALYs at lower cost when compared to the watchful-waiting group. Sensitivity analysis indicated that early EAA/KA initiation (eGFR 17-29 mL/min/1.73 m2) would be the preferred cost-effective option if the achieved relative reduction of eGFR decline associated with LPD plus EAA/KA is > 4 % [19]. Due to the insufficient power of our study, we could not differentiate the effect size between different CKD stages, but we could show that LPD with EAA/KA supplementation resulted in a 7 % relative reduction of the eGFR decline rate (compared with the control group) and thus could be cost-effective.
Moderately restricted LPDs may be adapted to virtually any cuisine and should be tailored to the patients' preferences, while VLPDs usually require trained, compliant patients; a broader offer of diet options may lead to more widespread use of LPDs [17].
Limitations
The patient selection for LPD (<0.6 g/kg/day) supplemented by EAA/KA in our study was not random and could contain selection bias (patients with better compliance with the diet may also have better compliance with other nephroprotective strategies, such as antihypertensive and anti-RAAS therapy, sodium restriction, and optimal glycemic control). On the other hand, the matched control group recruited from 320 similar patients was rather close to the treatment group. These confounding factors were taken into consideration in the multiple regression analysis. Of note, patients with pre-dialysis CKD were not regularly evaluated for acid-base balance and rarely received bicarbonates; this information was not included in the analysis. Evaluation of compliance with the diet was more qualitative than quantitative, but such an approach may be considered acceptable [6]. The number of patients included in the analysis was relatively small, as it was restricted to those who received EAA/KA supplied from the budgetary funded source; on the other hand, the amount of EAA/KA obtained at the drugstore was available as a surrogate measure of the received doses, in parallel with the patients' diaries.
Conclusions
A low protein diet combined with EAA/KA supplementation leads to a decrease in CKD progression both in well-designed clinical studies and in real nephrology practice across a wide variety of diseases and settings. Registry data can reveal patients with an optimal chance of a beneficial effect of LPD supplemented by EAA/KA. | 2017-07-07T05:28:46.083Z | 2016-07-07T00:00:00.000 | {
"year": 2016,
"sha1": "6601bcad74ec5fe368b01a38b2a36fead777a192",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-016-0281-z",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6601bcad74ec5fe368b01a38b2a36fead777a192",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201118607 | pes2o/s2orc | v3-fos-license | Pochonia chlamydosporia Induces Plant-Dependent Systemic Resistance to Meloidogyne incognita
Meloidogyne spp. are the most damaging plant parasitic nematodes for horticultural crops worldwide. Pochonia chlamydosporia is a fungal egg parasite of root-knot and cyst nematodes able to colonize the roots of several plant species and shown to induce plant defense mechanisms in fungal-plant interaction studies, and local resistance in fungal-nematode-plant interactions. This work demonstrates the differential ability of two out of five P. chlamydosporia isolates, M10.43.21 and M10.55.6, to induce systemic resistance against M. incognita in tomato but not in cucumber in split-root experiments. The M10.43.21 isolate reduced infection (32–43%), reproduction (44–59%), and female fecundity (14.7–27.6%), while the isolate M10.55.6 only reduced consistently nematode reproduction (35–47.5%) in the two experiments carried out. The isolate M10.43.21 induced the expression of the salicylic acid pathway (PR-1 gene) in tomato roots 7 days after being inoculated with the fungal isolate and just after nematode inoculation, and at 7 and 42 days after nematode inoculation too. The jasmonate signaling pathway (Lox D gene) was also upregulated at 7 days after nematode inoculation. Thus, some isolates of P. chlamydosporia can induce systemic resistance against root-knot nematodes but this is plant species dependent.
INTRODUCTION
The root-knot nematodes (RKN), Meloidogyne spp., are obligate parasites of plants. The genus comprises more than 100 species, but only four of them are considered the most damaging plant parasitic nematodes due to its wide range of plant hosts, worldwide distribution, and high reproductive capacity (Jones et al., 2013). The RKN infective juveniles (J2) enter the root near the elongation zone and migrate intercellularly to establish a permanent feeding site into the vascular cylinder, inducing the formation of giant cells and root galls by affecting cell wall architecture, plant development, defenses, and metabolism (Shukla et al., 2018). Once the infection occurs, J2 become sedentary, and molt three times to achieve the mature adult female stage. The most frequent and damaging tropical species, M. arenaria, M. incognita, and M. javanica, reproduce parthenogenetically.
The female lays a large number of eggs in a gelatinous matrix, the egg mass, located on the surface or within the galled roots.
The damage potential of some Meloidogyne species has been summarized (Greco and Di Vito, 2009;Wesemael et al., 2011). M. arenaria, M. incognita and M. javanica are responsible for the majority of vegetable yield losses caused by plant parasitic nematodes (Sikora and Fernández, 2005). Among vegetables, those belonging to the solanaceae and cucurbitaceae families are commonly included in rotation schemes because they are economically important for growers. The estimation of maximum crop yield losses caused by the nematode in field and plastic greenhouse cultivation varies according to the plant germplasm, environmental conditions and agronomic practices. For instances, maximum yield losses from 62 to 100% have been reported in susceptible tomato cultivars (Seid et al., 2015;, 30 to 60% in aubergine (Sikora and Fernández, 2005), 50% in cantaloupe (Sikora and Fernández, 2005), 37 to 50% in watermelon (Sikora and Fernández, 2005;López-Gómez et al., 2014), and 88% in non-grafted or grafted cucumber on Cucurbita hybrid rootstocks . Control of RKN is conducted mainly with fumigant and non-fumigant nematicides (Djian-Caporalino, 2012;Talavera et al., 2012). However, due to environmental and toxicological concerns, some legislative regulations, such as the European Directive 2009/128/EC, aim to reduce the use of pesticides by promoting alternative methods such as biological control and plant resistance.
Several nematode antagonists belonging to different taxonomic groups have been described (Stirling, 2014). They can act in three ways: (1) directly, parasitizing several or specific RKN development stages, such as Pasteuria penetrans (Davies et al., 2011); (2) indirectly, by repelling, immobilizing and/or killing them by means of metabolites, and/or by inducing a plant response, such as Fusarium oxysporum strain Fo162 (Dababat and Sikora, 2007a,b); or (3) both directly and indirectly, such as Trichoderma atroviride strain T11 or Trichoderma harzianum strain T-78 (de Medeiros et al., 2017; Martínez-Medina et al., 2017). Pochonia chlamydosporia (syn. Metacordyceps chlamydosporia) is a fungal antagonist of RKN and cyst nematodes that acts directly by parasitizing eggs, and could also act indirectly. This fungal species endophytically colonizes the roots of several plants, including barley (Maciá-Vicente et al., 2009), tomato (Bordallo et al., 2002), potato, or Arabidopsis (Zavala-Gonzalez et al., 2017), inducing plant defense mechanisms, such as the formation of papillae (Bordallo et al., 2002), the modulation of miRNA in tomato (Pentimone et al., 2018), and plant defense genes related to the salicylic acid and jasmonic acid pathways in barley and Arabidopsis (Larriba et al., 2015; Zavala-Gonzalez et al., 2017). It is assumed that some of these defense mechanisms could suppress root infection, development and/or reproduction of RKN (Escudero and Lopez-Llorca, 2012), but as far as we know, only one study has proven the induction of local resistance (de Medeiros et al., 2015), and none has elucidated the capability of this fungal species to induce systemic resistance. As P. chlamydosporia parasitizes RKN eggs, a split-root system is required to determine the capability of this nematophagous fungus to induce plant resistance while avoiding direct interaction with the nematode. Therefore, in the present study, the capability of five P. chlamydosporia isolates to induce plant resistance against M. incognita was assessed in a split-root system. To assess whether the response was plant dependent, tomato and cucumber were used as representatives of solanaceous and cucurbit crops frequently included in rotation schemes.
Plant Material, Nematode, and Fungi
Tomato cv. Durinta and cucumber cv. Dasher II were used in this study. For all the experiments, seeds were surface-sterilized in a 50% sterilized bleach solution (35 g L−1 active chlorine) for 2 min, washed three times in sterilized distilled water for 10 s each, sown in a tray containing sterile vermiculite, and maintained in a growth chamber at 25°C ± 2°C with a 16 h:8 h (light:dark) photoperiod.
Five P. chlamydosporia isolates, M10.41.42, M10.43.21, M10.51.3, M10.55.6, and M10.62.2, were used. The fungal isolates were obtained from RKN eggs at commercial horticultural growing sites in northeastern Spain (Giné et al., 2012) and maintained as single-spore isolates at the Universitat Politècnica de Catalunya. Fungal chlamydospores were produced on barley seeds following the procedure of Becerra and collaborators (Becerra Lopez-Lavalle et al., 2012) with some modifications. Briefly, for each isolate, three 200 g batches of barley seeds were soaked for 18 h, each batch was sterilized in an Erlenmeyer flask at 121°C for 22 min over two consecutive days, and the flasks were then incubated at 25°C ± 2°C in the dark. Afterward, ten 5-mm plugs from the edge of a colony of each P. chlamydosporia isolate grown on CMA were added to each Erlenmeyer flask, and the flasks were shaken at 5-day intervals to homogenize fungal growth. After a month, the number of chlamydospores produced on barley was determined following the procedure of Kerry and Bourne (2002). Three 10-seed subsamples per Erlenmeyer flask were plated onto CMA and incubated at 25 ± 2°C in the dark for 2 weeks to assess putative contaminations prior to use. The viability of the chlamydospores was assessed as in Escudero et al. (2017).
J2 of the isolate Agropolis of Meloidogyne incognita were used as inoculum. Eggs were extracted from tomato roots by blender maceration in a 5% commercial bleach (40 g L −1 NaOCl) solution for 5 min (Hussey and Barker, 1973). The egg suspension was passed through a 74-μm aperture sieve to remove root debris, and eggs were collected on a 25-μm sieve and placed on Baermann trays (Whitehead and Hemming, 1965) at 25 ± 2°C. Nematodes were collected daily using a 25-μm sieve for 7 days and stored at 9°C until their use.
Induction of Systemic Plant Resistance by P. chlamydosporia Isolates Against Root-Knot Nematodes
Tomato and cucumber were grown in a split-root system as described in previous studies (de Medeiros et al., 2017; Martínez-Medina et al., 2017). In this system, the root is divided into two halves transplanted in two adjacent pots: the inducer, inoculated with the antagonist, and the responder, inoculated with the nematode. Briefly, the main root of 5-day-old seedlings was excised and plantlets were individually transplanted in seedling trays containing sterile vermiculite and maintained under the same conditions for 2 weeks for cucumber and 3 weeks for tomato plants. Afterward, plantlets were transferred to the split-root system by splitting roots into two halves planted in two adjacent 200 cm³ pots filled with sterilized sand. Four treatments were assessed for each fungal isolate: Fungi-RKN and None-RKN, to assess the capability of each fungal isolate to induce a plant response against RKN, and Fungi-None and None-None, to assess the effect of each fungal isolate on plant growth. Each treatment was replicated 10 times, and the experiment was conducted twice. The inducer part of the root of the treatments containing Fungi was inoculated with 10⁵ viable chlamydospores of P. chlamydosporia just before transplanting. One week later, the responder part of the root of the treatments containing RKN was inoculated at a rate of 1 J2 per cm³ of soil. The treatment None-None, with no inoculation with either fungus or RKN, received the same volume of water. The plants were maintained in a growth chamber under the same conditions described previously, in a completely randomized design, for 40 days. The plants were irrigated as needed and fertilized with Hoagland solution twice per week. Soil temperatures were recorded daily at 30-min intervals with a PT100 probe (Campbell Scientific Ltd) placed in the pots at a depth of 4 cm. At the end of the experiments, the inducer and responder root fresh weights and the shoot dry weight of each single plant were measured. Roots from the RKN-inoculated responder were immersed in a 0.01% erioglaucine solution for 45 min to stain the egg masses (Omwega et al., 1988) before counting them. Afterward, the eggs were extracted from the roots following the method of Hussey and Barker (1973) and counted. The number of egg masses was considered the infective capability of the nematode because it indicates the number of J2 able to penetrate, infect the root, and develop into egg-laying females. The number of eggs was considered the reproductive capability of the nematode, and the female fecundity was calculated as the number of eggs per egg mass.
The tomato and cucumber root colonization by each fungal isolates was estimated by quantifying the fungal DNA by qPCR at the end of the second experiment. The inducer part of the root was washed three times in sterilized distilled water for 10 s each and then blotted onto sterile paper. Per each fungal isolate three biological replicates were assessed. Each biological replicate consisted of the inducer part of the roots from three plants pooled together. The DNA was extracted from each biological replicate following the Lopez-Llorca et al. (2010)'s procedure. qPCR reactions were performed using the FastStart Universal SYBR Green Master (Roche) mix in a final volume of 25 μl containing 50 ng of total DNA and 0.3 μM of each primer (5′ to 3′ direction) VCP1-1F (CGCTGGCTCTCTC ACTAAGG) and VCP1-2R (TGCCAGTGTCAAGGACGTAG) (Escudero and Lopez-Llorca, 2012). Negative controls containing sterile water instead of DNA were included. Reactions were performed in duplicate in a Stratagene Mx3005P thermocycler (Agilent Technologies) using the following thermal cycling conditions: initial denaturation step at 95°C for 2 min, then 40 cycles at 95°C for 30 s, and 62°C for 30 s. Genomic DNA dilutions of the fungal isolate M10.43.21 were used to define a calibration curve from 5 pg to 50 ng. After each run, the specificity of the PCR amplicons was verified by melting curve analysis and agarose gel electrophoresis. The fungal DNA biomass of each isolate was referred to the total DNA biomass (50 ng) and expressed as a proportion.
Dynamic Regulation of the Jasmonic and Salicylic Acid Pathways by P. chlamydosporia and M. incognita
Tomato seeds were sterilized as previously described. Three-week-old tomato seedlings were transferred to 200 cm³ pots with sterilized sand and maintained in a growth chamber as previously described. The fungal isolate M10.43.21 was selected for this experiment because it reduced nematode infectivity and reproduction in the split-root system experiments. The experiment consisted of two treatments, non-inoculated and co-inoculated, to determine the expression of genes related to the salicylic acid and jasmonic acid pathways. In the co-inoculated treatment, the soil was inoculated with 10⁵ viable chlamydospores just before transplanting and with 1 J2 of M. incognita per cm³ of soil 1 week after transplanting. Each treatment was replicated 40 times. An additional treatment inoculated only with the nematode was included to determine the effect of the fungal isolate on nematode reproduction. The fungus and nematode inoculation procedures were as previously stated.
The expression of the pathogenesis-related protein 1 (PR-1) gene and the lipoxygenase (Lox D) gene, from the salicylic acid and jasmonic acid pathways, respectively, was evaluated at three time points: just after nematode inoculation, that is, at 0 days after nematode inoculation (dani), at 7 dani, and at 42 dani. At each assessment time, roots were washed three times with sterile distilled water, placed onto sterilized filter paper, frozen in liquid nitrogen and stored at −80°C until use. At 7 dani, the J2 were stained inside the roots with acid fuchsin following the Byrd et al. (1983) procedure to confirm that the nematode had penetrated and infected the root. At the end of the experiment (42 dani), the number of eggs per plant was determined for three plants per treatment by extracting them as described previously.
Total RNA from roots was isolated using the PureLink RNA Mini Kit (Invitrogen), according to the manufacturer's instructions. Afterward, the DNA-free kit (Invitrogen) was used to remove the remaining DNA from the sample. Total RNA integrity and quantity of the samples were assessed by means of agarose gel, NanoDrop 1000 Spectrophotometer (Thermo Scientific) and Qubit RNA BR assay kit (Thermo Fisher Scientific).
To assure that the sample was DNA free, a PCR was carried out. Then, the RNA was retro-transcribed with the SuperScript II (Invitrogen) according to manufacturer's instructions. The relative gene expression was estimated with the ∆∆Ct methodology (Livak and Schmittgen, 2001), using the ubiquitin (UBI) gene as a reference gene (Song et al., 2015).
Statistical Analysis
Statistical analyses were performed using the JMP software v8 (SAS institute Inc., Cary, NC, USA). Both data normality and homogeneity of variances were assessed. When confirmed, a paired comparison using the Student's t-test was done. Otherwise, paired comparison was done using the non-parametric Wilcoxon test, or multiple comparison using the Kruskal-Wallis test and groups separated by Dunn's test (p ≤ 0.05). The repetitions of the split-root experiments for each crop were compared using the non-parametric Wilcoxon test, and considered as one experiment when no differences (p ≤ 0.05) were found.
Induction of Systemic Plant Resistance by P. chlamydosporia Isolates Against M. incognita
The split-root system experiments with tomato differed (p < 0.05) between them and were treated separately. But no differences were found between the two split-root experiments with cucumber and thus we considered them as replicates of a single experiment. Both tomato and cucumber fresh root weight of the two halves of the split-root system of the None-None treatment did not differ (p < 0.05) (data not shown), showing that the split-root system did not influence root development. Shoot dry biomass did not differ in any fungal isolate-plant species combination (p < 0.05) (data not shown).
Two of the five P. chlamydosporia isolates induced resistance in tomato plants in both experiments, but none of them did in cucumber (Table 1). The fungal isolate M10.43.21 reduced the number of egg masses per plant (32-43%), the number of eggs per plant (44-59%), and the female fecundity (14.7-27.6%), while the isolate M10.55.6 reduced the number of eggs per plant in both experiments (35-47.5%) but the number of egg masses or the female fecundity in only one.
P. chlamydosporia isolates differed in the level of root colonization estimated by qPCR, irrespective of the plant species (Figure 1). The standard curves for qPCR, obtained by plotting the cycle thresholds (Ct) against the log of a 10-fold serial dilution of DNA from isolate M10.43.21, were accurate and reproducible for estimating the DNA concentration in the different treatments (tomato: y = −3.66x + 19.36, R² = 0.9736; cucumber: y = −3.4937x + 21.29, R² = 0.9947). Regarding tomato, isolate M10.43.21 was the best root colonizer, followed by M10.55.6 (Figure 1A). In cucumber, isolate M10.55.6 was the best root colonizer, followed by M10.43.21 (Figure 1B). The fungus was not detected in the inducer part of the root from treatments None-None and None-RKN.
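As an illustration of how such a standard curve converts Ct values into fungal DNA quantity, the sketch below uses the tomato curve coefficients reported above and an illustrative Ct value; it assumes the x-axis of the curve is log10 of the DNA amount in ng.

```python
slope, intercept = -3.66, 19.36          # tomato standard curve: Ct = slope*log10(DNA, ng) + intercept

def dna_from_ct(ct):
    """Fungal DNA (ng) in the 50 ng template, interpolated from the standard curve."""
    return 10 ** ((ct - intercept) / slope)

efficiency = 10 ** (-1 / slope) - 1      # implied amplification efficiency, ~0.88 (88%)
print(dna_from_ct(22.0) / 50)            # proportion of P. chlamydosporia DNA for an illustrative Ct of 22
```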
Dynamic Regulation of the Jasmonic and Salicylic Acid Pathways by P. chlamydosporia and M. incognita in Tomato
Changes in the expression of genes PR-1 and Lox D from the salicylic acid and jasmonic acid pathways at 0, 7, and 42 dani are shown in Figure 2. The expression of the PR-1 gene in roots inoculated with the fungal isolate M10.43.21 was upregulated at 0, 7, and 42 dani compared to the non-inoculated plants (Figures 2A-C). Regarding the jasmonic acid pathway (Figures 2D-F), the gene Lox D was only upregulated at 7 dani. The nematode reproduction in plants co-inoculated with the fungal isolate was suppressed by 60% compared to plants only inoculated with the nematode (Figure 3).
DISCUSSION
The results of this study provide evidence that some P. chlamydosporia isolates can systemically induce resistance against M. incognita, and that this induction depends on the plant species. The isolate M10.43.21 showed the most consistent response in both split-root experiments with tomato, which is why it was selected to study hormone modulation in this plant species. The mechanisms responsible for endophyte-induced resistance are unclear (Schouten, 2016). Both salicylic acid- and jasmonic acid-dependent signaling pathways have been proposed as responsible for systemically induced resistance to Meloidogyne spp. in tomato in split-root experiments (Selim, 2010; de Medeiros et al., 2017; Martínez-Medina et al., 2017). Martínez-Medina et al. (2017) reported that Trichoderma harzianum T-78 induced the upregulation of salicylic acid-related genes at an early stage of nematode infection, whereas jasmonic acid-related genes were upregulated from 3 to 21 days after nematode inoculation. In our study, the P. chlamydosporia isolate primed the salicylic acid pathway from the first assessment time (7 days after fungal inoculation and just after nematode inoculation) until the end of the experiment (42 dani). This effect could be responsible for suppressing nematode infection at early stages, as well as infection by the J2 produced from the primary inoculum that had been able to overcome the plant defense mechanisms. In addition, the upregulation of the jasmonic acid-related Lox D gene at 7 dani could affect nematode reproduction and fecundity. Thus, the induction of the salicylic acid and jasmonic acid signaling pathways by the fungal isolate M10.43.21 in tomato counteracts the suppression of these phytohormones by the nematode described in the susceptible tomato-nematode interaction (Shukla et al., 2018). The three-phase model proposed to explain the protection against RKN induced by Trichoderma harzianum T-78 in tomato, consisting of salicylic acid induction suppressing RKN infection, followed by jasmonic acid induction suppressing RKN reproduction and fecundity, and finally salicylic acid induction limiting root infection by the next J2 generation (Martínez-Medina et al., 2017), is thus also valid for P. chlamydosporia. Other local plant defense mechanisms induced by P. chlamydosporia against RKN have been reported, including increased peroxidase (POX) and polyphenol oxidase (PPO) enzyme activities at the root invasion stage of the nematode (24-96 h after nematode inoculation) (de Medeiros et al., 2015). However, considering that P. chlamydosporia does not extensively colonize the root (Maciá-Vicente et al., 2009; Escudero and Lopez-Llorca, 2012), even when colonization is enhanced by chitosan irrigation, the effect of local defense mechanisms alone may be insufficient to achieve significant nematode suppression.
Not all P. chlamydosporia isolates induced systemic resistance in tomato. This variability among fungal isolates adds to that previously reported for other traits, such as chlamydospore production, root colonization, plant growth promotion, or egg parasitism (Kerry and Bourne, 2002; Zavala-Gonzalez et al., 2015). In fact, egg parasitism can differ even between single-spore isolates originating from the same field population (Giné et al., 2016). The frequency of occurrence of P. chlamydosporia in horticultural production sites under integrated and organic standards has increased since the 1990s in northeastern Spain (Giné et al., 2012), showing that this fungal species is adapted to the local environmental characteristics and agronomic practices. Field populations of P. chlamydosporia can contain individuals representing a diversity of functions that are highly beneficial to plants: plant growth promoters that enhance plant tolerance; inducers of plant defense mechanisms that suppress infection, development and reproduction of RKN; efficient egg parasites that reduce the RKN inoculum; or saprophytes contributing to the organic matter cycle and plant nutrition. A given proportion of P. chlamydosporia individuals representing some or all of these functions could be present in a given soil, adapted to the plant species in the rotation scheme, and contribute to their health status. Most of these functions do not appear to be interlinked; in fact, none of the five fungal isolates assessed in our study was a plant growth promoter. Thus, molecular tools should be developed to characterize the functional composition of the fungal field population in a given soil.
P. chlamydosporia has been reported as the main biotic factor responsible for soil suppressiveness to RKN in horticultural crops (Giné et al., 2016). In soils with low antagonistic potential, the use of fungal isolates with both direct and indirect action mechanisms could suppress RKN. Indeed, primed plants along with egg parasitism will protect against infection and reproduction of RKN and decrease the inoculum viability. The combination of the two modes of action will result in a decrease of the nematode population growth rate, and consequently lower crop yield losses. Alternatively, combining the use of P. chlamydosporia with plant defense activators can produce a similar effect. Vieira Dos Santos et al. (2014) found a reduction in RKN reproduction when P. chlamydosporia was combined with the application of cis-jasmone, as well as an increase in fungal egg parasitism.
In conclusion, this study shows that some P. chlamydosporia isolates induce systemic resistance to M. incognita in tomato, but none of them did so in cucumber; this response is therefore plant species dependent. In future studies, the interaction between P. chlamydosporia isolates and selected economically important crops should be characterized to elucidate the mechanisms and genes involved in inducing plant resistance, in order to maximize the efficacy of control.
DATA AVAILABILITY
All datasets generated for this study are included in the manuscript and/or the supplementary files.
AUTHOR CONTRIBUTIONS
FS and NE conceived, designed, supervised the experiments, the data collection, and analyses. ZG performed the experiments, analyzed the data, and wrote the draft of the manuscript. NE and ES performed the gene expression analyses. TG provided reagents, materials, and advice. NE, ES, TG, and FS reviewed and wrote the final draft of the manuscript.
FUNDING
This study was supported by projects AGL2013-49040-C2-1-R and AGL2017-89785-R financed by the Spanish Ministry of Economy and Competitiveness (MINECO) and the European Regional Development Fund (FEDER).
ACKNOWLEDGMENTS
Thanks are given to Ms. Sheila Alcala and Ms. Maria Julià for their technical support.
"year": 2019,
"sha1": "fa88cca43eb85f53eb7628001c9f61bf3c77ebe5",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2019.00945/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa88cca43eb85f53eb7628001c9f61bf3c77ebe5",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Pilot study of a noninvasive real-time optical backscatter probe in liver transplantation
Transplantation of severely steatotic donor livers is associated with early allograft dysfunction and poorer graft survival. Histology remains the gold standard diagnostic of donor steatosis despite the lack of consensus definition and its subjective nature. In this prospective observational study of liver transplant patients, we demonstrate the feasibility of using a handheld optical backscatter probe to assess the degree of hepatic steatosis and correlate the backscatter readings with clinical outcomes. The probe is placed on the surface of the liver and emits red and near infrared light from the tip of the device and measures the amount of backscatter of light from liver tissue via two photodiodes. Measurement of optical backscatter (Mantel–Cox P < 0.0001) and histopathological scoring of macrovesicular steatosis (Mantel–Cox P = 0.046) were predictive of 5‐year graft survival. Recipients with early allograft dysfunction defined according to both Olthoff (P = 0.0067) and MEAF score (P = 0.0097) had significantly higher backscatter levels from the donor organ. Backscatter was predictive of graft loss (AUC 0.75, P = 0.0045). This study demonstrates the feasibility of real‐time measurement of optical backscatter in donor livers. Early results indicate readings correlate with steatosis and may give insight to graft outcomes such as early allograft dysfunction and graft loss.
Introduction
Demand for liver transplantation continues to rise with the increase in liver failure seen in the UK [1] being mirrored globally, with over a million people dying of cirrhosis worldwide every year [2]. Liver transplantation remains the only effective treatment for end-stage disease, providing an average of 17-22 years of additional life [1,3,4]. Despite rationing access to the UK waiting list [5], 11% of patients listed in 2016/17 had died and 7% had been removed from the waiting list within 2 years [6]. This figure may increase given the organ shortages occurring currently as a consequence of COVID-19.
The shortfall in donor organs has led to an increase in the use of extended criteria (or marginal) grafts, which are associated with higher rates of early allograft dysfunction and primary nonfunction (PNF) [7,8]. While the concept of these extended criteria livers for transplantation is less well defined than in kidney transplantation, donor factors include graft steatosis, donation after circulatory death (DCD), prolonged ICU stay and older age. The decision about whether to accept a more marginal graft for implantation for a given recipient has to be offset against an increased waiting list mortality from waiting longer for a more optimal graft [9]. Although the visual appearance of the liver (a coarse proxy for steatosis) is known to be associated with poorer outcome [8], predictive models for graft failure have erred away from its inclusion due to its subjective nature and the limitations of any categorical descriptions [8,10].
Transplantation of severely steatotic donor livers is associated with a higher incidence of postoperative complications, early allograft dysfunction (EAD), primary nonfunction (PNF) and prolonged ICU stay, as well as poorer 1-year graft survival [11][12][13][14][15]. Preprocurement ultrasound is unable to accurately or reliably predict the degree of steatosis [22], and while cross-sectional imaging by MRI or CT may allow for more objective quantification of hepatic steatosis [12,23,24], it is difficult to envisage that this will be widely available or cost-effective for the assessment of grafts in such a time-constrained situation. This leaves the surgeon's assessment of steatosis using a combination of visual inspection and palpation, which is unreliable and open to significant bias [22]. An accurate and reproducible real-time test for assessing the degree of steatosis in a donor organ is essential to facilitate safe transplantation, aid research into new models for accurately predicting PNF and EAD, and facilitate informed discussions with patients about risk.
We have previously demonstrated in a preclinical study using optical spectroscopy techniques that backscatter of red and near infrared light from immediately beneath the liver surface showed a correlation coefficient of 0.85 in humans when referenced to clinical haematoxylin and eosin (H&E)-stained biopsies [25]. This led us to develop a portable handheld device which, when placed against the surface of the donor liver, allowed us to evaluate the degree of hepatic steatosis. Here, we report on the pilot study correlating the device's readings with liver transplant outcomes.
Materials and methods
This was a prospective observational cohort study of consecutive patients undergoing liver-only transplantation at Addenbrooke's Hospital, Cambridge, between August 2011 and May 2014; split liver transplants were excluded. All consenting patients who underwent transplantation of a liver alone within the study period were included in the study. Outcome data were collected along with other factors that might predict EAD/PNF, such as donor type, donor age and ischaemic time. EAD was defined using both the binary Olthoff criteria [26] and the continuous Model of Early Allograft Function (MEAF) scale [27,28]. Primary nonfunction (PNF) was defined as poor graft function necessitating retransplantation or culminating in death within 14 days, excluding rejection and vascular thrombosis. Cold ischaemic time was defined as the time between commencement of cold perfusion in the donor and reperfusion in the recipient.
The implanting surgeon, blinded to the optical backscatter readouts, was asked to grade the degree of steatosis as none, mild, moderate or severe based on visual inspection and palpation during preimplantation benchwork; these categories form part of the returns used by the National Health Service Blood and Transplant organisation in the UK.
The probe
In previously published work, we have described in detail the principles of and technology underpinning red and near infrared light backscatter measurements to assess hepatic steatosis [25]. Briefly, a custom-made diffuse reflectance (DF) optical fibre probe attached to a spectrometer was used to measure tissue absorbance and backscatter by the liver.
For this work, a compact and portable handheld probe was developed (Medicines & Healthcare products Regulatory Agency (MHRA) approval CI/2011/0004). The probe emits red and near infrared light from the tip of the device via two light-emitting diodes (LEDs) linked via optical fibres (see Figure S1) and measures the amount of backscatter of light from approximately 2mm into the liver tissue via two photodiodes, also coupled via optical fibres to the tip of the device. Measurements are taken by placing the device against the surface of the liver, with the amount of backscatter represented on a digital display in arbitrary units. The readings are automatically logged to memory in the device, with a timestamp, for later upload to a PC. To ensure a sterile measurement, the tip of the device includes a disposable cap, which is attached to the probe before use.
The probe was placed against the surface of the liver and readings were taken from 4 pre-specified sites (2 on the right lobe, 2 on the left) from each donor liver during retrieval, pre-implantation benchwork and following reperfusion. The mean reading from the 4 sites was used in the analysis. The absorption largely reflects blood within the liver, while scattering is specific for the size, density and cellular constituents of tissue. We have previously shown that differences in scatter between livers of different patients correlates with differences in the lipid content (steatosis) [25], but equally other cellular processes may affect backscatter.
Histopathology
Pre-implantation core biopsies of the donor livers were taken on the backtable. These biopsies were immediately split, with half being snap frozen and stored at −80°C to subsequently allow frozen sections to be cut for Oil Red O staining, and the remainder formalin-fixed and processed to paraffin, with sections cut and stained with haematoxylin and eosin (H&E). As previously described [25,29], the extent of steatosis and reperfusion injury were scored by a histopathologist with a special interest in liver disease, blinded to the macroscopic description from the surgeon and to the optical backscatter results. Macrovesicular steatosis was further subdivided as either large or small droplet, in line with others [30,31]. Large droplet macrovesicular steatosis was characterised by a single large fat droplet in the hepatocyte cytoplasm, displacing the nucleus to the edge of the cell. This was quantified based on the percentage of large droplet fat occupying the surface area of the parenchyma and given a score of 0 to 3 (Table S1), subdivided as none or mild (score 0 to 1) or moderate to severe (score greater than or equal to 2). Small droplet macrovesicular steatosis (termed vacuolation by ourselves) consists of multiple small and tiny lipid droplets in the cytoplasm, all smaller than the nucleus, which retains its central position; this was graded with a score of 0 to 2 depending on its extent (Table S1) and termed none to mild (score 0 to 1) or moderate to severe (score 2). The total fat score was the sum of the small and large droplet scores, with none to mild (score 0 to 2) or moderate to severe (score greater than or equal to 3). Microvesicular steatosis refers to the intracytoplasmic accumulation of tiny vesicles within hepatocytes and is a result of severe mitochondrial dysfunction [32]. Given that it is highly unlikely that a liver from an affected patient would even be considered as a donor liver [30], and that it is typically seen in < 1% of transplanted livers [33], we did not formally score the degree of microvesicular steatosis.
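A small sketch of the scoring arithmetic described above (the total fat score as the sum of the large and small droplet scores, with the categorical cut-offs quoted in the text); it is only a restatement of the published rules for clarity, not code used in the study.

```python
def classify_steatosis(large_droplet, small_droplet):
    """Combine large (0-3) and small (0-2) droplet macrovesicular scores."""
    assert 0 <= large_droplet <= 3 and 0 <= small_droplet <= 2
    total = large_droplet + small_droplet
    return {
        "large": "none/mild" if large_droplet <= 1 else "moderate/severe",
        "small": "none/mild" if small_droplet <= 1 else "moderate/severe",
        "total_score": total,
        "total": "none/mild" if total <= 2 else "moderate/severe",
    }

print(classify_steatosis(large_droplet=2, small_droplet=1))
```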
Statistics
Groups were analysed with the aid of Prism 8 for Mac OSX (GraphPad Software, La Jolla, USA); statistical methods are referred to specifically in the Results section. Briefly, transplant characteristics were compared using Fisher's exact test, the chi-squared test or the Mann-Whitney test, as appropriate. Concordance was assessed using the method described by Lin [34]. Unadjusted graft and patient survival were displayed using Kaplan-Meier plots and curves compared by Mantel-Cox analysis.
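As an illustration of the concordance statistic cited above, the sketch below computes Lin's concordance correlation coefficient for two sets of paired readings; the readings are hypothetical, and the analyses in the study were performed in Prism.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                     # population (biased) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical backscatter readings taken in the donor vs. on the backtable
donor     = [80, 95, 110, 130, 150, 175, 200]
backtable = [85, 92, 115, 128, 155, 170, 210]
print(f"CCC = {lin_ccc(donor, backtable):.3f}")
```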
Ethical approval
Prospective ethical approval was granted for the project by the Regional Ethics Committee (Ref: 10/H0308/94). The study was conducted in accordance with the Declaration of Helsinki and the principles of Good Clinical Practice. No organs from executed prisoners were used in this study. Hospital in the study period and consented to the study. No patients had undergone any form of machine perfusion in this study. Donor and recipient demographics and outcomes are summarised in Supplemental Tables 2 and 3.
The overall 1- and 5-year patient survival was 91.7% and 86.2%, respectively, while the 1- and 5-year death-censored graft survival was 92.6% and 89.2% in this mixed cohort of DCD and DBD grafts (Figure S2). The 1- and 5-year graft and patient survival data are summarised in Table 1.
Backscatter readings
There was excellent concordance between optical readings taken in the donor, on the backtable and post-reperfusion, with concordance greater than 0.85 (data summarised in Table 2). There were no grafts transplanted with a histological large droplet macrovesicular steatosis score of 3 in this study. While there was no significant difference in patient or graft survival depending upon the severity of large droplet macrovesicular steatosis (Table 1 and Fig. 1a, Mantel-Cox P = 0.70 and P = 0.24), there was a significant decrease in survival of patients receiving allografts with more severe small droplet macrovesicular steatosis (Fig. 1b, P = 0.046 and P = 0.041). A higher total macrovesicular steatosis score in the donor organ was associated with poorer graft survival (P = 0.046), but not patient survival (P = 0.098).
Backscatter readings were significantly higher in grafts with more extensive large droplet (P < 0.0001) and small droplet macrovesicular steatosis (P = 0.0001) (Fig. 1). Oil Red O staining also strongly correlated with backscatter readings (Pearson's r = 0.53, 95% CI 0.35-0.67; R² = 0.28, P < 0.0001). In general, increased severity of steatosis as judged by the surgeon was associated with an increase in both the large (P = 0.0002) and small droplet macrovesicular steatosis histological scores (P = 0.0007, Figure S3), although the overall concordance between the surgeon and histology was relatively poor (coefficient 0.41).
Graft survival and early allograft dysfunction
In those grafts with EAD defined according to the Olthoff criteria, the 1-year graft survival was 86.5% compared with 94.5% in those without EAD (Fig. 2a); the backscatter reading was significantly greater in these livers (P = 0.0067).
Acute kidney injury after liver transplantation
The development of acute kidney injury after liver transplantation was associated with a reduction in 1-year graft survival from 97.7% to 90.2% and in patient survival from 93.1% to 86.8% (see Table 1 and Figure S4). The MEAF score was not significantly different between the two groups (P = 0.09), but the backscatter reading was significantly higher (P = 0.0027). In a simple logistic regression for graft loss, backscatter on its own performed similarly at predicting graft loss (AUC 0.75 (0.58-0.91), P = 0.0045) (Fig. 5). The odds ratio for graft loss was 1.004 (95% CI 1.00-1.01) for every unit increase in backscatter.
Discussion
Here we demonstrate that real-time measurement of the backscatter of red and near infrared light from the liver, whilst in the donor, on the backtable and after implantation in the recipient, is a feasible approach to assessing in real time the degree of hepatic steatosis in the setting of liver transplantation. As we had previously seen in a preclinical study of murine and human liver specimens [25], backscatter strongly correlated with the extent of hepatic steatosis as determined by Oil Red O staining (Pearson's r = 0.53, P < 0.0001) and as scored by a transplant histopathologist (Fig. 2). While increased severity of steatosis as judged by the surgeon was associated with an increase in both the large and small droplet macrovesicular steatosis histological scores (Figure S3), the overall concordance between the surgeon and histology was relatively poor (coefficient 0.41). This probe, therefore, may help to overcome the inherent problem of inter-observer bias seen when relying on arbitrary macroscopic inspection by a surgeon or microscopic evaluation by a histopathologist. While both may remain important within an individual centre, they prevent standardisation of reporting of the degree of steatosis in research and across clinical trials, where heterogeneity in approach and inter-observer bias can make outcomes difficult to interpret [18,22].
As well as correlating with the extent of steatosis, measurement of optical backscatter correlated with early allograft dysfunction according to both the Olthoff and MEAF parameters (Fig. 2) as well as with acute kidney injury post-liver transplantation (Figure S4). More complex multiple logistic regression analysis of these data is limited by the sample size; however, we demonstrated in principle how backscatter could be incorporated into a predictive algorithm, in this case using donor factors identified at the time of procurement/implantation, with graft failure as the endpoint. We demonstrated that backscatter measurements were predictive of graft loss (Fig. 5). Further evaluation will require large numbers from a multi-centre study to validate or refute these findings and to incorporate both donor and recipient factors, as well as other novel readouts, into a highly predictive algorithm that will help quantify the risk of a given allograft to a particular recipient. Increased backscatter was not predictive of patient survival, in part due to the ready availability of early retransplantation at that time in the UK, and also potentially because of the small sample size.
While we utilised a categorical scoring system for assessing the extent of steatosis, others have recently developed a digital algorithm to quantify steatosis in tissue sections, which may make histological evaluation in future clinical studies more sensitive [35], though its usefulness in preimplantation decision-making may be limited by its inherent retrospective nature, and also the time taken to prepare and scan a sample.
Evers et al. have also previously demonstrated good concordance with the histological quantification of fat by a similar approach in the context of liver resection surgery [36]. Fibroscan, CT and MRI have also been used successfully as noninvasive tools for quantifying fat in the field of nonalcoholic fatty liver disease [37], but their role may be limited in the setting of liver transplantation by issues of portability, 24-hour availability across all potential donor hospitals, national laws about pre-mortem interventions in donors, and cost. Other groups have also demonstrated the effectiveness of analysis of smartphone photographs and digital analysis software to assess the extent of macrovesicular steatosis.

Figure 2 Early allograft dysfunction by Olthoff criteria and MEAF grouping. In those grafts with EAD defined according to the Olthoff criteria, the 1-year graft survival was 86.5% compared to 94.5% in those that did not meet the criteria (a); this difference was not statistically significant (summarised in Table 1), although this may be related to sample size. The backscatter readings were significantly higher in those livers meeting the Olthoff criteria (P = 0.0067) (a). When looking at survival by MEAF grouping, those with a score less than 4 had 100% 1-year patient and 96.7% 1-year graft survival, compared to patient and graft survival of 66.7% and 50%, respectively, for those with a score greater than 9 (b). Increasing MEAF was associated with worse graft survival (Mantel-Cox P = 0.031) but not patient survival (P = 0.55). A higher MEAF score was associated with significantly greater backscatter (Kruskal-Wallis P = 0.0097) (b). Increased backscatter readings correlated with increased MEAF scores (Pearson's r = 0.33 (95% CI 0.14 to 0.49), R² = 0.11, P = 0.0011) (c); this was particularly true in organs from DCD donors (r = 0.77 (95% CI 0.50 to 0.90), R² = 0.59, P < 0.0001) compared to DBD donors (r = 0.23, 95% CI 0.0019 to 0.43).

While this study utilises a custom-built prototype, it is likely that a commercially available device could be developed and would prove to be cost-effective, while not causing an increase in cold ischaemia (in contrast to biopsy examination), and could be utilised in centres that do not have 24-hour access to a transplant histopathologist. It also avoids the potential risk of bleeding or bile leak from the biopsy site.
Figure 3 Backscatter by surgeon assessment. Surgeon assessment of grafts did not predict differences in 1-year graft (93.8%) or patient (91.1%) survival in those deemed 'healthy' compared to those deemed 'suboptimal' (93.0% and 93.0%, respectively) (a). There was a significant difference in backscatter readings between 'suboptimal' and 'healthy' livers (Mann-Whitney P = 0.0004) (a). No grafts deemed to be severely steatotic by the implanting surgeon were implanted during this study. One-year graft survival for those deemed not to be fatty was 96%, compared to 93.1% (mildly steatotic) and 86.7% (moderately steatotic), with corresponding 1-year patient survival of 92.1%, 90.0% and 86.7% (b). The backscatter reading was significantly higher in the moderately steatotic livers compared to those with minimal or no fat (Kruskal-Wallis P = 0.0054) (b).

While declining the offer of a steatotic liver has been shown to increase an individual's waiting list mortality [41], the unpredictable response of steatotic livers to reperfusion, with an increased severity of ischaemia-reperfusion injury (IRI) and subsequently increased rates of PNF, EAD and post-liver transplant acute kidney injury, means that there is an understandable reluctance to routinely transplant such livers [21,42]. As the demand for organ transplantation remains high and the epidemic of obesity in the West is resulting in higher rates of steatosis in donor organs [43,44], we will inevitably need to implant more steatotic livers in the future. Others have shown that one possible solution to overcoming the excess risk of a steatotic organ is to allocate steatotic organs to 'preferred recipients' (defined as first-time recipients with a MELD of 15-34, without primary biliary cirrhosis and not on life support prior to transplantation), as these recipients have no significant increase in mortality or graft loss when receiving steatotic compared to nonsteatotic livers [45]. While their study was necessarily performed retrospectively on biopsies using registry data, one could envisage that backscatter in the donor or on the backtable could be used prospectively to guide these decisions, without the need to wait for biopsy results. The continuous nature and spread of the potential backscatter data is also such that the allocation process could be less dichotomous, identify a greater range of potential recipients that would benefit from a given organ, and help identify risk in each individual.
As well as matching the 'high risk' organ to the 'low risk' recipient, another strategy to mitigate the excess risk of steatotic livers is to utilise novel technology or therapeutics to identify which livers are safe to implant and which need some other intervention, such as ex situ machine perfusion [46][47][48][49][50], that is, personalising or targeting therapy for a given donor liver. Assessment of backscatter seems to be one such potential objective strategy to stratify risk and, rather than being used in isolation, we envisage that optical backscatter measurements would be incorporated into a more complex model utilising all available data on the donor and recipient (Fig. 6) to really inform the patient and surgeon about personalised risk, especially when coupled with better modelling of individual waiting list mortality. With the advent of machine perfusion, these readings could be used by the retrieval centre and/or national organ allocation service to stratify organs into 'safe to transplant', 'not safe to transplant', 'needs further viability testing' [48] and, in the future, an additional arm that would recommend directed therapy (see Fig. 6). Furthermore, this technology may help validate the effectiveness of ex situ 'defatting' strategies in a given liver undergoing machine perfusion that are currently being developed [51,52], without the need for serial biopsies.

Figure 4 Backscatter readings and survival. Kaplan-Meier plots of graft and patient survival comparing cohorts of patients with backscatter readings greater or less than 100. Backscatter readings > 100 were associated with worse graft survival (Mantel-Cox P < 0.0001), but not patient survival (P = 0.085) (summarised in Table 3).
In conclusion, the data from this pilot study are promising but need more extensive validation alongside other novel noninvasive real-time approaches, to generate robust data to support their further use and/or generate more accurate predictive models. If further validated, measuring optical backscatter in donor livers may have a role in the safe allocation of livers for transplantation and inform discussions between clinicians and patients about the risk of a given donor organ [53]. In addition, it may allow increased utilisation by helping to determine a subset of livers requiring specific pre-treatments before transplantation, such as targeted drug therapy or defatting during ex situ perfusion [54,55].
Authorship
JR, LR, AB, CW and PR have made a substantial contribution to the conception, data collection, analysis and writing of this manuscript. PR designed and produced the probe. SD performed histological analyses and aided with analysis and writing of this manuscript. JM and AF made a significant contribution to the data collection.
Funding
We would like to acknowledge support from the Evelyn Trust who funded this study as well as the National Institute for Health Research (NIHR) Blood and Transplant Research Unit (NIHR BTRU) in Organ Donation and Transplantation at the University of Cambridge in collaboration with Newcastle University and in partnership with NHS Blood and Transplant (NHSBT). JR was also supported by a NIHR Academic Clinical Lectureship. The Human Research Tissue Bank is supported by the NIHR Cambridge Biomedical Research Centre. The University of Cambridge has received salary support in respect of CJEW from the NHS in the East of England through the Clinical Academic Reserve.
Conflict of interest
While working on this study, LR was wholly employed by the University of Cambridge / Cambridge University Hospitals. She has subsequently moved to work for the machine perfusion device company OrganOx, who have no commercial or other interest in this body of work, and as such LR feels that there is no significant conflict of interest to declare. The remaining authors have no other conflicts of interest to declare.

Figure 5 Logistic regression analysis of graft loss. Multiple logistic regression modelling of factors known pre-implantation (donor age, donor BMI, donor type and backscatter reading) was performed and used to generate a model to predict graft loss: Logit[P(graft loss)] = (donor age × 0.05853) + (donor BMI × −0.01520) + (donor type (0 = DBD, 1 = DCD) × −0.01323) + (backscatter × 0.009219) − 5.817. Receiver operating characteristic (ROC) curves of this analysis were plotted (a), and area under the curve (AUC) analysis showed the model to perform reasonably well (AUC 0.75 (95% CI 0.60-0.90), P = 0.0072). Simple logistic regression for graft loss using backscatter alone was also performed and the ROC curve plotted (b); this performed similarly at predicting graft loss (AUC 0.75 (95% CI 0.58-0.91), P = 0.0045).
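As a worked illustration of the published multivariable model quoted in the Figure 5 legend above, the sketch below evaluates the logit and converts it to a predicted probability of graft loss; the example covariate values are hypothetical and are not taken from the study cohort.

```python
import math

def graft_loss_probability(donor_age, donor_bmi, dcd, backscatter):
    """Predicted probability of graft loss from the multivariable model quoted
    above (coefficients as published; dcd: 0 = DBD, 1 = DCD)."""
    logit = (0.05853 * donor_age
             - 0.01520 * donor_bmi
             - 0.01323 * dcd
             + 0.009219 * backscatter
             - 5.817)
    return 1.0 / (1.0 + math.exp(-logit))   # inverse-logit

# Hypothetical donor: 55 years old, BMI 27, DCD, mean backscatter reading 120
print(f"P(graft loss) ~ {graft_loss_probability(55, 27, 1, 120):.2f}")
```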
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of the article. Figure S1. Diffuse reflectance (DF) optical fibre probe. Figure S2. Overall Graft and Patient survival. Figure S3. Histological Scoring and Surgeon's Assessment of steatosis. Figure S4. Histological Scoring and Surgeon's Assessment of steatosis. Table S1. Histological Scoring. Table S2. Donor demographics. Table S3. Recipient details.
"year": 2021,
"sha1": "34931d832874b9336b46563092835acf055974f3",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/tri.13823",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "41eab1697c69ca0a00c2a8cc3d61c92bb05c715d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Early Modern Russian State, "Tsar's Discourse" and Russian Orthodox Church in the XV-XVII Centuries
The article examines some particular aspects of the formation of the centralized Russian state as part of the pan-European process of early modern state formation. The authors focus on the problem of the Tsar's discourse, understood as the set of expectations a true Orthodox monarch was required to meet. The Orthodox Church took an active part in shaping this "discourse", and compliance with its requirements guaranteed the supreme authority the loyalty of its people and their readiness to carry out its demands. The Church thereby gave the Grand Dukes a powerful tool to complement secular practices of legitimation.
Introduction
The period from the second half of the 15th century to the middle of the 16th century is the time when the long process of formation of the early modern Russian state came to an end. In Russian historical literature this state is usually called "centralized", although in recent years historians both in Russia and abroad have strongly questioned how far this term fits the realities of the late Middle Ages and early modern times. Nevertheless, the question should not be closed before the real changes in the political, administrative, judicial and ideological spheres of early modern society and the state are examined, whether the state concerned is Russian, French, Spanish or any other.
These changes transformed both the appearance and the internal structure of the early modern states, and they were gradual and evolutionary in character, dragging on for quite a long time. This evolutionary character is perhaps one of the most important and characteristic features of the founding of these political entities. The gradualness, a kind of "obscurity" of the changes, guaranteed a certain continuity of political, administrative and legal traditions [see, for example: 1; 2]. Given the conservatism of thinking in the society of that time and its suspicion of unexpected, sharp steps that broke the habitual course of life, such a slow scenario guaranteed (to a certain extent) the absence of "great shocks" and ultimately helped society reconcile itself with gradually maturing innovations.
An important role in these processes was played by the ideological factor: the ideological justification and legitimation of these gradual changes [see, for example: 3]. In the realities of that time this function was assigned to religion and to the social and political institutions that served it and acted as carriers of its values and ideologems. In early modern Russia this role was carried out by the Russian Orthodox Church, which shaped and maintained the Tsar's Discourse, without which the proper functioning of the weak, insufficiently developed state machinery would have been extremely difficult, if possible at all.
Methodology
Analyzing the forming features of the "centralized" Russian state in the second half of the XV-XVI centuries, we recognized that its "centrality" was rather a certain desirable purpose, but not political, legal and the more so economic reality of that time, and it was characteristic not only of Russia, but also other early modern states.In general we agree with opinion of the American historian N. Kollmann that characterizing features of a political and legal regime of the Russian state during an era of early modern times, she specified that its "state-building experience was part of a broader early modern continuum of change" that "Russia's centralization can be seen in a continuum of normal early modern state-building" in itself [3, p.1].In general, the same "continuum" keeps within a framework, even taking into consideration local regional differences, the "composite state" model offered by the British historians G. Kenigsberger and J. Elliot [4; 5].
Owing to the administrative weakness and some other, the reasons new political, administrative, judicial, legal and other practices could not approve early modern monarchy quickly and effectively, force them to function effectively and were forced to be reconciled with preservation of a set of remnants of "old times".The new order was spread gradually, from above, by means of several "strategy".It is possible to carry "addin" over old, traditional administrative structures.Second "strategy" assumed flirting of the supreme power with local political elite, seeking to keep their loyalty by means of preservation of a considerable part of their "liberties" and attraction to government both on local, and on central levels.As result, "composite monarchies were built on a mutual compact between the crown and the ruling class of their different provinces which gave even the most arbitrary and artificial of unions a certain stability and resilience" [5, p.57].
So, key situation in this quote -"mutual consent between a crown and ruling elite".However, when we speak about local elite, we should not forget that it is not only about secular elite, but it is always necessary to remember also that early modern society in many respects remained (especially in the mental sphere) society medieval, and the religion played in his lives, and not only spiritual, but also political and legal, very essential role.From here third "strategy" which essence N. Kollmann in relation to such "empires" (understanding as them the multinational states including territories with the different level of social and economic, political and cultural development) as Habsburg, Ottoman or Russian, defined as the aspiration of the Supreme power to strengthen the legitimacy, to guarantee loyalty of the subject population through active application of ritual, ideological and symbolical "discourses" [3, p.21].
The American historian described this "discourse" in the following categories: "The ruler provided justice, protected his people from harm and patronized the church" [3, p.21] and, developing the thesis further, shows as it worked, on examples of the Moscow city revolts of the 17th century.The sovereign had to correspond to an image of fair, pious, truly orthodox tsar, and this compliance gave it additional legitimation: "His (the tsar) legitimacy depended upon his acting on that authority", and that "the relationship of violence, autocracy and ideology in Muscovy is profound: precisely because the tsar was good, pious and benevolent -not a tyrant, not a despot -he was required to respond to his people" [3, pp.401-402].
From that how Moscow sovereign and his actions corresponded to the criteria of Tsar's Discourse occurring in society, his authority, his influence, his life and the fate of its dynasty depended, at last.The fate of Boris Godunov and his son Fedor and also False Dmitry I, who tried to be beyond this model, make a striking example.But for this "discourse", first of all, the Russian Orthodox Church in the person of its administrative and intellectual elite was a creator.
The Supreme power was forced to consider a position of Church, making these or those decisions both in external, and in domestic policy because church hierarches made a part of ruling elite of the Russian land integral (for a long time) with which opinion the Supreme power could not but reckon.Therefore, studying the processes connected about formation of the "centralized" Russian state, development and distribution new political, administrative, judicial, legal and others practices, we cannot but take into consideration a position of Church and degree (and character) its participations in development of these practices.Churches, both ideological (first of all), and technical (ensuring literacy anyway, but was in hands of Church) and other support these practices could not but take place, function properly.
Results and Discussion
Considering interaction of the Supreme power and church in Russia in the context of formation of the early modern state and institutes, characteristic of it, and the practices, first of all we will note that cooperation of the prince as secular lord and metropolitan as lords spiritual had old and strong roots.The fact that medieval Russian scribes, modeling character of the relations between the Supreme secular and spiritual authorities, proceeded from the known Byzantine symphonies model as cooperation of two authorities in achievement of the uniform purpose -to preservation of the orthodox world, an orthodox "terrestrial hail" and creation of necessary conditions for final rescue "communities true" does not raise doubts.It is natural that this "doctrine" carried to a certain extent informal, unwritten character and was not accurately and it is unambiguously recorded in any acts charters which are officially certified by the Supreme power (like a notorious "Great charter of liberties").
However the lack of such charter did not mean at all that the power could neglect opinion of Church and its hierarches not take into account their position in these or those questions.Moral and ideological support of Church always meant to the Supreme power very much and a lot of things, and whether taking these or those actions in the sphere external, whether internal policy, the power sought to get such support.For the sake of it the Supreme power could even go for rough intervention in internal affairs of Church, despite of the fact that such steps could cause a certain tension in the relations with it.The church, in turn, believed possible to make impact on the Supreme power and to insist on cancellation of these or those decisions if it believed its certain actions inappropriate to Orthodoxy canons.And it if not to speak about the right of so-called "pechalovaniye" (scolding) according to which the highest hierarches of Church could stand up for disgraced and undergone punishment from the Supreme power.
Examples of such interaction, it is possible to bring much.So, at a boundary of the 70th -the 80th of the 14th century the attempt of the grand duke Dmitry Ivanovich needing support of Church at the time of strain of relations with the Horde and Lithuania to put on metropolitan department of the candidate led to the serious conflict between it and the metropolitan Kiprian.Later nearly fifty years the firm position of the metropolitan Photius who supported the right of the grandson Dmitry Ivanovich for the Moscow table cooled passions and prevented (at least for a while) ready to burst just about internal quarreling.The church played a significant role legitimations of process of release of the Russian state from the Horde dependence, and subsequently -legitimations of conquest of Tatar "yurt" -the states of the Volga region and Siberia which arose after disintegration of the Golden Horde.
These and other similar incidents should not cover, however, before our look more important contribution which made Church in formation of the "centralized" Russian state.It is about notorious Tsar's Discourse that which we mentioned before and which played important if not defining role in legitimation of the Supreme power at this turning point of the Russian history.
Concerning this "discourse", the Russian historian I.B.Mikhaylova noted that "in the XV-XVI centuries the Russian scribes on the basis of a wide range of written sources, early Christian, Byzantine and domestic monuments taking into account traditional representations of contemporaries, developed the doctrine about the Moscow orthodox kingdom led by Christ's protégé which had a body of the mortal person and in it divine soul".Let's note, by the way, that representations of contemporaries played extremely important role because that text base on the basis of which Tsar's Discourse was developed in Russia was significantly already, than in the same Byzantium.It is connected with features of broadcasting and reception of the Byzantine cultural religious heritage in Russia when initially the volume of information transferred to newly converted Christians from "Greeks" was significantly limited.
"After the Byzantine thinkers, -the historian continued further the thought, -they (the Russian medieval intellectuals -the Bus) urged the sovereign to imitate God, to improve God the virtues granted to him, exercising the extensive power, to try to obtain prosperity of the Russian power, it is fair to operate it and it is reliable to protect it from external and internal enemies" [6, p.19].At the same time the Church conferred responsibility not only for the acts, but also for acts and sins of the citizens on the sovereign.In this plan some kind of "best-seller" of the Russian books of the late Middle Ages -the "Punishment" (meaning "lecture, order") attributed to the Tver bishop Simeon is submitted very curious (died in 1289).This text is remarkable the fact that in it the thesis concerning responsibility of the prince for acts of the administrators is accurately checked.On a question of the prince where to stay to the unjust princely trustee, the bishop replied that in the same place where also to his mister."Why?" -the prince was surprised, and Simeon explained to him in what business.If the prince, the hierarch said, he is kind, godfearing, loves the truth and people favors, then and he by all means will elect the trustee same, is a match for himself -the husband God-fearing and just, fair and kind to people, reasonable and operating people subject to it agrees God's to precepts.And when the time comes to appear to the prince and his trustee before the Highest Judge, both of them by right will deserve the place in paradise.In an opposite case if the prince has no fear of the Lord, the citizens do not favor, oppresses orphans and widows and to crown it all appoints the trustee of the husband of the evil who finds only one princely income, without thinking of justice, then to both, both the prince, and the trustee one road -to hell [7, p.376].
The thesis about responsibility of the prince for actions of his administrators is not less accurately checked, for example, and in the writing by famous Joseph of Volotsk.Addressing the grand duke (Ivan III), the Saint specified that the Lord allocated him, the prince; the great power in order that he firmly adhered to laws and fought against any lawlessness.In your care, Josef said, addressing the prince, care about corporal wellbeing of citizens, meaning fight against robberies, murders, any attempts at property, honor and advantage and, by itself, with wrong court, "lie" [8, pp.183-184.].
On the same position firmly there was also Maximus the Greek.In one of the compositions it, comparing the sovereign to the sun, wrote that just as the sun lights with the beams and warms the world around, and the soul of the grand duke executed mildness, generosity, purity and the truth lights, decorates and on commission of good deeds moves the citizens.On the contrary, if the sovereign leads an unseemly life if his soul is overcome by rage, anger and passions other similar unseemly to an imperial rank, then and all the rest will also be dulled, will fall into a mutiny and violence [9, pp.164-165].
That the Supreme power seriously perceived such addresses is shown by one very indicative example.In 1479/80 the grand duke "brought together" from Velikiye Luki the deputy prince Ivan Lyko Obolensky, and Luciana, having exercised the right, asked humbly on the former deputy, having accused him and his people of all abuses and usury made in Lukovy and suburbs.The investigation held by order of Ivan III confirmed the facts stated in the petition, and the grand duke ordered to Lyko to Obolensky to indemnify the caused damage and to pay all court costs and other.When the former deputy offended in the best feelings drove off to the brother of the grand duke prince Boris Vasilyevich, Ivan did not stop breaking the contract with the specific prince to punish responsible though Lyko Obolensky after departure formally was subject to jurisdiction of the specific prince.Other, not less indicative example took place in the winter of 1557.The prince M.V. Glinsky with a host, going to be at war on a monarchic order Livonia, assumed that on the road his soldiers made excesses and robberies.Victims of their actions residents of Pskov asked humbly Ivan the Terrible that punished guilty persons and indemnified the caused loss.And again, as well as in the previous case, the made investigation confirmed the facts of robberies and violence, the prince Glinsky got to disgrace, and residents of Pskov received compensation [10, p.86].
Attracts attention and other requirement of Joseph of Volotsk -the requirement be peeped about spiritual health of the citizens.In it the aged man Filofey, the author of the well-known message to Vasily III in whom the idea that Moscow is the III Rome was formulated is solidary, for example.He also pointed that fight against heretics by all means belongs to duties of the grand duke [11, p.359].
No less important was another duty that the Russian scribes imposed on the sovereign: the protection and defence of his subjects from foreigners, and the protection of Orthodox belief and the Church against the infidels. Two Romes fell, the elder Filofey taught Vasily III, the third, Moscow, stands, and a fourth there shall not be. Therefore, he continued, the extremely responsible and at the same time heavy burden and duty of preserving the last true Christian kingdom rests on the shoulders of the Orthodox sovereign. This motive is clearly visible in the message of the Rostov archbishop Vassian to Ivan III, written in the autumn of 1480, when the stand-off of the Russian and Tatar armies on the Ugra River was deciding the destiny of the Russian state and its further dependence on the Horde. Vassian wrote to the grand duke, calling on him to stand firm. Your duty as a good shepherd, he wrote, is to protect the herd handed to you by the Lord from "the future wolf", warning him at the same time that if the prince took fright at the duty assigned to him and evaded the confrontation, then the Lord would demand of him in all severity an account for the Christians who were killed and for the destroyed and profaned churches and monasteries. Where, the bishop exclaimed, will you be able to sit on a throne again if you ruin the herd handed to you?
"year": 2018,
"sha1": "9199434d87d95f30a2bc8d7750f0ad58a56808ca",
"oa_license": "CCBY",
"oa_url": "http://kutaksam.karabuk.edu.tr/index.php/ilk/article/download/1606/1148",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9199434d87d95f30a2bc8d7750f0ad58a56808ca",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Translation of liver stage activity of M5717, a Plasmodium elongation factor 2 inhibitor: from bench to bedside
Targeting the asymptomatic liver stage of Plasmodium infection through chemoprevention could become a key intervention to reduce malaria-associated incidence and mortality. M5717, a Plasmodium elongation factor 2 inhibitor, was assessed in vitro and in vivo with readily accessible Plasmodium berghei parasites. In an animal refinement, reduction, replacement approach, the in vitro IC99 value was used to feed a Population Pharmacokinetics modelling and simulation approach to determine meaningful effective doses for a subsequent Plasmodium sporozoite-induced volunteer infection study. Doses of 100 and 200 mg would provide exposures exceeding IC99 in 96 and 100% of the simulated population, respectively. This approach has the potential to accelerate the search for new anti-malarials, to reduce the number of healthy volunteers needed in a clinical study and decrease and refine the animal use in the preclinical phase.
Background
Despite the availability of various treatments for malaria, morbidity and mortality associated with this disease remain high [1]. The aim of the World Health Organization (WHO) is to reduce malaria mortality and incidence by 90% by 2030 and to prevent its re-establishment [1].
Malaria is caused by protozoan parasites of the genus Plasmodium. The initial site of mammalian infection following a mosquito bite is the liver in which sporozoites (spz) mature into merozoites that are eventually released into the bloodstream to invade erythrocytes leading to the pathology [2]. Consequently, targeting the asymptomatic liver stage of infection through chemoprevention could become a key intervention to prevent symptomatic malaria and achieve the WHO objective [3].
Model informed dose selection to initiate early phase of clinical development has become a powerful tool. In malaria, volunteer infection studies (VIS) are widely used for drug and vaccine development to achieve early proof-of-principle [4]. For the liver stage of Plasmodium infection, the generation of data to feed Population Pharmacokinetics (PopPK) models to set the clinical starting dose essentially relies on in vivo mouse models, due to limited availability of human-based in vitro models [5][6][7].
Here, a first attempt was made using a recently developed in vitro 3D infection platform that employs hepatic cell line-derived spheroids to generate an effective average concentration (C av ) of M5717, a new anti-malarial that acts as a Plasmodium elongation factor 2 inhibitor, which is also active against the liver stage of infection. The in vitro derived values were set as a target concentration to prevent liver stage infection. Complementarily, those concentrations were employed in an in vivo model to assess the ability of M5717 to prevent the appearance of a blood stage infection in mice challenged with Plasmodium spz. A previously developed PopPK model based on clinical Phase 1 data was used to simulate doses that would exceed the target in vitro effective concentration [4,8,9]. M5717 has entered a liver stage Plasmodium spz induced VIS (NCT04250363) in 2021 employing the predicted doses described in this article [8,10].
In vitro hepatic Plasmodium infection
HepG2 cells (ATCC) were cultured in an incubator with humidified environment at 37 °C with 5% CO 2 . Cell expansion was performed in 2D static culture, in low glucose DMEM (Thermo Fisher Scientific) supplemented with 1% (v/v) penicillin/streptomycin and 10% (v/v) FBS. Cells were passaged twice a week, at 5 × 10 4 cell/cm 2 . For 3D culture, cells were suspended in the same culture medium, supplemented with 10% filtered FBS (v/v) and inoculated as single cell suspensions (3 × 10 5 cell/mL) in 125 mL (from Corning, Merck KGaA). After aggregation medium exchanges were performed with 5% filtered FBS culture medium. Agitation was induced by magnetic stirring and adjusted to promote spheroid formation and sustain their long-term culture (up to 30 days), as described before [9].
Infection of HepG2 spheroids in dynamic conditions was performed in 30 mL spinners (from ABLE Biott Corporation), at a cell concentration of 5 × 10 5 cell/mL and a cell/spz ratio of 1:1. For infection in static conditions, spheroids were transferred to 96-well plates (2.5 × 10 4 cell/well) and exposed to P. berghei-Luc in a 1:2 cell/spz ratio; Cultures were challenged with the drugs during the development phase of the parasite, from 24 h up to 48 h post-infection (pi). For dose-dependence assays, infection rate and metabolic activity were assessed at 48 h pi, whereas for the time-dependence assays, infection rate and dsDNA concentration were assessed up to 84 h pi. P. berghei-luc was detected by bioluminescence, with the Firefly Luciferase Assay Kit (Biotium), following the manufacturer's instructions, as previously described [9]. The normalized data were fitted to dose-response curves by nonlinear regression analysis and IC 50 values determined using GraphPad Prism version 6 for Windows (GraphPad Software). The lowest drug concentration that inhibited infection by 99% of the DMSO-treated controls was considered the minimal inhibitory concentration or IC 99 .
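As an illustration of this fitting step, the short Python sketch below fits a four-parameter logistic (Hill) model to normalized infection data and derives IC 50 and IC 99 values from the fitted parameters. The concentrations and responses are invented for illustration only, and the original analysis was performed in GraphPad Prism rather than with this code.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, h):
    """Four-parameter logistic (Hill) model of % infection vs. drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** h)

# hypothetical normalized infection data (% of DMSO-treated control) at nanomolar doses
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # nM
infection = np.array([98.0, 85.0, 52.0, 18.0, 2.0, 0.5])    # % of control

popt, _ = curve_fit(hill, conc, infection, p0=[100.0, 0.0, 1.0, 1.0])
top, bottom, ic50, h = popt

# IC99: concentration reducing infection by 99% of the fitted top-bottom window
target = bottom + 0.01 * (top - bottom)
ic99 = ic50 * ((top - bottom) / (target - bottom) - 1.0) ** (1.0 / h)
print(f"IC50 = {ic50:.2f} nM, IC99 = {ic99:.2f} nM")
```

Note that, within the fitted window, the IC 99 follows directly from the Hill slope, since the concentration giving a 99% reduction equals IC 50 multiplied by 99 raised to the power 1/h.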
Liver stage Plasmodium infection in vivo
Naïve female NMRI mice were infected with 1 × 10^5 spz of P. berghei mCherry ANKA-Luci-GFP by intravenous (i.v.) injection in the tail vein. At + 24 h pi, single oral (p.o.) dose drug treatment with M5717 (0.3, 0.6, 1.5, 3 and 30 mg/kg) was administered. Compounds were dissolved in a vehicle of 7% Tween 80 and 3% ethanol. Plasmodium liver infection was assessed in vivo at 23 and 48 h pi in anesthetized animals employing an IVIS Lumina II system following i.v. injection of 5 mL/kg D-Luciferin (30 mg/mL). Blood-stage parasitaemia was measured 7, 14, 17, 21, 24, 28, 31 and 35 days pi by light microscopy on Giemsa-stained blood smears, and blood-stage positive mice were euthanized. Mice without detectable parasitaemia until day 35 pi (33 days post treatment) were considered as cured and were euthanized. Blood samples were collected to assess the drug exposure level to determine the C av24h . One mouse received 10 mg/kg atovaquone (ATO) as a positive control and three mice served as untreated controls.
PopPK modelling
PopPK modelling was performed with the nonlinear mixed effect modelling software NONMEM (version 7.4.3, ICON Development Solutions, Dublin, Ireland) using the first order conditional estimation method with interaction (FOCEI) supplemented with Perl-speaks-NONMEM (PsN) (version 4.9.0) [8,11]. Microsoft R Open (version 3.5.1, Microsoft, Redmond, Virginia, USA) was used for general scripting, data management, goodness of fit analyses and model evaluation.
Dose simulations for prevention of the development of blood-stage parasitaemia
The final PopPK model was used for simulations to identify doses for prophylaxis in R using mrgsolve [8].
Using the variance-covariance matrix obtained from NONMEM, 1000 sets of parameters were generated. For each set, 1000 sets of individuals were generated using between-subject variability estimates. From this pool, 10,000 individuals were randomly sampled without replacement and used for simulations. Doses were tested as combinations of 100 and 30 mg of M5717 free base as these were the available capsule strengths of the formulations.
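The sampling scheme just described can be mimicked with a short Monte Carlo sketch. The real simulations used the three-compartment NONMEM model via mrgsolve in R; the Python code below is only a schematic of the parameter-uncertainty and between-subject-variability sampling steps, with invented parameter values, and leaves the actual exposure simulation as a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point estimates and uncertainty (the real model is a 3-compartment
# NONMEM model with transit absorption; these numbers are illustrative only).
theta_hat = np.array([4.0, 50.0])            # e.g. clearance (L/h) and central volume (L)
cov_theta = np.array([[0.04, 0.00],          # variance-covariance matrix of the estimates
                      [0.00, 4.00]])
omega_sd = np.array([0.25, 0.30])            # between-subject variability (log-normal SDs)

# 1) parameter uncertainty: 1000 candidate population parameter vectors
theta_sets = rng.multivariate_normal(theta_hat, cov_theta, size=1000)

# 2) between-subject variability: 1000 virtual subjects per parameter set
eta = rng.normal(0.0, omega_sd, size=(1000, 1000, 2))
individual_params = theta_sets[:, None, :] * np.exp(eta)

# 3) pool all virtual subjects and sample 10,000 without replacement
pool = individual_params.reshape(-1, 2)
idx = rng.choice(pool.shape[0], size=10_000, replace=False)
virtual_population = pool[idx]

# 4) for each sampled individual, the full PK model would then be solved for a
#    candidate dose and the fraction with AUC(0-24 h) > 180 ng*h/mL reported (not shown).
print(virtual_population.shape)  # (10000, 2)
```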
Liver stage Plasmodium infection in vitro
M5717 showed hepatic anti-plasmodial activity when incubated with luciferase-expressing Plasmodium berghei-Luc-infected HepG2 spheroids from 24 to 48 h post-infection (pi) [9]. The drug showed remarkable potency against P. berghei-Luc-infected HepG2 spheroids, with an IC 50 of 1.0 ± 0.1 nM and an IC 99 of 9.9 ± 1.6 nM ( Fig. 1) [10]. Therefore, the latter concentration, effective in low-metabolizing HepG2 hepatic spheroids, was used for translation to liver stage activity in vivo. Given the large excess of FBS-derived proteins present in the in vitro medium compared with the drug concentration, only the human blood-to-plasma ratio (B/P = 1.6) was used to convert the IC 99 , an effective average concentration over 24 h, to the corresponding area under the curve (AUC) drug exposure of 180 ng•h/mL in human plasma.
Liver stage Plasmodium infection in vivo
The P. berghei-Luc liver stage mouse model was used to validate the IC 99 value obtained in vitro [9]. Our results show that the 30, 3 and 1.5 mg/kg single oral doses of M5717 were fully effective in preventing the appearance of blood stage infection in all treated mice, while 0.3 mg/ kg and 0.6 mg/kg did not, or only partially prevented the development of parasitaemia (Table 1).
To correlate the in vivo exposures obtained at various oral doses with the effective concentrations previously determined in our in vitro 3D hepatic cell model, the AUC 0−24 h was converted to an average plasma concentration of the drug (C av0−24 h ) over 24 h and was compared to the compound's IC 99 in vitro (Table 1). Importantly, both models employed a similar protocol, with a 24 h exposure time, between 24 and 48 h pi (Fig. 1). All mice for which the drug was fully effective in preventing the appearance of blood stage parasites achieved systemic plasma C av0−24 h > 10 nM. With the non-preventive dose of 0.3 mg/kg, a C av0−24 h of 4 nM was observed, which corresponded to only half of the IC 99 and did not prevent the development of blood-stage parasites. A 0.6 mg/kg dose produced a C av0−24 h close to the IC 99 value, preventing only partially the development of blood-stage parasitaemia.
Therefore, the use of the in vitro IC 99 concentration of 10 nM as a relevant parameter for M5717's efficacy in P. berghei-infected liver stages was supported by the average in vivo blood concentration of M5717 over 24 h, as inferred from the mouse model. Consequently, the IC 99 value was converted into a target exposure value, i.e. AUC 0 − 24 h of 180 ng•h/mL that can be used for a PopPK model developed based on Phase 1 clinical trial exposure data to simulate doses that would exceed the aforementioned AUC value.
PopPK modelling
The model was derived from results obtained from the completed Phase 1 clinical trial [4,8]. Briefly, M5717 PK was three-compartmental, with first-order elimination, a transit absorption model in combination with first-order absorption, and a recirculation model to account for a secondary peak between 24 and 30 h. Body weight was included a priori on clearance and volume parameters. The model adequately regenerated the data used to create it as evidenced by visual predictive checks; standard diagnostic plots revealed no unacceptable trends.
Dose simulations for prevention of the development of parasitaemia
Assuming a human target exposure of AUC 0−24 h of 180 ng•h/mL, simulations for the dosing required to prevent blood stage infection were made assuming (i) full and direct extrapolation of in vitro results to humans, (ii) similar susceptibility of the different Plasmodium species to M5717 due to high sequence homology (i.e., 98.2% BLAST P. falciparum PF3D7_1451100 versus P. berghei PBANKA_1314800), (iii) similar relevance of the IC 99 (C av ) parameter, and (iv) a blood-to-plasma ratio in humans of 1.6. The results suggested that a 30 mg dose would not provide sufficient exposure, while 80, 100, 130 and 200 mg of M5717 would be required for parasite clearance to be observed in 81.5%, 96.1%, 99.7% and 100% of subjects, respectively (Fig. 2).
Discussion
A recently developed in vitro 3D Plasmodium infection platform using hepatic cell line-derived spheroids was used as a translational tool for the selection of active molecules against the liver stage of Plasmodium infection. Here, the exposure-efficacy relationship obtained from the in vitro model was supported by in vivo data. A starting dose for a chemoprophylactic clinical trial was predicted by combining the in vitro data with PopPK models simulations derived from Phase 1 clinical data.
Using M5717 as an example, a four-step approach to validate the preclinical translatability of the in vitro model was set up by (i) determining the IC 99 in the in vitro platform over a given period of time, (ii) cross-validating the C av with the in vivo P. berghei model, (iii) feeding a PopPK model, and (iv) defining experimental doses to be evaluated in a spz-induced VIS.
M5717 is well tolerated in humans and exhibits a pharmacokinetic profile characterized by a long half-life (146-193 h at doses ≥ 200 mg). The drug showed potent activity against the hepatic stages of P. berghei in both in vitro and in vivo models after infection had been established, indicating that M5717 acts during the parasite's intrahepatic developmental process. As the emphasis was placed on the patent liver infection, drug treatment only occurred between 24 and 48 h pi with P. berghei-Luc spz, to ensure that the drug was present throughout the P. berghei liver stage development phase. Upon validation of the relevance of the IC 99 by comparing with the C av arising from the corresponding in vivo experiments, and assuming a similar susceptibility of P. berghei and P. falciparum to M5717 due to the high sequence homology of the drug target, the in vitro IC 99 was used as the target concentration that would lead to prevention of blood stage infection. In both models, the data showed that over a period of 24 h, a C av0−24 h of 10 nM was required for M5717 to completely clear an ongoing P. berghei-Luc hepatic infection and could be converted into a target AUC 0−24 h of 180 ng•h/mL.
Fig. 2 Probability of exceeding (purple) the AUC 0−24 target of 180 ng•h/mL (IC 99 ; yellow bar). The X axis represents the total dose of M5717 as a free base and the Y axis indicates the simulated M5717 area under the curve.
Assuming that an exposure of 180 ng•h/mL would be required to achieve full protection, the next step was to use the PopPK model to simulate doses that would exceed the target AUC in majority of the subjects. The data suggested that a dose range of 100-200 mg would be effective for prophylaxis in humans as they exceed the target exposure in 96% and 100% of the simulated population, respectively.
In contrast to blood-stage induced VIS, where 800 mg M5717 led to clearance of the parasitaemia, the simulations based on the current study suggested that doses for prevention may be much lower than curative doses [4].
If the predicted dose and relevant exposure obtained using the 3D hepatic cell platform for Plasmodium infection are confirmed in the spz-induced VIS conducted with M5717 (NCT04250363), the translatability of the platform will be strengthened, paving the way for the reduction and refinement of animal use, ultimately potentially replacing in vivo experiments.
Nevertheless, this approach requires careful consideration for other drugs in development, as limitations could occur if the anti-plasmodial target displays poor homology between rodent and human parasites. Alternatively, more onerous P. falciparum-based models may have to be used. Also, depending on its mode of action, drug exposure times during in vitro experiments may need to be adjusted. Therefore, the relevance of the IC 99 or C av parameters may need to be addressed on a case-by-case basis. Availability of a population PK model based on human data is crucial for the simulation of doses for chemoprevention, and the non-availability of these data and of such a model hinders the application of the proposed approach to compounds that have not yet entered Phase 1 clinical evaluation.
Conclusions
For the first time in drug development against the liver stage of infection by malaria parasites, PK and PD data emerging from an in vitro platform combined with PopPK modelling and simulation were employed to predict the starting dose in humans to be used against the pre-erythrocytic stage of a new anti-malarial compound. The translatability of the in vitro platform has been compared with PK and PD data obtained from in vivo mouse experiments and will be further validated when the results of the spz-induced VIS become available. Therefore, this non-traditional method has the potential to accelerate the search for new anti-malarials, to reduce the number of healthy volunteers needed in a clinical study and to decrease and refine animal use in the preclinical phase, thus aligning research with the National Centre for the 3Rs principle (NC3R) [7].
| 2023-01-17T14:47:34.906Z | 2022-05-15T00:00:00.000 | {
"year": 2022,
"sha1": "e0f934b6fba97fa95db0de4ab22e6fe7f90e96b1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12936-022-04171-0",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "e0f934b6fba97fa95db0de4ab22e6fe7f90e96b1",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Chemistry"
],
"extfieldsofstudy": []
} |
253019982 | pes2o/s2orc | v3-fos-license | Change the Framework for Pulse Oximeter Regulation to Ensure Clinicians Can Give Patients the Oxygen They Need
Recent studies have identified pulse oximeter inaccuracy associated with self-identified race. Yet, race is a social, not biological, construct. The hypothesized mechanism for this inaccuracy is differences in skin tone, which potentially impacts light transmission by the device (9). However, because recent data identified hidden hypoxemia based on patient race and not skin tone, we use terms related to race throughout this editorial and believe the focus should remain on racial differences in accuracy until skin tone has been confirmed as the underlying mechanism.
To correct this insidious problem will require a concerted effort by regulators, device manufacturers, and purchasers. In this editorial, we describe current regulatory practices for pulse oximeters and actions that could be taken by the U.S. Food and Drug Administration (FDA) and other members of the International Medical Device Regulators Forum to improve device evaluation (19). We also outline steps that can be taken by manufacturers, purchasers, and clinicians to drive equity in pulse oximeter performance.
No measurement device is perfectly accurate when used in the real world; therefore, clinicians and regulators must decide how to evaluate the accuracy, safety, and effectiveness. Two metrics are commonly used for pulse oximeters: bias and accuracy root mean square (ARMS). Both quantify how closely pulse oximeter saturation (Sp O 2 ) readings agree with true Sa O 2 measured by an arterial blood gas co-oximeter.
Bias quantifies the net magnitude and direction of the pulse oximeter error. A pulse oximeter with +2% bias systematically overestimates SaO2 by 2% on average. However, a device with no bias could still be highly inaccurate, just equally inaccurate above and below the SaO2. A device's precision quantifies this undirected error, with a higher value of precision indicating more random error. The ARMS metric captures both bias and random error. Mathematically, ARMS is calculated as the square root of the average squared difference between SpO2 and SaO2 (20). It is also related to bias and precision through the formula ARMS = √(bias² + precision²). A pulse oximeter with a higher ARMS has worse accuracy, and the FDA sets an ARMS < 3% standard for approved devices, measured across the saturation range of 70-100%. ARMS is sometimes interpreted analogously to a standard deviation, such that for a pulse oximeter with an ARMS of 3%, a reading of 92% represents a true saturation of 92 ± 3% (between 89% and 95%) in 68% of readings. This interpretation is only correct if there is no bias and random error is the sole source of device error. When a pulse oximeter also has systematic bias, as in Black patients, the ARMS is not easily interpretable (21). Current FDA guidance recommends that device manufacturers conduct a clinical study of 10 or more subjects who vary in age and gender when evaluating pulse oximeter performance (22). They also recommend that at least two subjects, or 15% of the sample (whichever number is greater), should be "darkly pigmented," because skin tone is understood to impact performance. The FDA does not require device performance to be reported by patient subgroups. To illustrate why distinct subgroup reporting could matter, we performed statistical simulations asking: "How much bias could exist in a subgroup without significantly impacting overall ARMS?" Technical details are described in the online supplement.
We started with a hypothetical pulse oximeter with an ARMS of 2% and no bias among individuals with light skin tone, the population that, as suggested by several authors, was used to develop pulse oximeters (9,23,24). We then determined how much bias could be present among the FDA-recommended 15% of individuals with dark skin tone in the study while still meeting the minimum ARMS standard of < 3%. A 3% bias in this subgroup would minimally impact overall ARMS, increasing it from 2% to 2.3%. Even if the pulse oximeter had a 5% bias in the subgroup, the overall ARMS would still be 2.8%. However, ARMS among individuals with dark skin tone would be 3.6% and 5.4%, respectively, but is currently unreported.
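The arithmetic behind these numbers can be re-derived in a few lines by pooling the subgroup mean squared errors, assuming a common 2% precision and a bias confined to the dark-skin subgroup; the authors' exact simulation set-up is in their online supplement, so this is only an approximate reconstruction.

```python
import numpy as np

def overall_arms(frac_dark, bias_dark, precision=2.0):
    """Pooled ARMS when a fraction of the cohort carries an extra systematic bias.

    ARMS^2 is the mean squared SpO2-SaO2 error, so it pools as a weighted
    average of the subgroup mean squared errors (bias^2 + precision^2).
    """
    mse_light = precision ** 2                   # no bias assumed in the light-skin subgroup
    mse_dark = bias_dark ** 2 + precision ** 2   # biased subgroup
    pooled = (1 - frac_dark) * mse_light + frac_dark * mse_dark
    return np.sqrt(pooled), np.sqrt(mse_dark)

for bias in (3.0, 5.0):
    pooled, dark = overall_arms(frac_dark=0.15, bias_dark=bias)
    print(f"bias={bias}%: overall ARMS={pooled:.1f}%, dark-skin subgroup ARMS={dark:.1f}%")
# bias=3%: overall ARMS=2.3%, subgroup ARMS=3.6%
# bias=5%: overall ARMS=2.8%, subgroup ARMS=5.4%
```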
Proposal #2: Align Pulse Oximeter Evaluation with How Pulse Oximeters Are Used in Clinical Practice
Device manufacturers, following the FDA and International Organization for Standardization (ISO 80601-2-61) guidance, typically calculate ARMS using data collected from healthy individuals who undergo controlled desaturation testing in a laboratory setting to achieve a saturation of 70-100% (22). ARMS inherently weighs all over-and underestimation of arterial saturation equally across this large range. Clinical experience suggests that most patient care decisions occur within a much narrower range around certain critical thresholds (e.g., 88% or 92%).
Given the poor alignment of ARMS with actual decisions made by clinicians in practice, recent studies on pulse oximeters have reported a different metric, termed "occult" or "hidden" hypoxemia: the rate at which normal SpO2 readings hide low SaO2. In diagnostic testing, hidden hypoxemia corresponds to the false omission rate, i.e., the percentage of normal ("negative") readings that are falsely reassuring. In contrast, ARMS weighs clinically irrelevant pulse oximeter errors the same as potentially devastating misclassification of patients with hypoxemia as not needing supplemental oxygen.
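In code, hidden hypoxemia is simply the false omission rate computed over paired readings; the counts below are invented solely to show the calculation.

```python
def false_omission_rate(false_negatives, true_negatives):
    """Hidden hypoxemia as a diagnostic-test metric: among all 'normal' SpO2
    readings, the fraction whose paired SaO2 was actually below the threshold."""
    return false_negatives / (false_negatives + true_negatives)

# invented counts: 1,000 paired readings with SpO2 >= 92%, of which 64 had SaO2 < 88%
print(f"hidden hypoxemia = {false_omission_rate(64, 936):.1%}")   # 6.4%
```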
Proposal #3: Power Pulse Oximeter Tests to Detect Small, Clinically Important Differences in Performance Between Racial Groups
Because some clinicians believe that small differences in pulse oximeter accuracy are ignorable, we calculated how minor increases in pulse oximetry error impact hidden hypoxemia (calculation details described in the online supplement). Figure 1 illustrates distributions of possible SaO2 when an SpO2 reads 92%. In Figure 1A, the pulse oximeter error is solely due to 2% random error without bias. In this scenario, 2% of SpO2 readings of 92% would represent a true SaO2 reading < 88%. If a pulse oximeter also has a small bias, overestimating SaO2 by just 1%, hidden hypoxemia increases to 7% (Figure 1B). However, rates of hidden hypoxemia increase to 12% if a pulse oximeter has both a 1% bias and an additional 0.5% random error (Figure 1C). Contemporary research suggests that pulse oximeters have both higher bias and more random error in Black patients (6,10). This analysis assumes a highly accurate pulse oximeter. Yet, many low-cost devices are less accurate (e.g., up to 5% bias and 4% precision within the SaO2 range of 80-90%) (25).
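Treating the device error as Gaussian, the hidden-hypoxemia rates quoted for Figure 1 can be approximated directly from the normal distribution. The spreads assumed below (2% and 2.5%) are read off the scenarios described in the text; the exact supplementary calculation may differ.

```python
from scipy.stats import norm

def hidden_hypoxemia(bias, precision, spo2=92.0, threshold=88.0):
    """P(SaO2 < threshold | SpO2 reading), modelling the error as Gaussian.

    A positive bias means the oximeter over-reads, so the implied true SaO2
    is centred at spo2 - bias with spread equal to the random error (precision).
    """
    return norm.cdf(threshold, loc=spo2 - bias, scale=precision)

print(f"{hidden_hypoxemia(bias=0.0, precision=2.0):.0%}")   # ~2%  (Figure 1A)
print(f"{hidden_hypoxemia(bias=1.0, precision=2.0):.0%}")   # ~7%  (Figure 1B)
print(f"{hidden_hypoxemia(bias=1.0, precision=2.5):.0%}")   # ~12% (Figure 1C)
```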
To detect a small but clinically meaningful difference in pulse oximeter accuracy across patient groups, laboratory-based clinical studies will need larger sample sizes than current guidelines recommend (online supplement). Underpowered studies could falsely conclude that a device performs safely in all patients. Recent results demonstrate the feasibility of using large-scale ICU electronic data sets to conduct ongoing postmarketing surveillance of device performance in the wide range of real patients for whom these devices must function accurately. Targeted partnerships with large health systems could improve the recording of exactly which devices were used and the synchronization of SaO2 and SpO2 measurement timing. More ambitiously, prospective testing is already recommended by the FDA for neonatal pulse oximeters and should also be a component of device evaluation in adults (22).
Conclusions: The Roles of Regulators, Clinicians, and Purchasers in Equity
When faced with the above data, clinicians have few good bedside solutions. For now, our opinion is that clinicians should follow the guidance of the recent FDA safety communication and consider a pulse oximeter's limitations when using the device to assist in diagnosis and treatment decisions. On November 1, 2022, the FDA will hold a public meeting to discuss the available evidence on device accuracy. We believe the FDA and other regulatory bodies must update their regulatory standards. In addition, manufacturers should update their device labeling to report performance differences across relevant subgroups. Clinicians and professional societies can also play a critical role in highlighting an unacceptable and racially biased patient safety issue (26,27). Health systems and other purchasers can change market incentives by purchasing only the most accurate devices that are proven to work equitably. Recent estimates suggest more than $2 billion is spent annually on pulse oximeters in a market dominated by a small number of highly profitable companies (28). Purchases must reward companies that innovate to improve device accuracy for everyone. | 2022-10-21T06:18:03.842Z | 2022-10-19T00:00:00.000 | {
"year": 2022,
"sha1": "1734505286ece3d7235396845ea88904f2cb160b",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "cb7e2c6127450560cac934316ffbd197849db639",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119070197 | pes2o/s2orc | v3-fos-license | On the Hall-mediated resistive tearing instability of highly elongated current sheets
The present paper provides a comprehensive description of the various regimes involved in the two-fluid model of the resistive tearing instability. These include two novel regimes of this instability, which correspond to the long-wave modes that can develop in a highly elongated current sheet. This issue is relevant to the study of fast magnetic reconnection and magnetic turbulence in magnetohydrodynamic (MHD) objects with a large value of the Lundquist number.
Introduction
A renewed interest in the properties of long wave-length tearing modes has been prompted by the observation that the so-called plasmoid instability (the tearing mode developing in evolving current sheets) can play a significant role in fast magnetic reconnection [1][2][3][4] and magnetic turbulence 5,6 occurring in a highly conducting medium. In this context an important issue is revealing the fastest mode (the mode with the maximal linear instability growth rate), which kick-starts the subsequent process of nonlinear reconnection. It is well known from the classical Furth-Killeen-Rosenbluth (FKR) theory of the tearing instability 7 that its growth rate increases with the mode wave-length. However, this theory is not applicable to a highly elongated current sheet, which may form in a system with a very large Lundquist number. The respective generalisation of the FKR theory has been developed in Ref. 8 (see also Refs. 9, 10). It should be noted, however, that all the results presented in Refs. 7-10 were obtained in the framework of the standard single-fluid MHD. On the other hand, as originally pointed out in Refs. 11, 12, the growth rate of the tearing instability can be significantly increased by the Hall effect, which originates from the two-fluid (electrons and ions) description of the plasma. Then the magnetic field is advected by the flow of the light electron plasma component rather than by the bulk flow of matter associated with the much heavier ions (as is the case in single-fluid MHD). Although this issue has already been discussed in numerous publications (see, e.g., Ref. 13 and the list of references therein), a complete explicit verification of all possible Hall-mediated instability regimes and transitions between them is still lacking. Therefore, the present paper aims to fill this gap by providing a unified description of the two-fluid resistive tearing instability in an inviscid, highly conducting plasma. The suggested scheme is valid for arbitrary values of the main relevant parameters, which are the plasma beta, the ion skin depth, and the tearing instability parameter. A value of the latter determines whether the mode growth can be described by the FKR-type theory (the so-called constant-psi regime 7), or requires the non-constant-psi approach (the so-called Coppi regime 8), which is relevant to very long-wave modes. Thus, this article presents a generalisation of Ref. 14, which dealt only with the constant-psi case.
As an example, here we consider a particular case of the well-known Harris-type force-free magnetic field, though the obtained results can be easily modified for any other initial magnetic equilibrium. Thus, a uniform plasma of density 0 n n n e i , and thermal pressure For tearing modes with a wave-vector k directed along the y -axis the system remains invariant along the z -axis. Hence, it what follows it is convenient to introduce the flux so that y B Thus, the field (1) corresponds to the flux function The tearing perturbation results in ), , where for a single Fourier Following the standard procedure 7 , the first step is derivation of the so-called "external" equilibrium solution, which in a low- plasma is a new force-free magnetic configuration.
The latter should satisfy the Grad-Shafranov equation In the case of the Harris equilibrium (1) . In the linear approximation with respect to retains its form 15 , hence, so the linearized version of (5) yields the following equation for the function : Its appropriate solution (the one that tends to zero at infinity) reads 7 It is singular at 0 x , which results in the following expression for the tearing instability parameter 7 : Therefore, as seen from (7), the tearing instability ) 0 ( corresponds to the long-wave perturbations with 1 kL . Note also that perturbation (6) The tearing instability growth rate, , is determined by the plasma dynamics inside the current sheet, which forms due to the singularity of solution (6) at 0 x . The respective solution, the "internal" one, is governed by the equation of plasma motion: and the magnetic induction equation: where the last term on the r.h.s. of Eq.(10) accounts for the Hall effect. For the sake of simplicity, we assume that the ion plasma component is cold, hence, the effect of the ion gyro-viscosity is not included in Eq. (9). Then, the first task is the linearization of Eqs. (9,10) in respect to small perturbations (note that plasma is initially at rest). The latter can be represented as where the stream-function and the velocity potential correspond, respectively, to the rotational and compressional components of the plasma flow. Thus, by using the continuity equation, one can express the pressure perturbation in terms of , which in the linear approximation yields is the adiabatic index. By defining the plasma beta as , this pressure perturbation can be re-written as Furthermore, since the width of the central current sheet is much smaller than the characteristic length scale L , in the internal solution one can put After that Eq.(9) yields the following set of equations for , and z V : Similarly, the linearized magnetic induction equation (10) results in two equations for As seen from the last term in Eq. (15), the Hall effect is associated with the magnetic field component ( ) with an odd () bx . Thus, this a quadrupole magnetic field, which is generated by the last, Hall term, in Eq. (16). The symmetry of all other functions becomes then transparent from Eqs. (12)(13)(14): where , are odd, and v is an even functions of x .
After introducing non-dimensional variables and functions by the re-scaling: one can reduce Eqs. (12)(13)(14)(15)(16) to a simplified set of ordinary differential equations for is the global Alfven transit time defined by the Alfven velocity is the scaled ion skin depth (for most of applications it is small, Since under a large value of the Lundquist number the width, () x , of the internal resistive current is small, ( ) 1 x (see below), and for the unstable modes In what follows we first briefly re-consider the FKR-type (the constant-psi) regimes of the tearing instability (Section 2), leaving the Coppi-type (the non-constant-psi) regimes for Section 3. Implications of these results on the issue of the Hall-mediated plasmoid instability is discussed in Section 4.
The FKR-type (the constant-psi) regimes.
In this case, without loss of generality, one can put 1 , so the matching condition of the internal and external solutions then reads dx For the current sheet of width () x this integral can be estimated as () x , which yields Consider now the magnetic induction equation (18), assuming first that the Hall parameter d is small enough (see below for the exact criterion) to make the last, Hall term, in (18) insignificant. Then, the ongoing magnetic reconnection, the pace of which is defined by the l.h.s. of this equation, is supported by both the plasma resistivity [the second term on the r.h.s. of (18)] and the magnetic field advection into the current sheet by the bulk plasma flow [the first term on the r.h.s. of (18)]. Therefore, all relevant three terms in Eq. (18) should be of the same order of magnitude. Thus, by comparing the first two with help of (22) and (23), one gets and equating the first and the third terms yields It follows then from (24-25) that re-producing the well-known scaling of the FKR theory 7 .
Consider now what restrictions apply to this solution by the imposed "constant-psi" assumption. Clearly, the variation of the flux function across the current sheet, which can be estimated as and ( x) k given in Eq.(26), this requirement takes the form The case of the long-wave modes with * kk is discussed below in Section 3.
Consider now what value of the parameter d is required to make the Hall effect coming into play. Such a situation occurs when the Hall term in Eq. (18), which describes additional advection of the magnetic field into the current sheet by the flow of electrons caused by the quadrupole magnetic field perturbation b , becomes comparable with the first term on the r.h.s. of Eq.(18), which is due to the standard magnetic field advection by the bulk plasma flow. This quadrupole field is, in its own turn, generated by the Hall term in Eq.(19), and the resulting magnitude of b is determined by the balance between this source term and the other terms on the r.h.s. of (19). They are due, respectively, to the plasma compression, the bulk velocity component z V , and the resistive field diffusion. Therefore, in order to evaluate their relative role, it is necessary to invoke Eq.(21), which describes the compressional part of the plasma flow. The latter is driven by the magnetic force due to the quadrupole field b , which is balanced either by the thermal pressure force ( Altogether, (25) and (31-32) yield so finally one gets for this regime that This is the well-known scaling originally derived 16 14) and (16)]. Thus, with help of (37), one gets In summary, all three constant-psi Hall-MHD regimes of the tearing instability are depicted in Fig.1 It's worth noting that since the Hall effect brings about the additional advection of the magnetic field towards the reconnection site, it results in the reduction of the width () x of the resistive current sheet. This is favourable for the applicability of the constant-psi approximation. Therefore, in the diagram of Fig.1, which is drawn for the case of , all three regimes are well in the realm of the constant-psi simplification.
The Coppi-type regimes (the long wave-length tearing modes)
For a long-wave mode with * kk the constant-psi approximation becomes non-applicable. Therefore, in this case one has to distinguish between the In what follows, we put, as before, . Hence, instead of (24), one gets Equating [with help of (22)] the first and the third terms of Eq.(18) yields Furthermore, since inside the current sheet the flux function is not a constant, its total variation across the current sheet, which is equal to 1 which reproduces the results originally obtained in Ref. 8. It turns out that the scaling (43) has a universal character 10 , which does not depend on the particular form of the tearing parameter () k (providing that the latter is large enough to bring the mode into the nonconstant-psi regime: ). An obvious requirement, ( ) 1 x , yields the following validity range for this regime: so this condition is assumed to hold in what follows.
As seen from (43) and (44), the reconnected magnetic flux is large indeed: 1 i . Note also that the restriction (44) is necessary for justifying the quasi-static assumption imposed on the external solution. The point is that the characteristic spatial scale for a mode with a wave-number 1 k is equal to , which makes the quasi-static criterion more restrictive, namely As seen from (43), it is satisfied under the condition (44).
This solution can be used as a starting point for the Hall-mediated regimes. The first task, similar to that in Section 2, is to find out, by using now scaling (43), the threshold value of the Hall parameter d .Thus, if the plasma beta is not too small (as shown below, it requires 1 ), the plasma can be assumed incompressible. Then, in Eq. (19) for b , the generating Hall term is balanced by the resistive one, so the established magnitude of the quadrupole field is given in Eq. Similarly to the constant-psi case, this Hall-mediated regime is also associated with a double structure of the internal solution. Thus, instead of (37), we now get from Eq. (18) ( the case of It is worth mentioning that the regime 2(a), similarly to the regime 3(a), does not involve the bulk plasma motion. Therefore, the instability scalings for these regimes, (48) and (52) respectively, can be re-formulated in terms of the Electron MHD. Thus, since in the EMHD the plasma dynamics is associated with the whistler mode 16 Consider now the effect of the plasma compressibility, which becomes significant at 1 . First, note that in the constant-psi domain one can still use here the results of Section 2.
Thus, as seen in Fig.1 Altogether, these equations yield: As seen from (57) This is a very long-wave mode. Therefore, in the context of the plasmoid instability, the issue is not only the fastest tearing mode per se, but also whether the aspect ratio of the current sheet under consideration is large enough to accommodate such a mode 1 , are more favourable environments for the Hallmediated magnetic reconnection. However, for the very same reason, the collisionless model of reconnection, rather than the resistive one, seems to be more appropriate in such a case.
ACKNOWLEDGMENTS
This work was completed during the author's visit to the National Aviation Academy (NAA) in Baku. Thus, GV is grateful to R. Z. Sagdeev for the invitation to Baku, and to the whole staff at NAA for a very warm hospitality. A special thanks is also due to RZS for making helpful comments on the present paper. | 2018-11-28T06:07:29.000Z | 2018-11-28T00:00:00.000 | {
"year": 2019,
"sha1": "c7844bef99a155aa4fb8b4ae98a92bbffe5f3e37",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1811.11401",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c7844bef99a155aa4fb8b4ae98a92bbffe5f3e37",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219543531 | pes2o/s2orc | v3-fos-license | What scans we will read: imaging instrumentation trends in clinical oncology
Oncological diseases account for a significant portion of the burden on public healthcare systems with associated costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific morphology and functional-molecular pathways, cancerous tissue can be detected and characterized non-invasively, so as to provide referring oncologists with essential information to support therapy management decisions. Following the onset of stand-alone anatomical and functional imaging, we witness a push towards integrating molecular image information through various methods, including anato-metabolic imaging (e.g., PET/CT), advanced MRI, optical or ultrasound imaging. This perspective paper highlights a number of key technological and methodological advances in imaging instrumentation related to anatomical, functional, molecular medicine and hybrid imaging, that is understood as the hardware-based combination of complementary anatomical and molecular imaging. These include novel detector technologies for ionizing radiation used in CT and nuclear medicine imaging, and novel system developments in MRI and optical as well as opto-acoustic imaging. We will also highlight new data processing methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging in oncology patient management we introduce imaging methods with well-defined clinical applications and potential for clinical translation. For each modality, we report first on the status quo and, then point to perceived technological and methodological advances in a subsequent status go section. Considering the breadth and dynamics of these developments, this perspective ends with a critical reflection on where the authors, with the majority of them being imaging experts with a background in physics and engineering, believe imaging methods will be in a few years from now. Overall, methodological and technological medical imaging advances are geared towards increased image contrast, the derivation of reproducible quantitative parameters, an increase in volume sensitivity and a reduction in overall examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation is complemented by advances in relevant acquisition and image-processing protocols and improved data analysis. To this end, we should accept diagnostic images as “data”, and – through the wider adoption of advanced analysis, including machine learning approaches and a “big data” concept – move to the next stage of non-invasive tumour phenotyping. The scans we will be reading in 10 years from now will likely be composed of highly diverse multi-dimensional data from multiple sources, which mandate the use of advanced and interactive visualization and analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts with a domain knowledge that will need to go beyond that of plain imaging.
Cancer and imaging - an introduction
Cancer describes a wide range of oncological diseases that can affect all levels of a living organism and that have, in common, the risk of becoming systemic. Cancer is the second leading cause of death worldwide, with about 17 million new cases per year. Just under 10 million people succumb to cancer every year [1], and one in three people is affected by cancer during their lifetime. The economic impact of cancer - from diagnosis, treatment, patient work-up and care, to loss of work force and other societal impacts - is thus significant and increasing; in 2010, for example, the worldwide direct and indirect costs were estimated to be 1.6 trillion USD, which amounted to 230 USD per human capita [2]. Typically, overall costs of cancer account for 10% or more of the gross domestic product (GDP), with marked variations across countries. Given the severe implications of disseminated cancer, early and accurate diagnosis is of the essence.
Imaging by means of different, and frequently complementary, imaging methods provides fundamental data for diagnosing patients, studying diseases, discovering and monitoring new therapies, and improving human health care. As such, the adoption rate of non-invasive medical imaging has been increasing continuously over the past decades [3]. These increases can be attributed to expanded demand from referring physicians and patients, as well as to technical improvements and wider availability. More specifically, modern biomedical images reveal structural and functional information of subjects in vivo (Fig. 1). At a different scale, biological microscopy images and molecular pathways provide further insight into tissues and living organisms, and into processes and structures of cellular compartments [4]. Given the different types of data generated and scale of information, new ways of integrating and using biomedical information must be found [5]. In this paper, we describe important types of biomedical imaging methods and hypothesize on their short- to mid-term developments with a focus on oncological imaging. Figure 2 depicts a generic view on the work-up of a patient suspected of cancer: the patient presents with a suspicion of cancer and is referred for an imaging examination. Here, imaging denotes the acquisition of non-invasive visual data of extended ranges or volumes of the subject. Conventional imaging, as indicated in the schematics, typically includes X-ray imaging, Computed Tomography (CT), Ultrasound (US) or Magnetic Resonance Imaging (MRI), and, thus, yields anatomical information, which can be employed to detect, localize and describe cancerous tissue in vivo. However, cancer is frequently not detectable from plain anatomical images because of the lack of morphological alterations, but can be identified by virtue of molecular and metabolic perturbations [4].
Fig. 1 Imaging modalities together with their colour-coded ability to depict anatomical and/or functional information. For example, X-ray imaging is a purely anatomical imaging modality, PET imaging provides functional (and molecular) data, while MRI and optical imaging are capable of providing anatomical and functional information depending on the choice of the protocol or mode of operation.
Nuclear medicine techniques, such as Positron Emission Tomography (PET) or Single Photon Emission Computed Tomography (SPECT), that rely on the tracer principle, have found an important place in the diagnostic management of cancer [6]. The tracer principle describes the ability to label minute amounts of a biomolecule of choice (e.g., glucose) with a radioactive isotope, which enables tracing the labelled biomolecule (and, thus, the pathways of the unlabeled biomolecules) by means of the emitted radiation without disturbing normal tissue function. Likewise, optical imaging methods make use of fluorescent molecular probes and ultrasound of targeted microbubbles to highlight signaling pathways and differential anatomies, respectively [7]. Nuclear medicine imaging can be both highly-sensitive and highly-specific, but yields images of the tracer distribution that are of lower spatial resolution than that from CT or MRI, given the fundamental differences in the detection principles of molecular and anatomical imaging. It is this difference, and the general challenges of localizing focal uptakes of highly-specific tracers that have led to the development of hybrid imaging methods, including the physical combination of PET and CT (PET/CT), SPECT and CT (SPECT/CT), or PET and MRI (PET/MRI) [8].
Imaging is frequently followed by bioptic sampling to further substantiate a diagnosis (Fig. 2). Both imaging and bioptic information are then combined to make a clinical decision on a preferred therapeutic option, many of which come with a serious cost to the healthcare system. An ideal scenario for a cancer patient work-up would include a timely and accurate diagnosis, and a full understanding of the cancerous phenotype while performing a minimal number of tests, which, in turn, should be minimally invasive, and ultimately, lead to the most appropriate choice of a therapy depending on the stage and biological features of the disease (also called "precision medicine"). Ideally, the use of confirmatory bioptic procedures should be limited since they are costly, time consuming, and invasive. However, biopsies are required to determine the genetic cause of a disease and, thus, help adjust subsequent treatment regimens. Should biopsies be avoided in the future, then imaging methods must be as good as deducing the phenotype of the cancerous disease so as to planning the most efficient, and personalized treatment. Here, molecular or hybrid imaging techniques may be better placed as noninvasive tools than other techniques.
Furthermore, the value of image parameters (also called "biomarkers") is often sub-optimal due to physical and methodological limitations of the diagnostic methods employed: anatomical imaging, for example, may not detect or characterize a lesion because it is not manifested as a morphological alteration even though cancerous tissue is present. Molecular imaging, on the other hand, may miss even hypermetabolic lesions because they are too small to be detected in view of the partial volume effects (PVE) arising from the limited detector resolution [9], or because the lesion resides in an area of the body that is affected by motion (e.g., respiration), which leads to a smearing of the tracer uptake and a resulting contrast that is insufficient to delineate the lesion. Additionally, high uptake in adjacent tissues can diminish contrast and, therefore, sensitivity for lesion detection. The strengths and weaknesses of each imaging modality, with regard to depicting anato-metabolic details or plain lesion detection, will play into the adoption of these modalities as standards of care along the cancer management pathway (Fig. 2). Supplementary Figure 1 further details the use of all seven key imaging modalities presented here for diagnosis, staging, restaging, and follow-up.
Fig. 2 Schematics of a generic management of cancer patients: frequently, a conventional imaging exam (X-ray, CT, and alike) is performed before an additional, combined imaging exam (e.g., PET/CT) is ordered. Imaging may be followed by a biopsy procedure or genomic analysis to further differentiate the disease. A report is created and the oncologist decides on the choice of an appropriate therapy. It is evident that imaging is the gatekeeper to important clinical decision making and therapy delivery. This cycle can be re-iterated multiple times during patient management and follow-up. The actual implementation of this patient management chart will be affected by the availability of imaging modalities, reimbursement guidelines and other locally variant healthcare regulations.
However, imaging technologies have improved greatly over the years. This is through the use of improved detector elements or the adoption of model-based image reconstruction methods for increased lesion-to-background contrast in the images of the tracer distribution. In the early years of imaging innovation, much effort went into the design and optimization of techniques that increased the sensitivity of an imaging system, while only recently has attention shifted to optimizing imaging technologies for higher specificity as well, so as to provide means to non-invasively assess the phenotype of tumours that present in most cases as heterogeneous masses with intra- and inter-lesion variations in molecular and signaling pathways [10]. Thus, the role of imaging has remained ever so important, but the demands on the imaging technologies have become more complex. The objective of this perspective is to highlight key developments of imaging methods used frequently in cancer patient management (Fig. 1) and to carve out potential areas of improvement that are foreseen in the near future.
Positron Emission Tomography (PET) imaging and instrumentation trends
PET is a non-invasive imaging technique that provides visual and quantitative information on molecular pathways. Imaging is performed post injection of a radiotracer, which, in the case of PET, is a biomolecule labelled with a neutron-deficient radioisotope [11]. During the subsequent positron decay (β+) a proton is converted to a neutron, thereby releasing a positron. The β+ travels a short distance, termed the positron range, before it annihilates with an electron from the surrounding tissue so as to form two annihilation photons of 511 keV each that travel in nearly opposite directions. The spatio-temporal distribution as well as the absolute concentration of the tracer can be determined through the detection and reconstruction of annihilation events (Fig. 3). Today, PET imaging has been partnered with CT and MR imaging, namely as PET/CT and PET/MR hybrid imaging systems, to provide more relevant information in routine clinical practice [12].
Status quo
With more than 6000 PET/CT systems and approximately 250 PET/MRI systems operational worldwide, the key application of PET imaging is for oncological indications [13]. Here, [18F]FDG, an [18F]-labelled glucose analogue, is the tracer-of-choice to assess the increased glucose turnover in cancerous tissues compared to normal tissue (Warburg effect). The key challenges when diagnosing cancer by means of PET are related to cancer biology and the actual ability to visually circumscribe the cancerous lesion using reconstructed PET images. Cancer biology is addressed by the a priori choice of a suitable tracer molecule to probe the existence of a suspected disease and to further phenotype it. The availability of molecular probes and the choice of the most appropriate tracer will be the focus of a companion contribution to this series.
Image resolution, which drives lesion detection, is determined by the size of the imaging detector elements, which is on the order of 3 mm in clinical PET systems [14]. The resulting imaging resolution is somewhat lower due to detrimental effects from positron range effects, which vary depending on the energy of the positron emitted, detector penetration and depth-of-interaction (DOI). It can also be influenced by image reconstruction and respiratory physiological movement during acquisition. Despite the final image resolution of PET being lower than that of MR or CT, lesions with a diameter less than the theoretical image resolution can still be detected if their contrast is sufficiently high; this is frequently seen in applications of [124I]iodide and [68Ga]-somatostatin and -PSMA ligands due to the combination of high binding affinity and low background uptake [15]. In addition to the detrimental effects of limited spatial resolution (image blurring, partial volume effects), attenuation and scatter of the annihilation photons cause visual distortion, bias and lower contrast as well as limited count-rates, all of which contribute to increased image noise and reduced image contrast (Suppl Figure 2) [16][17][18][19][20][21][22][23].
The standard PET detector is composed of the scintillation material that stops/detects the annihilation photons and an electronic device, a photomultiplier tube (PMT), coupled to the end of the detector material, which, in turn, transforms the detected scintillation photons into an electrical signal; this typically involves signal amplification and processing. The ideal PET scintillator material has a high stopping power (sufficient to attenuate the 511 keV annihilation photons), a high scintillation light output (to facilitate accurate detection of the annihilation photons), a fast decay time (to minimize dead time), a high energy resolution (to better discriminate against scattered photons), and low cost (Suppl Table 1) [14].
Recently, digital PMTs, better known as silicon PMTs (SiPMs), have been introduced as a replacement for analogue PMTs. SiPMs function as an ON/OFF switch: if scintillation light hits the SiPM, an electrical pulse with sufficient amplitude is generated. With SiPMs, the limitations of light absorption and conversion as well as of spatial resolution are drastically reduced when compared to an analogue PMT. Furthermore, their small and compact form factor and their magnetic-susceptibility tolerance make them ideal devices for use in PET/MR systems compared to their PMT counterparts, which are not MR-compliant.
The introduction of LSO- and LYSO-based detectors with their characteristic fast scintillation light decay time (Suppl Table 1) has enabled a form of PET image acquisition and reconstruction known as time-of-flight (ToF) imaging [24]. With ToF imaging, information about the minute difference in arrival time of the annihilation photons at opposing detectors is captured. This knowledge is subsequently used during image reconstruction to more precisely inform on where the annihilation event occurred within the PET field-of-view (FOV) as compared to conventional PET imaging. The knowledge about the arrival time has been shown to improve the resultant signal-to-noise ratio (SNR) of reconstructed PET images [14]. For example, a 300 ps timing resolution improves the SNR of ToF images by about 30% when compared to non-ToF images. Furthermore, larger patients benefit more from ToF imaging than smaller subjects [25].
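A commonly used rule of thumb relates the ToF benefit to the ratio of the object size to the localization uncertainty along the line of response. The sketch below evaluates this rule for a few timing resolutions; the object diameter and timing values are illustrative assumptions only, and the gains actually achieved depend on the scanner, the reconstruction and how the comparison is defined.

```python
# Rule-of-thumb ToF benefit (illustrative assumptions, not vendor specifications):
# localization uncertainty dx = c * dt / 2, effective sensitivity gain ~ D / dx,
# and SNR gain ~ sqrt(sensitivity gain) for an object of diameter D.
import math

C_CM_PER_S = 3.0e10  # speed of light in cm/s

def tof_snr_gain(object_diameter_cm, timing_resolution_ps):
    dx = C_CM_PER_S * timing_resolution_ps * 1e-12 / 2.0  # cm
    return math.sqrt(object_diameter_cm / dx)

for dt_ps in (500, 300, 100, 10):
    gain = tof_snr_gain(40.0, dt_ps)  # assumed 40 cm object diameter
    print(f"{dt_ps:4d} ps timing resolution -> SNR gain ~ {gain:.1f}x over non-ToF")
```

Under the same approximation, the SNR advantage of one timing resolution over another scales with the square root of their ratio, and larger objects, with longer lines of response, benefit more, consistent with the observation above.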
Following the detection of the annihilation photons, corrections for self-absorption and scatter must be applied prior to image reconstruction. In PET/CT, the CT has been utilized for the purpose of CT-based attenuation correction (CT-AC) and scatter correction [26]. Studies have demonstrated a diagnostic benefit when performing the CT portion of the PET/CT at a radiologically-equivalent quality [27]. Some users advocate, with promising results, an adapted, mixed-phase contrast administration that can be used for both diagnostic [28] and CT-AC purposes [29].
A key challenge to high-quality and quantitative PET is physiological motion, including respiratory and cardiac motion, that may cause noticeable mismatches between the PET and CT images and blurring of the PET images. Conspicuity of tracer uptake in lesions that are located in regions affected by physiological motion can be recovered through the use of gating or more complex motion compensation techniques. Gating describes the partitioning of a standard list-mode stream of emission data acquired during a PET scan by means of external or data-driven trigger schemes [30]. Several techniques have been developed to address these distortions, such as 4D PET/CT [31] and quiescent phase/amplitude gating [32], with various levels of success and limited adoption into routine clinical work given the complexity of the workflows (Fig. 4). Motion compensation benefits from the hardware combination of PET and MRI, since a variety of MR sequences can be exploited to estimate motion vector fields without the requirement for ancillary markers or increased radiation burden [33,34].
Fig. 3 Positron Emission Tomography (PET) imaging: a neutron-deficient radioisotope (positron emitter) is used to label a biomolecule of choice (here: glucose) as the tracer, which is injected into the patient. The resulting annihilation radiation is detected in a PET system and the data are sorted into sinograms prior to image reconstruction. In case post-acquisition scatter and attenuation correction is performed, tracer uptake in the PET images can be quantified. Materials courtesy of David W Townsend, Singapore
Image reconstruction in PET has evolved from the original 2D analytic techniques, such as filtered back projection (FBP), to the more recent 3D iterative reconstruction (IR) techniques, such as ordered subset expectation maximization (OSEM) (Fig. 5). Iterative reconstruction techniques have the added advantage of more accurately modelling the physics of the imaging process compared to analytical methods, but they are also computationally more expensive and require a relatively longer time to reach convergence (i.e. the solution). Additionally, when they reach convergence, the resultant images are characterized by high noise. In an effort to overcome this challenge, recent advances in this area have introduced regularized reconstruction approaches, which have the ability to minimize the noise while reaching convergence.
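To make the iterative update concrete, the following toy sketch implements the basic maximum-likelihood expectation maximization (MLEM) update on an invented one-dimensional problem; OSEM applies the same update to subsets of the projection data to accelerate convergence. The system matrix, count levels and iteration number are arbitrary illustrations, not a clinical implementation.

```python
# Toy MLEM sketch on an invented 1D problem (not a clinical reconstruction).
# Update rule: x <- (x / sensitivity) * A^T ( y / (A x) )
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((30, 10))            # hypothetical system matrix: 30 bins, 10 voxels
x_true = 100.0 * rng.random(10)     # "true" activity distribution
y = rng.poisson(A @ x_true)         # noisy projection data (Poisson statistics)

x = np.ones(10)                     # uniform initial image
sensitivity = A.sum(axis=0)         # back projection of a uniform sinogram
for _ in range(50):                 # OSEM would loop over subsets of the rows of A
    ratio = y / np.maximum(A @ x, 1e-12)
    x = x / sensitivity * (A.T @ ratio)

print("reconstructed:", np.round(x, 1))
print("ground truth :", np.round(x_true, 1))
```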
Fig. 4 [18F]-FDG PET images of a 60-y/o male (362 MBq, BMI = 28) with colorectal cancer. Images show lesions in the liver and right lower lung that are affected by motion-induced blurring. Left: without motion compensation; middle: with amplitude motion compensation that utilizes 35% of the acquired data, resulting in noisy images but reduced motion artifact; right: with amplitude motion compensation that utilizes all the acquired data, resulting in lower image noise and motion artifact
Lesions with comparatively low contrast to the surrounding tissue can be hard to detect, particularly in noisy imaging conditions originating, for example, from very short scan times, very low amounts of injected tracer activity or from scanning obese patients, which increases the scatter fraction. Thanks to the use of advanced scanner designs that allow for larger coverage of axial FOVs, new PET/CT systems permit shorter acquisition times, lower administered radiotracer amounts, and easy, repeated imaging of any region of the body. These repeated whole-body PET scans can then be used to derive radiotracer uptake rate constants (Ki-values) and, thus, estimate parametric images in addition to plain standardized uptake value (SUV) images (Fig. 6).
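One standard way to turn such repeated sweeps into Ki estimates is the Patlak graphical method, in which the late part of the plot of Ct/Cp against the normalized integral of the plasma input function becomes linear with slope Ki. The sketch below runs this fit on synthetic curves; the input function, frame times and rate constants are invented solely to illustrate the arithmetic.

```python
# Patlak analysis sketch on synthetic data (all values are invented for illustration).
# Late-time behaviour: Ct(t)/Cp(t) = Ki * (integral of Cp)/Cp(t) + V0
import numpy as np

t = np.linspace(1.0, 60.0, 60)                 # frame mid-times in minutes
Cp = 10.0 * np.exp(-0.1 * t) + 1.0             # invented plasma input function
int_Cp = np.cumsum(Cp) * (t[1] - t[0])         # running integral of Cp

Ki_true, V0 = 0.05, 0.3                        # invented uptake rate and distribution volume
Ct = Ki_true * int_Cp + V0 * Cp                # tissue time-activity curve

x = int_Cp / Cp                                # Patlak abscissa ("stretched time")
y = Ct / Cp                                    # Patlak ordinate
late = t > 20.0                                # fit only the linear, late portion
Ki_est, intercept = np.polyfit(x[late], y[late], 1)
print(f"estimated Ki = {Ki_est:.3f} per min (simulated value: {Ki_true})")
```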
In summary, PET imaging today comes in flavours of combined PET/CT and PET/MRI, with both design concepts being geared towards whole-body oncology imaging and less so full quantification. This visualization-driven approach is complemented with user software that generally limits data handling to 3D slicing of reconstructed PET images and basic quantification of static PET data.
Status go
PET detectors
Most state-of-the-art PET/CT systems use LSO or LYSO detector materials coupled to SiPMs, and unless other, faster scintillation-type detectors can be marketed, these will remain the materials of choice for the years to come. Technological progress is expected in SiPM technology, making these devices smaller and more robust. Furthermore, their small size should aid in the positional accuracy of the detected annihilation photons, thereby improving overall image resolution and further reducing partial volume effects.
Time-of-flight PET
Current timing resolution of PET systems is on the order of 250-500 ps. If the timing resolution improves by a factor of 10 (25-50 ps), there will be no immediate need for image reconstruction since the image can be generated directly as the data are acquired. Recently, a CERN-linked consortium launched an initiative pushing for a 10 ps timing resolution in PET, bringing together enthusiastic stakeholders to advance this particular technological challenge and widen the adoption of PET imaging [35]. To meet this challenge, detector materials that combine the upsides of LSO or LYSO with even faster timing resolution have to be found and manufactured at scale. Another benefit of a faster detection system is better count-rate performance with lower dead-time effects over extended ranges of radioactivity levels, as in situations where the administered radiopharmaceutical accumulates in high concentrations.
Attenuation and scatter correction
It appears unlikely that CT transmission scans will be avoided in PET/CT in the future, unless techniques such as maximum likelihood reconstruction of attenuation and activity (MLAA) [36,37], which can approximate both the attenuation map and the resultant image from the acquired data, or machine and deep learning-based AC methods for PET imaging applications are developed further. However, little data is available as to the robustness of these approaches and their performance in the presence of varying tracer distributions, morphological alterations or high-density implants.
Motion compensation
Data-driven gating (DDG) techniques have been introduced to address patient motion-induced image blur [38]. DDG benefits from using fast PET detectors that yield a high SNR to allow the extraction of the motion waveform from the background noise. It is anticipated that, with the elimination of external devices to record the motion cycle, motion compensation workflows will become much easier to use and will be adopted routinely in the PET clinic. In return, PET image quantification will be more accurate and patient management will be more precise [39].
Fig. 6 State-of-the-art PET/CT acquisitions allow for repeated sweeps of extended axial imaging ranges during the emission scan following the injection of the tracer. These data can be reconstructed in contiguous frames (see Frames 1-8 as an example) that are used to extract time-activity curves of tissues and organs. This information, in turn, can be used to derive parametric images (e.g., Ki and distribution volume, DV) beyond static images that yield SUV quantification only. It is assumed that parametric image information supports more accurate diagnosis in the future.
Image reconstruction
Assuming the 10 ps challenge will not be answered in the next few years, iterative reconstruction (IR) will prevail (Fig. 5b). IR methods will be improved with more advanced system models that describe various aspects of the hardware and the subjects (motion, tracer distribution, etc.) more accurately (Fig. 5c). For example, CT attenuation maps might be compromised by metal artefacts, truncation, and contrast media; or, in the case of PET/MR, attenuation maps may not be available at all since MR images do not represent tissue attenuation. In this regard, new reconstruction algorithms, such as MLAA, and artificial intelligence and machine learning approaches that use deep learning algorithms hold great promise in addressing these challenges [40][41][42]. The performance of these techniques is currently being evaluated under various clinical conditions, such as low count-density (shorter scan times, low activity) and varying radiopharmaceutical biodistributions.
New system design concepts
Traditionally, PET systems have been designed as cylindrical structures with detectors dotting the circumference of these structures and oriented towards the central axis of the cylinder. More recently, systems with longer axial lengths have been introduced. By placing detector modules abutting one another along the axial extent of the scanner, lengths of 20 cm, 25 cm and 26 cm have been achieved on commercial systems from GEHC and Siemens Healthineers, with one manufacturer (United Imaging) achieving 194 cm [43]. This increase in axial extent has two key advantages. First, it increases sensitivity, which can be translated into higher image quality or traded for a reduced injected activity, and, second, it facilitates increased patient throughput by reducing acquisition time. A total-body PET/CT scan can then be performed in one minute or less, and with a significantly reduced amount of administered radiotracer, which consequently reduces patient radiation exposure, thereby potentially transforming the use of PET imaging from a diagnostic tool to a screening tool. Other advantages of total-body PET with longer axial extents include the ability to perform dynamic imaging of multiple organs simultaneously, which enables the study of the bio-distribution and pharmacokinetics of new radiopharmaceuticals and disease states with images representing true biological parameters, such as tracer uptake rate constants (Ki), blood flow, or receptor density, rather than just semi-quantitative values, such as standardized uptake values (SUV) (Fig. 6).
PET/MR systems will continue to evolve, albeit slowly when compared to PET/CT, with improvements in attenuation correction, truncation, and image uniformity in organ-specific or whole-body applications. Their adoption into routine clinical care will primarily depend on the identification of clinical applications that leverage the advantages of MR (compared to CT) while complementing the diagnostic information gained from PET [44,45].
In summary, PET imaging systems will come primarily in combinations with CT for whole-body oncology applications, unless key drivers for the adoption of fully-integrated PET/MRI are found, in which case relatively more PET/MRI systems could be installed. PET detector elements can now be manufactured with very fast scintillation materials and produced in modules fit for use in PET/CT and PET/MRI combinations. These modules, when designed with SiPMs, should help improve spatial resolution and reduce partial volume effects. The use of ToF and advanced image reconstruction algorithms will create flexibility in imaging protocol designs and help push image quality even in low-count situations, which makes early time-point measurements shortly after tracer injection, as well as parametric imaging, more viable. Motion compensation will likely become key for the advancement of the methodology, which, if resolved, will open PET to a wider range of applications and help push it into PET-guided therapy planning. In short, the authors hypothesize a primary push for sensitivity and full quantification, further supported by fast and higher-resolution measurements (Table 1).
SPECT imaging and instrumentation trends
As with PET, SPECT is a non-invasive molecular imaging method that generates tomographic images of functional and metabolic pathways in the body. Unlike in PET, SPECT is based on the use of radiotracers that are labelled with radioisotopes that decay via the emission of single or multiple photons of discrete energies. Given the lack of annihilation effects, SPECT relies on the use of physical collimators to generate projection data for localization of the SPECT tracer. SPECT has only recently been made quantitative through the adoption of correction and image reconstruction approaches [46].
Status quo
The availability of SPECT and its ability to image a range of metabolic signals has made it commonplace in oncology imaging for both diagnosis and disease monitoring. From the identification of bone metastases [47] to the assessment of malignant thyroid conditions [48], SPECT is a common feature of oncology imaging. More recently, with the growth of theranostics [49], radionuclide therapies, and image-based dosimetry associated with therapies, there has been renewed interest in SPECT imaging technology.
SPECT systems are typically based on a pair of large (540 mm × 400 mm) energy-selective scintillation detector arrays, with positional information derived from the use of collimators constructed from lead or tungsten and the Anger principle [4]. The technology is well established and has changed little since its inception in 1956. Nevertheless, driven by the need for more compact and innovative designs, initially for cardiac SPECT [50] but more recently for breast [51] and general oncology imaging [52], semiconductor detectors have been introduced as an alternative to traditional scintillation detector systems. The most common semiconductor material used for SPECT is Cadmium Zinc Telluride (CZT) [53]. In addition to its compactness and its direct translation of gamma photons to signal, CZT detectors offer superior energy resolution compared to traditional gamma cameras [54]. This translates into improved scatter rejection, in addition to sensitivity and contrast gains [55]. CZT is also incorporated into SPECT systems as individual pixelated detectors, which allows the system to work at higher count rates than traditional gamma camera designs [53].
The introduction and development of dual-modality SPECT/CT systems has been another step change in SPECT [56]. Transmission imaging in SPECT was originally introduced to correct for photon attenuation in cardiac studies, but was soon superseded by much quicker SPECT/CT systems. In addition to attenuation correction from SPECT/CT systems, the advantages of having localization and other complementary CT information superimposed with SPECT were quickly realized [56][57][58]. Co-localisation has been particularly beneficial for oncological applications of SPECT, where, for example, CT can help determine whether bone lesions are cancerous or degenerative in nature, and also with radiopharmaceuticals such as mIBG, where the anatomical information available in the SPECT image can be limited (Fig. 7).
In parallel to the growth of CT-based attenuation correction [59], iterative reconstruction algorithms have replaced traditional FBP reconstruction [60]. One advantage of iterative reconstruction is the ability to integrate corrections into the image reconstruction, which provides superior solutions compared to corrections applied before or after reconstruction. This approach of integrating corrections into the reconstruction process started with attenuation and scatter correction, but has been developed further by the introduction of resolution modelling into the reconstruction system model [61]. The noise-handling nature of resolution modelling is revolutionizing SPECT acquisition protocols because it can provide a significant (~60%) reduction in acquisition time for the same level of image quality provided by traditional reconstruction methods [62]. This is a great benefit for some oncology-based SPECT studies where acquisition times can be 30 min or longer. Alternatively, resolution modelling can be used to improve image quality for the same acquisition time as is used traditionally.
Table 1 Progress indicators of methodological and commercial efforts towards improving key imaging technologies and widening their applications in clinical cancer care. System costs relate to the costs of manufacturing a next-generation imaging system. Reproducibility relates to efforts made towards increasing the reported accuracy of imaging results from the same system generation and across different platforms. Absolute quantification stands for means to extract absolute numeric values from image-based measurements. Radiation exposure describes the exposure of subjects to ionizing radiation (or other means of energy deposition, such as in the case of MRI). Examination time describes the duration that subjects need to remain motionless in an imaging platform for the acquisition to take place. Spatial resolution is the measured resolution of the images. System sensitivity relates to both the volumetric sensitivity of an imaging system and diagnostic sensitivity through the use of alternative tracer and contrast applications. The direction of the progress indicators, as relative metrics, corresponds to the consensus of the authors regarding the progress being made in any of these efforts within the next 5 years or so.
In summary, SPECT, with its wide range of available radiopharmaceuticals, low cost and ready availability, provides oncologists with images of metabolic disease for diagnosis and disease monitoring. Recently, the technology has also seen growth driven by theranostic applications. The introduction of dual-modality SPECT/CT has helped overcome some of the challenges of SPECT-only image interpretation, and developments in detector technology and image reconstruction have helped improve image quality. Nonetheless, imaging times are still long and fully-quantitative SPECT imaging is challenging, unlike in PET where it is commonplace.
Status go
Hardware
One of the major drawbacks of current SPECT technology is its limited sensitivity, which results in much longer examination times compared to PET. As such, scan times of 40 min to 70 min are typical if multiple SPECT acquisitions are required to image the area of interest. Therefore, for SPECT to prosper, more rapid scan technology is required. A promising method that has been adopted in cardiac SPECT, but which has not yet translated to oncology SPECT, is the use of very high-sensitivity collimators [54]. One of the key limitations of such collimators is the compromise in spatial resolution. However, if septal penetration is adequately controlled, there is the opportunity to use resolution modelling techniques to recover much of the lost resolution.
A second method of increasing the sensitivity of SPECT systems is to improve the detector geometry by considering alternative designs to the traditional arrangement of two rotating planar detectors. A return to 3, 4, or more rotating detectors could bring significant gains, but this solution has been problematic because of the degraded spatial resolution seen with traditional parallel-hole collimators at the distances necessary to accommodate the detectors. However, the growing availability of resolution modelling within the image reconstruction will allow the re-adoption of this design concept for SPECT. Complex collimator designs, such as multiple pin-hole [52] and slit-slat collimators [63], could also bring sensitivity gains while maintaining acceptable spatial resolution. Such efforts could be matched with improvements to intrinsic spatial resolution and semiconductor readout [64]. While complex collimator designs are an area of interest, whether this technology can be extended to larger fields-of-view rather than small field-of-view applications, such as in the brain, remains to be determined.
Detectors
Semiconductor detector technology based on, for example, CZT is also moving into oncological SPECT. Initial models have been based on the traditional dual-detector design, with the improved energy resolution offering some clinical advantages [65]. More recently, ring detector systems have been produced, which again improve the geometric and overall sensitivity of SPECT systems while maintaining spatial resolution by using dynamic detector motions [66]. There is no doubt that this ring detector technology will bring benefit to organ-specific imaging, such as the heart, where dynamic tomographic imaging has already been shown to be beneficial, and the brain, where dynamic imaging and motion correction could offer significant advantages. The ability to easily perform whole-body tomography for oncology applications, such as bone SPECT and mIBG SPECT, will benefit staging and disease progression applications in much the same way as seen in PET. Large-volume SPECT will also bring benefits for dosimetry applications, by ensuring that dosimetry of tumours and organs at risk can be performed in a timely manner. However, a disadvantage of current semiconductor detectors is the usable energy range, which is typically below 200 keV. While acceptable for imaging with Technetium-99m and Iodine-123 radiopharmaceuticals, this threshold is not compatible with common theranostic radionuclides. For example, the threshold sits below the preferred 208 keV imaging window of Lutetium-177 [67], and well below the upper 364 keV energy window of Iodine-131. To allow faster SPECT acquisitions using a ring detector, semiconductors optimized for these higher energies may be required, or possibly scintillation crystals with traditional or solid-state readout [64,68].
Hybrid SPECT
Another area where we may see hardware changes in the future is in alternative hybrid SPECT imaging. While SPECT/CT already exists, there are also developments in SPECT/MR [69]. In addition to MR providing much better soft tissue contrast for SPECT localization than is available with CT, as with PET/MR there are also opportunities to use MR information to manage patient motion [70], improve image reconstruction [71], and delineate areas of interest, which in turn could be used to perform partial volume correction of the emission data [72].
Data processing
The development of contemporary correction and image reconstruction algorithms has brought with it the ability to use SPECT as a quantitative imaging modality in much the same way as PET [73]. Measurements of in-vivo activity concentrations or standardized uptake values (SUV) are now possible and may become the norm in future SPECT imaging (Fig. 8). Indeed, as with PET, by incorporating segmentation algorithms, measurements will be developed further to help derive metabolic tumour volumes and total tumour burden to better characterize the overall disease status [74]. The basis of this technology already exists [75], and applications of the technology are being investigated for both cross-sectional diagnosis, prognosis and staging, as well as longitudinal treatment response and disease progression indications [46].
Reconstruction and corrections
Although CT-AC may become the norm, with very few SPECT-only studies performed, there will be a drive to reduce CT radiation exposure through improved iterative CT reconstruction and artificial intelligence algorithms [76][77][78]. Indeed, similar techniques will also be applied in SPECT, where available count statistics in oncology imaging are often limited and there is an inability to extend already long scan times. Improved iterative reconstruction models using penalized likelihood [79,80] and improved resolution modelling will help, particularly if they overcome overshoot 'Gibbs' artefacts [81]. Additionally, developments in scatter correction algorithms will help move the technology from subtractive techniques, which are inexact and increase image noise [82], to those incorporated into image reconstruction [60], which do not increase image noise. Artificial intelligence algorithms are already being developed that can reduce the noise seen in low count statistic studies, and it is envisaged that in the next 5-10 years such algorithms will make low photon flux imaging less problematic [83].
An issue that can arise from the necessarily long acquisition times of SPECT is patient motion. In SPECT, spatial resolution is typically between 8 mm and 12 mm, so small levels of patient motion are not problematic. However, larger levels of motion will degrade image quality and frequently require the patient to be rescanned. To overcome these issues, motion detection and improved motion correction software is being developed, which will help both identify motion and correct data where possible, while also alerting the technologist when the data are unusable [84]. It is also envisaged that there will be significant development in future years in the routine application of Monte Carlo based image reconstruction techniques [75]. Currently, such algorithms are very slow and mostly used in research labs, but there is huge potential in this form of reconstruction.
Theranostics
While SPECT already has an important role in theranostics, there has been a recent surge of new theranostic applications using Lutetium-177 labelled peptides in neuroendocrine tumours, and PSMA in prostate cancer [85][86][87][88][89] with many other therapeutic areas in development by commercial partners [90]. Key to making the best use of this technology is the need to understand the radiation dose given to various tumour targets and organs at risk. Quantification of radiopharmaceutical uptake together with accurate segmentation of tumours and organs of interest are central to the derivation of accumulated activity curves required for radiation dosimetry. Currently, performing this work is challenging, and often requires additional imaging, characterisation, and home-grown image processing algorithms [91]. Supported by tools to assist in the calibration and standardisation of quantitative SPECT systems, the practice of dosimetry in molecular radiotherapy will become widespread, supporting the vision of personalised precision medicine with theranostics.
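A core step in such image-based dosimetry is converting a handful of quantitative SPECT measurements into a time-integrated activity for each tumour or organ at risk. The sketch below integrates invented time-activity points with the trapezoidal rule and adds a mono-exponential tail beyond the last imaging time point; real workflows additionally require calibration, partial-volume handling and organ-level S-values (e.g., via the MIRD formalism).

```python
# Time-integrated activity from sparse post-therapy imaging (all values invented).
import numpy as np

t_h = np.array([4.0, 24.0, 72.0, 168.0])      # imaging time points after administration (h)
A_MBq = np.array([150.0, 120.0, 60.0, 20.0])  # measured activity in a lesion (MBq)

area = np.trapz(A_MBq, t_h)                   # trapezoidal integration over measured points
# Mono-exponential tail extrapolated from the last two measurements.
lam = np.log(A_MBq[-2] / A_MBq[-1]) / (t_h[-1] - t_h[-2])   # effective clearance constant (1/h)
tail = A_MBq[-1] / lam

print(f"time-integrated activity ~ {area + tail:.0f} MBq*h")
```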
Radiopharmaceuticals
A more detailed synopsis of radiopharmaceuticals will be given in an accompanying paper by Blower and colleagues. Nevertheless, with the penetration of SPECT in the market and the wide availability of SPECT radionuclides, there is the possibility of moving PET radiopharmaceuticals to SPECT, as seen with Technetium-99m-labelled TOC, PSMA and glucose analogues [86,92,93]. Clearly, this is an opportunity for low- and middle-income nations where PET and cyclotron technology may not be readily available, while for high-income countries it may depend on whether the sensitivity and spatial resolution limitations of current SPECT technology can be overcome.
In summary, SPECT hardware designs will evolve rapidly, which, together with developments in collimation and image reconstruction, will improve the sensitivity of SPECT systems and reduce examination time (Table 1).
There is also the potential for improvements in sensitivity and spatial resolution, which will result in SPECT imaging with better quality than we have today. While technological changes will have an impact, it is with the modality becoming truly quantitative that the use of SPECT will grow. The current growth trajectory of theranostics and the ability to quantify SPECT data in terms of SUV will lead to a greater demand for SPECT for both radiation dose assessment and the ability to monitor treatment response. Beyond theranostics, quantitative SUV SPECT will help in the interpretation of SPECT data and will lead to more precise diagnosis using the technology.
Computed Tomography (CT) imaging and instrumentation trends
Computed tomography (CT) is an anatomical imaging method. Here, a narrow beam of X-rays is directed to the subject and quickly rotated around the body, thereby producing transmission signals. These signals are processed and reconstructed to form cross-sectional images of the subject under investigation. CT is a tomographic imaging method that generates contiguous axial images, which can be digitally stacked to form 3-dimensional, high-resolution images of the investigated area. Oncologic diseases can be depicted through alterations of standard anatomy or alterations of dynamic processes, such as restricted intravenous contrast enhancement.
Status quo
X-ray transmission
Diagnostic CT imaging is based on the measurement of X-ray absorption from a large number of different view angles across the patient. State-of-the-art CT systems offer a sub-second rotation time and are capable of acquiring multiple slices simultaneously, thereby covering a longitudinal range of 2 cm to 16 cm during a single rotation. The most dominant scan mode in CT is the spiral scan mode, which is a continuous data acquisition combining a continuous gantry rotation with a simultaneous translation of the patient through the gantry [94]. The trajectory of the X-ray source relative to the patient is a spiral, or helical trajectory. Spiral acquisitions yield the lowest possible scan times at the highest possible image quality, whereby the latter is due to the high symmetry of the spiral trajectory. In general, scan times range from far below 1 s to several seconds. Nearly all scans can be conducted within a single breathhold, thereby avoiding respiratory motion artefacts.
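The relationship between collimation, pitch and scan time in such spiral acquisitions can be illustrated with a small calculation; the protocol values used below (scan range, detector configuration, rotation time) are assumptions chosen only to show the arithmetic.

```python
# Spiral CT timing sketch (protocol values are illustrative assumptions).
def spiral_scan_time_s(scan_range_mm, pitch, n_rows, row_width_mm, rotation_time_s):
    collimation_mm = n_rows * row_width_mm        # total beam collimation
    table_feed_mm = pitch * collimation_mm        # table travel per gantry rotation
    rotations = scan_range_mm / table_feed_mm
    return rotations * rotation_time_s

# Assumed example: 700 mm chest-abdomen-pelvis range, 64 x 0.6 mm collimation,
# pitch 1.0, 0.5 s rotation time.
print(f"scan time ~ {spiral_scan_time_s(700, 1.0, 64, 0.6, 0.5):.1f} s")
```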
Quantification
CT systems offer an isotropic spatial resolution of about 0.5 mm and, therefore, are best suited for anatomical imaging. The physical property that is displayed in CT is the distribution of the linear attenuation coefficients. For convenience, the values are linearly transformed to integer-valued Hounsfield units (HU) in a way such that air corresponds to −1000 HU, water to 0 HU, and soft tissue and fat to about 50 HU and −70 HU, respectively. However, tissue differentiation by means of HU alone remains challenging, particularly for a range of soft tissues; while soft tissues exhibit similar X-ray attenuation properties, their differences are more clearly noticeable on MR images. Therefore, most CT scans in oncological imaging require the administration of an iodinated contrast agent to achieve sufficient image contrast between the vessels, perfused organs and tumours relative to the surrounding soft tissue and fat. The contrast injection is typically synchronized with the acquisition of the CT scan to ensure sufficient contrast in the organ of interest during the short time of scanning.
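The underlying rescaling is simply HU = 1000 × (μ − μ_water)/μ_water, as sketched below; the attenuation coefficients used are rough, illustrative values for a typical CT effective energy rather than measured data.

```python
# Hounsfield unit rescaling: HU = 1000 * (mu - mu_water) / mu_water
MU_WATER = 0.19  # 1/cm, rough illustrative value at a typical CT effective energy

def to_hu(mu_per_cm):
    return 1000.0 * (mu_per_cm - MU_WATER) / MU_WATER

for name, mu in [("air", 0.000), ("fat", 0.177), ("water", 0.190), ("soft tissue", 0.200)]:
    print(f"{name:12s} mu = {mu:.3f}/cm -> {to_hu(mu):7.0f} HU")
```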
Functional CT
Functional properties can be assessed with CT by performing dynamic perfusion scans. During perfusion protocols, a 4D acquisition is performed by repeatedly imaging the same body region in time lapses of 3 s to 5 s for about 30 s post administration of the contrast agent. Using the so-called "time attenuation curve", important hemodynamic parameters can be derived, such as the blood flow, the blood volume, or the permeability surface area product. Such functional parameters have been shown to correlate with histopathology and therapy response [95][96][97].
Ionizing radiation
Despite the lack of solid arguments for a linear no-threshold (LNT) model, fears associated with radiation-induced risks have led to a wide number of efforts to reduce the dose in CT. These include the introduction of tube current modulation, automatic exposure control, automatic tube voltage selection, more powerful X-ray tubes that allow for thicker pre-filtration, highly-integrated detectors with less internal noise, multidimensional adaptive raw data filtering, as well as iterative image reconstruction algorithms [98]. These techniques make more efficient use of the X-ray dose and help reduce the X-ray dose while preserving image quality. Alternatively, such methods can also be used to improve the image quality without increasing the patient dose. The latter may be of importance for oncological imaging, where frequently subtle contrast changes in small lesions need to be detected, and where patients may undergo a large number of follow-up scans. The need for dose-conscious CT imaging is typically amplified when scanning young patients.
Detectors
Suppl. Table 2 lists important properties of state-of-the-art CT systems. For example, maximum tube power at low tube voltages helps lower patient exposure or boost image quality of contrast-enhanced scans, since the iodine in contrast agents exhibits a k-edge at low energies. Most CT vendors provide the possibility to reconstruct the images at grids larger than 512 by 512 pixels, so as to use the full spatial resolution capabilities for large field-of-view reconstructions, which helps improve sensitivity in lung imaging or imaging bone metastases.
Dual-energy CT (DECT)
DECT exploits differences in the energy dependency of the attenuation coefficients and thus allows the selective display of materials that might appear at the same CT value in single-energy CT. DECT can be realized with a dual-source CT, a CT with fast tube voltage switching, a CT equipped with a sandwich or dual-layer detector, or a CT with different prefiltration on different parts of the detector array [99]. Approaches that combine two separate CT scans are also available, but suffer from motion in-between the scans. Applications of DECT include the selective display of iodine and bone, the characterization of kidney stones, the reduction of artefacts and the quantification of materials (Fig. 9). Advantages of DECT for oncological imaging have been reported for tumour detection and lesion characterization [100] and for surrogates of perfusion measurements [101,102]. Well-designed scan protocols help distribute the X-ray dose across the two X-ray spectra, such that the total dose remains similar to that of a single-energy CT [103,104].
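At its core, image-domain material decomposition treats the measured enhancement at the two spectra as a linear combination of basis materials and solves a small linear system per voxel. The sketch below does this for an iodine/water pair; the basis enhancement values (HU per mg/ml of iodine at the low and high spectrum) are invented for illustration and would in practice be calibrated on the scanner.

```python
# Image-domain two-material decomposition sketch (basis values are illustrative).
import numpy as np

# Assumed enhancement per 1 mg/ml iodine at the low- and high-energy spectrum (HU).
IODINE_HU_PER_MGML = np.array([40.0, 20.0])

def iodine_mg_per_ml(hu_low, hu_high):
    """Solve measured HU = c_iodine * iodine_basis + water-like offset."""
    A = np.column_stack([IODINE_HU_PER_MGML, np.ones(2)])
    coeffs, *_ = np.linalg.lstsq(A, np.array([hu_low, hu_high]), rcond=None)
    return coeffs[0]

# A voxel measuring 120 HU at the low and 60 HU at the high spectrum.
print(f"estimated iodine: {iodine_mg_per_ml(120.0, 60.0):.1f} mg/ml")
```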
Metal artifact correction (MAR)
Metal artefacts are the most dominating type of artefacts in CT. In many cases, metal artefacts prevent accurate diagnosis of lesions that are close to or in-between metal objects. Moreover, the artefacts degrade the attenuation correction needed for PET imaging [105]. Metal artefacts are mainly caused by the strong beam-hardening of dense objects, related to the strong attenuation of metal. Due to the corruption of the projection data behind the metal, the only way to reduce such artefacts is to replace the raw data in the vicinity of the metal implant [106]. Standard metal artifact reduction (MAR) algorithms are based on inpainting techniques, such as FSNMAR [107], IMAR [108], or OMAR [109].
Fig. 10 Conventional energy-integrating (EI) detectors convert the X-rays into an electric signal indirectly (left). Future photon-counting (PC) detectors convert each X-ray photon into an electric peak directly (right). The peak of the PC detector is short enough in time to permit counting single photons. The peak height gives information about their energy. To minimize pile-up, the PC detector pixels are much smaller than the EI pixels
In summary, CT systems comprise up to 100 detector rows and allow for routine acquisitions with spatial resolution on the order of 0.5 mm within scans that take less than a second to a few seconds, at dose values ranging from sub-mSv to 15 mSv. CT images are highly quantitative and reproducible and come with very good contrast resolution. The most dominant artefacts are addressed by dedicated correction algorithms. Dual-energy CT acquisitions are available in mid-range and premium systems.
Status go
Photon counting
Future CT systems may be equipped with novel X-ray detectors (Fig. 10). Currently-available detectors are indirect converters that convert X-ray photons into visible light, which is then converted into an electrical current. The next generation of CT systems may utilize photon counting detectors, which directly convert the X-ray photon into an electric signal. In contrast to the indirect converters, the bell-shaped electric signal that is generated in the photon counting detector pixel following the registration of an X-ray photon is very short in time, short enough to be able to count a photon before the next photon arrives. Moreover, the height of the peak can be measured. Since it is proportional to the photon energy, photon counting detectors will be energy-selective, thus inherently producing spectral information without the need to generate two different X-ray spectra (DECT) [110]. It is assumed that a future photon counting CT may distinguish four energy bins, thus yielding information similar to that from a conventional CT scan performed at four different tube voltages.
Apart from the inherent capability of performing spectral separation, photon counting detectors also promise to yield images with lower noise levels or to acquire images at lower patient exposure levels. Furthermore, iodine contrast will be increased with photon counting detectors. Conventional detectors give a high weight to high energy photons and a low weight to low energy photons, including those that are close to the k-edge of iodine (33 keV). In contrast, photon counting detectors weight low and high energy photons equally and, thus, yield improved iodine contrast.
Machine learning (ML)
ML has entered the field of CT imaging and will gain importance, not only for computer-aided diagnosis and big data evaluation, but also for CT image formation. For example, image reconstruction may benefit from deep learning approaches to help reduce patient exposure or improve image quality [78,111] (Fig. 11).
In summary, CT technology is evolving towards images with spectral information at significantly higher spatial resolution and greatly reduced noise. These developments will positively impact oncology imaging applications, such as assessment of the thorax or the characterization of bone metastases. Further, the characterization of low-contrast lesions will be improved. The added spectral information, which will be available on demand, may benefit from the development of new contrast agents. These advances will likely find their way into PET/CT systems to improve attenuation correction and increase soft tissue contrast even for low-dose CT scans (Table 1).
Magnetic Resonance Imaging (MRI) and instrumentation trends
MRI is founded on the principle of nuclear magnetic resonance (NMR), whereby spin-carrying nuclei under a strong static magnetic field (B 0 ) can be excited with a weak oscillating radiofrequency (RF) field (B 1 ) and absorb its energy [112][113][114]. The observation that the relaxation time constants of the NMR signals are different between normal and cancerous tissues preceded the advent of MRI [115]. In MRI, magnetic field gradients (G) are used to make the frequency and phase of the spin precession spatially dependent and, thus, produce an image of the signal emitting spins [116,117]. Compared to other imaging modalities, such as CT and PET, MRI has the intrinsic advantage of providing multifaceted and excellent image contrast, especially for soft tissues, and for not utilizing ionizing radiation. MRI can also be performed in any orientation without relying on postprocessing image reformatting. Conversely, MRI has an intrinsic limitation in signal-to-noise ratio (SNR), and because most MRI schemes rely on acquiring one line of k-space at a time, it has been relatively slow.
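The excitation described here occurs at the Larmor frequency, f = (γ/2π)·B0, where γ is the gyromagnetic ratio of the nucleus; the short sketch below evaluates this relation for hydrogen nuclei at a few illustrative field strengths.

```python
# Larmor relation for protons: resonance frequency f = (gamma / 2*pi) * B0
GAMMA_BAR_1H_MHZ_PER_T = 42.577  # gyromagnetic ratio of 1H divided by 2*pi

for b0_tesla in (0.2, 1.5, 3.0, 7.0):  # illustrative field strengths
    f_mhz = GAMMA_BAR_1H_MHZ_PER_T * b0_tesla
    print(f"B0 = {b0_tesla:3.1f} T -> 1H resonance at {f_mhz:6.1f} MHz")
```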
Status quo
Magnets and gradients
In magnet technology, the superconducting magnet, active shielding, and "zero helium boil-off" are a few significant milestones that have helped shift MRI from the low-field systems (0.1-0.2 T) of the early days to today's standard of high-field systems (1.5-3.0 T). A strong motivation for this development is the increased intrinsic SNR, which is approximately proportional to B0. Unfortunately, higher magnetic field strength also entails technical challenges, including stronger magnetic susceptibility effects and RF dielectric effects. On the other hand, the development of faster and more powerful gradients, including better eddy current management with actively shielded gradients and real-time compensation strategies, has largely been motivated by fast imaging, such as echo planar imaging, fully-balanced steady-state imaging, and fast spin echo imaging. The successful implementation of these fast imaging techniques has brought to fruition novel clinical applications, such as blood oxygen level dependent (BOLD) functional MRI and diffusion weighted imaging (DWI), which has found application in early detection of stroke and better characterization of tumours.
RF coils
Similar to magnet and gradient innovations, the RF technology for MRI has also been undergoing a steady stream of development. In the early days of MRI, a single "one-size-fits-all" RF coil was often used for both transmit and receive. The phased-array technology [118] is a major step forward in MRI as it provides simultaneously the large coverage of a volume coil and the superior SNR of a small surface coil. Phased-array coils have also enabled parallel imaging [119][120][121] and simultaneous multi-slice (SMS) imaging [122], which greatly speed up the MRI acquisition by combining the different spatial sensitivity profiles of the different coil elements with magnetic field gradients for spatial encoding.
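As a toy illustration of how coil sensitivity profiles complement gradient encoding, the sketch below unfolds a twofold-undersampled one-dimensional object from two coils in the spirit of SENSE. The object, coil profiles and folding model are invented simplifications; real reconstructions work on measured k-space data and calibrated sensitivity maps.

```python
# Toy SENSE-style unfolding: 1D object, two coils, acceleration factor R = 2.
# Object and coil sensitivity profiles are invented; no k-space model is used.
import numpy as np

n = 64
obj = np.zeros(n); obj[20:30] = 1.0; obj[40:44] = 0.5          # "true" 1D object
pos = np.linspace(0.0, 1.0, n)
sens = np.stack([np.exp(-3 * (pos - 0.2) ** 2),                # coil 1 profile
                 np.exp(-3 * (pos - 0.8) ** 2)])               # coil 2 profile

# With R = 2, pixel i of the reduced-FOV image is the sum of pixels i and i + n/2.
coil_images = sens * obj
aliased = coil_images[:, :n // 2] + coil_images[:, n // 2:]

recon = np.zeros(n)
for i in range(n // 2):
    S = sens[:, [i, i + n // 2]]                               # 2 x 2 unfolding matrix
    pair, *_ = np.linalg.lstsq(S, aliased[:, i], rcond=None)
    recon[i], recon[i + n // 2] = pair

print(f"max unfolding error: {np.abs(recon - obj).max():.2e}")
```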
Similarly, phased-array coils also enable parallel RF transmit [123], which has the potential for tailoring the excitation profiles (e.g., for spatially focused excitation or reduced dielectric artefacts). For these and other advantages, many present-day MRI scanners carry a wide variety of phased-array coils and are equipped with 128 or more simultaneous RF channels.
Oncology protocols
From its very beginning, clinical applications, such as in cancer, have always been the underlying driving force for the development of MRI. The versatility of MRI is reflected in many different types of images that can be generated with the different scan protocols, pulse sequences and image reconstruction algorithms. Among them, T1-weighted and T2-weighted images, in conjunction with Gadolinium-based contrast agents, have been the mainstay for most oncological MRI applications, especially for the detection, characterization, and staging of the tumours (Fig. 12). The development of the fast spin echo (alternatively known as turbo spin echo, or RARE) pulse sequence [124], which employs a train of rapidly refocused echoes after each excitation to fill multiple lines of k-space, greatly reduces the acquisition time for T1-weighted or T2-weighted images. Many magnetization preparation schemes have also been developed to further enhance the utility and versatility of a basic pulse sequence, such as inversion recovery for fluid attenuation (FLAIR) [125] or chemical shift selective saturation (CHESS) for fat suppression [126].
Multi-parametric MRI
In addition to stationary tissues, MRI can also be used for imaging moving blood or cerebral spinal fluid (CSF) in MR angiography (MRA) to detect and depict vascular abnormalities. For this application, several MRA methods have been developed and are in routine clinical use. Contrast-enhanced (CE) MRA relies on the injection of a Gd-based contrast agent to shorten the T1 of the blood, thus, creating enhanced signal for the blood relative to the background tissues. Time-of-flight (TOF) MRA, on the other hand, relies on the inflow enhancement of the blood signal and various saturation techniques to suppress the signal from the stationary background tissues. Although less robust and usually of lower spatial resolution, TOF-MRA does not require contrast agent injection.
Compared to other imaging modalities, a unique feature of MRI is that its signals have complex rather than scalar values. The velocity of the moving spins, such as from blood, can be encoded into the phase of the MR signals through the application of magnetic field gradients. Phase-contrast (PC) MRA is based on this principle and, compared to CE- or TOF-MRA, has the advantage of being able to quantitatively measure the blood flow [127]. PC-MRI is also the basis of MR Elastography [128], which can measure the tissue stiffness and has been successfully applied for staging liver fibrosis and differentiation between benign and malignant tumours. Another important application where the phase of the MRI signals is successfully manipulated is Dixon imaging [129], by which water and fat in the tissue can be separated with a combination of modified data acquisition and postprocessing. Dixon imaging, in combination with the segmentation of bone and air by a zero-TE (ZTE) pulse sequence, has also been useful for creating the attenuation maps that are needed for accurate PET reconstruction in the combined PET/MR systems [130].
In addition to the conventional T1, T2, and proton density weighting, several alternative contrast mechanisms in MRI have been developed and have found their value for imaging of cancer. Diffusion-weighted imaging (DWI) encodes the Brownian motion of water molecules into signal loss through the application of large diffusion-sensitizing gradients. Unlike the image contrast by T1 and T2, which is generally related to "water content", diffusion contrast is sensitive to changes in the intra- and extracellular structures and density. Further, DWI images can be used to generate an apparent diffusion coefficient (ADC) map, which can serve as a quantitative marker for tissue characterization without requiring an exogenous agent. In cancer applications, restricted diffusion and low ADC have in general been found to correlate with cell malignancy and aggressiveness [131]. DWI also serves as the basis for diffusion tensor imaging (DTI) [132], which measures the diffusion anisotropy of the underlying tissues and has been useful for preoperative planning by depicting the relationship between brain tumours and adjacent white matter fibre tracts (Fig. 13).
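Under the usual mono-exponential model S(b) = S0·exp(−b·ADC), a voxel's ADC can be estimated from as few as two b-values; the signals and b-values below are invented purely to illustrate the calculation.

```python
# Two-point ADC estimate from the mono-exponential model S(b) = S0 * exp(-b * ADC).
import numpy as np

b = np.array([0.0, 800.0])          # b-values in s/mm^2 (illustrative protocol)
signal = np.array([1000.0, 420.0])  # invented voxel signals at the two b-values

adc = np.log(signal[0] / signal[1]) / (b[1] - b[0])
print(f"ADC ~ {adc * 1e3:.2f} x 10^-3 mm^2/s")
```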
Chemical exchange saturation transfer (CEST) imaging [133] is an MRI contrast mechanism that relies on magnetization transfer [134] and chemical exchange between the protons of water and the protons of some labile macromolecules, such as mobile proteins or peptides. The detection of these macromolecules with CEST is indirect, yet with a sensitivity that is enhanced by orders of magnitude compared to their direct detection. Interestingly, CEST can measure not only the concentration of these macromolecules but also the pH of their environment, because the rate of magnetization transfer and chemical exchange depends on the pH. Since the accumulation of these macromolecules and the pH of their environment, both measurable by CEST, are sensitive to cancer metabolism and progression, there has been intense interest in investigating the potential of CEST for a variety of cancer applications, including improved detection, characterization, and treatment response assessment.
In summary, MRI has established itself as a modality of choice in the detection, characterization, and staging of cancers of many types, including those of the brain, spine, liver, prostate, rectum, and breast. Many technical limitations of MRI of the early days (e.g., speed, SNR, and motion artefacts) have become increasingly well-managed. Thanks to the excellent soft tissue contrast and the growing number of useful contrast mechanisms, MRI is an indispensable tool for imaging of cancer.
Status go
Hardware
Wide-bore scanners with light-weight and more anatomy conformal phased array RF coils will become increasingly available, making MRI more patient friendly and less time-consuming to operate. Hardware enhancements with anatomy-dependent adaptive shimming, such as in the areas of C-spine/neck and shoulders, and more accurate and easy-to-use motion monitoring/management devices are also expected to substantially improve the image quality and reduce the scan-to-scan variability.
Dual-modality combinations with MRI
MRI has been integrated successfully as a major component in a couple of dual-modality systems, such as PET/MR [135,136] and MR-Linac [137]. PET/MR, in particular, will be used to explore the value of quasi-simultaneous acquisitions of metabolic information from PET and morpho-functional information from MRI (e.g., T1, T2, DWI, perfusion, CEST) for tumour characterization [45]. It should be recognized, however, that key applications remain to be identified and established in clinical routine. There is evidence that the high soft tissue contrast of MR and its simultaneous acquisition with PET data are useful for accurate local staging and treatment assessment of several types of tumours, such as gynaecological and prostate tumours, and that, therefore, fully-integrated PET/MRI could potentially be regarded as advantageous over PET/CT (Fig. 14). With regard to MR-Linac, the superior soft tissue contrast and ability to continuously image a moving tumour (e.g., in the liver or lung) and its surrounding anatomy by MRI can be leveraged to guide and optimize the treatment of the tumour by external beam radiation and minimize the radiation damage to the normal tissues. The potential for functional assessment of the tumours by MRI is also an advantage over the traditional and more established CT-guided radiotherapy.
Speed and SNR
Several novel and exciting techniques are emerging to further increase the speed and SNR in MRI. Compressed sensing [138], which employs image sparsity, random undersampling, and non-linear iterative reconstruction, enables MRI beyond the Nyquist limit without a major resolution loss. In conjunction with parallel imaging, compressed sensing is poised to become an important acceleration option for many MRI acquisitions. MR Fingerprinting [139], which allows quantitative mapping of multiple tissue parameters simultaneously with a single acquisition and postprocessing pattern recognition with the aid of a pre-established signal dictionary, has the potential to become a useful tool for tissue parameter quantitation and detection of a target disease change with high sensitivity.
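The interplay of sparsity, random undersampling and non-linear iterative reconstruction can be illustrated with a toy example: below, a sparse one-dimensional signal is recovered from fewer random linear measurements than unknowns using iterative soft-thresholding. All dimensions and parameters are invented, and clinical compressed-sensing MRI instead works on undersampled k-space with a sparsifying transform such as wavelets.

```python
# Toy compressed-sensing recovery with iterative soft-thresholding (ISTA).
# A k-sparse signal is recovered from m < n random linear measurements.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random "undersampling" operator
y = A @ x_true

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from the spectral norm of A
lam = 0.01                                      # sparsity-promoting regularization weight
for _ in range(500):
    x = x + step * (A.T @ (y - A @ x))                          # data-consistency gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)    # soft-thresholding (L1 prior)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```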
Machine learning
As for other imaging modalities, artificial intelligence (AI) and machine learning (ML) have the potential of bringing about the most disruptive changes to MRI [140]. Although still considered in its infancy, AI/ML has demonstrated an array of remarkable applications in MRI data acquisition and image reconstruction (e.g., for speed acceleration, resolution enhancement, and generation/synthesis of multiple image contrasts, including across different modalities), in image processing (e.g., automatic artefact removal and denoising without blurring) (Fig. 15), and in image analysis and direct diagnosis (e.g., automated tumour and anatomy segmentation, image co-registration, tissue parameter estimation without relying on empirical modelling, and classification of tumour type, grade, and treatment response). AI/ML has also been successfully applied to improve practice efficiency through protocol determination based on short-text classification, to substantially reduce Gd dosage in contrast-enhanced MRI without a noticeable reduction in image quality, and to generate synthesized images used for PET/MR attenuation correction. There is little doubt that the list of AI/ML applications in MRI will grow rapidly. However, it is also important to note that many challenges remain for these applications to be translated into clinical practice, among them cross-site validation of a trained model/algorithm, clinical workflow integration, and regulatory hurdles.
In summary, MRI is expected to continue to evolve and expand in technical scope and clinical applications. In addition to becoming more patient-and user-friendly, MRI is poised to break some fundamental boundaries (such as in speed, sensitivity, and artefact removal) with several emerging technologies, including compressed sensing and deep learning ( Table 1). The improved image quality and the accuracy and reliability of the quantitative MRI measurements will be highly valuable in better tumour characterization and in better longitudinal assessment or cancer prognostication.
Ultrasound (US) imaging and instrumentation trends
Ultrasound (US), also referred to as sonography, is a non-invasive imaging method that employs sound waves to generate images of the inside of the body. US is based on high-frequency sound waves that are emitted from a transducer that is placed on the body surface over the volume of interest. The same transducer is used to receive the sound waves, which are reflected by the target. Depth-encoded US images can be generated by determining the run time of the reflected sound waves within the tissue. Because of the ease-of-use and widespread availability, US is used frequently as the first diagnostic imaging modality during a patient work-up. This is true for investigations of most soft tissues, except the lungs and the brain. The key advantages of this imaging modality are the lack of radiation, its real-time imaging capability, low cost and high mobility of the devices.
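The depth encoding follows directly from the round-trip travel time of the echo and an assumed average speed of sound in soft tissue (conventionally 1540 m/s), as the sketch below illustrates; the echo times are arbitrary examples.

```python
# Echo depth from the round-trip time, assuming 1540 m/s average soft-tissue sound speed.
SPEED_OF_SOUND_M_PER_S = 1540.0

def echo_depth_cm(round_trip_time_us):
    return SPEED_OF_SOUND_M_PER_S * (round_trip_time_us * 1e-6) / 2.0 * 100.0

for t_us in (13.0, 65.0, 130.0):  # example round-trip times in microseconds
    print(f"{t_us:6.1f} us round trip -> reflector at ~ {echo_depth_cm(t_us):4.1f} cm depth")
```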
Status quo
US imaging has evolved considerably over the last decades [141]. State-of-the-art transducer technology supports the recording of the higher harmonics of reflected ultrasound pulses (harmonic imaging), thereby improving image quality, in particular for deeper layers of tissue. Significant advances with respect to the field-of-view and the transition from 2D to 3D data assessment have been achieved by introducing moving transducer arrays inside handheld instruments, e.g., for breast imaging [142,143]. Real-time 3D data acquisitions are made possible with phased-array volumetric ultrasound scanners [144]. However, specific design aspects of the transducer matrix, including cross-talk between elements and the size and number of elements, render the fabrication of matrix transducers difficult and costly [145].
Next to volumetric imaging, other technologies, such as ultrasound elastography (USE), have been developed to extend the field of ultrasound applications. Transducers capable of strain or shear wave imaging support the non-invasive characterization of the mechanical properties of tissues. Given the differences in tissue elasticity in specific pathologies, such as organ fibrosis and tumours, USE has become a valuable diagnostic tool for the differential diagnosis of healthy and diseased tissues. For example, USE of the breast can be applied to differentiate between benign and malignant lesions, given that malignant lesions appear more heterogeneous and are stiffer than surrounding healthy tissues [146,147]. The combination of brightness-mode (B-mode) US with USE further improves the specificity to detect vital malignant tissue and reduces the number of required biopsies [148,149]. Chronic thyroiditis and malignant tumours are associated with higher stiffness, thus making USE a valuable tool for malignancy assessment of nodules and biopsy guidance (Fig. 16) [150].
In addition to the evaluation of mechanical tissue properties, the aberrant vasculature in tumours is of high interest for tumour characterization and therapy monitoring. Doppler methods, which use the frequency shifts of sound waves reflected from moving blood constituents to visualize blood flow, enable a detailed vascular characterization, not only for assessing plaques and stenoses in larger vessels but also for detecting larger neoplastic vessels. However, at clinically applied frequencies the very fine vascular networks can still only be detected with contrast-enhanced US (ceUS) scans. Today, a variety of commercial, clinically approved contrast agents is available [151]. These agents are based on microbubbles with a size of 1-3 μm, consisting of a gas core and a shell material [151]. Microbubbles can be destroyed and, thereby, detected by highly energetic US (e.g., Power Doppler imaging), or non-destructively detected via their non-linear responses with harmonic imaging, e.g., using amplitude modulation and pulse inversion sequencing modes [152][153][154][155]. In particular, ceUS is used in clinical routine for the characterization of liver lesions [156], thereby significantly improving tumour characterization as well as supporting early and specific therapy response assessment [156,157]. Consequently, many clinical guidelines now mention ceUS imaging as a diagnostic option [158][159][160].
Next to diagnostic applications, US can also be used therapeutically. Therapeutic high-intensity focused ultrasound (HIFU) applications have been developed that are often combined with diagnostic US imaging [161]. Focusing US pulses to a defined location in tissue can lead to a regionally defined energy deposition. Depending on the settings, thermal or mechanical effects can be induced, known as thermal ablation or histotripsy. So far, HIFU is approved by the FDA for six indications, including uterine fibroids, benign and malignant prostate lesions, essential tremor, tremor-dominant Parkinson's disease, and bone metastases [162].
In summary, low cost and the lack of ionizing radiation make US a frequently used imaging modality for first-line investigations, as well as for intermediate follow-up investigations during pharmaceutical therapies and interventions. In this context, its anatomical and vascular imaging capabilities are of particular use and are often applied together.
Status go
Three-dimensional US imaging
Advances in the development of micromachined ultrasound transducers might make it possible to overcome current limitations in the number and positioning of elements in matrix transducers [163]. However, the control of far more than 256 channels in parallel is necessary to exploit the full potential of these transducers and to maximize image quality. Next to a higher number of active transmit and receive elements, image quality could also benefit from image reconstruction using 3D datasets.
Ultrafast US imaging and US localization microscopy
Conventionally, the elements of an ultrasound probe are excited sequentially, whereby the number of lines determines the frame rate and, thus, the speed of image generation. Novel ultrafast imaging systems are capable of computing all lines in parallel [164]; therefore, an ultrasound image can be computed from a single transmit pulse. However, image generation remains limited by the propagation speed of sound in the medium. Using ultrafast scan technologies in Doppler mode, the motion of individual blood cells or microbubbles can be tracked [165]. When these tracks are displayed at the size of the blood cells, images of the vasculature are obtained that exceed the resolution limit of the scanner (ultrasound localization microscopy, ULM). As an alternative to ultrafast data acquisition, contrast-enhanced US helps generate super-resolution images at lower frame rates if motion models are applied to assign the detected microbubbles to vascular tracks [166]. This method is termed motion-model ultrasound localization microscopy (mULM). Generally, ULM enables the analysis of the vascular architecture (e.g., vessel length and branches) with an interpolated spatial resolution of a few micrometers. At the same time, functional information (i.e., blood flow velocity) can be acquired. Opacic et al. first successfully applied mULM in patients and demonstrated responses to therapy in breast cancer patients (Fig. 17) [166]. There remains room for improvement regarding the adaptation of the algorithms used in pre-clinical experiments to the clinical situation, where scanners work at different frame rates and ultrasound frequencies and in the presence of more significant motion [167]. Once this is achieved, clinical studies need to prove the additional value of super-resolution ultrasound for various applications, such as malignancy assessment or treatment monitoring.
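A minimal sketch of the localisation-and-accumulation step underlying ULM/mULM is given below; the "detections" are synthetic and no real tracking or motion model is implemented.

```python
import numpy as np

# Toy ultrasound localization microscopy: accumulate sub-pixel microbubble
# positions from many low-resolution frames onto a finer grid.
rng = np.random.default_rng(0)
n_frames, grid, upsample = 500, 64, 8
density = np.zeros((grid * upsample, grid * upsample))

for _ in range(n_frames):
    # Hypothetical detections: (x, y) positions of bubbles flowing along a "vessel"
    x = rng.uniform(5, 59, size=3)
    y = 32 + 2 * np.sin(x / 8) + rng.normal(0, 0.1, size=3)   # localisation jitter
    iy = np.clip((y * upsample).astype(int), 0, grid * upsample - 1)
    ix = np.clip((x * upsample).astype(int), 0, grid * upsample - 1)
    density[iy, ix] += 1        # each accumulated centroid sharpens the vessel map

# 'density' now maps the vessel at a resolution finer than the imaging grid.
print(density.sum(), density.shape)
```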
Molecular ultrasound imaging (MOUS) has been shown to detect therapy responses earlier than conventional ceUS during anti-angiogenic therapies (Fig. 18) [168]. Clinical studies using BR55, microbubbles targeting the endothelial biomarker vascular endothelial growth factor receptor 2 (VEGFR2), have shown their safety and usefulness for cancer detection in ovarian, breast and prostate lesions [169,170]. Clinical studies are currently being initiated to identify whether and how this information about VEGFR2 can improve clinical decision making in primary diagnosis, cancer characterization, therapy selection and monitoring.
Hybrid US imaging (HYBUS)
Combined, or hybrid, imaging typically relies on hardware combinations of complementary imaging methods. To date, US has not entered the hybrid imaging field beyond pilot developments. Tavitian and his team proposed a triple-modality imaging device by adding ultrafast ultrasound to a PET/CT system, investigating the energy metabolism and vascular architecture during tumour growth at the same time [171]. In the future, we may see efforts towards integrating US into PET and also MRI systems, so as to overcome the intrinsic spatio-temporal limitations of these modalities through ultrasound-based motion correction during image reconstruction [172]. US-guided motion compensation of PET data may also be explored in the future for tracking more complex, non-respiratory motions.
High-intensity focused ultrasound (HIFU)
The future of HIFU is likely to be MR-guided, so as to leverage high spatial resolution and soft tissue contrast of MR images to delineate a target lesion and its surrounding normal anatomic structures, and to use MR for real-time thermal dose control and treatment monitoring. The use of HIFU can be combined with drug delivery triggered by US-induced hyperthermia or mechanical effects, which can further help in reducing potential tumour relapse from the ablation site [173].
Sonoporation and sonopermeabilisation
When microbubbles enter the ultrasound field, they begin to oscillate (stable cavitation). If the amplitude exceeds a certain threshold, the microbubbles undergo inertial cavitation, that is, they vigorously oscillate and eventually collapse, thereby generating high-speed jets, which can permeate cell membranes. Both stable and inertial cavitation can be used to overcome biological barriers. Sonoporation describes US applications with the intent to improve the transport of drugs or genes through pores generated in cell membranes, while sonopermeabilisation aims at overcoming vascular and stromal barriers, such as the blood-brain barrier (BBB) or matrix fiber networks in desmoplastic tumours. Carpentier et al. showed in a phase I study that repeated and transient BBB opening can be achieved and is well tolerated [174]. In another clinical study, inoperable pancreatic cancer patients were treated successfully with gemcitabine in combination with microbubbles and ultrasound [175]. Despite these promising results, extensive characterization, optimization and safety assessments are required to identify the best setup and settings for each purpose while preventing unwanted and irreversible tissue injury. Especially in the treatment of stroma-rich tumours, as well as for drug delivery across the BBB, sonoporation may improve therapy efficacy and outcome in the future.
In summary, the diagnostic and therapeutic potential of US will be explored through translation of early pre-clinical developments. US offers microvascular characterization at almost microscopic level in humans and provides important functional and molecular characteristics that can be used for tissue characterization. In this context, the use of computer-assisted lesion detection and automated image analysis will decrease user-dependency of the modality. Radiomics analysis of ultrasound data will further increase the accuracy and diagnostic impact [176]. In addition, new therapeutic applications of US, e.g., to ablate tissue or to improve drug delivery are emerging. In conjunction with other imaging modalities, US can, thus, be expected to play a significant role in theranostic imaging in the future.
Optical imaging and instrumentation trends
Optical imaging instrumentation encompasses a large variety of technologies, ranging from aided visual inspection using magnifying glasses or video endoscopes, to light microscopy, as well as techniques that require computational efforts to generate diagnostically relevant information. Two thirds of the medical imaging market are accounted for by optical instrumentation, with a mainstay in ophthalmology and endoscopy [177]. With its unique spatial resolution capabilities, optical imaging can be used to assess subcellular structures as well as mesoscopic details of tissues and organs.
Status quo
Given its limited imaging depth, optical imaging is used mainly to study easily accessible outer surfaces of organs, such as the skin, or the inner lumen of organs that can be reached by endoscopes or catheters, such as the cardiovascular system, the gastrointestinal tract, the oral cavity, the larynx down to the lung, the bladder and urethra, and the cervix and uterus. The deepest noninvasive penetration is achieved through the eye down to the retina, thereby making use of the natural transparency of the ocular media. Optical coherence tomography (OCT) is applied with great success in the eye as well as through endoscopes in inner organs [178]. Deeper penetration into tissue can be achieved by diffuse optical spectroscopy and tomography (ODT), which has been successfully combined with non-optical imaging technologies, such as CT, or with photo- or optoacoustic imaging (PAI) [179]. Further, Cherenkov luminescence imaging (CLI) is currently being explored for deep optical imaging [180,181]. In the case of invasive intra-surgical application, optical technologies can guide precise tumour excision and help to spare vessels to avoid excessive bleeding. Fluorescence markers are used to enhance image contrast in optical imaging, so as to support tumour detection and image-guided interventions during bioptic sampling and surgical excision.
Light microscopy is the workhorse of standard histopathologic tissue assessment. However, the entire procedure, from tissue sampling to stained histology, is complex and time-consuming. Timelines are critical, in particular during surgical interventions, when tumour borders need to be separated from healthy surrounding tissues. Even when using faster cryo-fixation of biopsies, the examination is still done on stained samples by a pathologist outside the surgical room, which makes it impossible to assess the full tumour margin. This results in inadequate tumour resections in a reported 20-70% of cases in breast cancer surgery, or 85% of cases in head and neck surgery [182]. Additional surgery may then be required, thus impairing the quality of life of these patients.
Contrast agents
The identification of early-stage cancer is supported by optical imaging technologies. For example, fluorescent markers, such as indocyanine green (ICG), 5-Aminolevulinic acid (5-ALA) [183] or hexylaminolevulinate (HAL -Hexvix®) [184] help to delineate tumour boundaries and highlight suspicious tissue regions that go undetected with standard white light illumination. Additional biopsy analysis is still required that benefits from guidance by narrow band imaging (NBI) [185], or optical coherence tomography (OCT).
NBI is limited in penetration depth to 100 μm or less, thus visualizing only superficial vessels. OCT, on the other hand, exhibits better penetration into tissue (down to 2 mm), depending on the centre wavelength and tissue type. OCT, and its functional extension OCT angiography, allows the cancer stage to be determined by assessing the actual invasiveness of the tumour growth, which is an important parameter affecting treatment decisions [186][187][188] (Fig. 19). Fluorescence medical imaging (FMI) has proven effective in various scenarios of intra-surgical guidance. ICG selectively accumulates in tumour cells, presumably due to the enhanced permeability and retention (EPR) effect [190]. Over the last years, FMI or PDD have become important diagnostic tools for brain tumour resection and for the detection and delineation of bladder cancer. Both methods have helped improve surgical treatment of patients with cancers of the breast, ovaries and cervix (Fig. 20). Recent studies suggest that surgeons are able to resect cancerous lesions smaller than a millimeter in size using FMI guidance [192,193].
Resolution versus penetration depth
While optical imaging methods exhibit high spatial resolution, their penetration depth is limited by dominant scattering and absorption effects. Scattering decreases with increasing wavelength in the visible range, while for longer wavelengths water absorption becomes the dominant effect. The absorption coefficient depends on molecular dynamics and exhibits local maxima (resonances) and minima. In general, medical optical windows are centred at local minima of the water absorption curve at 800 nm, 1060 nm, or 1300 nm.
Diffuse optical spectroscopy (DOS) is a well-established medical imaging technology that operates in these optical windows with reduced scattering and water absorption. It extracts absorption and scattering spectra from deep within tissue, associated with the concentrations of endogenous and exogenous contrast agents [194]. For example, changes in oxygen saturation, indicating the balance between oxy- and deoxyhemoglobin, can be used as a sensitive biomarker of tissue pathologies and for monitoring the effects of therapy [195,196]. However, the advantages of DOS come at the expense of low spatial resolution.
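As an illustration of the spectral unmixing underlying such oxygenation estimates, the sketch below solves a two-wavelength Beer-Lambert system for oxy- and deoxyhemoglobin concentrations; the extinction coefficients and the measured absorption values are illustrative placeholders, not tabulated data.

```python
import numpy as np

# Sketch: linear spectral unmixing of oxy-/deoxyhemoglobin from absorption
# coefficients measured at two wavelengths.
wavelengths = [750, 850]                      # nm
eps = np.array([[0.6, 1.2],                   # rows: wavelength, cols: [HbO2, Hb]
                [1.1, 0.8]])                  # placeholder extinction coefficients

mu_a_measured = np.array([0.9, 1.0])          # hypothetical measured absorption (1/cm)

c_hbo2, c_hb = np.linalg.solve(eps, mu_a_measured)
so2 = c_hbo2 / (c_hbo2 + c_hb)                # tissue oxygen saturation
print(f"StO2 = {so2:.2f}")
```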
Optical coherence tomography (OCT) acts as a link between high resolution and low penetration cellular microscopy on the one hand and full organ and body imaging with low spatial resolution on the other hand. Today, it has become a reference standard in ophthalmic diagnostics, with a growing field of applications also in other medical disciplines [178]. OCT is capable of producing label-free angiographic maps in 3D, called OCT angiography (OCTA), which has been quickly translated to diagnostic imaging in ophthalmology and dermatology [188,189,197].
In summary, optical imaging has a long tradition in medical diagnostics, given its unprecedented spatial resolution. Apart from histopathology, optics is widely used in endoscopy, dermatology, and ophthalmology. The most recent optical techniques that were successfully translated to the clinic are OCT and OCTA. Both provide cross-sectional images of tissue and vasculature with high spatial resolution and with millimeter-deep penetration, but with limited tissue specificity. Fluorescence medical imaging significantly improves surgical treatment, thus resulting in increased life expectancy of patients.
Status go
Currently there are many exciting developments of medical optical technologies in oncology that aim at improving diagnostic specificity, treatment guidance and surgical guidance.
Deep tissue imaging
Optical imaging techniques are mostly confined to superficial lesions due to physical limitations of light propagation. Opto- or photoacoustics (PAI), that is, the optical excitation of an ultrasound wave in tissue, allows for deeper tissue penetration (about 1 cm) [179,198]. This method uses absorbing endogenous chromophores in tissue, such as hemoglobin or melanin, but could potentially also target exogenous contrast agents. When a short ns-pulse of light is absorbed, it causes local heating and thermo-elastic expansion. This leads to a high-frequency mechanical pressure wave that propagates to the tissue surface, where it is detected by an ultrasound sensor. In a recent multi-centre trial with over 2000 women with breast cancer, the specificity of combined PAI and US exceeded that of US alone by 15%, with a sensitivity close to 99%. Advancements in PAI mandate alternative light sources, such as LEDs, that support fast pulse sequences for higher imaging speed. Similar to US, PAI would have a handheld probe head for clinical application, calling for real-time or quasi-real-time imaging to identify anatomically and pathologically relevant structures [199]. A key strength of PAI is the ability to sense targeted fluorophores deep inside tissues, in addition to visualizing vascular structures and providing oxygen saturation (Fig. 21). Nonetheless, an in-vivo imaging depth of several cm is still a challenge for PAI.
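A minimal delay-and-sum reconstruction illustrates how the detected pressure transients are mapped back to the absorber position; the geometry, sampling rate and speed of sound below are assumed values for a synthetic example.

```python
import numpy as np

# Minimal delay-and-sum photoacoustic reconstruction (synthetic 1D detector line).
c = 1540.0                                      # m/s, assumed speed of sound
fs = 40e6                                       # sampling rate (Hz)
sensors_x = np.linspace(-0.01, 0.01, 64)        # detector positions on the surface (m)
source = np.array([0.002, 0.008])               # true absorber position (x, z) in m

# Simulated arrival times at each sensor -> one "spike" per channel.
n_samples = 1024
rf = np.zeros((len(sensors_x), n_samples))
for i, sx in enumerate(sensors_x):
    tof = np.hypot(source[0] - sx, source[1]) / c
    rf[i, int(round(tof * fs))] = 1.0

# Delay-and-sum over a reconstruction grid.
xs = np.linspace(-0.01, 0.01, 81)
zs = np.linspace(0.001, 0.015, 71)
image = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        idx = np.round(np.hypot(x - sensors_x, z) / c * fs).astype(int)
        image[iz, ix] = rf[np.arange(len(sensors_x)), idx].sum()

iz, ix = np.unravel_index(image.argmax(), image.shape)
print(f"reconstructed absorber at x={xs[ix]*1e3:.1f} mm, z={zs[iz]*1e3:.1f} mm")
```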
Optical diffuse tomography (ODT) shares with DOS the advantage of deep tissue sensing, and overcomes the spatial resolution challenge by using multiple illumination and detection directions. Tomographic reconstruction is based on solving the complex photon diffusion equation, incorporating several assumptions on tissue geometry and its optical properties [201]. Nonetheless, in most medical applications the detection space is single-sided and reconstruction is an ill-conditioned problem. The addition of complementary structural imaging yields the required information to locate optical sources in depth (Fig. 22).
Fluorescence medical tomography (FMT) is a variant of ODT providing the depth distribution of fluorophores in tissue. The method becomes particularly powerful in combination with targeted exogenous fluorophores emitting in the near infrared region [203].
Cherenkov luminescence imaging (CLI) [180,181] is another emerging medical imaging modality. The radiation is a side effect of radioactive decay, arising from particles travelling faster than the speed of light in the respective dielectric medium. Already, there is a variety of PET tracers available that have been approved by national authorities, thus giving CLI a potential jump start concerning its clinical translation. Nonetheless, the main drawback is its inherently low intensity, and strong scattering and absorption in the typical violet to blue spectral region [204].
Multi-modality imaging
A new trend in optical imaging is to replace the invasive, and often erratic, removal of tissue during biopsy by optical biopsy, i.e., the pathologic assessment of tissue in situ. Through the combination of high structural sensitivity with molecular sensitivity, tumour invasiveness and tumour grading come within reach of multi-modality optical imaging. Given its excellent spatial resolution, OCT comes with a high structural sensitivity while exhibiting low specificity. The latter could be complemented by fluorescence imaging, or by infrared or Raman spectroscopy [205,206]. Raman spectroscopy has an excellent molecular specificity that results from the excitation of rotational and vibrational modes of molecular spectra. The Stokes-shifted emitted light spectrum can be regarded as a fingerprint of the assessed substance and exhibits typical regions and resonances for lipids, proteins, etc. Raman scattering is, however, a very ineffective process, with less than one photon in a million carrying valuable molecular information. Combining OCT with Raman techniques ex vivo has been shown to provide insights into urological tumour stages (invasiveness) and grades [186].
Tumour grade is strongly related to metabolic activity. Metabolites such as flavin adenine dinucleotide (FAD) or nicotinamide adenine dinucleotide (NADH and its redox partner NAD+) have proven to be valuable endogenous molecular probes of cell metabolism. Recent developments in fluorescence lifetime imaging (FLIM) show significant changes of FAD and NADH lifetimes in tumour versus healthy tissue [207,208]. These fluorophores are excited in the short visible wavelength range, where scattering limits the achievable penetration depth. Multiphoton microscopy, on the other hand, would permit deeper tissue penetration, since excitation happens at double or triple the wavelength of single-photon fluorescence; the price is that the non-linear optical processes require spatiotemporally confined beams with high numerical aperture. The latter calls for close proximity to the tissue, which during intraoperative application can only be achieved through additional hand-held scopes. Moreover, the light sources are bulky and expensive, which limits an easy clinical translation despite promising results in dermatology [209].
Targeted exogenous contrast agents
Numerous studies have demonstrated the effectiveness of fluorophore labelling of tumour lesions for enhancing the outcome of tumour resection in different applications. Novel targeted molecular contrast agents are, therefore, extensively studied for precision medicine and are awaiting approval for clinical translation [210,211]. Hybrid contrast agents are being developed to support modalities with different and often complementary physical contrast mechanisms. Microbubbles, for example, which are used in ultrasound imaging, can be modified to incorporate fluorescent agents or radionuclides in their lipid shells, supporting US-optical-PET imaging. For combining optical imaging and MRI, fluorescent quantum dots with different magnetic properties can be designed to have paramagnetic coatings, or to exhibit high native relaxivity. The contrast agents can further incorporate radionuclide tags to support PET and SPECT imaging. Such multi-modality contrast agents would, for example, allow studying their pharmacokinetics on different scales across various imaging modalities, starting from cell cultures, to preclinical imaging, and finally to humans.
Compact systems
Optical imaging technologies are attractive due to their immediate and most intuitive access to tissue and its pathologic changes. They are easily integrated into hand-held devices or hand-held applicators, which is ideal for point-of-care use, as well as for combination with other established imaging technologies. Point-of-care, hand-held endoscopes for cervix screening have been developed and successfully demonstrated by the group of Ramanujam, in particular addressing global health disparities in cancer prevention and disease management [212]. Finally, photonic integrated circuits (PIC) are capable of realizing complex optical setups, including sensors, modulators, as well as light sources, on a single chip the size of a cent coin, thus rendering them ideally suited for compact devices. In the future, wearable devices based on such compact chips may help monitor the health status of subjects and as such become integral to disease prevention.
In summary, there are many exciting advances of optical technologies that precisely target the structure, invasiveness and metabolic activity of tumour cells, so as to replace invasive tissue biopsy extraction. Furthermore, targeted fluorophores and other contrast agents support precision medicine approaches. Navigating regulatory hurdles is key to a faster translation of those novel molecular probes. Optical techniques are on the way to reaching deeper into tissue without employing ionizing radiation or, as in the case of CLI, are a natural add-on to established medical imaging technologies. Non-invasive optical methods may be easily used for large-scale screening or for longitudinal treatment monitoring. Future systems will provide flexible resolution from the organ level down to the cellular level, make use of the richness of information that light carries when targeting specific tissues and cells, be compact and mobile, and be easily combined with other imaging modalities.
A critical perspective
From a wider perspective, the evolution of medical imaging is driven by attempts to make examinations faster, more accurate and, ideally, easier and cheaper to use. Requirements and successes differ between individual modalities. While not exhaustive by design, this section of the paper will attempt to list a number of contenders for improving the quality and applications of medical imaging for cancer patient management (Table 1).
Modalities
The current differentiation between morphological and functional modalities (Fig. 1) will remain, but the border between them will likely become blurred. Standard morphological modalities will also provide physiological and functional information through more advanced imaging protocols and exploitation of their physical principles through advanced instrumentation and novel contrast agents. Traditional functional modalities will also provide improved morphological cues through new "smarter" instrumentation that not only better exploits the physical properties of detector materials and design, but also circumvents current limitations through novel geometry and processing schemes.
For PET, new total-body designs with enlarged axial FOVs have confirmed a much-increased sensitivity, and low spatial resolution has now become the weakest attribute. New detector designs and detailed modelling of the detector and gantry geometries (incl. depth-of-interaction, non-collinearity), as well as improved partial volume correction, will markedly increase the spatial resolution up to the physical limits of the modalities. While a recently launched challenge aims at direct imaging through 10 ps TOF PET, further improvements in the meantime will come from image reconstruction techniques (e.g., ML/AI-enhanced) that take the whole "system" into consideration, that is the subject being imaged, the probe being used in given protocols, and the gantry and detector design, as well as all necessary corrections. Another major aspect of PET (and SPECT) are the molecular probes (aka radiotracers) being used for imaging and characterization, with development and validation processes that should be significantly sped up through more advanced in-silico approaches towards first-in-human trials, approval, and clinical deployment. An interesting application of SPECT, and more recently PET, also referred to as "theranostics", is the use of similar tracers for imaging and therapy, with either different doses or isotopes for each application. In this paradigm, a low radioactive dose scan, either with the same or a different radioisotope, acts as a surrogate that can then be used either directly or through modelling to estimate the internal dose deposition when the actual therapeutic agent is delivered. Such applications will likely further develop alongside targeted personalised molecular therapies that are expected to be much more specific and comprehensive than current surgery or radiotherapy.
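A back-of-the-envelope calculation shows why ~10 ps coincidence timing is associated with (quasi-)direct imaging: the localisation uncertainty along a line of response is c·Δt/2.

```python
# Localisation uncertainty along a PET line of response from coincidence timing.
c = 299_792_458.0            # speed of light in m/s

def tof_localisation_mm(coincidence_timing_ps: float) -> float:
    return c * coincidence_timing_ps * 1e-12 / 2.0 * 1e3

for dt in (600, 200, 10):    # ps: typical current systems down to the 10 ps challenge
    print(f"{dt:>4} ps  ->  {tof_localisation_mm(dt):5.1f} mm")
# 10 ps corresponds to ~1.5 mm along the line of response, i.e. localisation
# comparable to detector resolution, hence the notion of (quasi-)direct imaging.
```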
For SPECT, there are many similarities with PET. Larger imaging geometries, this time in the form of ring detectors, will significantly improve volume sensitivity, while the replacement of scintillation detectors with semi-conductor detectors, and/or the replacement of photomultiplier tubes with semiconductor light readout technology will bring significant improvements in energy and spatial resolution. Progress with collimator design together with developments in image reconstruction algorithms will bring further improvements to image quality. These technological advancements will drive the development of theranostics and image-based dosimetry which will ultimately widen the appeal and availability of quantitative SPECT.
For CT, spectral or multi-energy CT has already shown its potential for identifying different materials in the FOV and for removing artefacts, which can be further enhanced through ML/AI approaches. Photon-counting spectral CT has the added benefit of further reducing the ionizing radiation dose from the current lows already afforded by advanced reconstruction techniques. It will also provide much higher spatial resolution, thus further improving the potential of texture analysis in an advanced analysis regimen. Since spectral CT can also provide information that is relevant to the PET and SPECT energy spectra, it has the potential to produce much more accurate attenuation maps in hybrid modalities (also by unambiguously identifying iodine contrast agent), thus leading to much improved corrections and reconstructions. Naturally, this potential can be reaped only with the availability of a combined PET/CT offering with spectral CT included, which will depend heavily on market needs. In that case, we anticipate that new contrast agents will also be designed at a fast rate, providing means to characterize physiological processes in complement to more functional data.
For MR, the onus will remain on producing images with better contrast, higher SNR, shorter scan times (translating into more sequences being included in a given examination time for expanded multi-parametric imaging), improved quantification, and higher spatial resolution. For various reasons, including a still very low installed base, the potential of high (up to 7 T) and ultra-high (above 7 T) fields in the clinic is still unclear, but should the inherent technical limitations (e.g., homogeneity and artefacts) be successfully addressed, such machines will likely be very helpful for moving beyond the current limitations of lower fields by boosting speed, intrinsic SNR and imaging sensitivity (e.g., for fMRI and especially resting-state fMRI), while enabling morphological imaging at sub-millimetre spatial resolution. As with CT, new contrast agents will be designed to characterize physiological processes. A special case thereof, which is still not broadly implemented, is hyperpolarized MR, by which the intrinsic magnetization is enhanced by over four orders of magnitude through dynamic nuclear polarization (DNP), so as to enable real-time monitoring of cell metabolic activities, such as the increased pyruvate-to-lactate conversion in cancer (Warburg effect) [213].
For combined, or hybrid, modalities (PET/CT, SPECT/CT and PET/MR) the expected improvements of each of their modality components would potentially yield better corrections (e.g., spectral CT for attenuation and artefact removal) and soft tissue differentiation (e.g., spectral CT again, and faster MR, which will permit more sequences within an acceptable examination period). Combined with more detailed modelling of the imaging instruments, physics and physiology, this will lead to significantly improved image reconstruction and more meaningful imaging and derived information. Further, axial FOVs of PET/CT systems are likely to expand in view of the perceived benefits. Likewise, SPECT/CT may experience a push towards fast and quantitative whole-body imaging, thanks to recent evidence from clinical pilots. While the technical advancement of fully integrated PET/MR systems is hampered by costs and, still, suboptimal workflow and throughput for a busy clinic, alternative MR-based combinations, such as SPECT/MR [214], may move into the focus of attention, despite similar cost-efficacy concerns. Optical imaging, in combination with other, complementary imaging modalities, or alone, will impact clinical routine in oncology only if the penetration depth challenge can be addressed adequately for probing deep lesions.
Ultrasound has traditionally been seen as highly user-dependent and of low reproducibility, but new 3D transducers will make it more like other tomographic modalities [215,216]. This will be combined with software that guides the operator to ensure that all relevant space is scanned and stitched together, so that the corresponding volume can be sliced through and analyzed in the most optimal orientation, with real-time performance to build upon the interactive nature of this modality. Contrast agents (e.g., microbubbles) will be further used to provide functional information, and elastography, ideally through its less user-dependent shear-wave implementation, should also become more prevalent to assess suspected lesions. When combined with ultrafast and super-resolution imaging, this will open further perspectives for microvascular imaging. While ultrafast US imaging requires more advanced transducers and systems, super-resolution imaging based on carefully validated motion models could be applicable to existing machines. Directionally increasing US energy, either directly or through nanotech resonators, will lead to further applications, such as heating tissues to make them more prone to taking up drugs and genes via sonoporation, or simply releasing drugs from nanotech capsules at targeted locations.
Optical imaging provides targeted molecular contrast and resolution that is unmatched by other modalities and, thus, holds great promise for targeted precision medicine, even though it is still limited for imaging deep tissue. While this limitation has been successfully addressed through PAI, BLI or CLI in preclinical and research settings, its translation and reproducibility are still lagging. Optical imaging can also relatively easily be combined with other modalities, such as CT, MR or PET, although this again has mostly been demonstrated pre-clinically thus far. A successful implementation of optical imaging is in surgical microscopes, with FMI soon expected to radically improve the efficacy of cytoreductive surgery. While FMI systems are currently limited in specificity due to the paucity of approved fluorophores, FLIM may soon mitigate this limitation, with several high-specificity agents already in the approval pipeline. Perhaps due to the relative ubiquity and low cost of adding optical imaging to other modalities, several multi-modality contrast agents have already been developed for combinations such as Optical/PET, Optical/SPECT, Optical/MR and Optical/US, but their translation to the clinic is still pending. As another special case currently under clinical evaluation, endoscopic OCT provides exquisite differentiation of cancer tissues, also at early and pre-cancerous stages, which can be combined with additional molecular information and Raman spectroscopy for a complete pathologic evaluation.
Photo-acoustic imaging, which uses a laser actuation and US readout, is a modality that has recently emerged and combines the pros (and some cons) of both optical and US imaging to provide an anato-functional insight into live tissues. With its potential already clearly demonstrated in pre-clinical and research settings, this new modality is expected to translate to the clinic soon in combination with specific contrast agents for further expanding its scope. As with US or optical imaging, it will likely be combined with other modalities to complement and expand upon respective findings.
Quantification
In light of the complexities of cancerous disease, qualitative assessment of images no longer appears sufficient for a detailed and actionable characterisation of extent and staging, or for advanced therapy response assessment. Accordingly, modern evaluation approaches increasingly rely on extracting or deriving quantitative parameters and information from images, depending on their inherent type and the modality they stem from. Such quantitative biomarker information may provide additional insight into the stage of a disease for a given patient (e.g., lymphoma), where existing consensus criteria have been shown to work well already, or it may soon surpass and replace purely image-based diagnosis, particularly when parametric imaging information is shown to increase the specificity of tracer accumulation by better separating cancerous from inactive or inflammatory tissues.
Not all modalities provide equal access to quantitative information in general, or to similar measurements and parameters, which is why it is often key for a disease as complex as cancer to consider them in combination rather than on their own. For morphological modalities, physical measurements can relate to the estimated size (e.g., RECIST or WHO criteria [217,218], in addition to volume), to the shape of a given lesion (e.g., tortuosity, sphericity, spiculation, etc. [219,220]), or to structural characteristics as interpreted through imaging (e.g., stereology, BMD, HU in CT; DWI, ADC, DTI in MR; elastography in MR and US; histogram and "texture" in all modalities), all of which require the accurate segmentation of the structures of interest. For functional modalities or protocols, physiological, metabolic and functional characteristics can be extracted, such as respiratory and cardiac motion, dynamic uptake of CT, MR and radioactive tracers, or binding of US contrast agents. In the case of PET, individualised metabolic surrogates, such as the semi-quantitative SUV [221,222] or PERCIST [223], can be used, but a truly quantitative approach, such as bona-fide pharmacokinetic modelling, which is now re-emerging in clinical settings, will be preferred as it provides much more refined information about the actual inner workings or response of the tissues of interest [224]. Here, leveraging the increased volume sensitivities from extended axial FOV coverage will be of the essence for robust quantification even in early-phase, low-count imaging situations shortly after tracer injection, or during follow-up imaging after several half-lives of the injected radioisotopes. Of special note, RECIST and similar metrics have been developed primarily for and through large clinical trials; while the related progression criteria can thus be considered relevant to, and reasonably validated for, large cohorts, they may not necessarily be as meaningful for individual patients and pathologies, nor consistently applied everywhere, and there are still ongoing debates about their optimal use [225][226][227][228][229].
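For reference, the body-weight-normalised SUV mentioned above amounts to a simple ratio; the sketch below uses hypothetical numbers and omits decay correction to injection time.

```python
# Semi-quantitative standardised uptake value (SUV):
# SUV = tissue activity concentration / (injected dose / body weight).
def suv_bw(activity_conc_bq_per_ml: float,
           injected_dose_mbq: float,
           body_weight_kg: float) -> float:
    injected_dose_bq = injected_dose_mbq * 1e6
    # body-weight normalisation assumes 1 g of tissue ~ 1 ml
    return activity_conc_bq_per_ml / (injected_dose_bq / (body_weight_kg * 1e3))

# Hypothetical lesion: 20 kBq/ml uptake, 350 MBq injected, 70 kg patient
print(round(suv_bw(20_000, 350, 70), 2))   # SUV ~ 4.0
```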
Naturally, standardized imaging pipelines are required to ensure that any differences in a measured quantitative value are due to a real functional or morphological change rather than to a variation in the imaging process itself; this holds irrespective of whether the biomarkers are extracted from morphological or functional modalities. This standardization process does not only include adherence to high standards of quality control to ensure repeatable and accurate imaging system performance, but also requires a consistent patient management workflow and robust image analysis tools to extract quantitative biomarker information. In multi-centre trials, this process further extends to harmonizing differences between imaging platforms, while frequently relying on contract research organizations (CROs) to standardize the imaging protocols and data analyses.
The recently coined term Radiomics now encompasses most of the extraction, and often combination, of features that initially evolved organically for every modality and are mathematically derived from the digital images at hand [230]. Even before the term Radiomics was coined, the underlying assumption for extracting such features was that, either on their own or in combination with other data, they might be suitable surrogates for underlying biological, physiological or morphological characteristics of particular relevance. Because such approaches tend to be very sensitive to the initial data selection process, some of the main limits to their clinical validation and deployment lie in the variability of the source data (e.g., across centres or vendors) and the complexity of identifying regions-of-interest in a robust, consistent and accurate manner. With the advent of better ways to harmonise and extract data, and also of more advanced segmentation techniques informed by statistical atlases and ML techniques, these approaches should become easier to use and more reliable, thus opening the door to proper validation and eventually broad clinical implementation. At any rate, the main aim of quantitative approaches is to develop and identify robust imaging biomarkers that can be further combined into so-called signatures/fingerprints for diagnosing and characterizing disease as early as possible, and ideally without requiring invasive tissue sampling via biopsies or resections. These biomarkers will then be used for dynamically tailoring therapy regimens and assessing response. However, combining various parameters into a single signature may hide individual contributions, and complex tentative Radiomics signatures should always be carefully assessed [231], with negative results also being of relevance [232].
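As a toy example of the kind of first-order features such pipelines extract from a segmented region of interest (real radiomics pipelines add shape and texture features and standardised intensity discretisation), consider:

```python
import numpy as np

# Sketch of simple first-order "radiomic" features from a segmented ROI.
def first_order_features(image: np.ndarray, mask: np.ndarray, n_bins: int = 64) -> dict:
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "std": voxels.std(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(1)
img = rng.normal(100, 20, size=(32, 32, 32))      # synthetic volume
roi = np.zeros_like(img)
roi[10:20, 10:20, 10:20] = 1                      # synthetic segmentation
print(first_order_features(img, roi))
```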
Artificial intelligence
Because of the high dimensionality of multi-modality imaging data (either the source images or the parametrically derived information) and of their cross-disciplinary diversity, their meaningful and actionable analysis and interpretation will increasingly rely on the assistance of computers and advanced algorithms. Accordingly, Machine Learning (ML) and associated Artificial Intelligence (AI) approaches are becoming increasingly ubiquitous in the medical imaging domain, from instrument design and characterization, to acquisition, corrections [233][234][235] and reconstructions.
While the potential of such approaches has already been demonstrated in various scenarios [236], one should still remain cautious about how the underlying models were trained, and especially whether the data used at the training stage might have introduced some bias into the models [237,238]. To minimize such biases in either population, pathology or even interpretation, proper ML/AI training tends to require a large number of highly diverse (and high-quality) datasets in order to cover all possibilities the algorithm may have to deal with, but such numbers and diversity are rarely achievable in practice for medical imaging, and especially so for the rarer complex cases that may thus be of most interest for these approaches. As a way to mitigate this inherent limitation, synthetic data can be created from existing data in order to virtually expand the learning space, but, again, some bias might be introduced by how the simulation model is configured. Another, more versatile, mitigation strategy is therefore to ensure that any results obtained through ML/AI are fully explainable and come with a confidence score for a human operator to assess.
The ultimate goal is personalized precision medicine that is individually tailored to the patient and condition at hand, as determined by in-vivo biomarkers derived from imaging in combination with other information (e.g., genomics, circulating tumour cells, etc.). While therapies will be applied to real patients, in-vivo imaging and derived information will be key to informing a virtual, in-silico, model of the patient and their pathologies down to the molecular level, on which a virtual therapy regimen can then be devised and refined prior to its real-life deployment.
Visualization
With ever more high-dimensional data and advanced analysis thereof comes the requirement to develop visualization paradigms able to convey the most significant information to human operators in a clear and unambiguous fashion. Through ad-hoc data reduction techniques and highly interactive systems incorporating multidimensional input devices, this can be achieved through standard means such as traditional 2D and 3D displays. The increasing number of modalities and the complexity of information, derived from both imaging and non-imaging biomarkers and data [5], to be merged together into a meaningful and actionable representation may still eventually require other approaches, likely derived from other areas, such as games or virtual reality, for more advanced visualization and interaction paradigms also relying on various senses and actuators (e.g., haptics, sound, eye tracking, brain-computer interfaces, etc.). A special case is planning and guidance during interventional or surgical procedures, where augmented and virtual reality approaches will need to be implemented as unobtrusively as possible to assist with both navigation and delivery. Such approaches will also benefit from, and contribute to, robotic devices, from mere assistants to semi-autonomous machines.
Need for training
With new, more capable and complex machines and approaches comes the need to train users so that they can make the most of the added potential and also ensure that the expanded capabilities are used with a full understanding of their strengths and limitations [239]. New medical school curricula already include some material pertaining to advanced techniques and technology, and conversion degrees (i.e., programmes providing formal training in a discipline other than one's core degree) will likely emerge to complement the main medical courses with imaging (incl. modalities and advanced visualization), ML/AI, software engineering, medical physics, etc., similarly to how disciplines such as biostatistics are currently taught to medical practitioners (physicians, radiographers, technologists, physicists, etc.) who want or need to expand their grasp of disciplines relevant to their clinical work or research interests [240].
Conclusions
Non-invasive imaging is an integral part of patient management, particularly in oncology. A range of imaging modalities provides a wealth of information encompassing anatomical, functional and molecular data. Imaging modalities are also combined for so-called hybrid imaging concepts if the combination is made of complementary imaging methods and if, as in many cases, such hybrid imaging systems are both feasible and cost effective. Together, technologies and methodologies advance quickly thanks to innovations by clinical researchers and users as well as equipment manufacturers. General trends in these advances include attempts to make imaging faster, more accurate and more amenable to patients. Lately, the use of ancillary machine learning approaches and artificial intelligence has become the focus of attention so as to reduce image distortions, reduce patient exposure or examination time, and last but not least, to assist the clinical readers with supplementary decision support systems. In order to keep pace with the systemic advances of imaging, clinical users and readers are required to continuously update their knowledge and understanding. Finally, non-invasive imaging does provide only a snapshot of the patient, while other biomarkers, including omics, may add to the breadth of diagnostic information. Therefore, a closer integration of imaging and non-imaging methods is expected over the years to come as one of the key components of truly personalized medicine.
Additional file 1: Table S1. A list of the key performance characteristics of the different detector material that are currently used commercially, NaI is used as a reference scintillator. Table S2. Important parameters of state-of-the-art premium CT systems today, including the way how the systems realize dual energy CT. Table adopted with permission from reference [241]. Figure S1. Consensus perspective of the co-authors on the use of the key imaging modalities reviewed here for the different stages of cancer patient work-up. Here, diagnosis is the image-led process of identifying cancer. Staging is the image-supported process of assessing the extent of the disease, incl. Metastatic spread. Restaging is the imageled attempt to find out the amount or spread of cancer in the body as the disease returns or intensifies after treatment. Restaging may also be done to find out how the cancer responded to treatment. Follow-up describes the image-supported monitoring process of a person's health over time after treatment. For example, CT imaging is used extensively across all four pillars of cancer patient management while Optical Imaging (OI) plays a significant role primarily during diagnosis and follow-up. Figure S2. Key challenges for PET imaging relate to image quality (partial volume effects, image data / noise, randoms, scatter, motion, etc). These challenges were mentioned first in the late 1980s (centre bars, [16][17][18][19][20][21][22][23]). Since then, multiple technological and methodological advances have been made that help address these challenges. TX = Transmission, recon = image reconstruction. Here, the thickness of the connectors describes the magnitude of the crosscorrelation. | 2020-06-09T15:34:43.210Z | 2020-06-09T00:00:00.000 | {
"year": 2020,
"sha1": "6672bc2b9a87fb85fada8927280e4c7ac79a1776",
"oa_license": "CCBY",
"oa_url": "https://cancerimagingjournal.biomedcentral.com/track/pdf/10.1186/s40644-020-00312-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6672bc2b9a87fb85fada8927280e4c7ac79a1776",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
41629350 | pes2o/s2orc | v3-fos-license | Scalar 3-point functions in CFT: renormalisation, beta functions and anomalies
We present a comprehensive discussion of renormalisation of 3-point functions of scalar operators in conformal field theories in general dimension. We have previously shown that conformal symmetry uniquely determines the momentum-space 3-point functions in terms of certain integrals involving a product of three Bessel functions (triple-K integrals). The triple-K integrals diverge when the dimensions of operators satisfy certain relations and we discuss how to obtain renormalised 3-point functions in all cases. There are three different types of divergences: ultralocal, semilocal and nonlocal, and a given divergent triple-K integral may have any combination of them. Ultralocal divergences may be removed using local counterterms and this results in new conformal anomalies. Semilocal divergences may be removed by renormalising the sources, and this results in CFT correlators that satisfy Callan-Symanzik equations with beta functions. In the case of non-local divergences, it is the triple-K representation that is singular, not the 3-point function. Here, the CFT correlator is the coefficient of the leading nonlocal singularity, which satisfies all the expected conformal Ward identities. Such correlators exhibit enhanced symmetry: they are also invariant under dual conformal transformations where the momenta play the role of coordinates. When both anomalies and beta functions are present the correlators exhibit novel analytic structure containing products of logarithms of momenta. We illustrate our discussion with numerous examples, including free field realisations and AdS/CFT computations.
Introduction
Conformal invariance and its implications for correlation functions is a well-studied subject [1]. Already from the first works on this topic it was clear that 2- and 3-point functions are fixed by conformal invariance up to constants. For example, the 2-point and 3-point functions of scalar operators are given by [2]
$$\langle \mathcal{O}(x_1)\, \mathcal{O}(x_2) \rangle = \frac{C_{12}}{|x_1 - x_2|^{2\Delta}}, \qquad (1.1)$$
$$\langle \mathcal{O}_1(x_1)\, \mathcal{O}_2(x_2)\, \mathcal{O}_3(x_3) \rangle = \frac{C_{123}}{|x_1 - x_2|^{\Delta_1+\Delta_2-\Delta_3}\, |x_2 - x_3|^{\Delta_2+\Delta_3-\Delta_1}\, |x_3 - x_1|^{\Delta_3+\Delta_1-\Delta_2}}. \qquad (1.2)$$
These expressions hold at separated points; at coincident points the 2-point function (1.1) is singular when the operator dimension satisfies
$$\Delta = \frac{d}{2} + k, \qquad k = 0, 1, 2, \ldots \qquad (1.3)$$
One then needs to regularise and renormalise the correlator and this gives rise to new conformal anomalies [3][4][5]. The renormalised correlators then satisfy anomalous conformal Ward identities. The purpose of this paper is to present a renormalised version of the 3-point correlators (1.2). In particular, we would like to understand the analogue of the condition (1.3), the possible new conformal anomalies that arise, and their structure. In [7] we initiated a study of conformal field theory in momentum space. In particular, we started a systematic analysis of the implications of the conformal Ward identities and we presented a complete solution of the conformal Ward identities for scalar and tensor 3-point functions. Here we will present a comprehensive discussion of regularisation/renormalisation for scalar 3-point functions. The corresponding discussion for tensorial 3-point functions will appear in a sequel [29].
The organisation of this paper, and an overview of our plan of attack, is as follows. We start in section 2 with the conformal Ward identities in position space, and derive their corresponding form in momentum space. Rather than attempting to construct a well-defined Fourier transform for the correlators (1.1) and (1.2) (which, while straightforward for 2-point functions, is very challenging for 3-point functions [12]), we will instead simply solve the conformal Ward identities directly in momentum space. As preparation for our analysis of 3-point functions, in section 3 we first solve the momentum-space Ward identities for 2-point functions, reviewing their renormalisation and the anomalies that arise in cases where the condition (1.3) is satisfied.
Our main analysis of CFT 3-point functions then follows in section 4. In section 4.1, we convert the conformal Ward identities from their original tensorial form to a purely scalar form. The solution for 3-point functions can then be written as an integral of three Bessel-K functions:
$$\langle\!\langle \mathcal{O}_1(p_1)\, \mathcal{O}_2(p_2)\, \mathcal{O}_3(p_3) \rangle\!\rangle = C_{123} \int_0^\infty \mathrm{d}x\, x^{d/2-1} \prod_{j=1}^{3} p_j^{\Delta_j - d/2}\, K_{\Delta_j - d/2}(p_j x). \qquad (1.4)$$
This is the triple-K integral, and we review its derivation in section 4.2. (Our double-bracket notation for momentum-space correlators simply indicates the removal of the overall momentum-conserving delta function.) For generic values of the operator dimensions this triple-K integral is well defined, either directly through convergence of the integral or else indirectly through analytic continuation, leading to a correspondingly well-defined 3-point function in momentum space. As we will show, however, there are certain special values of the operator dimensions for which the triple-K integral is singular. In these cases regularisation and renormalisation are required. The condition identifying these special values is:
$$\frac{d}{2} \pm \left(\Delta_1 - \frac{d}{2}\right) \pm \left(\Delta_2 - \frac{d}{2}\right) \pm \left(\Delta_3 - \frac{d}{2}\right) = -2k. \qquad (1.5)$$
Here, d is the spacetime dimension (though we work throughout in Euclidean signature for simplicity) and k is any non-negative integer (i.e., k = 0, 1, 2, . . .). Any independent choice of the ± signs can be made for each of the terms in this expression, and a different value of k is permitted for each choice. The remainder of section 4 then presents our renormalisation procedure. First, we discuss the different types of singularities that can arise in the triple-K integral: these correspond to the different choices of signs for which the singularity condition (1.5) can be satisfied. The different types of singularity are not mutually exclusive and can arise either separately or in various combinations. Each type of singularity is linked to the existence of a particular type of counterterm that can be added to the CFT action: the nature of these counterterms then reveals how to deal with each of the different types of singularity.
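As a rough numerical illustration (not taken from the paper), the triple-K representation (1.4) can be evaluated directly whenever the integral converges; the spacetime dimension, operator dimensions and normalisation below are illustrative choices, and the dilatation Ward identity provides a simple consistency check.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Sketch: numerical evaluation of the triple-K representation (1.4) for a choice
# of dimensions where the integral converges (here d = 4, Delta_1,2,3 = 2, C_123 = 1).
def triple_k(p1, p2, p3, deltas=(2.0, 2.0, 2.0), d=4.0):
    betas = [dl - d / 2 for dl in deltas]
    prefactor = np.prod([p**b for p, b in zip((p1, p2, p3), betas)])
    integrand = lambda x: x**(d / 2 - 1) * np.prod(
        [kv(b, p * x) for p, b in zip((p1, p2, p3), betas)])
    value, _ = quad(integrand, 0, np.inf, limit=200)
    return prefactor * value

I1 = triple_k(1.0, 1.2, 1.5)
I2 = triple_k(2.0, 2.4, 3.0)          # all momenta rescaled by 2
# Dilatation Ward identity: the total momentum-space degree is Delta_t - 2d = -2 here,
# so rescaling p -> 2p should divide the correlator by 4.
print(I1, I2, I1 / I2)                # last number ~ 4
```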
In general, the singularities may be either ultralocal, semilocal or nonlocal, by which we mean that the corresponding position-space expressions have support either only when all three insertion points coincide (ultralocal), only when two insertions coincide (semilocal), or else without any insertions coinciding (nonlocal). In momentum space, ultralocal singularities correspond to expressions that are purely analytic in the squared momenta (i.e., $p_1^2$, $p_2^2$ and $p_3^2$, where each $p_i^2 = p_i \cdot p_i$), while semilocal singularities are constructed from terms each of which is non-analytic in only a single squared momentum. Nonlocal singularities, on the other hand, are constructed from terms that are individually non-analytic in two or more squared momenta. For the triple-K integral to contain such nonlocal singularities, the singularity condition (1.5) must admit at least one solution with either two or three plus signs. If nonlocal singularities are absent but the triple-K integral has semilocal singularities, the singularity condition (1.5) admits a solution with two minus signs and one plus sign. If instead only ultralocal singularities are present (as was the case for 2-point functions when (1.3) was satisfied), the singularity condition (1.5) can only be satisfied with three minus signs.
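For concreteness, a small sketch (assuming the form of condition (1.5) as written above) that enumerates the sign choices and applies the ultralocal/semilocal/nonlocal classification just described:

```python
from itertools import product

# Sketch: enumerate the sign choices in the singularity condition (1.5) and
# classify the resulting singularity type of the triple-K representation.
def singular_sign_choices(d, d1, d2, d3, kmax=20):
    hits = []
    for signs in product((+1, -1), repeat=3):
        lhs = d / 2 + sum(s * (dl - d / 2) for s, dl in zip(signs, (d1, d2, d3)))
        # condition: lhs = -2k for some non-negative integer k
        if lhs <= 0 and abs(lhs / 2 - round(lhs / 2)) < 1e-9 and -lhs / 2 <= kmax:
            hits.append((signs, int(round(-lhs / 2))))
    return hits

def classify(hits):
    if any(sum(s) >= 1 for s, _ in hits):      # two or three plus signs
        return "nonlocal"
    if any(sum(s) == -1 for s, _ in hits):     # one plus sign, two minus signs
        return "semilocal"
    if hits:                                   # only three minus signs
        return "ultralocal"
    return "non-singular"

# Example: d = 3 with Delta_1 = Delta_2 = Delta_3 = 2 satisfies the (---) condition
# with k = 0, so only ultralocal (anomaly-type) divergences appear.
hits = singular_sign_choices(3, 2, 2, 2)
print(hits, classify(hits))
```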
In section 4.3, we show that ultralocal singularities in the triple-K integral can be removed through the addition of local counterterms constructed from the sources. The corresponding renormalised 3-point functions then contain single logarithms of momentum divided by the renormalisation scale µ. This explicit µ-dependence signals the presence of a conformal anomaly. Interestingly, this anomaly can arise in both odd- and even-dimensional spaces, unlike the more familiar trace anomaly that appears when we put the CFT on a background metric. Semilocal singularities of the triple-K integral can be removed by a renormalisation of the sources for the scalar operators. In this case we find a surprising new result: that the corresponding renormalised 3-point correlators contain double logarithms of momenta. These renormalised correlators obey Callan-Symanzik equations with non-trivial beta function terms. There is no contradiction with the theory being a CFT, however, as these beta functions are for sources that couple to composite operators, rather than to operators appearing in the fundamental Lagrangian of the theory. Finally, nonlocal singularities of the triple-K integral cannot be removed by local counterterms: instead it is the triple-K representation that is singular. In such cases the renormalised correlator is simply given by the leading nonlocal singularity of the triple-K integral, which as we will show directly satisfies the appropriate conformal Ward identities. Section 4.3.1 discusses our regularisation procedure for the divergent triple-K integral: this is most easily accomplished by infinitesimally shifting the dimensions of operators and of the spacetime itself. These shifts give rise to corresponding shifts in the indices of the Bessel-K functions that appear in the triple-K integral, as well as in the power of the integration variable. The advantage of this regularisation scheme is that the regulated triple-K integral preserves conformal invariance, and satisfies a set of regulated conformal Ward identities. It is also straightforward to extract the divergences of the regulated triple-K integral as the regulator is removed. As we will show, the divergences can be read off from a simple series expansion of the integrand about the origin.
In section 4.3.2 we discuss the residual freedom in the regularisation scheme, corresponding to the precise manner in which the operator and spacetime dimensions are shifted. It is straightforward to convert between the different choices of scheme, and we discuss the procedure for doing this. As the regulated triple-K integrals satisfy regulated Ward identities, by expanding in powers of the regulator one can identify the Ward identities satisfied by the individual divergent terms in the regulated triple-K integral. These Ward identities contain anomalous terms as we show in section 4.3.3, although we defer a full analysis until section 5. In section 4.3.4 we illustrate in detail our renormalisation procedure for all cases in which the triple-K integral has only a single pole in the regulator, and present a number of explicit examples. This case is the simplest that can arise; cases where the regulated triple-K integral contains higher-order singularities are discussed in section 4.3.5, which again presents a number of worked examples, postponing a complete analysis to appendix A.
In certain cases the correlation functions we consider can be realised in perturbative conformal field theories or in free field theories such as massless scalars or fermions. When this happens, the correlators can be calculated using perturbation theory by means of (typically multi-loop and heavily divergent) Feynman diagrams. The standard renormalisation procedure for Feynman diagrams then proceeds loop by loop, where nested divergences are removed at every step leading to a sequence of momentum integrals, each exhibiting only ultralocal divergences. This renormalisation procedure differs only in execution from our more general procedure, which is valid for any CFT (perturbative or non-perturbative), but is otherwise completely equivalent. In both cases the divergences are removed by the addition of counterterms to the action, and these counterterms have identical form (modulo scheme dependence). Any possible difference in the final renormalised correlation functions can therefore be removed by introducing finite counterterms, meaning that the two schemes are equivalent. However, since conformal field theories need not be perturbative, the methods we present in this paper are much more general than Feynman diagram-based calculations.
In section 5, we present a general first-principles discussion of the conformal Ward identities obeyed by the renormalised correlators, including the contributions from both beta functions and conformal anomalies. As well as confirming the Ward identities found earlier for specific correlators, we obtain a general understanding of the relationship between the anomalous terms appearing in the Ward identities for dilatations and for special conformal transformations. As we show, this relationship sometimes leads to additional constraints on the renormalisation-scheme dependent constants that feature in the renormalised correlators.
In section 6 we discuss dual conformal invariance: the extraordinary observation that in certain cases the CFT 3-point functions in momentum space take precisely the form expected for a CFT 3-point function in position space (namely (1.2) with x_i → p_i). For this additional momentum-space conformal symmetry to be present, the leading divergence of the regulated triple-K integral must be nonlocal. We give a number of examples and clarify the origin of dual conformal invariance by relating triple-K integrals to the star-triangle duality of ordinary 1-loop massless Feynman integrals.
We summarise and present our main conclusions in section 7. Four important appendices then complete our analysis. In appendix A, we derive a complete classification of all possible singularities of the triple-K integral for any 3-point correlator. Renormalising in a convenient choice of scheme, we arrive at explicit expressions for the renormalised 3-point functions wherever these can be read off from the singularities of the triple-K integral. Changes of renormalisation scheme are related to a corresponding non-uniqueness of the triple-K representation as we discuss. Appendix B then elaborates on the curious relations found between correlators of operators with 'shadow' dimensions ∆ and d − ∆.
Appendix C provides independent confirmation of our main results (including the presence of double logarithms of momenta) through explicit free field calculations. Here we also demonstrate that our renormalisation procedure yields results equivalent to those obtained through a conventional perturbation theory analysis. Appendix D discusses triple-K integrals in a holographic context, explaining how they arise in AdS/CFT calculations of 3-point functions. We present a complete worked example of holographic renormalisation for the 3-point function of a marginal operator in three dimensions.
Conformal Ward identities
Let O_1, O_2, . . . , O_n be conformal primary operators of dimensions ∆_1, ∆_2, . . . , ∆_n. The dilatation Ward identity in position space reads 0 = Σ_{j=1}^n (∆_j + x_j^µ ∂/∂x_j^µ) ⟨O_1(x_1) · · · O_n(x_n)⟩. This Ward identity tells us that the correlator is a homogeneous function of the positions of degree −∆_t, where the total dimension ∆_t = Σ_j ∆_j. The Ward identity associated with special conformal transformations for n-point functions is 0 = Σ_{j=1}^n [2∆_j x_j^µ + 2 x_j^µ (x_j · ∂/∂x_j) − x_j^2 ∂/∂x_{jµ}] ⟨O_1(x_1) · · · O_n(x_n)⟩, where µ is a free Lorentz index. For tensorial operators an additional term appears, see [7]. In position space, the special conformal Ward identity is a first-order linear PDE. It can be solved by using the fact that special conformal transformations can be obtained by combining inversions and translations, and then analysing the implications of inversions.
Here we will instead solve the special conformal Ward identity directly. In momentum space, translational invariance implies that we can pull out a momentum-conserving delta function, thereby defining the reduced matrix element which we denote with double brackets. The Ward identities for the reduced matrix elements are then where we used the momentum-conserving delta function to express p_n in terms of the other momenta.
The dilatation Ward identity (2.4) is again easy to deal with: it tells us that the reduced matrix elements are homogeneous functions of degree ∆ t − (n − 1)d. The special conformal Ward identity (2.5) is now a second-order linear PDE (while it was first-order in position space), so at first sight going to momentum space appears to make the problem more difficult. However, momentum space has one advantage: any tensorial object can be expanded in a basis constructed out of momenta and the metric. Let us denote the differential operator on the right-hand side of (2.5) as K µ , so that the conformal Ward identities may be compactly expressed as Since K µ carries one free Lorentz index, K µ can be decomposed into a basis of independent vectors p µ j , j = 1, 2, . . . , n − 1, i.e., The Ward identity (2.5) thus gives rise to (n − 1) scalar equations, Altogether the dilatation and special conformal Ward identities constitute n differential equations. A Poincaré-invariant n-point function of scalar operators depends on n(n − 1)/2 kinematic variables, so after imposing the conformal Ward identities, the correlator should be a function of n(n − 3)/2 variables. This agrees with position-space considerations: the number of conformal cross-ratios in n variables in d > 2 dimensions is n(n − 3)/2.
2-point functions
As a warm-up exercise, in this section we discuss CFT 2-point functions. We will use this section to establish the benchmarks we want to achieve for 3-point functions.
Poincaré symmetry implies that the correlator depends only on the magnitude of a single vector p 1 = −p 2 ≡ p and both the dilatation and special conformal Ward identities (2.4) and (2.5) simplify to ordinary differential equations.
We start by discussing the implications of special conformal transformations. The special conformal Ward identity is indeed proportional to p^µ (after using ∂/∂p^µ = (p_µ/p) d/dp) and the corresponding scalar equation reads K ⟨⟨O_1(p)O_2(−p)⟩⟩ = 0, with K = d²/dp² + (d + 1 − 2∆_1) p^{−1} d/dp. (3.1) As we shall see, the differential operator K will reappear later in our discussion of the conformal Ward identities for 3-point functions. Note also that K, when acting on spherically symmetric configurations, is equal to the box operator in R^{d+2−2∆_1} with p the radial coordinate.
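Since the explicit form of (3.1) is not spelled out above, the following small sympy check (ours, not taken from the paper) assumes the operator K = d²/dp² + (d + 1 − 2∆_1) p^{−1} d/dp, consistent with the box-operator remark, and verifies that the power law p^{2∆−d} is annihilated by it and has the homogeneity degree fixed by dilatations.

```python
# Minimal sympy sketch (assumed operator form, see the text above): check that
# F(p) = p^(2*Delta - d) is annihilated by K = d^2/dp^2 + (d + 1 - 2*Delta)/p d/dp,
# and that it is homogeneous of degree 2*Delta - d.
import sympy as sp

p, d, Delta = sp.symbols('p d Delta', positive=True)
F = p**(2*Delta - d)

K_of_F = sp.diff(F, p, 2) + (d + 1 - 2*Delta)/p * sp.diff(F, p)
print(sp.simplify(K_of_F))                      # -> 0: solves the special conformal ODE

D_of_F = p*sp.diff(F, p) - (2*Delta - d)*F      # dilatation: fixed homogeneity degree
print(sp.simplify(D_of_F))                      # -> 0
```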
The general solution of (3.1) is a two-parameter family, where c_0 and c_1 are integration constants. We still need to impose the dilatation Ward identity, We thus recover the well-known fact that only operators with the same dimension have a non-zero 2-point function in a CFT. The general form of the 2-point function is (3.6), where we renamed c_0 → c_∆. For generic dimension ∆ this is the end of the story. Something special happens, however, when ∆ = d/2 + k for some non-negative integer k. (3.7) When this condition holds, the momentum dependence p^{2∆−d} = p^{2k} is analytic. This correlator is local, i.e., it has support only at x² = 0, since if we Fourier transform to position space it is proportional to (derivatives of) a delta function, (3.9). When the dimension of the operator is (3.7) there is something else special: there is a new local term of dimension d, namely φ □^k φ, (3.10) where φ is the source of O. This term can appear as a new counterterm (and as we shall see below, as a new contribution to the trace of the energy-momentum tensor, i.e., a new conformal anomaly [5]). Adding the counterterm (3.10) with appropriate (finite) coefficient one may arrange to cancel the right-hand side of (3.9), obtaining ⟨O(x)O(0)⟩ = 0. [Footnote 5: In the special case ∆_1 = d/2 the general solution of (3.1) is ⟨O_1(p)O_2(−p)⟩ = c_0 + c_1 ln p, and then inserting in (3.4) we find (3.5).] [Footnote 6: In position space the problem is that the standard expression, 1/x^{2∆}, does not have a Fourier transform: from the standard Fourier transform of a power law we see that the gamma function has a pole when ∆ = d/2 + k. One may proceed by differential regularisation to obtain the renormalised correlator. The final result (upon taking the Fourier transform, which now exists) agrees with (3.23).] In a unitary theory, this
implies that O = 0 as an operator. However, we know there are CFTs containing nontrivial operators of dimension ∆ = d/2 + k. For example, all half-BPS scalar operators of N = 4 SYM in d = 4 have dimensions of this form. What happens in these special cases is that there are new UV infinities and we need to renormalise the theory. As we shall see, the renormalised correlators will be non-trivial. However, the theory will now have a conformal anomaly: the conformal Ward identities will be violated by local terms. Our strategy will be the following. First, we will regularise the theory and solve the conformal Ward identities in the regulated theory. We will then add counterterms to remove the UV infinities and remove the regulator to obtain renormalised correlators.
To proceed we need to discuss our regularisation. We want to analyse the problem in complete generality, i.e., with no reference to any specific model, and the only parameters at our disposal are the space-time dimension and the dimensions of the operators. We proceed by using a dimensional regularisation that also shifts the dimensions of the operators as follows, d → d̃ = d + 2uε, ∆ → ∆̃ = ∆ + (u + v)ε, (3.11) where u and v are arbitrary real numbers and ε denotes a regulator. More generally, one may shift each dimension by a different amount but we found that this scheme is sufficient for the discussion up to 3-point functions. We will discuss special choices of u and v below. The solution of the conformal Ward identities in the regulated theory is exactly the same as in (3.6) (with d and ∆ replaced by d̃ and ∆̃) but the integration constant c_∆ can depend on the regulator, ⟨⟨O(p)O(−p)⟩⟩ = c_∆(ε, u, v) p^{2∆−d+2vε}. (3.12) In dimensional regularisation, UV infinities appear as poles in ε. In local QFT, UV infinities should be local and this implies that c_∆ can have at most a first-order pole in ε. Inserting this in (3.12) and expanding in ε we find (3.14). The generators of dilatations and special conformal transformations in the regulated theory are related to those of the original as follows. Notice that in the v = 0 scheme the generators are not corrected. However, for this scheme the 2-point function itself is not regulated so this is not a useful scheme for 2-point functions. This will change when we move to 3-point functions and it will turn out that for scalar 3-point functions this is a convenient scheme. From now on we will stay with a general (u, v) scheme. The fact that the regulated correlator (3.14) is annihilated by D̃ and K̃ implies that the terms that appear in its expansion will satisfy related equations.
In particular, the leading-order term in the expansion should satisfy the Ward identities of the un-regulated theory which we have already solved. Let us start with the generic case, ∆ ≠ d/2 + k. In this case there are no true UV infinities and our earlier discussion shows that (3.6) is the correct 2-point function. It is instructive however to still discuss it starting from the regulated theory. The regulated 2-point function (3.14) has a 1/ε singularity. However, its coefficient is nonlocal and thus it cannot be removed by a local counterterm. On the other hand, it satisfies the correct (non-anomalous) Ward identities, and the same with D̃ and D replaced by K̃ and K. It follows that p^{2∆−d} is the correct 2-point function. In a sense the leading-order pole is 'fake': we could remove it by multiplying c_0(ε, u, v) by ε. This discussion may look somewhat superfluous but we will find an exactly analogous situation when we discuss 3-point functions. Let us now discuss the case ∆ = d/2 + k. Here, the leading-order divergence is local and satisfies the Ward identities. This is precisely as expected on general grounds: divergences should be local and should be invariant under the original symmetries of the theory. With φ again denoting the source for the operator O, the regulated action reads If Z denotes the generating functional of the regulated theory, The divergence in the 2-point function (3.14) can be removed by the addition of the counterterm action where a_ct(ε, u, v) is a counterterm constant. As is standard in dimensional regularisation, the renormalisation scale µ appears for dimensional reasons. In the regularisation scheme (3.11), φ has scaling dimension d − ∆ + (u − v)ε and this implies that µ enters with power 2vε. The contribution from the counterterm action reads and cancels the divergence in (3.14) if where a_0 is an arbitrary constant. We can now take the limit ε → 0 to obtain the renormalised correlation function where c_∆ is the actual normalisation of the 2-point function and the combination c_∆ − a_0 is scheme dependent, since it can be absorbed by a redefinition of the scale µ. The renormalised 2-point function (3.23) is however scale dependent, There is thus a conformal anomaly, where W = ln Z is the generating functional of connected correlation functions and A is the conformal anomaly [5], where A_k is the anomaly coefficient (which can be read off from (3.24)), the sum is over all operators of dimension ∆ = d/2 + k, and the dots indicate terms higher order in the sources and terms that are associated with non-scalar operators (such as the more often discussed terms that depend only on the background metric). In the next section we will compute the terms cubic in the sources.
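The mechanism behind the renormalised logarithm and the resulting local µ-dependence can be made explicit with a schematic sympy calculation. The expressions below are illustrative only (the precise normalisations of (3.14) and (3.22) are not reproduced): the regulated 2-point function is modelled as (c/ε) p^{2k+2vε} and the counterterm contribution as −(c/ε) µ^{2vε} p^{2k}.

```python
# Schematic illustration (not the paper's exact expressions): the 1/eps pole of the
# regulated 2-point function is cancelled by the counterterm, leaving a log of p/mu,
# whose mu-derivative is analytic in p^2 (the conformal anomaly is local).
import sympy as sp

eps, p, mu, v, c = sp.symbols('epsilon p mu v c', positive=True)
k = 1  # any non-negative integer, Delta = d/2 + k

regulated   = (c/eps) * p**(2*k + 2*v*eps)           # schematic regulated 2-point function
counterterm = -(c/eps) * mu**(2*v*eps) * p**(2*k)    # schematic counterterm contribution

renormalised = sp.limit(regulated + counterterm, eps, 0)
print(sp.simplify(renormalised))                     # proportional to p**2 * log(p/mu)

anomaly = sp.simplify(mu*sp.diff(renormalised, mu))
print(anomaly)                                       # -> -2*c*v*p**2, i.e. a local term
```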
3-point functions
We now present the analogous discussion for scalar 3-point functions. We start with the conformal Ward identities and their solution for generic conformal dimensions, then discuss the special cases where renormalisation may be required. We illustrate our discussion throughout with explicit examples.
Ward identities
Poincaré invariance implies that 3-point functions can be expressed in terms of three variables, which we choose to be the magnitudes of the three momenta, Using the chain rule and noting that p 3 = −p 1 − p 2 , we find The dilatation Ward identity (2.4) may then be processed to become where ∆ t = ∆ 1 + ∆ 2 + ∆ 3 . This equation shows that the correlation function is a homogeneous function of degree ∆ t − 2d, which implies that where F is a general function of two variables. Let us now discuss the special conformal Ward identity (2.5). As noted in section 2, it implies two scalar equations. The first one reads while the second equation, is obtained from this one by substituting p 1 ↔ p 2 and ∆ 1 ↔ ∆ 2 . Let us consider the combinations The effect of the dilatation terms is to remove the terms with mixed derivatives in (4.5).
In this way we arrive at the particularly simple set of equations discussed in [7], K_{ij} ⟨⟨O_1(p_1)O_2(p_2)O_3(p_3)⟩⟩ = 0, where K_{ij} = K_i − K_j and K_j = ∂²/∂p_j² + (d + 1 − 2∆_j) p_j^{−1} ∂/∂p_j, for i, j = 1, 2, 3. Note that K_i is the same operator that appeared in our analysis of 2-point functions, see (3.1).
General solution
The system of the dilatation and special conformal Ward identities is equivalent to that defining the generalised hypergeometric function of two variables Appell F_4 [7,27] and from this fact one can infer general properties such as the uniqueness of the solution. An explicit form of the general solution is given in terms of triple-K integrals [7], ⟨⟨O_1(p_1)O_2(p_2)O_3(p_3)⟩⟩ = c_{123} I_{d/2−1, {∆_1−d/2, ∆_2−d/2, ∆_3−d/2}}(p_1, p_2, p_3), where c_{123} is an integration constant and I_{α,{β_1β_2β_3}}(p_1, p_2, p_3) = ∫_0^∞ dx x^α ∏_{j=1}^3 p_j^{β_j} K_{β_j}(p_j x) is the triple-K integral. Here K_ν(x) denotes the modified Bessel function of the second kind (or the Bessel-K function for short), while the parameters are α = d/2 − 1 and β_j = ∆_j − d/2. Before we proceed to use this result, let us present an elementary derivation of it. We will start by solving (4.7) using separation of variables, writing F(p_1, p_2, p_3) = f_1(p_1) f_2(p_2) f_3(p_3). (4.13) Inserting this ansatz in (4.7), we obtain (K_1 f_1)/f_1 = (K_2 f_2)/f_2 = (K_3 f_3)/f_3 ≡ x², where x² is a constant since the equalities hold for arbitrary p_i. The equation K_i f_i = x² f_i is equivalent to Bessel's equation and has the general solution f_i(p_i) = p_i^{β_i} [a_i K_{β_i}(p_i x) + b_i I_{β_i}(p_i x)]. The integrand of the triple-K integral is thus itself a solution of the special conformal Ward identities. Now, given a solution of the special conformal Ward identities f(p_1, p_2, p_3) = ∏_i f_i(p_i), we can immediately construct a solution of both the special conformal and the dilatation Ward identities by taking the Mellin transform, where β_t = β_1 + β_2 + β_3. To see this, note that and then use integration by parts. In order for this Mellin transform to converge, at least one of the f_i(p_i) must be a Bessel-K function, as Bessel-I grows exponentially at large x. A closer analysis [7,27] (see also appendix A.3) reveals that in fact all three f_i(p_i) must be Bessel-K functions, as otherwise the resulting 3-point function becomes singular for collinear momentum configurations (e.g., p_1 + p_2 = p_3). It remains to discuss convergence at x = 0. As it stands, the triple-K integral converges only if α > |β_1| + |β_2| + |β_3| − 1, p_1, p_2, p_3 > 0. (4.18) However, one can extend the triple-K integral beyond this region by means of analytic continuation. If one considers the triple-K integral as a function of its parameters with momenta fixed, then analytic continuation can be used in order to define the triple-K integral everywhere, provided α + 1 ± β_1 ± β_2 ± β_3 ≠ −2k (4.19) for any choice of (independent) signs and non-negative integer k. When the equality holds we recover (1.5) and the triple-K integral contains poles (as we will discuss in detail shortly). In such cases a non-trivial renormalisation of the correlation function (4.10) may be required. In summary, when the dimensions are generic, meaning (1.5) is not satisfied for any choice of signs and non-negative integer k, the solution of the dilatation and special conformal Ward identities is (4.10). This is then the analogue of (3.6) for 3-point functions. We will shortly discuss in detail the special cases but first a couple of examples. In these examples, and those we consider later, it will often be useful to label operators and their sources according to their (bare) dimensions, as indicated in square brackets. In this notation an operator of dimension ∆ and its corresponding source are thus O_[∆] and φ_[d−∆]. Example 1: the 3-point function is given by (4.20), where c is the integration constant. All Bessel-K functions with half-integral indices are elementary. In this case the integral is convergent and can be evaluated in closed form. Example 2: d = 4 and ∆_1 = ∆_2 = ∆_3 = 2.
In this case the 3-point function is given by c_{123} I_{1{000}}. It turns out that this integral has already been computed in the literature [31,32] and is given by (4.23), where the remaining notation is defined in (4.24). As will be discussed in [33] (see also [7]), triple-K integrals with integral indices can be obtained from this integral using a recursion method.
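As an illustrative numerical cross-check (ours, not from the paper), the snippet below evaluates the triple-K integral for the convergent Example 2 values (d = 4, ∆_i = 2, so α = 1 and β_i = 0) and verifies the dilatation Ward identity: the integral should be homogeneous of degree ∆_t − 2d = −2 in the momenta. The function name and the chosen momentum values are arbitrary.

```python
# Illustrative numerical check of the triple-K representation:
#   I_{alpha,{beta_i}}(p) = int_0^inf dx x^alpha prod_i p_i^beta_i K_{beta_i}(p_i x)
# for d = 4, Delta_i = 2 (alpha = 1, beta_i = 0), and of its homogeneity degree -2.
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def triple_k(alpha, betas, ps):
    integrand = lambda x: x**alpha * np.prod([p**b * kv(b, p*x) for p, b in zip(ps, betas)])
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

alpha, betas = 1.0, (0.0, 0.0, 0.0)
ps = np.array([1.3, 0.9, 1.7])
lam = 2.0

I1 = triple_k(alpha, betas, ps)
I2 = triple_k(alpha, betas, lam*ps)
print(I1, I2, np.log(I2/I1)/np.log(lam))   # the last number should be close to -2
```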
Example 3: d = 4 and ∆_1 = ∆_2 = ∆_3 = 7/2. This is an example of a finite correlation function expressible in terms of a triple-K integral which diverges but nevertheless possesses a unique analytic continuation. The 3-point function is represented by (4.26). In this case the condition (4.18) is violated so the integral does not converge. However, (4.19) does hold for all choices of signs and therefore the integral can be defined by means of analytic continuation. In such cases the dimensionally regulated integral is actually finite. We discuss dimensional regularisation below, in section 4.3.1. The integral (4.26) can be regulated in any (u, v)-regularisation scheme (see (4.36)). However, since the Bessel functions are elementary when their orders are half integers, it is convenient to use the (1, 0)-scheme, in which one finds an explicit expression in terms of a_123 = p_1 + p_2 + p_3, b_123 = p_1 p_2 + p_1 p_3 + p_2 p_3 and c_123 = p_1 p_2 p_3. This expression is valid for a range of ε, not necessarily close to zero. It has a finite ε → 0 limit, as anticipated. This 3-point function satisfies all conformal Ward identities.
In summary, if all the beta indices are half-integral the triple-K integrals can be computed in terms of elementary functions and if they are integral they are given in terms of expressions involving dilogarithms. If the beta indices are generic, the triple-K integral does not appear to be reducible to a more explicit expression.
Renormalisation
We will now focus on the special cases where the triple-K integral is singular, i.e., we will consider the cases where the dimensions of operators satisfy one or more of the following conditions, α + 1 + σ_1 β_1 + σ_2 β_2 + σ_3 β_3 = −2k_{σ_1σ_2σ_3}, (4.30) where σ_i ∈ {±}, i = 1, 2, 3 and the k_{σ_1σ_2σ_3} are non-negative integers. There are four conditions (up to permutations) depending on the relative number of minus and plus signs. We will call these conditions the (−−−), (−−+), (−++) and (+++) conditions. In general the condition (4.30) can be satisfied in more than one way, with a different number of positive and negative signs and different values of associated non-negative integers k_{σ_1σ_2σ_3}. We will discuss all possibilities below. When these conditions hold there are new terms of dimension d that appear (as was the case in our discussion of 2-point functions of operators of dimension d/2 + k in section 3), and the nature of these terms gives a hint of how to deal with each of the singularities. Let us discuss each case in turn.
(− − −)-condition: ∆_1 + ∆_2 + ∆_3 = 2d + 2k_{−−−}. In this case the new terms of dimension d have the following schematic form, ∂^{2k_1}φ_1 ∂^{2k_2}φ_2 ∂^{2k_3}φ_3, (4.31) where the φ_i are sources for the operators O_i of dimension ∆_i, and k_1 + k_2 + k_3 = k_{−−−}. Such terms are a direct analogue of (3.10); they may appear as counterterms and also as new conformal anomalies. The fact that new conformal anomalies may appear when the theory has operators with dimensions that satisfy this relation was anticipated in [5,30]. We thus expect that when such singularities are present one would have to renormalise by adding (4.31) with the appropriate coefficient and there would be an associated conformal anomaly. As we shall see, such singularities are linked with logarithmic terms in the renormalised 3-point functions, similar to what we saw for 2-point functions.
(− − +)-condition: ∆_1 + ∆_2 − ∆_3 = d + 2k_{−−+}. In this case (and similarly for its permutations), the new terms of dimension d have the following schematic form, ∂^{2k_1}φ_1 ∂^{2k_2}φ_2 ∂^{2k_3}O_3, where k_1 + k_2 + k_3 = k_{−−+}. This term can appear as a counterterm (with appropriate singular coefficient a_ct) and thus in this case we renormalise the source of O_3. We then expect the renormalised correlators to satisfy a Callan-Symanzik equation with beta function terms. These beta functions are for sources that couple to composite operators and not for couplings that appear in the Lagrangian of the theory, so there is no contradiction here with the fact that we are discussing CFT correlation functions. As we shall see, singularities of this type are linked with double logarithms in correlation functions. The existence of such double-log terms, noted also in [30], is one of our most surprising findings, and will be discussed further in the conclusions.
(− + +)-condition: ∆_1 = ∆_2 + ∆_3 + 2k_{−++}. In this case (and similarly for its permutations), the following term has a classical dimension d: schematically, ∂^{2k_1}φ_1 ∂^{2k_2}O_2 ∂^{2k_3}O_3, [Footnote 7: Note that when k_{−++} = 0, we have extremal correlators which were conjectured not to renormalise [34].]
where k_1 + k_2 + k_3 = k_{−++}. In other words, classically O_1 has the same dimension as O_2 O_3 dressed with 2k_{−++} derivatives. Such a term cannot act as a counterterm for the 3-point function. As we shall see, in such cases it is the representation of the 3-point function in terms of the triple-K integrals that is singular, not the correlator itself. The conformal Ward identities have a finite non-anomalous solution.
(+ + +)-condition: ∆_1 + ∆_2 + ∆_3 = d − 2k_{+++}. This is similar to the previous case. The term O_1 O_2 O_3, dressed with 2k_{+++} derivatives, is classically marginal and the same comments as in the case of the (− + +)-condition apply. In particular, this term cannot act as a counterterm and it is again the representation of the 3-point function that is singular. The conformal Ward identities have a finite non-anomalous solution.
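The four cases above can be scanned mechanically. The short helper below (our own sketch, assuming the singularity condition (4.30) in the form α + 1 + σ_1β_1 + σ_2β_2 + σ_3β_3 = −2k quoted above) enumerates which sign choices are realised for a given set of dimensions and classifies the resulting singularity as ultralocal, semilocal or nonlocal according to the number of plus signs.

```python
# Sketch: enumerate solutions of the assumed singularity condition
#   alpha + 1 + s1*b1 + s2*b2 + s3*b3 = -2k,  k = 0, 1, 2, ...,
# with alpha = d/2 - 1 and b_i = Delta_i - d/2, and classify each solution.
from itertools import product
from fractions import Fraction as F

def singularities(d, deltas, kmax=10):
    alpha = F(d, 2) - 1
    betas = [F(x) - F(d, 2) for x in deltas]
    hits = []
    for signs in product((1, -1), repeat=3):
        lhs = alpha + 1 + sum(s*b for s, b in zip(signs, betas))
        for k in range(kmax + 1):
            if lhs == -2*k:
                kind = {0: 'ultralocal (---)', 1: 'semilocal (--+)',
                        2: 'nonlocal (-++)', 3: 'nonlocal (+++)'}[signs.count(1)]
                hits.append((signs, k, kind))
    return hits

# Three marginal operators in d = 3: (--+)-type (semilocal) singularities with k = 0
print(singularities(3, (3, 3, 3)))
# d = 4 with Delta = (4, 3, 3): both a (---) condition (k = 1) and (--+) conditions (k = 0)
print(singularities(4, (4, 3, 3)))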
Regularisation
We will regularise using the dimensional regularisation (3.11). In the regulated theory the solution of the conformal Ward identities is again given by (4.10) but with the indices shifted, α → α + uε and β_i → β_i + vε, and the integration constant depends now on the regularisation parameters. We see from these expressions that v = 0 is special in that the indices of the Bessel functions remain the same. This makes the analysis of the singularity structure of the triple-K integral easier, as we discuss in appendix A. However, as mentioned in the previous section, this scheme does not regulate 2-point functions and as such it is not a good scheme for regulating tensorial 3-point functions involving the energy-momentum tensor and conserved currents. In these cases, the Weyl and diffeomorphism/conservation Ward identities relate 2- and 3-point functions (see for example [7]). For this reason we will continue to work in the general (u, v) scheme. In section 4.3.2 we will discuss how to go from one scheme to another. The regulated triple-K integral I_{α+uε,{β_i+vε}} is well defined since for nonzero ε the condition (4.30) (with α → α + uε, β_i → β_i + vε) does not hold. The integral is nevertheless still singular as ε → 0, however, and our task is to extract the singularities and understand how to deal with them. This can be achieved in an elementary fashion as follows. Since the integral converges at infinity even when ε → 0, all singularities come from the x = 0 region. We therefore split the integral into an upper and a lower piece, where µ is an arbitrary scale which plays the role of the renormalisation scale. Note that by construction the full answer for I_{α+uε,{β_i+vε}} is independent of µ. We now focus on the lower part (which contains the UV infinities) and note that for small x, the integrand has a Frobenius series The exponents η and the coefficients c_η follow from the standard series expansions for Bessel functions. After some manipulation we find where we used the fact that we work in a general (u, v) scheme so neither α + uε nor β_i + vε are integers. The sums here run over all values of the σ_j and all non-negative integer values of the k_j (where j = 1, 2, 3). It follows that where in the second equality we used (4.30).
Recall that in momentum space, 3-point functions are ultralocal if they depend analytically on all momenta (i.e., they depend on positive integral powers of all momenta squared), semilocal if they depend analytically on two of the three momenta (they depend on positive integral powers of two of the momenta squared) and otherwise they are nonlocal. Inserting (4.38) in (4.37) we find, Note that the lower limit of integration x = 0 gives a vanishing contribution: the integral I_{α+uε,{β_i+vε}} is defined by means of analytic continuation from the region where it converges (i.e., (4.18) with α → α + uε, β_i → β_i + vε) and in this region the lower limit vanishes (since η > −1 in this region).
We will now analyse the structure of singularities using the following two facts: (i) the upper part of the integral is finite and so can only contribute at order 0 and higher, and (ii), the divergent terms cannot have any dependence on µ. This follows from the fact that the total integral (i.e., upper plus lower part) is independent of the arbitrary scale µ, and this must remain true when the integral is expanded term by term in powers of .
These two facts allow us to determine the form of the divergent terms, as we now discuss. The first implication is that the divergent terms are those with η = −1 + wε for some finite w. Indeed, suppose η = m + wε for m ≠ −1. Then 1/(η + 1) is regular as ε → 0 and the singularity must come from the coefficients c_η. However, such singularities would be µ dependent since µ^{−(η+1)} = µ^{−(m+1)}(1 + O(ε)). Cancelling this leading-order µ dependence requires m = −1. We thus conclude (using (4.40)) that there are four possibilities for w depending on the signs required to satisfy (4.30). This condition may be satisfied for different signs (and different integers k_{σ_1σ_2σ_3}) and the number of such conditions that are satisfied simultaneously determines the singularity structure of the integral. Suppose (4.30) has only a single solution. In this case, the coefficient c_{−1+wε} must be finite as the ln µ piece cannot be associated with a divergent power of ε. On the other hand, if the condition is satisfied in multiple ways the coefficients c_{−1+wε} may be singular. In fact if there are s conditions satisfied simultaneously, the c_{−1+wε} can diverge as ε^{−s+1}, so the triple-K integral can diverge as ε^{−s}. Since there are at most four different values of w (in this regularisation scheme) the most singular behaviour is ε^{−4}. Let us first discuss the case where there are two simultaneous solutions to (4.30). Expanding in ε, for the µ-dependence of the divergent terms to cancel requires the singular parts of the two coefficients to cancel against each other. The leading ε^{−2} divergence of the triple-K integral then carries a coefficient fixed by this singular part. This case occurs for example when we have both {σ_i} = {− − −} and {− − +} singularities, for which w equals u − 3v and u − v respectively. Here, the ε^{−2} divergence of the triple-K integral appears with a coefficient (4.46) giving rise to additional divergences at u = v and u = 3v if that coefficient is nonzero. If instead it vanishes, the coefficients c_{−1+wε} are finite even when multiple conditions hold. In such cases the singularity is of first order.
In the general case where solutions of (4.30) exist for multiple values of w, expanding we see that for the divergent part of the triple-K integral to be µ-independent requires, for all m ≥ 0, the coefficient of (ln µ)^{m+1} to vanish. Expanding the coefficients, we obtain a set of nontrivial equations. With all µ-dependent divergences cancelling, the remaining divergent part of the triple-K integral is then simply (4.51). For a specific triple-K integral, (4.51) is straightforward to evaluate. In particular, there is no need to evaluate the triple-K integral itself, only the series expansion of its integrand. We can therefore compute the divergent part of any triple-K integral, in any (u, v)-scheme, through this procedure. Before we proceed, we illustrate how to compute (4.51) using an example. Expanding the integrand of the regulated triple-K integral I_{1+uε,{2+vε,1+vε,1+vε}}, the terms of the form x^{−1+wε} are The divergent part of the regulated triple-K integral is then (4.53). The coefficient of the leading order term is ultralocal while the coefficient of the subleading singularity is semilocal.
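The "read the divergences off the integrand" step can be sketched concretely. The snippet below is our own reconstruction: it assumes the standard small-argument expansion of the Bessel-K function and the exponent structure η = α̃ + Σ_i(2k_i − σ_iβ̃_i) described above, and lists, for the regulated example I_{1+uε,{2+vε,1+vε,1+vε}}, the terms whose exponent tends to −1 as ε → 0, together with the corresponding value of w.

```python
# Sketch (our reconstruction): scan the Frobenius exponents of the regulated
# triple-K integrand about x = 0 and flag the terms that produce poles in eps,
# i.e. those with eta = -1 + w*eps.
import sympy as sp
from itertools import product

eps, u, v = sp.symbols('epsilon u v', positive=True)

alpha = 1 + u*eps                                   # regulated example from the text
betas = [2 + v*eps, 1 + v*eps, 1 + v*eps]

for signs in product((1, -1), repeat=3):
    for ks in product(range(3), repeat=3):
        eta = alpha + sum(2*k - s*b for s, b, k in zip(signs, betas, ks))
        if eta.subs(eps, 0) == -1:
            w = sp.diff(eta, eps)                   # eta = -1 + w*eps
            print(signs, ks, 'w =', sp.simplify(w))

# Output: the (-,-,+)- and (-,+,-)-type terms with k_i = 0 (w = u - v) and the
# three (-,-,-)-type terms with sum k_i = 1 (w = u - 3v), matching the semilocal
# and ultralocal divergences quoted for this example.
```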
Changing the regularisation scheme
Some regularisation schemes (i.e., choices of u and v) may be more convenient than others. For example, there may be a scheme in which one can compute the regulated integrals exactly. More generally, different schemes come with different advantages and disadvantages. As discussed earlier (see also appendix A.2), the choice u = 1, v = 0 is particularly convenient because the indices of the Bessel functions are unchanged. However, this scheme is unsuitable for tensorial correlators involving conserved currents and/or stress tensors, since these are related via the diffeomorphism Ward identity to 2-point functions which are not regulated by this scheme. The scheme with u = v, on the other hand, has the attractive property that ∆ and d are each shifted by the same amount. The dimensions of conserved currents and the stress tensor in the regulated theory are thus still correlated with the dimension of the regulated spacetime, as required by conservation. In some cases, however, divergences may have poles in 1/(u − v), as we saw in (4.53). A third useful scheme is to set u = −v: here only the spacetime dimension is shifted, and as will be discussed in [33], many regulated integrals can be computed exactly.
Given the different choices of scheme available, we would like to understand the dependence of the renormalised correlators on the scheme used. In this subsection we discuss how to change from one regularisation (u 0 , v 0 )-scheme to another (u, v)-scheme. Let us consider a divergent triple-K integral, I α,{β i } and consider the difference in its value in the two different schemes, Note now that triple-K integrals satisfy the following relations: as can be shown by using the definition of the triple-K integral and the standard properties of Bessel functions (complete proofs will be given in [33]). Suppose that we start with a divergent triple-K integral (an integral where one or more of the conditions (4.30) hold). Then acting with L 1 on its regulated version will decrease k −σ 2 σ 3 by one and leave k +σ 2 σ 3 unchanged, while acting with M 1 will decrease k +σ 2 σ 3 by one and leave k −σ 2 σ 3 unchanged. Thus, by acting a sufficient number of times with L i and/or M i , we will end up with a convergent integral in all cases. Let {D r } be the set of such differential operators, where r labels each operator in the set. Then where m r 1 , m r 2 , m r 3 , m r 4 are integers, are convergent integrals. It follows that The equations (4.59) are a set of differential equations that may be used to determine the momentum dependence of I (scheme) , which on general grounds should be a sum of local and semilocal terms. The coefficients of the different terms are constants that depend on u, v, u 0 , v 0 and , and can be determined by expanding I α+u {β 1 +v ,β 2 +v ,β 3 +v } for small p i , extracting all terms up to finite order in , then inserting in (4.54) and comparing with the solution of (4.59).
We will now illustrate this procedure with a simple example. Consider the integral I_{2{111}}. In this case the (− − −) condition holds with k_{−−−} = 0, and thus it suffices to act once with L_i in order to obtain a convergent integral. We then have {D_r} = {L_1, L_2, L_3}, and (4.59) reads We therefore need to compute the momentum-independent terms in I_{2+uε,{1+vε,1+vε,1+vε}}, up to finite terms in ε. Since we want the momentum-independent part of this integral, we may wish to first take the zero-momentum limit in the integrand and then compute the integral. One has to be careful, however, as taking the limit inside the integral is not always allowed. Moreover, I_{2+uε,{1+vε,1+vε,1+vε}} may diverge in this limit. What we are guaranteed, in other words, is that any IR divergence must be independent of (u, v).
In the case at hand, we may safely take two momenta to zero, say p_1 and p_2, but we need to keep the third momentum non-zero. This integral can be computed using the result ∫_0^∞ dx x^{α−1} K_ν(cx) = 2^{α−2} c^{−α} Γ((α+ν)/2) Γ((α−ν)/2), with the integral defined outside its domain of convergence Re α > |Re ν| and Re c > 0 through analytic continuation. Expanding the answer in ε, we find that the result is divergent as p_3 → 0, but the coefficient is (u, v) independent and we obtain (4.65), which is what we wanted to derive. This allows us to obtain I_{2+uε,{1+vε,1+vε,1+vε}} in any (u, v) scheme. More generally, using this method we can convert a triple-K integral evaluated in one scheme to its counterpart in any other scheme.
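The single-Bessel integral quoted above is easy to confirm numerically; the following check is purely illustrative and the parameter values are arbitrary, subject to α > |ν| and c > 0 so that the integral converges without analytic continuation.

```python
# Quick numerical check of  int_0^inf dx x^(alpha-1) K_nu(c x)
#                          = 2^(alpha-2) c^(-alpha) Gamma((alpha+nu)/2) Gamma((alpha-nu)/2)
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

alpha, nu, c = 2.3, 0.7, 1.4
lhs, _ = quad(lambda x: x**(alpha-1) * kv(nu, c*x), 0.0, np.inf, limit=200)
rhs = 2**(alpha-2) * c**(-alpha) * gamma((alpha+nu)/2) * gamma((alpha-nu)/2)
print(lhs, rhs)   # the two values agree to numerical precision
```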
Ward identities
The regulated correlators satisfy the original Ward identities by construction. This implies that the coefficient of the leading-order divergence is also annihilated by K_ij and D, while sub-leading coefficients satisfy inhomogeneous equations, where s_max is the power of the most singular behaviour in c_{−1+wε} ∼ 1/ε^{s_max}. Equations (4.69) imply in particular that if the leading divergence is nonlocal then its coefficient satisfies the non-anomalous Ward identities and is therefore the sought-for answer for the 3-point function. We have seen that the divergences are nonlocal in the cases of (− + +) and (+ + +) singularities. In other words, in these cases it is the representation of the 3-point function in terms of the triple-K integral that is singular, not the correlator itself. To obtain the correlators it suffices to multiply the triple-K integral by ε^{s_max} and take the limit ε → 0. (See below (3.16) for the analogous discussion for 2-point functions.) On the other hand, if the leading-order singularity is local or semilocal, then one needs to renormalise. This is again exactly analogous to what we saw when we discussed 2-point functions: the solution of the non-anomalous Ward identities is (semi)-local and as such not acceptable as a 3-point function (because one can add finite local counterterms in the action and set these correlators to zero). Instead, after renormalisation one obtains renormalised correlators, which now satisfy anomalous Ward identities to which we will return in section 5.
In the following we will organise our discussion according to the degree of singularity of the triple-K integral.
Triple-K integrals with 1/ε singularity
In this case only one of the conditions (4.30) holds. The analysis then depends on which condition this is.
(+ + +) or (+ + −) singularities. In this case, as discussed above, the correlator can be read off from the leading-order singularity. We will present the general case in appendix A and focus our attention here on a few illustrative examples. Consider first d = 3 with ∆_1 = ∆_2 = 1/2 and ∆_3 = 1. This is an example of a (+ + −) singularity: α = 1/2, β_1 = β_2 = −1 and β_3 = −1/2 and k_{++−} = 0. Expanding the triple-K integrand we find Extracting the leading term as ε → 0 we obtain One may easily verify that this 3-point function satisfies the (non-anomalous) conformal Ward identities. This example may be realised using a free scalar Φ as O_[1/2] = Φ and O_[1] = :Φ²:.
Next, take d = 3 with ∆_1 = ∆_2 = ∆_3 = 1. This is an example of a (+ + +) singularity: α = 1/2, β_i = −1/2 and k_{+++} = 0. Expanding the triple-K integrand we have and extracting the leading term as ε → 0 we obtain This example may be realised using a free scalar Φ setting O_[1] = :Φ²:, as in the previous example.
It is also instructive to analyse this case in the (1, 0) scheme. The advantage of this scheme is that the index of the Bessel function does not change and, since K_{1/2} is elementary, the regulated integral can be evaluated explicitly. One may easily verify that this 3-point function satisfies the (non-anomalous) conformal Ward identities.
(− − −) singularities and new anomalies. In this case the divergence is ultralocal and satisfies the conformal Ward identities, as one expects on general grounds. Using (4.39) and (4.43) we find the divergent terms are 12 where c 123 is a constant and here and in the following we have shortened k −−− to k.
To proceed we add a counterterm to remove the infinity and then remove the regulator to obtain the renormalised 3-point function. The counterterm takes the form where the renormalisation scale µ was introduced on dimensional grounds. As we shall see, the constant a^{(−1)} is uniquely fixed by requiring the cancellation of infinities, while a^{(0)} parametrises the scheme dependence. Note that all terms with different contractions of derivatives can always be rearranged in the form of (4.80). Indeed, using integration by parts one obtains a relation which can be used recursively to end up with the expression (4.80). The counterterm contribution is where k_1 + k_2 + k_3 = k (we assume that all three operators are pairwise different; otherwise there are additional symmetry factors). Thus with an appropriate choice of the coefficients a_{k_1k_2k_3} we may cancel the divergence (4.79) in the 3-point function. We then define the renormalised correlator as (4.84). This renormalised correlator depends on the scale µ: where in the first equality we used the fact that the regulated 3-point function does not depend on µ, and in the second the fact that the counterterm cancels the infinity in (4.79). This implies that there is a new conformal anomaly A_{123} associated with this 3-point function.
The existence of the anomaly implies that the generating functional of correlators W[φ_i] depends on the mass scale µ. Indeed, differentiating (4.86) with respect to φ_1, φ_2 and φ_3 and comparing with (4.85) we find that the ratio A_{k_1k_2k_3}/c_{123} is universal. One may integrate the anomaly equation (4.85) to obtain where ∆_t = Σ_j ∆_j and f(x, y) is an arbitrary function of two variables (which is of course uniquely fixed by the conformal Ward identities). The argument of the logarithm must be linear in momenta and changing the specific combination amounts to redefining f(x, y).
We thus conclude that conformal anomalies lead to terms linear in ln p i .
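This structure is easy to exhibit explicitly. The sympy check below is our own illustration, for d = 3 and ∆_i = 2 (a case satisfying the (− − −) condition with k = 0): a term of the form ln((p_1+p_2+p_3)/µ) satisfies the special conformal identities K_ij F = 0 exactly, while the dilatation identity and the µ-independence are violated only by a constant, i.e. ultralocal, term. The operator K_j is taken in the form quoted earlier in section 4.1.

```python
# Illustration: F = log((p1+p2+p3)/mu) for d = 3, Delta_i = 2.
# K_ij F = 0 holds exactly, while D F and mu dF/dmu are momentum-independent constants.
import sympy as sp

p1, p2, p3, mu = sp.symbols('p1 p2 p3 mu', positive=True)
d = 3
deltas = {p1: 2, p2: 2, p3: 2}

F = sp.log((p1 + p2 + p3)/mu)

def K(expr, p):   # K_j = d^2/dp_j^2 + (d + 1 - 2*Delta_j)/p_j d/dp_j
    return sp.diff(expr, p, 2) + (d + 1 - 2*deltas[p])/p * sp.diff(expr, p)

print(sp.simplify(K(F, p1) - K(F, p2)))   # K_12 F -> 0
print(sp.simplify(K(F, p2) - K(F, p3)))   # K_23 F -> 0

# Expected homogeneity degree is Delta_t - 2d = 0, so D F should vanish;
# instead it equals a constant: the ultralocal anomaly term.
D_F = p1*sp.diff(F, p1) + p2*sp.diff(F, p2) + p3*sp.diff(F, p3)
print(sp.simplify(D_F))                   # -> 1
print(sp.simplify(mu*sp.diff(F, mu)))     # -> -1
```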
This example is closely related to the example of three operators of dimension one in d = 3 we discussed earlier. The correlator in the (1, 0)-scheme is given by the same triple-K integral that appeared in (4.76), where the overall minus is for later convenience. Nevertheless, we will deal with the divergence in a very different way. The regulated correlator is given by In this case the divergence is local and it can be cancelled by a local counterterm where a_0 is an arbitrary constant parametrising the scheme dependence, and we find for the renormalised correlator The renormalised correlator now depends on a scale, in agreement with (4.88). Correspondingly, there is a new conformal anomaly. (− − +) singularities and beta functions. In this case the divergent part of the triple-K integral takes the form where k = k_{−−+} denotes the integer appearing in the defining condition (4.30). Since this expression is analytic in p_1² and p_2² it is semilocal. When the dimensions of operators satisfy the (− − +) condition there is a possible counterterm given by The coefficient a parametrises the (finite) scheme-dependent contribution of this counterterm. The counterterm contribution reads where c_{∆_3} is the normalisation of the 2-point function (see (3.12)). Recalling that β_3 = ∆_3 − d/2 we see that the momentum dependence of (4.100) exactly matches that of (4.98) and therefore we may cancel the infinity by an appropriate choice of a_{k_1k_2k_3}. The renormalised correlator is then (4.101). The renormalised correlator depends on the scale µ, (4.102), where we used the fact that the regulated 3-point function does not depend on µ. To understand this result, note that the counterterm amounts to a renormalisation of the source that couples to O_3. The source φ_3 is in fact the renormalised coupling, since functionally differentiating with respect to it yields the renormalised correlator, while the bare source is φ_3^bare. Inverting perturbatively, to quadratic order we find where we have defined φ_1^bare = φ_1 and φ_2^bare = φ_2 since these sources are unrenormalised. As the bare couplings are independent of the renormalisation scale µ, we then obtain the beta function Comparing (4.102) and (4.105) we find We thus find that in this case the correlators depend on µ through the implicit µ-dependence of the renormalised source φ_3. In terms of the generating function W we now have where the total variation is given by.
Indeed, differentiating (4.107) with respect to the renormalised sources we recover (4.106). Integrating (4.106) we find that the renormalised correlator will contain terms proportional to either ln p_i, if ∆_3 ≠ d/2 + k, or ln p_i ln p_j terms if ∆_3 = d/2 + k. Thus, single logs are not only associated with conformal anomalies but also with beta functions and (perhaps more surprisingly) double logs may also appear in conformal correlators. In the case of double logs, one of the logs is due to the conformal anomaly in 2-point functions and the other is due to the beta function.
We will now illustrate this case by discussing the computation of the 3-point function of three marginal operators in d = 3. In this case, α = 1/2, β_1 = β_2 = β_3 = 3/2 and the (− − +) condition is satisfied with k_{−−+} = 0. The bare 3-point function is divergent. As we are in d = 3 it is most convenient to work in the (1, 0)-scheme (since then the integral is elementary). Extracting the divergences as discussed earlier we obtain This divergence is semilocal because it is a sum of terms each of which is analytic in two momenta and non-analytic in one.
To remove this divergence we add the counterterm, This counterterm does not contribute to 2-point functions and its contribution to the 3-point function reads Therefore, the counterterm removes the divergence from the 3-point function provided where a^{(0)} is an undetermined ε-independent constant that parametrises scheme dependence. The renormalised source φ is related to the bare source via which after inverting leads to a beta function Adding the contribution from the counterterm (4.112) and sending ε → 0 we obtain the renormalised correlator, Note that changing the renormalisation scale µ amounts to changing a^{(0)}, i.e., the scheme-dependent part of the correlator. Acting with µ(∂/∂µ) we find with respect to the renormalised sources φ_i and noting that (4.120). If there are additional singularities of type (+ + −) and/or (+ + +) then one needs to multiply the triple-K integral by an appropriate power of ε before removing the regulator.
The classification and analysis of all possible cases is discussed in appendix A. Here we will discuss two examples that illustrate the general case. (4.121) We have already discussed the computation of the divergent terms at the end of section 4.3.1 (see example 4 on page 20), where we saw that the regulated triple-K integral, I_{1+uε,{2+vε,1+vε,1+vε}}, diverges as ε^{−2}. This leading-order singularity is ultralocal while the subleading singularity at order ε^{−1} is semilocal.
To cancel the infinities we introduce the counterterm action is the source of O [4] and φ [1] is the source of O [3] . (To reduce clutter here we have used the bare rather than regulated dimensions in our notation, writing φ [0] as shorthand for φ [0+(u−v) ] , etc.) This generates the following contribution to the 3-point function, where a 0 , a 1 and a 2 have series expansions in , and the regulated 2-point function is When (4.123) is expanded in , the divergent terms must match I div 1+u ,{2+v ,1+v ,1+v } as evaluated in (4.53). This procedure fixes the coefficients in the counterterm action as where for simplicity we set c parametrise the scheme dependence. Due to the a 0 counterterms, the renormalised source φ [1] is related to the bare source φ bare [1] by φ bare leading to a beta function [1] . (4.129) The triple-K integral I 1+u ,{2+v ,1+v ,1+v } can be computed exactly using recursion relations [7,33], and after adding the contribution of the counterterm contribution, the limit → 0 may be taken leading to the renormalised correlator Here, I 1{000} and J 2 are given in (4.23) and (4.24), and a 0 , a 1 and a 2 are scheme-dependent constants linearly related to a 1 and a 2 . (In fact, as we will see later in section 5, the special conformal Ward identities further fix a 2 + a 0 = −c 433 /2.) Acting with µ(∂/∂µ), we find where the anomaly In this case, only the coefficient of p 2 1 (divided by the overall normalisation of the 3-point function c 433 ) is physically meaningful: the remainder of the anomaly is scheme-dependent and can be adjusted by adding finite counterterms to change a 0 .
Note that the dimensions of the operators O [3] and O [4] are such that f (φ [0] )φ [1] φ [1] has dimension four for any function f (φ [0] ) of the dimensionless sources φ [0] . As discussed in section 3, the 2-point function of the operator O [3] also requires renormalisation and a counterterm of the form S ct ∝ d 4 xφ [1] φ [1] . This counterterm and the second counterterm in (4.122) maybe considered as the expansion of f (φ [0] ) around φ [0] ≈ 0. Similarly, the conformal anomaly may contain a term proportional to g(φ [0] )φ [1] φ [1] for some function g of φ [0] , and we have found that the part associated with the 3-point function is scheme dependent. The divergent part of the regulated triple-K integral is (4.134) Expanding out, we find The leading −3 divergence of I diṽ α,{β} is therefore ultralocal while the subleading −2 divergence is semilocal. Only the sub-subleading order −1 divergence is nonlocal, and it is this that is proportional to the renormalised correlator once the −3 and −2 divergences have been removed. We therefore write 136) where c 422 is a theory-dependent constant that is independent of and represents the overall normalisation of the 3-point function. (The additional factor of 2(u + v) is purely for convenience.) The counterterm contribution follows from the action namely, where O [2] (p)O [2] (−p) reg = C 2 p 2v , (Once again, to reduce clutter we are labelling operators and sources through their bare rather than their regulated dimensions.) Working in the most compact scheme where C (0) 2 = 0, to obtain a finite renormalised correlator we require 1 here as the regulated 2-point function is proportional to −1 .) The counterterms (4.137) mean the renormalised source φ [2] is related to the corresponding bare source according to φ bare generating a beta function The renormalised correlator is then where a 1 and a 0 are ( -independent) scheme-dependent constants linearly related to the a (0) 1 and a (0) 0 above. Specifically, the relation is (4.146) Notice also that since ∆ 1 = ∆ 2 + ∆ 3 this correlator is extremal. As we expect, the momentum dependence of the nonlocal part of (4.144) then matches that of the product 15 Under a change of renormalisation scale, (4.147) 15 Note however the semi-and ultralocal terms in the correlator (i.e., the terms proportional to a 1 and a 0 ) can be adjusted arbitrarily through finite counterterms, as can be seen from (4.145) and (4.146).
where the anomaly A 422 = −4a 1 . In this example, then, the anomaly is purely schemedependent and can be adjusted arbitrarily through the addition of finite counterterms. As in the case of the previous example, the existence of a dimensionless source implies that we can consider counterterms and anomalies of the form f (φ [0] )φ 2 [2] , where f is a function of φ [0] . The Taylor expansion of this function is fixed by the n-point function and we have determined the terms up to linear order. As in the previous example, the corresponding conformal anomaly due to the 3-point function is again scheme dependent.
Beta functions and anomalies
In this section we examine more closely the anomaly and beta function terms that appear in the conformal Ward identities. Since these terms break conformal symmetry, we will start from the diffeomorphism and Weyl Ward identities that hold for a general quantum field theory. We will restrict our considerations to scalar operators; for a more complete discussion we refer the reader to [3,36].
First, let us consider the variation of the generating functional for renormalised correlators under a variation of the renormalised sources φ i , Here, the quantum field theory lives on an arbitrary background metric g µν , the background source profiles φ i are also arbitrary, as indicated by the subscript s (for source) on the 1point functions. The index i labels the different scalar operators, and is distinct from the spatial indices µ, ν. Under a diffeomorphism, x µ → x µ + ξ µ , we have giving rise to the Ward identity The corresponding Ward identities for correlators, if required, can then be derived by functionally differentiating this relation with respect to the sources φ i before setting them to zero and returning to a flat metric. Under a Weyl transformation of the background metric g µν → e 2σ(x) g µν , we have instead where the B φ i and the anomaly density A are local functions of dimension d − ∆ i and d respectively, constructed from the set of sources {φ i , g µν } and their derivatives. According to our present conventions where φ i has a bare dimension d − ∆ i , where β φ i is the beta function for φ i . (We could alternatively regard B φ i as µ d−∆ i times the beta function for the dimensionless coupling φ dimless i = φ i µ ∆ i −d .) Note also that since W is the generating functional of renormalised correlators, A is a finite quantity. Writing the trace of the stress tensor as T µ µ = T , the corresponding Ward identity is then Let us now proceed to conformal transformations, which are diffeomorphisms mapping flat space to itself up to a Weyl transformation, We therefore specialise to a flat background metric g µν = δ µν and write all indices henceforth in the lowered position, although we keep the scalar source profiles φ i arbitrary.
To undo the action of this diffeomorphism on the metric we can use an opposing Weyl transformation with σ = − 1 d (∂ · ξ). The net variation of the sources is then which after integrating by parts yields the conformal Ward identity To obtain the corresponding identities for correlators we must now functionally differentiate with respect to the sources before restoring them to zero. Since we assume that the theory with all sources switched off (denoted by a subscript zero) is a CFT, β φ i begins at quadratic order in the sources as we saw in previous sections, hence We will also assume all 1-point functions vanish once the sources are switched off, i.e., conformal symmetry is not spontaneously broken. Functionally differentiating three times with respect to the sources, we then obtain This is the general 3-point conformal Ward identity including all beta function and anomalous contributions. The beta function contributions are semilocal, arising only when the dimensions of the operators in the 2-point functions coincide, while the anomaly contribution is ultralocal. More generally, we see that the existence of a beta function contribution requires a nonzero quadratic term in the expansion of the beta function about the origin: on dimensional grounds, for β φ i to contain a term ∼ m φ j n φ k requires −∆ i + ∆ j + ∆ k = d + 2(m + n) or equivalently α + 1 + β i − β j − β k = −2(m + n). The corresponding triple-K integral therefore has a singularity of (+ − −) type with k +−− = m + n. (For k +−− > 0, note also that the second derivative of the beta function in (5.11) leads to boxes acting on delta functions.) Similarly, to have an anomalous contribution requires A to contain a term The triple-K integral then has a singularity of (− − −) type with k −−− = l + m + n. These conditions, while necessary, are not always sufficient as we will see in example 13 below.
To obtain specifically the dilatation Ward identity we must set ξ µ = x µ , meaning ∂ · ξ = d, while to obtain the special conformal Ward identity we set ξ µ = x 2 b µ − 2(b · x)x µ for some vector b µ , whereupon ∂ · ξ = −2d(b · x). Let us now consider a few examples to illustrate this discussion.
labelling sources by their bare dimensions for compactness. The dilatation Ward identity then reads while the special conformal Ward identity is (5.14) We therefore have both beta functions and an anomalous contribution as anticipated.
Extracting the factor of b µ and converting to momentum space, these two identities become O [4] (p 1 )O [3] (p 2 )O [3] (p 3 ) Decomposing these vector equations into a scalar basis, the dilatation Ward identity is while the special conformal Ward identities are or equivalently, While (5.22) follows trivially from permutation symmetry, (5.23) is non-trivial and relates the anomalous contributions appearing in the dilatation and the special conformal Ward identities. In fact, we can use this identity (or equivalently (5.20)) to eliminate a scheme-dependent term in our earlier result (4.130) for the renormalised correlator. Under dilatations [3] (p 2 )O [3] (p 3 ) = (a 2 + a 0 )K 31 p 2 1 = 4(a 2 + a 0 ), (5.27) and hence the special conformal Ward identity (5.23) fixes There are therefore only two, rather than three scheme-dependent coefficients in (4.130).
(5.29), labelling sources by their bare dimensions once again.
The dilatation Ward identity is while the special conformal Ward identity reads The remainder of the analysis then closely mirrors that of the previous example. The momentum-space dilatation Ward identity reads This is consistent with (4.144) above since (D + µ(∂/∂µ)) O [4] (p 1 )O [2] (p 2 )O [2] (p 3 ) = 0. Meanwhile, the special conformal Ward identities are 35) or equivalently, The renormalised correlator indeed satisfies these identities, since (4.144) obeys Note that in this case (unlike the previous), the special conformal Ward identities provide no additional constraints on the scheme-dependent constants in (4.144). The leading divergence of the regulated triple-K integral occurs at −1 order and is nonlocal in the momenta. The renormalised correlator then follows by multiplying through by an overall constant of order and sending → 0, yielding The nonlocal piece F ++− is equal to the product as we would expect for an extremal correlator. The finite constant a multiplying the ultralocal F −−− piece can be adjusted arbitrarily through the addition of a finite counterterm Both F ++− and F −−− independently satisfy the homogeneous dilatation and special conformal Ward identities, DF = 0 and K ij F = (K i − K j )F = 0, as is easily verified noting that Indeed this makes sense, as the finite counterterm (5.42) fails to generate a nonzero anomaly: The point here is that we only have a finite counterterm: there are no counterterms with divergent coefficients, since the renormalised correlator is given by multiplying the leading −1 divergence of the triple-K integral through by an overall constant of order . (This must be the case as there are no counterterms to remove the (+ + −) singularity.) To have a nonzero anomaly would instead require a (− − −) counterterm whose coefficient has an −1 pole.
Dual conformal symmetry
Several of the renormalised 3-point functions we have met thus far have the curious property of dual conformal symmetry: their momentum-space expressions take the form expected of a CFT 3-point function in position space. 17 One example is when solely the (+ + +) condition is satisfied with k +++ = 0. In this case, ∆ 1 + ∆ 2 + ∆ 3 = d, and we find the renormalised correlator is a pure power of the momenta (e.g., from the general formula (A.16) in appendix A). Writing the momenta as differences of dual coordinates y i , so as to ensure momentum conservation p 1 + p 2 + p 3 = 0, we then have

17 Early hints of dual conformal symmetry emerged in [37,38], and were later developed in the context of scattering amplitudes in N = 4 SYM, see e.g., [39-41]. Dual conformal symmetry is known to be connected to the existence of a Yangian algebra.
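To make this concrete, a sketch of the expected form (our own illustration, taking p 1 = y 2 − y 3 , p 2 = y 3 − y 1 , p 3 = y 1 − y 2 as one convenient choice of dual coordinates, and with an arbitrary overall normalisation):

\[
\langle O_1(p_1)\,O_2(p_2)\,O_3(p_3)\rangle \;\propto\; p_1^{2\beta_1}\,p_2^{2\beta_2}\,p_3^{2\beta_3}
= \frac{1}{|y_2-y_3|^{\Delta_2+\Delta_3-\Delta_1}\;|y_3-y_1|^{\Delta_3+\Delta_1-\Delta_2}\;|y_1-y_2|^{\Delta_1+\Delta_2-\Delta_3}},
\]

where the second equality uses 2\beta_i = 2\Delta_i - d together with \Delta_1+\Delta_2+\Delta_3 = d, so that 2\beta_i = -(\Delta_j+\Delta_k-\Delta_i).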
The 3-point function thus has exactly the form imposed by conformal symmetry acting on the y coordinates. This dual momentum-space conformal symmetry is present in addition to the position-space conformal symmetry we started with, which acts on the original x coordinates. In the example above, the operator dimensions associated with the dual conformal symmetry are the same as for the original conformal symmetry. This is not always the case, however, as can be seen from the following example. Consider the case where solely the condition (+ + −) is satisfied, with k ++− = 0. Now we have ∆ 1 + ∆ 2 = ∆ 3 and the correlator is extremal. From (A.17) in appendix A, the renormalised correlator is (6.4) we see that The dimensions∆ i associated with the dual conformal symmetry are therefore in general different from those associated with the position-space conformal symmetry. (Note however the modified dimensions still satisfy the extremality condition∆ 1 +∆ 2 =∆ 3 .) A third case where dual conformal symmetry can arise is when both (+ + +) and (+ + −) conditions are simultaneously satisfied (see case (5) in appendix A). This requires β 3 to be an integer: if β 3 ∈ Z + and k +++ = 0, then the 3-point function is the same as in the first example above, while if β 3 ∈ Z − and k ++− = 0, the 3-point function is the same as in the second example.
In all the examples above we had either k +++ = 0 or k ++− = 0. To understand what happens more generally, consider for example the case where only the (+ + +) condition is satisfied with k +++ = 1. If all the β i ≥ 0 say, from (A.16) the renormalised correlator is Now, in order to have dual conformal symmetry, it is necessary for the correlator to transform appropriately under inversions y i → y i /y 2 i , namely where the∆ i denote generic dual conformal dimensions. Since under inversions, we see that (6.7) transforms as a sum of 3-point functions of different conformal dimensions, rather than as a single 3-point function. This behaviour occurs whenever the renormalised correlator is purely the sum of products of momenta raised to various powers, without any logarithms being present. 18 As dual conformal symmetry is more typically encountered in the context of massless Feynman diagrams [40], it is interesting to analyse the triple-K integral from this perspective. As shown in appendix A.3 of [7], we can rewrite the regulated triple-K integral as a massless 1-loop Feynman integral, In this formulaδ i =β i −β t /2 +d/4, whereδ t = iδ i andβ t = iβ i , and we regulate in our usual manner so thatβ i = β i + v andd = d + 2u . Setting p = y − y 3 , this 1-loop triangle integral is then related to an equivalent star integral in which we integrate over the position y of a central vertex, (6.11) For this star integral to possess dual conformal symmetry, it must transform under inversions y i → y i /y 2 i in the same manner as a CFT 3-point function, namely To achieve this requiresδ t =d ⇒δ i =∆ i ,∆ t =d, (6.13) as can be seen by inverting the integration variable y → y/y 2 . When this condition is satisfied, however, the relation between the star integral and the triple-K integral in (6.10) is singular, due to the factor of Γ(d −δ t ). Indeed, this makes sense as the regulated triple-K integral does not by itself possess dual conformal symmetry. (One can verify directly that the triple-K integral fails to transform correctly under inversions y i → y i /y 2 i .) Dual conformal symmetry therefore cannot exist for CFT 3-point functions for which renormalisation is not required, i.e., cases where the singularity condition (4.30) is not satisfied and the triple-K integral can be defined through analytic continuation alone.
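The origin of the condition (6.13) can be seen by a simple weight counting under inversions (a sketch in our own notation, with tildes denoting the regulated quantities):

\[
y \to \frac{y}{y^2},\quad y_i \to \frac{y_i}{y_i^2}:\qquad
(y-y_i)^2 \to \frac{(y-y_i)^2}{y^2\,y_i^2},\qquad
d^{\tilde d}y \to \frac{d^{\tilde d}y}{(y^2)^{\tilde d}},
\]

so that an integrand \prod_i (y-y_i)^{-2\tilde\delta_i} integrated over y picks up a factor (y^2)^{\tilde\delta_t-\tilde d}\prod_i (y_i^2)^{\tilde\delta_i}. The y-dependent factor disappears only when \tilde\delta_t = \tilde d, in which case the star integral transforms with weight \tilde\delta_i at each external point, i.e., precisely as a 3-point function of operators with \tilde\Delta_i = \tilde\delta_i and \tilde\Delta_t = \tilde d.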
How then can dual conformal symmetry arise in certain of the remaining cases for which renormalisation is required? The answer is that, in order for the renormalised correlator to possess dual conformal symmetry, we need not require that the star integral possesses exact dual conformal symmetry: it is sufficient that this holds simply to leading order in ε.
In the first example above, where the (+ + +) condition alone held with k +++ = 0, we had ∆ t = d and so β t = −d/2. We then find δ̃ t − d̃ = −(u + 3v)ε/2, (6.14). The star integral (6.11) now only satisfies (6.12) at order ε^0, since after inverting we pick up a net factor of y^{2(δ̃ t − d̃)} = y^{−(u+3v)ε} = 1 + O(ε) in the numerator of the integral. In addition, the factor of Γ(d̃ − δ̃ t ) = Γ((u + 3v)ε/2) in (6.10) contributes an ε^{−1} pole. Consequently, only the leading ε^{−1} divergence of the regulated triple-K integral possesses dual conformal invariance. As we have already seen, however, this leading ε^{−1} divergence is precisely the renormalised correlator: since there are no counterterms when the (+ + +) condition alone is satisfied, the renormalised correlator is obtained by multiplying the regulated triple-K integral through by an overall constant of order ε before sending ε → 0.
In the second example, where the (+ + −) condition alone was satisfied with k ++− = 0, the emergence of dual conformal symmetry is less obvious as the star integral (6.11) does not satisfy the condition (6.13). One can show, however, that the gamma function prefactors in (6.10) are all finite as ε → 0, and as we know the triple-K integral has an ε^{−1} divergence, the star integral must therefore diverge as ε^{−1}. This leading ε^{−1} divergence of the star integral, which is proportional to the renormalised correlator, does then possess dual conformal symmetry.
Discussion
We have presented a comprehensive discussion of the renormalisation of 3-point functions of primary operators in conformal field theory. Our results were obtained by solving the conformal Ward identities and as such they apply to all CFTs, perturbative or nonperturbative, and in any dimension. Renormalisation is required when the dimensions of operators involved in the 3-point function satisfy specific relations.
Our discussion is analogous to that for 2-point functions, where renormalisation is required when the operators involved have dimension such that ∆ − d/2 is integral. Correspondingly, there is a conformal anomaly, and (like the more familiar conformal anomaly that depends on the background metric) the coefficients of these anomalies are part of the CFT data. Operators with such dimensions are common in CFTs, and also in supersymmetric CFTs as BPS operators typically have such dimensions (for example, 1/2-BPS operators in N = 4 SYM). A recent application of the anomalies related to 2-point functions may be found in [42].
In the case of 3-point functions, renormalisation leads to a richer structure: new conformal anomalies arise and beta functions appear. The generating functional of CFT connected correlators satisfies equation (7.1), where φ i are the renormalised sources. Anomalies arise when the condition (7.3) holds, while a beta function for the source that couples to O 3 will appear when the condition (7.4) holds, where k −−− and k −−+ are non-negative integers (and similarly for permutations). The beta functions are due to renormalisation of the sources. 19 If either (7.3) or (7.4) holds, (7.1) implies that the 3-point function will depend logarithmically on the renormalisation scale µ, and thus it will contain logarithms of momenta. If both conditions hold simultaneously, ∆ 3 − d/2 must be integral and thus O 3 is one of the operators that have anomalies already at the level of 2-point functions. In this case, (7.1) implies that the 3-point functions contain double logarithms. The fact that 3-point functions can exhibit such analytic structure is one of the most surprising results to emerge from this work.
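Written out in terms of the operator dimensions, these two conditions presumably take the following form (our own rewriting of the singularity condition (A.1), so the labelling of the integers is ours rather than a quotation of (7.3) and (7.4)):

\[
\Delta_1+\Delta_2+\Delta_3 = 2d + 2k_{---}\quad\text{(conformal anomaly)},\qquad
\Delta_1+\Delta_2-\Delta_3 = d + 2k_{--+}\quad\text{(beta function for the source of }O_3\text{)},
\]

corresponding respectively to the (- - -) and (- - +) solutions of \alpha+1+\sigma_1\beta_1+\sigma_2\beta_2+\sigma_3\beta_3 = -2k_{\sigma_1\sigma_2\sigma_3}, with \alpha = d/2-1 and \beta_i = \Delta_i - d/2.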
A further special case arises when one of the other two operators is marginal. The coefficient of the conformal anomaly due to the 2-point function of O 3 may now become a function of the source of the marginal operator, and indeed we find such an anomaly does arise at the level of 3-point functions. This anomaly however is scheme-dependent and the corresponding µ-dependence of the 3-point function may be set to zero by a choice of scheme.
A different set of special cases arises when the operators have dimensions that satisfy one (or both) of the following conditions: ∆ 1 = ∆ 2 + ∆ 3 + 2k −++ or ∆ 1 + ∆ 2 + ∆ 3 = d − 2k +++ (along with permutations), where k −++ and k +++ are non-negative integers. In such cases, the triple-K representation of the 3-point functions is singular, not the correlators themselves. The corresponding 3-point functions may be extracted from the singular part of the triple-K integral and satisfy non-anomalous conformal Ward identities. Actually, these correlators exhibit enhanced symmetry. If k −++ = 0 and/or k +++ = 0, the correlators take the form of position-space correlators but with differences in position replaced by momenta, i.e., these correlators are dual conformal invariant. If instead k −++ > 0 and/or k +++ > 0, the correlators are equal to a sum of terms, each of which is individually dual conformal invariant (albeit with different conformal weights). It would be interesting to understand the implications of dual conformal invariance. We emphasise that we are considering the theory at the fixed point and the correlation functions we derive are those of the CFT. If we were to promote the source of O 3 to a new coupling, however, then the deformed theory would run. A corollary of our analysis is a necessary condition for a marginal operator O [d] to be exactly marginal: its 3-point function should vanish. If this 3-point function is non-vanishing there will be a beta function (see e.g., example 8), and the deformed theory will not be conformal. A similar argument (in d = 2) based on OPEs was made in [44,45]. 20

19 Note that the fact that renormalisation requires the sources of composite operators to renormalise is not new: for example, BRST renormalisation of Yang-Mills theory requires renormalisation of the sources that couple to the BRST variation of the Yang-Mills field and of the ghost fields, see for example [43].

20 We thank Adam Schwimmer and Stefan Theisen for bringing these references to our attention.
In this paper we discussed the renormalisation of 3-point functions of scalar operators. The same techniques also apply to tensorial 3-point functions, but there are new issues that arise. More specifically, since the diffeomorphism and Weyl Ward identities relate 2-and 3point functions, we need a regulator that regulates both. For this reason the (1, 0)-scheme which proved so useful here cannot be used there. Moreover, conservation requires that in d dimensions the stress tensor has dimension d and conserved currents have dimension d − 1. This condition requires a u = v scheme, however the regulated expressions appear to have singularities when u = v. We will discuss in detail how to overcome these problems and renormalise tensorial correlators in a sequel to this work [29].
It would be interesting to extend our discussion to higher-point functions. Correlators higher than 3-point functions are not uniquely determined by the conformal Ward identities: conformal invariance allows for an arbitrary function of cross-ratios in position space. One would first need to understand what is the analogue of the cross-ratio in momentum space. The singularity structure is also richer since there are different short distance behaviours depending on how many points are coincident. One would anticipate obtaining new anomalies when [5] and new contributions to beta functions, which are of order (n − 1) in the sources, when and permutations, where where k 1 , k 2 are non-negative integers. These two cases should correspond to ultralocal divergences and divergences where all but one point is coincident. All other divergences should already be accounted for by the counterterms introduced to renormalise lower point functions. Based on the case of 3-point functions studied here, one may anticipate that correlators with dimensions that satisfy the analogue of (7.5) should also be special. 21 It would be interesting to see whether such correlators are dual conformal invariant.
Anomalies have provided invaluable insights into quantum field theory and have led to many important results. In this paper, we uncovered a new set of conformal anomalies that originate from divergences in 3-point functions of scalar operators, and we saw that even without anomalies CFT correlators can depend on a scale (via the scale-dependence of the renormalised sources). Moreover, CFT 3-point functions may depend quadratically on logarithms of momenta. It will be exciting to explore the implications and applications of these results.

Acknowledgements

The authors thank their hosts for hospitality during the workshop "Holographic Methods for Strongly Coupled Systems", and PM thanks the Centre de Recherches Mathématiques, Montréal. KS gratefully acknowledges support from the Simons Center for Geometry and Physics, Stony Brook University and the "Simons Summer Workshop 2015: New advances in Conformal Field Theories" during which some of the research for this paper was performed. AB is supported by the Interuniversity Attraction Poles Programme initiated by the Belgian Science Policy (P7/37) and the European Research Council grant no. ERC-2013-CoG 616732 HoloQosmos. AB would like to thank COST for partial support via the STSM grant COST-STSM-MP1210-29014. PM is supported by the STFC Consolidated Grant ST/L00044X/1. AB and PM would like to thank the University of Southampton for hospitality during parts of this work.
A General results
The triple-K integral is singular whenever the condition (4.30), namely

α + 1 + σ 1 β 1 + σ 2 β 2 + σ 3 β 3 = −2k σ 1 σ 2 σ 3 ,    (A.1)

is satisfied for some non-negative integers k σ 1 σ 2 σ 3 and (independent) choice of signs σ i ∈ {±}, i = 1, 2, 3, with α = d/2 − 1 and β i = ∆ i − d/2. In the main text, we focused on cases where only a single solution of this condition exists. In general, however, this condition may have multiple solutions, each with a different number of positive and negative signs, and potentially different values of k σ 1 σ 2 σ 3 . When such multiple solutions exist, the regulated triple-K integral typically has higher-order poles in ε, with the maximum permitted being ε^{−s} where s is the number of different solutions of (A.1) (not counting simple permutations).
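As a quick worked check of how this condition is used (our own arithmetic, applied to the ⟨O [4] O [3] O [3] ⟩ example of section 5, for which d = 4 and ∆ = (4, 3, 3), hence α = 1 and β = (2, 1, 1)):

\[
(-,-,-):\; 2-2-1-1=-2 \;\Rightarrow\; k_{---}=1,\qquad
(-,+,-):\; 2-2+1-1=0 \;\Rightarrow\; k_{-+-}=0,\qquad
(-,-,+):\; 2-2-1+1=0 \;\Rightarrow\; k_{--+}=0,
\]

while, for instance, (+,-,-) gives 2+2-1-1=2>0 and so is not a solution. The (- - -) solution corresponds to the conformal anomaly, and the (- + -), (- - +) solutions to the beta functions for the sources of the two dimension-3 operators, in agreement with the discussion of this example in section 5.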
Our purpose in this appendix is to classify all the cases that can arise, including those where multiple solutions of (A.1) exist, and to understand their singularity structure. We will also give explicit results for the renormalised 3-point function wherever this can be determined purely from the singularities of the regulated triple-K integral.
A.1 Classification of cases
Let us call a solution of (A.1) associated with some σ i ∈ {±} a solution of type (σ 1 σ 2 σ 3 ).
To classify the cases where (A.1) admits multiple solutions, we first observe that certain types of solution are mutually incompatible, since on physical grounds we require d > 0 and ∆ i > 0. (Note the latter condition is a weaker restriction than unitarity which requires ∆ i ≥ (d − 2)/2.) The types of solution that cannot appear simultaneously are therefore: (+ + +) together with (− − −); (+ + −) together with the opposite permutation (− − +); and (+ + +) together with (+ − −). In the first two cases, we would violate the condition d > 0, while in the third we would violate the condition ∆ 1 > 0. For example, to have solutions of both type (+ + +) and type (− − −) would require both conditions to hold simultaneously, but on adding these equations we find d = −2(k +++ + k −−− ) ≤ 0. Similarly, to have both (+ + +) and (+ − −) solutions requires both conditions to hold, but on adding we find d + 2β 1 = 2∆ 1 = −2(k +++ + k +−− ) ≤ 0, violating ∆ 1 > 0. Excluding cases with incompatible solution types, the remaining allowed cases are: (1) (+ + +) only; (2) (+ + −) only; (3) (+ − −) only; (4) (− − −) only; (5) (+ + +) and (+ + −); (6) (+ + −) and (+ − −); (7) (+ + −) and (− − −); (8) (+ − −) and (− − −); and (9) (+ + −), (+ − −) and (− − −). For ease of reference we have numbered these cases (1)-(9) as listed in table 1. We will also need to keep track of which permutations of the (+ − −) type solution are present, subdividing cases (3), (6), (8) and (9) into further subcases accordingly (see later). Fortunately, we do not need to do the same for the type (+ + −) solutions as these can only arise in a single permutation due to the condition ∆ i > 0. For example, if we had both (+ + −) and (+ − +) solutions of (A.1), then on adding we would find ∆ 1 = −k ++− − k +−+ ≤ 0.
A.2 Renormalised correlators in (1, 0)-scheme

In cases (3), (4) and (8), the singularity condition (A.1) has only (+ − −) and/or (− − −) type solutions. In these cases, the singularities of the regulated triple-K integral involve terms that are only semi- and/or ultralocal in the momenta. To determine the (fully nonlocal) renormalised correlator then requires a complete evaluation of the regulated triple-K integral including its finite piece of order ε^0.
Here we will focus primarily on the remaining cases, where (A.1) admits solutions of type (+ + +) and/or type (+ + −). When solutions of these types are present, the regulated triple-K integral has singularities that are nonlocal in the momenta, for which there are no corresponding counterterms. Rather, it is the triple-K integral representation itself that is singular: the renormalised 3-point function is given by multiplying the regulated triple-K integral through by appropriate positive powers of ε, so as to extract the leading nonlocal singularities in the limit ε → 0.

Table 1: Singular cases consistent with d > 0 and ∆ i > 0, including those where (A.1) admits multiple solutions. (Columns: Case; Solution types present; Leading divergence; First nonlocal divergence.) The third and fourth columns refer to the divergence of the corresponding regulated triple-K integral, as discussed in section A.2: the third column lists the maximum leading divergence of the regulated triple-K integral, while the fourth column gives the order at which terms fully nonlocal in the momenta first arise. When this order is ε^0 we must evaluate the regulated triple-K integral in order to determine the renormalised correlator. In all other cases, we can determine the renormalised correlator purely from the singularities of the triple-K integral. Explicit expressions for all such cases are listed in section A.2.

In cases (5), (6), (7) and (9) an additional complication arises, which is that the desired nonlocal singularities potentially occur at
subleading order in ε (or even at sub-subleading order in case (9)). When this occurs, the leading singularities are either ultra- or semilocal, and correspond to the presence of type (+ − −) and (− − −) solutions to (A.1). In such instances, one must first remove these leading ultra- or semilocal singularities through the addition of suitable counterterms. To place this discussion on a more explicit footing, let us now systematically evaluate the divergences of the regulated triple-K integral. In all cases apart from (3), (4) and (8), we will be able to read off the renormalised 3-point function directly from the leading nonlocal divergence. A convenient scheme for this computation is (u, v) = (1, 0), where the indices of the Bessel functions are preserved. The individual coefficients in the series expansion of the Bessel functions (the a ± k (β) defined below) then have no ε-dependence, making it easy to identify the overall order of terms.
The Bessel function K β (z) has the standard series expansion where the coefficients a ± k (β) are Here, ψ(k) = Γ (k)/Γ(k) is the digamma function, which for positive integer k > 0 can be re-expressed in terms of the k-th harmonic number H k = k n=1 n −1 and the Euler-Mascheroni constant γ E as ψ(k + 1) = H k − γ E . When β ∈ Z, the expansion coefficients defined in (A.8) and (A.9) are strictly only valid for β ≥ 0. Since K −β (z) = K β (z), however, we can handle all cases including β < 0 by using K |β| (z) in place of K β (z). We have also pulled out the overall factors in (A.7) to simplify our later expressions for the renormalised correlators in section A.2. These correlators are only determined up to a finite overall constant of proportionality, to which the terms we have pulled out make a fixed contribution. Extracting this contribution allows us to simplify our final results, which will be expressed in terms of the expansion coefficients (A.8) and (A.9).
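For orientation, the standard small-z expansion underlying (A.7)-(A.9) has the following schematic form (our own summary using a conventional normalisation, which may differ from the text's by the overall factors pulled out there):

\[
K_\beta(z) = \tfrac{1}{2}\,\Gamma(\beta)\,\Gamma(1-\beta)\,\bigl[I_{-\beta}(z)-I_{\beta}(z)\bigr],
\qquad
I_\beta(z) = \sum_{k\ge 0}\frac{1}{k!\,\Gamma(\beta+k+1)}\Bigl(\frac{z}{2}\Bigr)^{2k+\beta},
\]

so that for non-integer \beta one may write K_\beta(z) = \sum_{k\ge 0}\bigl[a^-_k(\beta)\,z^{2k-\beta}+a^+_k(\beta)\,z^{2k+\beta}\bigr], with e.g. a^-_0(\beta)=2^{\beta-1}\Gamma(\beta) and a^+_0(\beta)=2^{-\beta-1}\Gamma(-\beta). For integer \beta the two towers merge and the a^+_k terms acquire factors of \ln z, whose coefficients involve the digamma function — which is where the harmonic numbers H_k mentioned above enter.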
Writing the regulated triple-K integral as the next step is to apply the series expansion (A.7) to each of the three Bessel functions.
As the a ± k (β) coefficients in the Bessel functions are all finite, divergences can only arise from the lower part of the integrals over x. Factoring out all momentum dependence, the only divergent integrals are those of the form where the divergent pieces are independent of µ −1 . The singularities with the highest degree of divergence therefore arise from integrals with the greatest number of logarithms. The number of logarithms present in a given term corresponds in turn to the number of coefficients a + k (|β i |) for which β i ∈ Z. Modulo possible logarithms, the factors of momentum accompanying divergent x-integrals of the form (A.11) have the general structure for some independent choice of signs {ν i } ∈ ±1. The sum here runs over integer k i ≥ 0 such that so as to obtain the appropriate overall power of x. If we denote the sign of each β i by s i so that β i = s i |β i |, then clearly (A.13) is solved by ν i = s i σ i with i k i = k σ 1 σ 2 σ 3 according to (A.1). The factor (A.12) can then be re-expressed more conveniently as where the sum runs over all k i ≥ 0 such that i k i = k σ 1 σ 2 σ 3 . For there to be accompanying logarithms requires both s i σ i = +1 and β i ∈ Z. To understand in which of the cases (1)- (9) this occurs, we must introduce one further concept. The significance of pairing is as follows. Given a solution (σ 1 σ 2 σ 3 ) of (A.1), for this solution to be paired on index σ 1 requires the quantity n defined by to be a non-negative integer. Thus, if the solution is paired on index σ 1 , then β 1 must be an integer. If instead the solution is not paired, n is either non-integer or else a negative integer. In the case where σ 1 s 1 = +1, the solution not being paired on σ 1 then implies β 1 is non-integer. (Recall k σ 1 σ 2 σ 3 is a non-negative integer). In the remaining case where σ 1 s 1 = −1, knowing that that solution is not paired on σ 1 does not tell us anything about whether or not β 1 is integer.
To have a logarithm requires both integer β i and also s i σ i = +1. Tabulating all possibilities as per table 2, we see that if two solutions are paired on an index σ i then we always have a logarithm from the solution with σ i = s i . The momentum factor p (1+σ i )β i i accompanying this log is however always analytic. On the other hand, if a solution is not paired on some index σ i , there are no log contributions, and for the accompanying momentum factor to be non-analytic requires σ i = +1. If σ i = −1 the accompanying momentum factor is always analytic, regardless of pairing.
With these considerations in place, we can easily understand the order of the leading divergence of the regulated triple-K integral given in table 1. From (A.11), this order is Table 2: For any given solution of (A.1), from the sign s i of β i and whether or not the solution is paired on σ i , we can deduce whether an order-boosting logarithm is present as well as whether the accompanying factor of momentum is non-analytic in p 2 i . In this manner one can reconstruct the pole structure and locality properties of the divergent parts of the regulated triple-K integral.
one more than the maximum number of logs that can occur in each case. The maximum number of logs is in turn given by the maximum number of indices on which any of the solutions present is paired. Thus, for example, case (7) is only order ε^{−1} divergent (rather than ε^{−2}, the maximum allowed order when two solutions of (A.1) are present) because neither solution is paired on any of its indices. In case (9), on the other hand, the leading divergence can be ε^{−3} when the (+ − −) solution (or one of its permutations) is paired on both its first and second indices.
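The counting "one logarithm = one extra power of 1/ε" follows from the elementary lower-limit integrals that generate the divergences. A sketch in our own notation, with a > 0 standing for the particular combination of u, v and the k_i appearing in the exponent:

\[
\int_0^{\mu^{-1}} dx\; x^{-1+a\varepsilon}\,\ln^{j}\!x \;=\; \frac{(-1)^{j}\,j!}{(a\varepsilon)^{j+1}} \;+\; O\!\bigl(\varepsilon^{-j}\bigr),
\]

so a term in the expansion carrying j factors of ln x produces a pole of order ε^{-(j+1)}: the leading divergence is one order higher than the number of logarithms present.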
Going through each of the cases (1)-(9) with the aid of table 2, we can now reconstruct all divergences of the regulated triple-K integral and read off the renormalised correlators where possible. Before proceeding to the complete listing below, let us first run through a few examples. In case (5), for instance, both solutions are paired on the third index meaning β 3 ∈ Z and we have one logarithm present. From table 2, this log is associated with the type (+ + +) solution if β 3 ∈ Z + ; otherwise it is associated with the (+ + −) solution. The leading divergence is therefore of order −2 and carries a momentum factor of either F +++ or F ++− according to which solution has the log. (Note that, after splitting ln(p 3 x) = ln x + ln p 3 , only the ln x part acts to boost the order of the divergence: the remaining ln p 3 piece contributes only to the subleading −1 divergence.) Examining this leading −2 divergence we see that is nonlocal due to the non-analytic factors of momentum associated with the first two indices. This is indeed as we expect since no counterterms are available for removing divergences: instead we must multiply the regulated triple-K integral by an overall constant of order 2 before sending → 0 to extract the renormalised correlator.
As a second example, let us consider case (9b), the most complicated case, where (+ + −), (+ − −), (− + −) and (− − −) solutions of (A.1) are present. Here, each solution is paired on both its first and its second indices. The number of logarithms associated with each solution then depends on the signs of β 1 and β 2 . To have a logarithm requires σ i s i = +1, hence if both s 1 = s 2 = +1 then the (+ + −) solution has two logarithms (i.e., contributes a factor of ln(p 1 x) ln(p 2 x)), the (+ − −) and the (− + −) solution each have only a single logarithm (ln(p 1 x) and ln(p 2 x) respectively), while the (− − −) solution has none. The leading divergence is therefore −3 F ++− , however from table 2 this is ultralocal as the momentum factors associated with all three indices are analytic. The subleading divergence at order −2 is then semilocal, and only the sub-subleading order −1 divergence is nonlocal. It is this last quantity therefore that is proportional to the renormalised correlator, which may be obtained by removing the leading and subleading divergences through counterterms, multiplying by an overall constant of order , then sending → 0. Its momentum dependence, given in (A.30) below, follows from collecting terms without factors of ln x: for the (+ + −) solution this is F ++− ln p 1 ln p 2 , for the (+ − −) solution this is F +−− ln p 1 , etc.
We are now ready to list the complete results as follows. In cases (3), (4) and (8) where it is not possible to determine the renormalised correlator we have instead listed the complete singularity structure of the regulated triple-K integral, which contains only ultralocal or semilocal terms. In the remaining cases where we provide results for the renormalised correlator, note that these are specified in a particular choice of renormalisation scheme; when type (+ − −) or (− − −) solutions are present we can adjust the coefficients of ultra-and/or semilocal terms arbitrarily by adding finite counterterms to the action. The function of momentum F σ 1 σ 2 σ 3 is as defined in (A.14).
Case (1): (+ + +) only
Case (2): (+ + −) only

Case (5): (+ + +) and (+ + −)

Note that we cannot have both β 1 ≥ 0 and β 2 ≥ 0 here: taking linear combinations of the solutions of (A.1), we find β 1 + β 2 = −d/2 − (k +++ + k ++− ) < 0. As we cannot have both β 1 ≥ 0 and β 2 ≥ 0, there are then no double-log contributions to the renormalised correlator even though the (+ + −) solution is paired on both its first and second indices. Independently, we know that such a contribution cannot appear since it would imply the existence of an ε^{−3} divergence, however this is forbidden since we have only two different types of solution of (A.1) ignoring permutations, and hence at most an order ε^{−2} divergence.
Case (7): (+ + −) and (− − −)

The relative minus sign between the leading and subleading terms here arises from (A.11).
Case (8b): (+ − −), (− + −) and (− − −) In this subcase we cannot have both β 1 < 0 and β 2 < 0: taking linear combinations of the solutions of (A.1), we find β 1 + β 2 = 2k −−− − k +−− − k −+− . The absence of a (+ + −) solution means however that −2k ++− ≡ α is such that k ++− ∈ Z − , and hence β 1 + β 2 = −k ++− + k −−− > 0. As we cannot have both β 1 < 0 and β + 2 < 0, there are then no nonlocal double-log contributions to I div α+ ,{β i } even though the (− − −) solution is paired on both first and second indices. We know independently that such a contribution cannot arise as it would imply the presence of an −3 divergence which is forbidden since, discounting permutations, we only have two different types of solution of (A.1), and hence at most an order −2 divergence. The homogeneous conformal Ward identities for the regulated correlator are equivalent to the system of equations defining the generalised hypergeometric function of two variables Appell F 4 [7,27]. This system of equations has four solutions in general, but three of these solutions possess singularities in the collinear limit where the momenta satisfy p 3 = p 1 + p 2 (or similar). Of the four solutions, the only one free from collinear singularities is the triple-K integral. For the cases discussed in [7], for which renormalisation is not required, the triple-K integral is then the unique representation of the 3-point correlator.
The correlators studied in this paper do however require renormalisation, and the issue of uniqueness of the triple-K representation is consequently more subtle. Here, the absence of collinear singularities need hold only for the renormalised correlator obtained after we have sent → 0. The regulated correlator, obtained by solving the regulated homogeneous Ward identities and subtracting divergences with the aid of counterterms, must therefore have a finite piece of order 0 that is free from collinear singularities (being equal to the renormalised correlator), but also pieces that are of higher order in which vanish in the limit → 0. There is no physical reason why these higher-order pieces should be free from collinear singularities, since they make no contribution to the renormalised correlator. Thus, given the four general solutions to the regulated homogeneous Ward identities, we should only impose that the finite order 0 piece (after subtracting counterterms and multiplying through by any required overall factors of ) is free from collinear singularities. This additional freedom renders the triple-K representation non-unique, but the non-uniqueness simply corresponds to our freedom to change the renormalisation scheme by adding finite counterterms to the action.
Let us examine this argument in greater detail. As per the discussion in [7], in the present (1, 0)-scheme the four general solutions of the regulated homogeneous conformal Ward identities take the form where I β (x) is a modified Bessel function of the first kind. As with the triple-K integral, we can split each integral into a finite upper part for which µ −1 ≤ x < ∞, and a lower part for which 0 ≤ x < µ −1 . Once again, all the divergences as → 0 arise solely from the lower parts. From the large-x asymptotic expansions we see the upper parts are always singular for the collinear momentum configuration p 3 = p 1 + p 2 in any dimension d ≥ 3. The only way to eliminate this collinear singularity is to take appropriate linear combinations of the four solutions so that the leading asymptotic behaviours cancel, i.e., by combining the Bessel I to make Bessel K functions, The triple-K integral is thus the unique combination with an upper part that is free from collinear singularities. Turning now to the lower parts, through a modification of our earlier arguments we easily see that the divergences these contribute are always free from collinear singularities. First, we recall that Bessel I has the series expansion valid for any β / ∈ Z − . To handle all cases including β ∈ Z − , it is convenient to choose a different basis for the four general solutions of the homogeneous Ward identities in which all Bessel I −|β| (z) are recombined into Bessel K β (z) = K |β| (z). Our new basis thus consists of the original triple-K integral plus the three integrals To further simplify matters, we observe that where the a + k (β) are as defined in (A.9), where χ = 1 if β / ∈ Z and χ = β + 1 if β ∈ Z + . The divergences of the three solutions (A.35)-(A.37) can now be evaluated following the same method we used for the triple-K integral. In fact, up to an irrelevant constant overall phase arising from the factors of (−1) χ , the divergences are the same as for the triple-K integral except that we discard logs and set the a − k (|β|) to zero every time we encounter a Bessel I in place of a Bessel K. (Or equivalently, when we have a Bessel I, we only obtain a nonzero contribution if σ i = s i for that index.) As all these divergences are simply products of momenta raised to various powers, there are consequently never any collinear singularities.
Thus, when the renormalised correlator is given by the finite part of a solution of the regulated homogeneous conformal Ward identities, the triple-K integral is the unique solution. When, on the other hand, the renormalised correlator is given by the divergent part of a solution to the regulated homogeneous Ward identities, there are potentially additional contributions besides the triple-K integral. These additional contributions encode our freedom to change the renormalisation scheme by adding finite counterterms to the action.
An example of this is case (7), where we have (+ + −) and (− − −) type singularities. As we saw earlier in example 13 on page 41, the renormalised correlator satisfies the homogeneous conformal Ward identities. In fact, both the F ++− and the F −−− pieces of the general solution (A.25) independently satisfy the homogeneous conformal Ward identities, whose solution is therefore not unique. (Note the coefficient of the F −−− term in (A.25) can be adjusted arbitrarily through the addition of an appropriate counterterm.) In this case, the renormalised correlator corresponds to the leading −1 order divergence of the regulated triple-K integral, which must be multiplied through by an overall constant of order . The non-uniqueness therefore corresponds to the presence of additional solutions of the regulated homogeneous Ward identities of order −1 . Collecting together contributions at this order from (A.35)-(A.37), up to an overall constant of proportionality we obtain In this appendix we discuss the relation between correlation functions of operators of dimensions ∆ and d − ∆. We will assume operators of generic dimensions, i.e., none of the conditions that lead to singularities hold. We also set the normalisation of the 2-and 3-point functions to unity. First, note that under .
The propagator for a single real scalar field Φ in four dimensions is In position space, the operators O [4] and O [3] can be realised as Denoting the corresponding sources by φ [0] and φ [1] , in dimensional regularisation the canonical dimensions (defined according to the propagator) are Up to multiplicity factors, the 2-and 3-point functions are represented by the diagrams presented in figure 1. All correlators may be evaluated using the integral .
Immediately, we then find The counterterm can be added to the action to yield a finite renormalised 2-point function O [3] (p)O [3] (−p) = 3 256π 4 p 2 ln p 2 µ 2 . (C.8) We have chosen subleading terms in the renormalisation constant a( ) in such a way that the ultralocal portion of the 2-point function vanishes. This choice of renormalisation scheme will simplify subsequent expressions, although other choices of scheme are possible. The 3-point function is given by the Feynman integral (C.10) After dimensionally regularising to regulate the nested divergences, the integrals over k 2 and k 3 can be calculated using (C.4) leading to the result The integral on the right-hand side can be re-expressed as a triple-K integral according to equation (A.3.17) in [7]. This gives (C.12) The divergent part of this expression can then be extracted through the method presented in section 4.3.1, giving O [4] (p 1 )O [3] (p 2 )O [3] (p 3 ) reg = − 9 256π 6 2 (p 2 2 + p 2 3 ) + 9 512π 6 −p 2 1 + 3p 2 2 ln p 2 2 + 3p 2 3 ln p 2 3 + (p 2 2 + p 2 3 ) (−10 + 3γ E − 3 ln(4π)) + O( 0 ). (C.13) The form of the counterterm action is given by (4.122), up to factors of the renormalisation scale. Taking into account the choice of the regularisation (C.3) used here, we have (C.14) The counterterm contribution following from this action is then To cancel the divergences, the counterterm constants must be a 0 = 9 2π 2 + a From the counterterms we can now read off the beta function and anomaly as follows. The renormalised source φ [1] is related to the bare source via which after inverting yields the beta function Comparing this equation and (C.6) with (4.129) and (4.124), we find (C.21) From (C.15), we also have The anomaly is then in accord with (4.132). As we saw earlier, only the coefficient of p 2 1 is physical, with the remainder of the anomaly depending of the choice of scheme through the constant a (0) 0 . Finally, one can evaluate the triple-K integral in (C.12) using the reduction scheme described in [7,33] along with a suitable change of regularisation scheme. Using (C.9) and adding in the counterterm contribution (C.15) to cancel the divergences, on taking → 0 we (C.25) The value of a 1 can be retrieved as well, but its expression is longer and not particularly illuminating. As we saw in section 5, the scheme-dependent constants a 0 and a 2 are related by (5.28), which followed from the special conformal Ward identity (5.21). In terms of the present calculation, the scheme-dependent constants in (C. 16 Notice that throughout the evaluation we have worked consistently with regulated quantities. The procedure presented above highlights the fact that the 3-point function O [4] O [3] O [3] can be renormalised by adding the counterterms (C.7) and (C.14) to the regularised action. In particular, the sequence of integrals in (C.10) is finite for a small non-zero , and can in principle be evaluated in any order. While superficially different, the approach we present is however ultimately equivalent to the standard Feynman diagram calculus in which divergences are removed loop by loop.
From the point of view of Feynman diagrams, the first term in the counterterm action (C.14) can be interpreted as the renormalisation of the cubic vertex φ 3 . Indeed, after adding to the free field the couplings to the operators φ 3 and φ 4 , the total action is where the renormalisation factors Z j depend on couplings φ [1] and φ [0] . As one can read from (C.14), The renormalisation of the cubic vertex can be then expressed diagrammatically as in figure 2. The loop integral in the figure is divergent and requires renormalisation. Evaluating this integral in the dimensionally regulated theory, we find Figure 3: Feynman graphs representing O [3] O [3] and O [3] O [3] O [3] for a free scalar Φ with O [3] = : Φ 6 :.
D Triple-K integrals and AdS/CFT
Triple-K integrals appear naturally in the context of AdS/CFT since propagators in Poincaré coordinates, when transformed to momentum space, are expressible in terms of modified Bessel functions. A scalar 3-point function in the supergravity approximation arises from a cubic interaction term of the bulk action and is usually represented by a Witten diagram as per figure 4 (see page 68). In this section we will discuss triple-K integrals in a holographic context, and illustrate the holographic renormalisation procedure for the 3-point function of a marginal operator in three dimensions.
D.1 Set-up
We consider a real scalar field Φ with a cubic interaction, The equations of motion for Φ {n} (z, p) with n > 1 can then be solved in terms of the bulk-to-boundary and bulk-to-bulk propagators. These are uniquely fixed by asymptotic boundary conditions at z = 0, together with regularity requirements at z = ∞.
The bulk-to-boundary propagator K d,∆ is defined by where Φ (∆) denotes the coefficient of z ∆ in the near-boundary expansion of Φ, and X[φ 0 ] is a functional whose contribution to correlation functions is at most local. In order to extract the 3-point function, we need to identify the piece of Φ which depends quadratically on the source φ 0 . This piece is given by (D.10), after evaluating the integral on the right-hand side. When this integral diverges, we can introduce a cut-off at z = δ, I δ d,∆ (z, p, k) = ∞ δ dζ ζ d+1 G d,∆ (z, p; ζ)K d,∆ (ζ, k)K d,∆ (ζ, |p − k|). (D.12) The 3-point function of O then follows from this integral, with any divergences that may be present removed by holographic renormalisation of the supergravity on-shell action. (A complete example of this procedure for a marginal operator will be presented shortly in section D.3; for a related discussion of holographic renormalisation for irrelevant operators see [50,51].) From (D.11), we have , (D. 13) where I δ ct is a suitable counterterm and (. . .) (∆) denotes the coefficient of z ∆ in the nearboundary expansion. As we will now show, the first part of (D.13) can be re-written as a triple-K integral.
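For orientation, the basic mechanism behind this rewriting is that the momentum-space bulk-to-boundary propagator on the Poincaré patch is a Bessel-K function, so z-integrals of products of three such propagators are triple-K integrals. The following is our own summary in a conventional normalisation, not a quotation of (D.7) or (D.12):

\[
\mathcal{K}_{d,\Delta}(z,p) \;\propto\; z^{d/2}\,K_{\Delta-d/2}(pz),
\qquad
\int_0^\infty \frac{dz}{z^{d+1}}\;\prod_{i=1}^{3} z^{d/2} K_{\Delta_i-d/2}(p_i z)
= \int_0^\infty dz\; z^{d/2-1}\prod_{i=1}^{3} K_{\beta_i}(p_i z),
\]

i.e., a triple-K integral with \alpha = d/2-1 and \beta_i = \Delta_i - d/2, matching the parameters used throughout the text.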
Firstly, the piecewise form of the bulk-to-bulk propagator in (D.8) splits the integral (D.12) into two regions: a near-boundary region ζ ≤ z and an inner region ζ > z. Denoting the corresponding integrals as I δ,< d,∆ and I > d,∆ , we then have I δ d,∆ = I δ,< d,∆ + I > d,∆ , (D.14) where only the near-boundary integral depends on the regulator δ.
In the near-boundary region ζ ≤ z, the integral reads I δ,< d,∆ = z When the expression (D.7) for the bulk-to-boundary propagator is substituted, this integral is proportional to a triple-K integral with a lower cut-off. To extract the coefficient of z ∆ for the complete right-hand side, we simply have to strip off the overall prefactor of z ∆ then evaluate the z-independent piece of the integral. To find this z-independent piece, it is tempting to send z → 0 leaving us with a genuine triple-K integral. We know, however, that when (D.16) is satisfied this triple-K integral diverges, since (D. 16) is equivalent to the singularity condition (4.19) with all β j = β. Thus, provided the condition (D. 16) is not satisfied, the contribution to the 3-point function from the near-boundary part of the integral (D.15) vanishes, while the contribution from the inner region (D.18) reduces to a finite triple-K integral upon sending z → 0. as follows by expanding the propagators in (D.18). This triple-K integral is finite, although in some cases it may be necessary to use analytic continuation to define its precise value (as in example 3 on page 15). If, on the other hand, the condition (D.16) holds, one still expects to obtain the non-local part of the correlation function from the inner region in (D.18). This non-local contribution corresponds to the finite order z 0 piece of the integral as z → 0, and so is equivalent to a triple-K integral up to local terms. (The overall correlator therefore receives local contributions from both (D.15) and (D.18).) We will illustrate this case with an example in the following section.
Notice however that the procedure of holographic renormalisation is not equivalent to shifting the α and β parameters in the triple-K integral in (D. 19). Instead, holographic regularisation amounts to the introduction of a cut-off on the integration variable in the triple-K integral; in the complete holographic renormalisation scheme, one then has to include additional local contributions from (D.15) and the functional X[φ 0 ] in (D.13).
D.3 Marginal operator in d = 3
To illustrate the general discussion above, we now discuss the complete holographic renormalisation of the 3-point function for a marginal operator in d = 3 dimensions. This case satisfies the condition (D.16) with a single plus sign and k = 0. Carrying out the renormalisation, one obtains the beta function (D.30) and the scheme-dependent constant a (0) = (γ E − 1)/6. Moreover, with the beta function as in (D.30), the Callan-Symanzik equation (4.118) is satisfied. Further discussion of the Callan-Symanzik equation in a holographic context may be found in [51].
"year": 2015,
"sha1": "7eadedbb297a47075897b443fab2781c13e56b08",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP03(2016)066.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "5e4d2018fc5a6c21d68ffa9d5ad3117065928b6c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
214644528 | pes2o/s2orc | v3-fos-license | Isopropyl‐phloroglucinol‐DHA protects outer retinal cells against lethal dose of all‐trans‐retinal
Abstract All-trans-retinal (atRAL) is a highly reactive carbonyl species, known for its reactivity towards cellular phosphatidylethanolamine in photoreceptors. It is generated by photoisomerization of the 11-cis-retinal chromophore, which is linked to opsin via a Schiff base. In ABCA4-associated autosomal recessive Stargardt macular dystrophy, atRAL results in carbonyl and oxidative stress, which leads to accumulation of the bisretinoid A2E in the retinal pigment epithelium (RPE). This accumulated A2E presents as fluorescent lipofuscin pigment, and its photooxidation causes subsequent damage. Here we describe protection against a lethal dose of atRAL in both photoreceptors and RPE in primary cultures by a lipidic polyphenol derivative, an isopropyl-phloroglucinol linked to DHA, referred to as IP-DHA. Next, we addressed the cellular and molecular defence mechanisms in commonly used human ARPE-19 cells. We determined that both the polyunsaturated fatty acid and the isopropyl substituents bound to phloroglucinol are essential to confer the highest protection. IP-DHA acts rapidly against the toxicity of atRAL and its protective effect persists. This protective effect of IP-DHA extends to mitochondrial respiration. IP-DHA also rescues RPE cells subjected to the toxic effects of A2E after blue light exposure. Together, our findings suggest that the beneficial role of IP-DHA in retinal cells involves both anti-carbonyl and anti-oxidative capacities.
Following photoisomerization of 11cRAL to atRAL in POS, the atRAL Schiff base is hydrolysed, yielding the photochemically inactive opsin protein and free atRAL. 2 Removal of the latter is required to avoid acute toxicity, and delayed clearance of atRAL after light exposure contributes to light-induced retinal degeneration. 3 atRAL is cleared from the disc membranes of POS by the retinal ATP-binding cassette (ABCA4) transporter protein to the cytoplasm, where atRAL dehydrogenase (RDH) catalyses its reduction to the much less reactive all-trans-retinol (atROL).
Mutations in the ABCA4 gene are found in patients with Stargardt macular dystrophy (STGD1), cone-rod dystrophy and recessive retinitis pigmentosa, and variants in ABCA4 increased susceptibility to age-related macular degeneration (AMD). 4 Mechanisms of acute toxicity of atRAL were previously studied by Palczewski and coworkers. 3,5 They first reported that NADPH oxidase at the plasma membrane can be activated by an increase in atRAL levels via the phospholipase C/ inositol 1,4,5-triphosphate pathway, resulting in overproduction of reactive oxygen species (ROS). The respiratory chain in mitochondria also participates in ROS production (the more reactive being HO · ) within the cell in reply to atRAL accumulation. 3 More recently, atRAL has been shown to induce mitochondrial transmembrane potential loss and endoplasmic reticulum (ER) stress that ultimately trigger programmed cell death by activating apoptotic Bax-and caspase-dependent cascades. 6,7 Free atRAL is itself a reactive carbonyl compound through its all trans-polyene conjugated aldehyde that is toxic to cells. 8,9 In ABCA4-associated pathologies, atRAL accumulates due to delayed clearance by the defective ABCR transporter. Nickell et al 10 reported that rhodopsin is present at a concentration of 4.62 mmol/L in disc membranes of rod outer segments. Therefore, the level of atRAL released after photoactivation of rhodopsin can range from 25 to 100 μmol/L in the disc membranes following photobleaching of only 0.5%-2%. This level of atRAL is toxic in cultured retinal cells. 7,9 Excess atRAL is a potent photosensitizer which can mediate light-induced oxidation. 11 However, atRAL condenses on the PE by a double mechanism of carbonyl and oxidative stress. 12,13 This leads to decrease atRAL levels and to the formation of bisretinoid adducts such as A2E and RAL dimer, which are pigments of retinal pigment epithelium (RPE) autofluorescent lipofuscin. 14 These pigments are sensitive to visible blue light and are photo-oxidized and fragmented accordingly. 15 The oxidized metabolites are reactive carbonyl and oxidative species that would have toxic effects in the RPE. 16 Based on epidemiology studies, natural antioxidants such as polyphenols appear as efficient protectors against oxidative stress.
This activity may be related to their capacity to block the formation and accumulation of ROS or to stimulate the enzymatic antioxidant defences of the organism. 17 Literature also addressed the efficiency of polyphenols to act as anti-carbonyl stressor agents by trapping reactive toxic carbonyl entities. 18 We previously reported in vitro cytoprotective effects of the polyphenol phloroglucinol, a natural monomer of phlorotannins abundantly present in Ecklonia cava (edible brown algae), in outer retinal cells by scavenging ROS and trapping atRAL. 9 Because of its low bioavailability, phloroglucinol was then structurally modified by the addition of polyunsaturated fatty acid (PUFA) and isopropyl substituents. 13 In the present study, we investigated the protective effect of the medicinal chemical compound, isopropyl-phloroglucinol-DHA (IP-DHA), also called lipophenol, against atRAL-related carbonyl and oxidative stresses (COS). We first analysed primary cultures of outer retina to demonstrate the dose-dependent protective effect against lethal dose of atRAL. We then used ARPE-19 cells as a standard cellular model to study the mechanisms of cell death and protection.
We demonstrate that each structural part, that is, the isopropyl and PUFA substituents, is essential for the full action of the lipophenol, with selectivity for LA and DHA. We compared the capacities of phloroglucinol and IP-DHA to reverse the effects of atRAL, and finally discuss how to understand the enhanced protective effects of IP-DHA compared to phloroglucinol.
| Synthesis of phloroglucinol lipophenols
To evaluate the influence of different lipid chains and of the isopropyl substituent, several lipophenols were synthesized: five lipophenols with an isopropyl-phloroglucinol (IP) core linked to various fatty acid, IP-DHA, IP-EPA, IP-ALA, IP-LA and IP-C22, and three lipophenols using only the phloroglucinol without alkyl substituent, P-DHA, P-EPA and P-LA. All the lipophenols were synthesized according to the chemical strategy developed by Crauste et al. 13 Briefly, one hydroxyl group of the phloroglucinol or IP is protected by triisopropylsilyl (TIPS) groups using triflate reagent (TIPS-OTf) and diisopropylethylamine (DIPEA) as a base to obtain the protected derivative. The coupling reactions between the protected polyphenol and the different fatty acids, docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), α-linolenic acid (ALA), linoleic acid (LA) and behenic acid (C22), were initiated using dicyclohexylcarbodiimide and dimethylaminopyridine (DCC/DMAP) as coupling reagents to access the protected lipophenols. Deprotection of the TIPS groups by Et 3 N-3HF in dry tetrahydrofuran (THF) yielded final lipophenols compounds, IP-DHA, IP-EPA, IP-ALA, IP-LA, IP-C22, P-DHA, P-EPA and P-LA. A quality control assessment was established by a complete 1 H and 13 C NMR spectral analysis for each synthesized compound (chemical structure, general procedure, yield and NMR analysis are reported in Supporting Information).
| Cell cultures and atRAL treatment
Primary rat RPE and mouse neural retina (NR) cultures were obtained as previously described. 9 RPE cells were cultured for 3 days until they reached 80%-85% confluency, and NR was cultured for 10 days until glial cells were confluent in the bottom layer and neural cells showed neurite outgrowth in the upper layer. Human RPE-like cells, ARPE-19, were seeded at 100 000 cells/cm² and grown to confluency before being assayed as instructed by ATCC.
Pre-and co-treatment procedures with lipophenol were carried out. During pre-treatments, rat RPE primary cultures received a medium containing lipophenol at different concentrations (40-320 μmol/L) for 24 hours. The medium was then removed and replaced by a serum-free culture medium containing 25 μmol/L atRAL for 4 hours. During co-treatments, RPE and NR cells received a serum-free medium containing 25 μmol/L atRAL and/or IP-DHA (10-320 μmol/L) for 4 hours. NR cells were refreshed with serum-free medium the next day.
The cell cultures were treated with serum-free DMEM/F12 medium without phenol red containing IP-DHA at different concentrations (0-80 μmol/L) for 1 hour. A2E was then added to a final concentration of 20 μmol/L for 6 hours before rinsing with medium. Control cells were incubated with 0.2% DMSO with or without A2E. The cells were exposed to intense blue light (4600 lux) for 30 minutes to induce phototoxicity of A2E and incubated at 37°C. Irradiation was achieved using a LED device with blue emission wavelengths from 430 to 470 nm and a dimmable luminance (Roleadro lighting).
Control ambient white light was less than 300 lux. Without blue light exposure, A2E-loaded cell survival was not affected. Cell viability was determined 16-20 hours later using an MTT colorimetric assay. Results are expressed as the percentage of viable cells normalized to control conditions in the absence of lipophenol and stressor.
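As an illustration of how such a normalization is typically computed from raw MTT absorbance readings (a generic sketch, not code used in this study; the variable names and the blank-subtraction step are our assumptions):

def percent_viability(od_treated, od_control, od_blank=0.0):
    # Percent viable cells from background-corrected optical densities,
    # normalized to the untreated control wells.
    denom = od_control - od_blank
    if denom <= 0:
        raise ValueError("control OD must exceed the blank OD")
    return 100.0 * (od_treated - od_blank) / denom

# Hypothetical readings: atRAL-challenged wells with and without IP-DHA.
print(percent_viability(0.42, 0.80, od_blank=0.05))  # ~49% of control
print(percent_viability(0.71, 0.80, od_blank=0.05))  # ~88% of control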
| Cell viability
Cell viability in primary RPE and ARPE-19 was determined by MTT assay in 96-well plates as described. 9 To distinguish viable cells from
| Mitochondrial respirometry
Mitochondrial respiration was measured in ARPE-19 cultured in six-well plates after 4-hour exposure to 25 μmol/L atRAL and/or 40 μmol/L lipophenol. Respiration was measured on 10⁶ cells permeabilized by incubation for 2 minutes with 15 µg digitonin and resuspended in a respiratory buffer (pH 7.4, 10 mmol/L KH₂PO₄, 300 mmol/L mannitol, 10 mmol/L KCl and 5 mmol/L MgCl₂). The respiratory rates were recorded at 37°C in 2-mL glass chambers using a high-resolution Oxygraph respirometer (Oroboros) as recently described. 20 Assays were initiated in the presence of 5 mmol/L malate/pyruvate to measure basal respiration (state 2). Complex I-coupled state 3 respiration was measured by adding 0.5 mmol/L NAD⁺/1.5 mmol/L ADP. Then, 10 mmol/L succinate was added to reach maximal coupled respiration, and 10 μmol/L rotenone was injected to obtain the CII-coupled state 3 respiration. Oligomycin (8 µg/mL) was added to determine the uncoupled state 4 respiration rate. Finally, carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone (1 μmol/L) was added to control the permeabilization of the tissues.
The respiration rate driven by complex IV was measured starting from CII-coupled state 3 by the addition of antimycin A (1 μmol/L), which inhibited complex III, and of the ascorbate/TMPD redox couple to reduce cytochrome c.
| Mitochondrial enzymatic activities
The enzymatic activity of the mitochondrial respiratory chain complexes (RCC) was measured on cell homogenates as described previously. 21 Briefly, ARPE-19 was grown in six-well plates and treated for 4 hours with 25 μmol/L atRAL and/or 40 μmol/L lipophenol. Cells were scraped, rinsed with DPBS and resuspended on ice with cell buffer (250 mmol/L saccharose, 20 mmol/L tris[hydroxymethyl]aminomethane, 2 mmol/L EGTA, and 1 mg/mL bovine serum albumin (BSA), pH 7.2). Cell was disrupted by two freezing-thawing cycles, centrifuged at 16 000 g for one minute and suspended in the cell buffer (50 µL/10 6 cells). The cellular protein content was determined with the Bicinchoninic assay kit (Pierce) using BSA as standard. The initial kinetics of enzymatic activities were monitored by spectrophotometry (UV-SAFAS spectrophotometer, SAFAS monaco). Complex I (NADH ubiquinone reductase) activity was measured as described elsewhere 22 and adapted using 2, 6 dichloroindophenol (DCPIP) to avoid inhibition of complex I activity by decylubiquinol. 23 Complex II (succinate ubiquinone reductase) activity was measured according to James et al. 24 Specific enzymatic activities of complexes I and II were expressed in mIU (ie nanomoles of DCPIP/min/mg protein).
Complex IV (cytochrome c oxidase) activity was recorded according to a method by Rustin et al, 25 adapted in a 50 mmol/L KH 2 PO 4 buffer, using 15 μmol/L reduced cytochrome c. Specific enzymatic activity was expressed in mIU (ie nanomoles of cyt c/min/mg protein).
| ROS production
ROS were measured in ARPE-19 using the H 2 DCFDA probe as described 9 with minor modifications. Radicals such as peroxyl, alkoxyl, NO 2 · , carbonate or HO · are able to oxidize H 2 DCFDA and thus to be quantified by this assay. 14
| Catalase activity
ARPE-19 cells were seeded in six-well plates and treated as aforementioned. Immediately after treatment, cells were scraped on ice,
| Western blot analysis
ARPE-19 cells were lysed in RIPA buffer containing protease inhibitors, homogenized and then centrifuged at 9600 g for 3 minutes.
Twenty-five micrograms of the protein lysates in Laemmli buffer were separated on 10% SDS-PAGE (Mini-PROTEAN ® TGX™ gels, Bio-Rad) and electrotransferred to PVDF membranes (Trans-Blot ® Turbo™ Transfer System, Bio-Rad). After blocking, membranes were blotted overnight at 4°C with primary antibodies. After incubation with the corresponding HRP-conjugated secondary antibodies, detection was performed using an enhanced chemiluminescence kit (Pierce ECL, Thermo Scientific) and recorded by the V3 Western Worflow™ system (Bio-Rad). The bands were semi-quantified using densitometry by ImageJ software. Commercial antibodies were used to assess protein expression as follows: monoclonal mouse anti-GAPDH (Sigma-Aldrich ® , G8795 diluted to 1:5000, 37 kD); mouse anti-α-tubulin (Sigma-Aldrich ® T5168, diluted to 1:4000,
| XCELLigence assay
The xCELLigence system (Roche and ACEA Biosciences) was used to monitor cell adhesion, proliferation and cytotoxicity. The xCELLigence system was connected and tested by Resistor Plate before the RTCA Single Plate station was placed inside the incubator at 37°C and 5% CO 2 . First, the optimal seeding concentration for the prolif-
| Statistics
Statistical analyses were performed using GraphPad Prism 5.0 software. Data were first analysed with the Shapiro-Wilk normality test, and then two-tailed P-values were determined using either the unpaired Student's t test or the non-parametric Mann-Whitney test. A P-value < .05 was considered significant. The linear correlation was measured by Pearson's r correlation coefficient.
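A minimal sketch of this decision rule, assuming the two groups of viability values are available as numeric arrays (the data below are invented for illustration and are not from this study):

```python
# Hypothetical sketch of the statistical workflow described above: test each
# group for normality, then pick the unpaired t test or the Mann-Whitney test.
import numpy as np
from scipy import stats

def compare_groups(control, treated, alpha=0.05):
    """Return the chosen test name and a two-tailed P-value."""
    _, p_ctl = stats.shapiro(control)          # Shapiro-Wilk normality test
    _, p_trt = stats.shapiro(treated)
    if p_ctl > alpha and p_trt > alpha:
        _, p = stats.ttest_ind(control, treated)
        return "unpaired t test", p
    _, p = stats.mannwhitneyu(control, treated, alternative="two-sided")
    return "Mann-Whitney", p

ctl = np.array([100.0, 98.5, 102.3, 99.1])     # illustrative viability (%) only
trt = np.array([55.2, 60.1, 58.4, 57.9])
name, p = compare_groups(ctl, trt)
print(name, "P =", round(p, 4), "significant" if p < 0.05 else "n.s.")
```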
| IP-DHA protects retinal primary cultures against atRAL toxicity
ABCA4-associated retinopathies often affect both photoreceptor and RPE in a manner that is not fully elucidated, 5,14 but which originally involves defective retinal clearance from the photoreceptor. We tested the protection by IP-DHA of atRAL-challenged primary cultures of rat RPE ( Figure 1A,B) and of mouse NR enriched in photoreceptor cells ( Figure 1C). These analyses confirmed significant protective effects of IP-DHA against atRAL, regardless of the mode of treatment (pre- vs co-treatment, Figure 1A,B), the cell density ( Figure 1B1,B2), or the cell type (RPE and photoreceptor, Figure 1A,B,C). Furthermore, IP-DHA showed no cytotoxicity on the primary RPE at 320 μmol/L, a concentration up to eight times higher than the one we previously reported using the ARPE-19 cell line. 13 Thus, IP-DHA is able to protect both RPE and photoreceptors from the toxic effects of atRAL overload without adverse effects.
| Structure-function relationship of IP-DHA
A selection of fatty acids (omega-6, omega-3 and saturated fatty acids) that were conjugated to IP showed a structural selectivity for protection efficacy of ARPE-19 cells challenged with a toxic dose of atRAL ( Figure 2A). The rank of efficacy was LA ≥ DHA > EPA = ALA > C22.
This order was not correlated with the level of fatty acid unsaturation (DHA > EPA > ALA > LA > C22), nor with the rank of the toxicity of the free fatty acids (EPA > DHA > ALA > LA > C22, Figure 2B). As oxidation levels are correlated with cell toxicity, PUFA toxicity may come from lipid peroxidation. 28 However, coupling polyunsaturated FA (PUFA) to IP or phloroglucinol (P) significantly reduced PUFA toxicity ( Figure 2C,D, respectively). These data highlight that isopropyl does not alter the low toxicity of lipophenols. Regardless of the fatty acid, the isopropyl function was necessary for effective protection: the protective effect was lost upon use of non-alkylated lipophenols (P-fatty acid, Figure 2E). These results demonstrated that both PUFA and isopropyl are essential for lipophenol activity.
The choice to use IP-DHA throughout this study rather than IP-LA, despite the latter showing better protection against atRAL, is justified in view of planned in vivo evaluations. DHA benefits from general and specific transporters that concentrate it in the photoreceptors; therefore, the use of DHA seems more appropriate to improve the uptake of IP-DHA by the retina. In addition, an omega-3 such as DHA, which can be released by esterases, can have beneficial effects on human retinal diseases that omega-6 fatty acids, known for their greater pro-inflammatory properties, cannot reproduce.
| Cell-based assays of IP-DHA protection
Dynamic cellular biology was first monitored using the xCELLigence System in ARPE-19 before and after treatment with atRAL and/or IP-DHA. The system measures electrical impedance which provides quantitative information about the biological status of the cells, including cell number, viability and morphology. The xCELLigence read-out is a dimensionless parameter called Cell Index (CI) that was normalized with the time-point before the treatment ~16.5 hours after plating ( Figure S1). The addition of 25 μmol/L atRAL significantly decreased the CI, which then stabilized 4 hours later at 11 ± 2% ( Figure S1; Table S1). Co-incubation with 40 μmol/L IP-DHA limited this decrease to 56 ± 11% two hours after the beginning of treatment, and this effect lasted throughout the analysis (37 hours).
We conclude that IP-DHA responds rapidly against the toxicity of atRAL with persisting protective effect.
Consequently, we applied a 4-hour co-treatment with atRAL and IP-DHA to assess the protection against detrimental effects induced by atRAL overload in ARPE-19 ( Figure 3A). Morphologic changes in ARPE-19 cells were observed following atRAL exposure ( Figure 3C), and an apoptotic caspase-3 cleavage signal was detected by Western blot (Figure 3D,E). Long (24-hour pre-) and short (4-hour co-) IP-DHA treatments restored healthy morphology and abolished the caspase-dependent apoptosis. Previous reports revealed that atRAL could act directly on mitochondria and elicit a poisonous effect there. 3,7 We therefore performed respirometry to assess the functionality of the mitochondrial RCC ( Figure 4A), which showed that all RCC were impaired by atRAL and partially rescued by IP-DHA treatment ( Figure 4C). Thus, the protective effect of IP-DHA seems to apply directly to the mitochondrial respiration essential to cell viability.
| Molecular and cellular mechanisms of IP-DHA protection
Natural polyphenols were reported as potent against COS involved in age-related diseases, 29 either as sequestrating agents of reactive aldehydes and scavengers of reactive species, or by the activation of the Nrf2 transcription factor that promotes expression of many phase II detoxifying enzymes. 30 We recently showed that phloroglucinol acts as an anti-COS agent, trapping atRAL and scavenging ROS produced by H2O2 treatment (identified by DCFDA probes). 9,27 Here, we consider the anti-COS capacity of IP-DHA compared to phloroglucinol in RPE cells ( Figure 5). IP-DHA reduced both free atRAL and ROS produced by atRAL treatment ( Figure 5A and 5B, respectively). AtRAL treatment is able to alter the respiratory chain in mitochondria and potentially disrupts homeostasis in the ER, thereby initiating ER stress, which in turn induces ROS generation such as O2−· and then HO·. 31 We therefore examined the cellular antioxidant defences ( Figure 6).

FIGURE 1 Protection of retinal primary cultures by IP-DHA against atRAL. A, pre-treatment of RPE cells with IP-DHA inhibits atRAL-induced cell death. A1, rat primary RPE cells were cultured in 96-well plates and pre-treated with increasing concentrations of IP-DHA for 24 h, washed and exposed to 25 μmol/L atRAL for 4 h. A2, RPE cultures were incubated for 24 h with increasing concentrations of IP-DHA. Cell viability was determined by MTT assay. The data are represented as mean ± SD (n = 7). B, co-treatment with IP-DHA and atRAL protects RPE cells. Sub-confluent (B1) and low-density (twofold less) (B2) cultures of rat primary RPE cells were cultured in 96-well plates and co-incubated with increasing concentrations of IP-DHA and 25 μmol/L atRAL for 4 h. Cell viability was determined by MTT assay. The data are represented as mean ± SD (n = 3-6). C, long-lasting effect of atRAL and IP-DHA on NR and photoreceptors. NR primary cultures were incubated with increasing concentrations of IP-DHA for 1 h, and 50 μmol/L atRAL was added for an additional 4 h. The medium was refreshed for the next 20 h. MTT assay measured cell survival (C1) and Rhodopsin-IR (rhodopsin-immunoreactivity positive cells) revealed the number of photoreceptor-derived primary cells (C2). The data are presented as mean ± SEM (n = 3-4). All data are expressed as a percentage of untreated cells (CTL). *P < .05, **P < .01, ***P < .001 vs atRAL-treated cells.
A 4-hour treatment with IP-DHA increased the catalase activity and prevented its decrease by atRAL ( Figure 6A), whereas it did not regulate its expression ( Figure 6B). The 24 hours pre-treatment with IP-DHA increased in a dose-dependent manner the expression of Nrf2 and its nuclear translocation in ARPE-19 cells ( Figure 6C). The same treatment increased the GSH/GSSG ratio ( Figure 6D) and the expression of NQO-1 ( Figure 6E), suggesting an up-regulation of redox regulating and detoxifying enzymes.
These results support the notion that the protective role of IP-DHA in retinal cells involves both molecular (atRAL reduction) and cellular (enzymatic) mechanisms.
| IP-DHA rescues cells under toxic effect of blue light-exposed A2E
The daily shedding of the distal tips of the outer segment followed by their phagocytosis in RPE cells leads to accumulation of bisretinoids in lysosomes and formation of A2E. 33
| D ISCUSS I ON
The pathophysiological mechanisms of STGD1 first involve alterations in the photoreceptors due to mutations in the ATP-binding cassette transporter ABCA4 gene, delay in atRAL reduction, and accumulation of autofluorescent bisretinoids in photoreceptors by condensation of atRAL and phosphatidylethanolamine. 34 At this stage, atRAL reactivity is responsible for COS. 9,13 Later, phagocytosis transfers bisretinoid-burdened POS to the RPE, where bisretinoids can account for autofluorescence of lipofuscin, light-dependent COS and consequently death of the RPE. 33 Therefore, COS play a crucial role throughout the disease, from its onset in the photoreceptors to its progression in the RPE. Thus, it is highly relevant to develop new therapeutic compounds capable of limiting COS in the outer retina.
Polyphenols have long been recognized as antioxidant and more recently as anti-carbonyl stress derivatives, and their application in the treatment of neurodegenerative diseases has been widely acknowledged in the past few years. 35,36 Among them, phloroglucinol is a monomer of phlorotannins, which also displays therapeutic potential for neurodegenerative diseases. 37,38 Neurodegeneration is a multifactorial process and polyphenols present pleiotropic effects (antioxidant, anti-inflammatory, immunomodulatory properties) due to their ability to modulate the activity of multiple targets involved in pathogenesis, thereby halting the progression of these diseases.
We previously reported cytoprotective effects of phloroglucinol in retinal cells. In the prospective treatment of patients with IP-DHA, we assume that the release of free DHA and IP may be part of the mechanism of action of the compound, as the ester bond could be cleaved by a plasma and/or cellular esterase. This is not a drawback, as many studies show the beneficial effects of DHA supplementation in AMD and STGD1 45,46 and of dietary polyphenols in AMD against oxidative stress and beyond. 47 In addition, the oxidation of DHA not only causes deleterious effects (lipid peroxidation), but should also contribute to the release of cellular mediators (neuroprostane, neuroprotectin D1) helping the cell to fight oxidation. In addition, the antioxidant activity of phloroglucinol would enhance the beneficial effect on vision.

FIGURE 6 legend (partial). Catalase expression was quantified by Western blot analysis with a monoclonal rabbit anti-catalase antibody and enhanced chemiluminescence (ECL) detection using densitometry and ImageJ software. GAPDH expression was used as a loading control. Results are expressed as mean ± SEM (n ≥ 3). C, Nrf2 expression and nuclear translocation were explored by immunofluorescence with a rabbit monoclonal anti-Nrf2 antibody and an Alexa488-conjugated anti-rabbit antibody. Nuclei were stained with the blue fluorescent Hoechst dye. Confocal imaging revealed increased green spots in the nuclei after 24-h treatment with IP-DHA. Similar ARPE-19 treatments were performed, and the GSH/GSSG ratio (D) and NQO-1 expression (E) were quantified. Results are expressed as mean ± SEM (n = 4). # P < .05, ### P < .001 vs untreated CTL and *P < .05, **P < .01, ***P < .001 vs atRAL-treated cells.
In conclusion, our data show that IP-DHA is effective to protect outer retinal cells against lethal dose of atRAL. The beneficial role of IP-DHA in retinal cells involves both anti-carbonyl and anti-oxidative capacities. This suggests potential effects of lipophenols in the prevention of macular degeneration associated with COS, such as STGD1 and AMD. Additional studies will be necessary to examine the effect of IP-DHA in animal models of macular degeneration.
ACK N OWLED G EM ENTS
We would like to thank the ARPEGE Pharmacology Screening-
CO N FLI C T O F I NTE R E S T
The authors have declared that no conflict of interest exists.
DATA AVA I L A B I L I T Y S TAT E M E N T
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2020-03-26T10:42:23.511Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "b61b916a2c93041e43234642d70ccf5da1d3487a",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcmm.15135",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db16e983074602184ca85d88086127c2f32e2e3e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
44630035 | pes2o/s2orc | v3-fos-license | Key to Trading Profits – Matching the Probability Distribution of A Contract with An Appropriate Mechanical Trading Strategy
Whether one takes a qualitative or a quantitative approach, the techniques available are many and varied, and that complicates a systematic assessment of the usefulness of technical analysis. It comes as no wonder then, that empirical tests of specific trading rules and their attendant signals are often less than satisfactory tests of the efficiency of technical analysis in general, since traders typically employ not one but a range of technical indicators. Additionally, many traders also apply considerable market intuition to complement the insight gained from technical analysis, so there will always be an element of subjectivity with its application.
Introduction
Technical analysis is commonly perceived to involve the prediction of future asset price movements from an analysis of past movements, employing either qualitative methods (such as chart pattern recognition) or quantitative techniques (such as moving averages), or a combination of both.
Whether one takes a qualitative or a quantitative approach, the techniques available are many and varied, and that complicates a systematic assessment of the usefulness of technical analysis.It comes as no wonder then, that empirical tests of specific trading rules and their attendant signals are often less than satisfactory tests of the efficiency of technical analysis in general, since traders typically employ not one but a range of technical indicators.Additionally, many traders also apply considerable market intuition to complement the insight gained from technical analysis, so there will always be an element of subjectivity with its application.
Since the publication of Fama & Blume (1966) most academics have considered the usefulness of technical analysis in forecasting to be probably close to nil. For many others, the continued and widespread use of these techniques (Taylor & Allen (1992); Yin-Wong Cheung & Menzie D. Chinn (2001)) is even puzzling, since technical analysis shuns economic fundamentals and relies only on information on past price movements. Historical information, according to weak form market efficiency, should already be embedded in the current asset price, thus its use is unprofitable. Burton G. Malkiel (1996) suggested that "technical strategies are usually amusing, often comforting, but of no real value". Malkiel's dismissal of technical analysis is glaringly at odds with the fact that technical analysis is widely used by market professionals.
On the other hand, Sweeney (1986, 1988) presents results consistent with some usefulness of technical rules. More recent studies have included Taylor (1992), LeBaron (1994), and Levich & Thomas (1993). The latter two employed bootstrap techniques to further emphasise the magnitude of the forecastability. Other related evidence includes that of Taylor & Allen (1992), which shows the extent to which traders continue to use technical analysis. Brock et al (1992) showed using a bootstrap methodology that the rules did at least generate statistically significant forecastability.
It is not difficult to understand why technical analysis did not or could not sustain academic interest as long as the available evidence was not of a more systematic nature.The scepticism with which academic economists initially viewed (and to some extent continue to view) technical analysis can be largely attributed to the intellectual standing of the efficient markets hypothesis (EMH), which, in its "weak form" Fama (1970), maintains that all historical information should already be embodied in asset prices, making it impossible to earn excess returns on forecasts based on historical price movements.
If technical analysis reflects rational thinking that leads to profitable trading rules, how is it that market processes do not arbitrage these profit opportunities away?It is offered that in well functioning markets one would expect that profit opportunities will be exploited up to an extent where agents feel appropriately compensated for their risk.To take open positions is inherently risky, whether the decision is based on fundamental or technical considerations.
Or perhaps technical analysis is an indication of irrational behaviour, as can only be concluded if one follows the traditional understanding of the EMH and regards markets as at least weakly efficient; Fama (1970).However, to interpret technical analysis as an indication of irrational or even not-fully rational behaviour goes against the grain that virtually all market professionals rely on technical trading rules, albeit to varying degrees.Surely market professionals cannot all be exhibiting suboptimal behaviour, much less irrationality, even if temporarily.
Such is the paradox, so is it any wonder then, that evidence relating to the profitability of technical analysis tends to be inconclusive?Not at all, for if technical analysis was never profitable, its widespread use would be hard to fathom; if, on the other hand, technical analysis was always profitable, it would perhaps imply that the market is inefficient to a degree that many academics would not find credible.
Clearly, technical analysis remains an intrinsic part of the market.For market practitioners, the challenge is to constantly refine technical trading strategies as potentially important tools in the search for excess returns.For academic researchers, technical analysis must be understood and integrated into economic reasoning at both the macroeconomic and the micro structural levels.
This paper hopes to contribute to the existing literature by matching the distribution of a contract with an appropriate trading rule, thus integrating the characteristics of a contract with the capability of a trading strategy.In this regard, knowing that a contract's distribution is negatively skewed, for instance, one can expect a higher probability of making many small wins and a low probability of risking a larger loss.Thus by choosing the appropriate contract to trade vis-à-vis one's risk profile, one is already ahead of the game in terms of staking the odds of winning trades in one's favour even before selecting a trading strategy.As different trading strategies cater to different characteristics of price movements, back testing with different trading rules can uncover a trading rule that best exploit the characteristics of the intended contract, thus achieving a competitive edge.
Data
This study uses daily exchange series from Nymex as provided by Telequote Networks. The series represent the daily data for the Light Sweet Crude Oil futures, extending almost 15 years and 3,714 observations. We first determine the distribution of daily lognormal returns for the in-sample period from 01 January 1994 through 31 December 1998 (1,254 observations). Based on that distribution, an appropriate trading strategy was adopted to trade the subject commodity for the out-of-sample period from 01 January 1999 through 31 October 2008, yielding 1,596 trades out of 2,460 observations. We also test the distribution of the out-of-sample period to determine the continuity of the distribution established for the in-sample period.
Summary Statistics
The daily return series is generated as follows: R_t = ln(P_t / P_{t-1}) (1), where ln is the natural logarithm operator, R_t is the return for period t, P_t is the closing price for period t and t is the time measured in days.
The descriptive statistics and results of the normality test for the daily returns are presented in Table 1.
The skewness coefficient, being the third moment about the mean divided by the cube of the standard deviation, is -0.0118 for the in-sample period and -0.2615 for the out-of-sample period, and is measured as follows: S = (1/N) Σ_{t=1..N} [(R_t − R̄)/σ̂]^3 (2), where N is the number of observations, R̄ is the mean of the series and σ̂ is an estimator for the standard deviation.
The distribution of the daily lognormal returns appeared to be slightly negatively skewed for the in-sample period, a characteristic that extended to the out-of-sample period.This suggested that wins are small and likely, and losses can be large but are far and few.In other words, there is the occasional large loss at the expense of promising consistent winnings.Negative skewness, although commonly viewed as risky, is not without its own appeal to traders as the occasional large downside can be more than mitigated by the frequent and smaller upside with appropriate trading strategies that deliver robust win/loss ratios.A trading strategy that has a high percentage of wins would generate significant profits in the long run when compounded by a robust win/loss ratio.
The kurtosis coefficient, being the fourth moment about the mean divided by the square of the second moment, is 5.6983 for the in-sample period and 6.2375 for the out-of-sample period, and is calculated as follows: K = (1/N) Σ_{t=1..N} [(R_t − R̄)/σ̂]^4 (3), where N is the number of observations, R̄ is the mean of the series and σ̂ is an estimator for the standard deviation.
This indicates that the in-sample distribution is also more peaked than normal, a characteristic that is also carried forth into the out-of-sample period, a condition otherwise known as leptokurtic.This suggests a not so significant deviation from its mean, which implies less volatility in future returns and lower probability of extreme price movements.This implies lower risks and therefore more stable returns, thus mitigating sharp drawdown risks as is feared with a significant negatively skewed distribution.
The Jarque-Bera (JB) test of normality was employed as a further check, as it is an asymptotic or large-sample test that is appropriate given the large number of observations in this study. For a normally distributed variable, S = 0 and K = 3, hence the Jarque-Bera (JB) test serves to test the joint hypothesis that S = 0 and K = 3, and thus the null hypothesis that the series is normally distributed: JB = N [S^2/6 + (K − 3)^2/24] (4), where N is the sample size, S is the skewness coefficient and K is the kurtosis coefficient.
The JB statistic as computed is 380.4543 with a p-value of 0 for the in-sample period and 1,102.3620 with a p-value of 0 for the out-of-sample period. As is to be expected from our calculation of skewness and kurtosis in the foregoing, the value of the statistic is far from zero and the p-value is zero. Thus, one can reject the null hypothesis of a normal distribution. This only lends more credence to the significance of focusing on the third and fourth moments of the distribution.
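For illustration, the quantities in Eqs. (1)-(4) can be computed along the following lines; the price series used here is invented and the code is only a sketch of the calculation, not the analysis pipeline used in this study.

```python
# Minimal numpy sketch of Eqs. (1)-(4): log returns, skewness, kurtosis and the
# Jarque-Bera statistic. `closes` is a hypothetical array of daily closing prices.
import numpy as np

def moments_and_jb(closes):
    r = np.log(closes[1:] / closes[:-1])      # Eq. (1): R_t = ln(P_t / P_{t-1})
    n = r.size
    z = (r - r.mean()) / r.std()              # standardised returns (1/N estimators)
    skew = np.mean(z ** 3)                    # Eq. (2): third moment / sigma^3
    kurt = np.mean(z ** 4)                    # Eq. (3): fourth moment / sigma^4 (normal = 3)
    jb = n * (skew ** 2 / 6.0 + (kurt - 3.0) ** 2 / 24.0)   # Eq. (4)
    return r, skew, kurt, jb

closes = np.array([17.2, 17.5, 17.1, 17.8, 18.0, 17.6, 17.9])   # illustrative prices only
_, s, k, jb = moments_and_jb(closes)
print(f"skewness={s:.4f} kurtosis={k:.4f} JB={jb:.4f}")
```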
Insert Table 1: Summary Statistics
In Table 2 the autocorrelation coefficients at various lags are very high for both in-sample and out-of-sample periods, starting at 0.9980 and 0.9990 at the first lag and only declining to 0.9880 and 0.9940 at the 5th lag respectively. Autocorrelations up to 5 lags for both periods are also individually statistically significantly different from zero, since they are all outside the 95% confidence bounds.
We also tested the statistical significance of the autocorrelation coefficients by using the Ljung-Box (LB) statistic. The LB statistic is defined as: LB = T(T + 2) Σ_{j=1..m} ρ̂_j^2/(T − j) (5), where T is the sample size and ρ̂_j is the j-th autocorrelation.
The LB statistic tests the joint hypothesis that all the p k up to certain lag lengths are simultaneously equal to zero.
From Table 2 the value of the LB statistic up to 5 lags is 6,205 for in-sample and 12,235 for out-of-sample.
There is also zero probability of obtaining such an LB value under the null hypothesis that the sum of the 5 squared estimated autocorrelation coefficients is zero. Accordingly, one can conclude significant time dependence in the return series, due perhaps to some form of market inefficiency. This suggests that trends and reversal tendencies are present and can be detected. As a result, patterns in short-term price changes can be exploited for significant profits by the intelligent use of mechanical trading methods.
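The autocorrelations and the LB statistic of Eq. (5) can be reproduced with a few lines of code; this is a generic sketch, not the exact routine used for Table 2.

```python
# Sketch of the sample autocorrelation and the Ljung-Box statistic of Eq. (5)
# for the first m lags of a series x.
import numpy as np

def autocorr(x, lag):
    x = np.asarray(x, dtype=float)
    xd = x - x.mean()
    return np.dot(xd[:-lag], xd[lag:]) / np.dot(xd, xd)

def ljung_box(x, m=5):
    T = len(x)
    rho = np.array([autocorr(x, j) for j in range(1, m + 1)])
    q = T * (T + 2.0) * np.sum(rho ** 2 / (T - np.arange(1, m + 1)))
    return rho, q          # compare q against a chi-squared distribution with m d.o.f.
```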
Mechanical Trading Model
Having established that the distribution of the contract in question is negatively skewed and leptokurtic, we next determine an appropriate trading strategy to employ.Many strategies exist to trade a negatively skewed market, with statistical arbitrage and convergence trading among the more common or popular methods.
This model seeks to initiate a trade when the current price breaks above the previous high or below the previous low.So if one is trading on the basis of daily time frames as is envisaged in this paper, one would be comparing the current price with the previous day high and previous day low.
This can be expressed as follows: Buy if P_t ≥ max(P_{t-1}) + 1 tick (6); Sell if P_t ≤ min(P_{t-1}) − 1 tick (7), where P_t is the price at time t, max(P_{t-1}) and min(P_{t-1}) are the previous day's high and low, and 1 tick refers to the minimum price fluctuation of the contract.
Once a trade is entered, say to buy one contract, in accordance with the rule specified in the foregoing, hold until an opposite signal is given by the market (again in accordance with the rules above) to sell.When that occurs, sell two contracts -one to square the earlier position and the other to simultaneously enter a new short position.
The process is then repeated every time the rule is triggered.
Insert Table 3: Examples of Trade Selection
As a result net position is always one contract, long or short, at any one time.This also means no adding to positions when consecutive long signals or consecutive short signals are given by the model.
The model works on the premise that the breaking of a prior high signifies new buying interest which in turn will drive prices even higher.Conversely, the breaking of a prior low is indication of renewed selling interests which would then force prices down.Being aligned with the flow complements the high incidence of wins given by the contract characteristics as determined in subsection 2.2.Accordingly, if the rule takes you long in the market, remain in that position until the rule takes you out.This will allow the market to work on your trade and more importantly allow one to be constantly in the market so as to be able to ride the big move when it comes as opposed to trying to "second-guess" when that might be, thus increasing the likelihood of achieving a robust win/loss ratio.
It is further assumed that one is able to buy at the ask and sell at the bid, with no slippage given that the i) contract in question is liquid and its bid-ask spread had been consistently 1 tick difference for most parts, and ii) bid-ask volume can easily absorb your trade size (in this paper this isn't an issue since we are looking at only one contract).In other words, one can hope to get in and out of a trade at relative ease.
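A stripped-down backtest of the rule in Eqs. (6) and (7) might look as follows. The use of daily closes as fill prices, the contract multiplier and the per-trade cost are simplifying assumptions of this sketch, so the numbers it produces are illustrative only.

```python
# Hedged sketch of the stop-and-reverse breakout rule: go long one contract when
# price exceeds the previous day's high by one tick, reverse to short when it
# falls below the previous day's low by one tick. Net position is always +/- 1.
import numpy as np

def backtest(high, low, close, tick=0.01, multiplier=1000.0, cost_per_round_turn=30.0):
    position, entry, trades = 0, 0.0, []
    for t in range(1, len(close)):
        buy_level = high[t - 1] + tick          # Eq. (6)
        sell_level = low[t - 1] - tick          # Eq. (7)
        if close[t] >= buy_level and position <= 0:
            if position < 0:                    # square the short, book the trade
                trades.append((entry - close[t]) * multiplier - cost_per_round_turn)
            position, entry = 1, close[t]       # simultaneously enter a new long
        elif close[t] <= sell_level and position >= 0:
            if position > 0:                    # square the long, book the trade
                trades.append((close[t] - entry) * multiplier - cost_per_round_turn)
            position, entry = -1, close[t]      # simultaneously enter a new short
    return np.array(trades)

# trades = backtest(high, low, close); trades.sum() then gives the net profit in dollars.
```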
Results from Trading the Model
The model generated net profits for every single year in the out-of-sample period from January 1999 to October 2008, for long and short positions alike. Combined, long and short trades netted profits of USD590,690, USD323,220 and USD267,470 respectively for the period sampled. Such sterling results were achieved on the back of robust win/loss ratios compounded by a high probability of winning trades.
Transaction costs were assumed to be USD30 per round turn.Note also that the average profit per trade of USD770 can more than cover any transaction costs and still be profitable.Consequently, slippage from execution, if any, is unlikely to have a material negative impact on profits.
Win/loss ratio is defined as gross win/gross loss and averaged 4 times for all trades, indicating the dollar value of winnings was 4 times that of losses in the period sampled.Long trades performed better than short trades, averaging 5 times compared to 4 times for shorts.The lowest win/loss ratio in a given year was still a healthy 2 times.Such robust win/loss ratios can be attributed to the efficacy of the trading model.The trading model as enumerated in section 3 is designed such that a position once initiated and profitable is allowed to run thus maximising its profit potential.Consequently, the model was able to exploit the opportunities offered by the market, thus compounding the many "small" wins envisaged by the distribution characteristics of the contract.Such favourable win/loss ratios are certain to result in longer term profitability for any trading model that has an even chance of winning, more so when we have a high probability of wins as is the case here.
% winning trades is defined as the total number of winning trades/total number of trades and was 55% of a total of 798 trades taken during the sample period.It is further noted that annual trades generated by the model were winning at least half the time to two thirds of the time.It must also be pointed out that % wins are less encouraging in short trades primarily because crude oil was on a long term uptrend for much of the period sampled.However, profits were still achieved every single year on short trades due to a robust win/loss ratio.
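As an illustration, the win/loss ratio and % winning trades defined above can be computed from a list of per-trade profits; the dollar amounts below are invented.

```python
# Illustrative computation of the performance measures defined in the text.
import numpy as np

trades = np.array([820.0, -310.0, 1200.0, 450.0, -150.0, 930.0])   # hypothetical P&L per trade
gross_win = trades[trades > 0].sum()
gross_loss = -trades[trades < 0].sum()
win_loss_ratio = gross_win / gross_loss        # Gross Win / Gross Loss
pct_wins = (trades > 0).mean() * 100           # % winning trades
print(round(win_loss_ratio, 2), round(pct_wins, 1))
```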
Long trades were more profitable, averaging 61% wins for the sample period.Again as explained in subsection 2.2, the subject contract exhibited significant time dependencies.By aiming to capitalise on new buying and new selling interests the trading model was able to capture the trends and reversal tendencies displayed, thus resulting in a high probability of wins.
The maximum continuous winning streak was 9 trades, totalling USD60,240, while the maximum continuous losing streak was 6 trades, totalling USD10,620. Consistent with the leptokurtic nature described in subsection 2.2, drawdown was kept in check. Together with the strong win/loss ratios and higher probability of wins, the trading model provides one with positive expectations and hence the confidence to trade.
There is no open position as the last trade was closed out at the end of the sample period.
Insert Tables 4, 5 and 6. To be sure, one can improve the results by employing risk/money management strategies such as trade sizing, trailing stops, pyramiding, etc. Results can also be enhanced by the use of appropriate filters in the trade rules and by employing other confirmation signals, which is beyond the scope of this paper.
Results Evaluated Against the Weak Form EMH
As described above, the model had consistently generated positive excess returns. The question remains whether these excess returns were due to the efficacy of the model or happened by chance, and, if they were due to the efficacy of the model, just how significant they are. To address these concerns, we evaluated the results in the context of the framework developed by Peterson & Leuthold (1982), a framework that essentially evolved from the works of Samuelson (1965), Mandelbrot (1963, 1966) and Fama (1970).
As any mechanical trading system must yield zero profits under weak-form efficient market conditions, the null hypothesis must be zero and any non-zero result deemed a contradiction. Additionally, a zero benchmark seemed in order given that futures trading is a zero-sum game, Leuthold (1976). Further, Samuelson (1965) argued that "on average … there is no way of making an expected profit" and Fama (1970) also ruled out excess profits under the assumptions of the weak-form efficient market. Bachelier (1900) also concluded that "the mathematical expectation of the speculator is zero" and he described this condition as a "fair game." Accordingly, the following hypothesis is tested: Ho: mean profit = 0; Ha: mean profit ≠ 0. A trading system that consistently produces losses can just as easily be used to consistently produce profits by adopting a contrarian approach, i.e. by simply buying on a sell signal and selling on a buy signal. Such a move would obviously result in an opposite effect of the same magnitude.
Accordingly, a two-tailed Z-test is chosen to measure the significance: Z = (X̄ − μ)/√(s²/n), where X̄ is the actual mean gross profit/loss from the model, μ is the expected mean gross profit/loss (= 0), s² is the variance of gross profits per trade and n is the number of round-turn trades.
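A sketch of this test, applied to an array of per-trade gross profits (hypothetical input), is:

```python
# Two-tailed Z-test of the null hypothesis that the mean profit per trade is zero.
import numpy as np
from scipy import stats

def z_test(gross_profits_per_trade, mu0=0.0):
    x = np.asarray(gross_profits_per_trade, dtype=float)
    n = x.size
    z = (x.mean() - mu0) / np.sqrt(x.var(ddof=1) / n)
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))    # two-tailed p-value
    return z, p
```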
As tabulated in Tables 4, 5 and 6 the calculated z-statistic for combined trades, long trades and short trades in the sample period was 9.43, 8.61 and 5.34 respectively.For all years in the sample period, combined trades generated net profits significantly different from 0 at the 1% level.
Long trades also exhibited the same results as above for all years.Short trades were more erratic but most years were still significant, at least at the 10% level.
The results indicate that the null hypothesis should be rejected, at least at the 10% level. The ability of the model to generate significant excess profits suggests non-random price movements and, accordingly, it can be concluded that the Light Sweet Crude Oil futures failed the weak-form efficiency test.
Conclusion
This study confirmed that it is possible to match the distribution of a contract with an appropriate trading strategy to provide a competitive edge. Simple trading rules that are complementary to the distribution of the contract, when consistently applied, can systematically produce excess profits in the long run. The appeal of mechanical trading methods also lies in the fact that they set explicit rules and remove, or at least minimize, guesswork and emotion, thereby making simulation easy.
To be sure there can be more than one trading rule that matches any given distribution and vice versa.Perhaps the successful trader differs from the unsuccessful one, not because of the superiority of one model over another, but because he or she has found a model that is in-tune with his or her basic personality, outlook and experience sets.Because these models of market success are drawn from our fundamental views and aversions, I suspect they are far less amenable to modification than is commonly appreciated, which explains why market participants can and do get different results from trading identical models.
Insert Table 4: Trading Results of Combined Trades
Insert Table 5: Trading Results of Long Trades Only
Insert Table 6: Trading Results of Short Trades Only
Notes: (1) Net Profit = Gross Profit − Transaction Costs; (2) Transaction costs are assumed to be USD30 per round turn; (3) Gross Profit = Gross Win − Gross Loss; (4) Gross Win = Gross Total Dollar Value of Winning Trades; Gross Loss = Gross Total Dollar Value of Losing Trades; Win/Loss Ratio = Gross Win/Gross Loss; Total Number of Trades = Total Number of New Trades; % Win Trades = Total Number of Winning Trades/Total Number of Trades; Mean Profit = Average Gross Profit per Trade; Z-Test is two-tailed.
Table 1 .
Light Sweet Crude Oil Futures -Summary Statistics
Table 2 .
Light Sweet Crude Oil Futures -Auto-Correlation Notes: p (lag) refers to the first 5 autocorrelations for the return series; AC refers to autocorrelation; Q-statistic refers to the Ljung-Box statistic; * denotes statistical significance at the 5% level.
Table 3 .
Examples of Trade Selection
Table 4 .
Light Sweet Crude Oil Futures -Trading Results of Combined Trades (Notes as above)
Table 5 .
Light Sweet Crude Oil Futures -Trading Results of Long Trades Only
Table 6 .
Light Sweet Crude Oil Futures -Trading Results of Short Trades Only | 2017-09-09T19:44:38.191Z | 2010-04-15T00:00:00.000 | {
"year": 2010,
"sha1": "0ba5110239b9b190cc57019df0780abf01d03871",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/ijef/article/download/3851/4670",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "0ba5110239b9b190cc57019df0780abf01d03871",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
119197316 | pes2o/s2orc | v3-fos-license | Extended-soft-core Baryon-Baryon ESC08 model III. S=-2 Hyperon-hyperon/nucleon Interaction
This paper presents the Extended-Soft-Core (ESC) potentials ESC08c for baryon-baryon channels with total strangeness S=-2. The potential models for S=-2 are based on SU(3) extensions of ESC potential models for the S=0 and S=-1 sectors, which are fitted to experimental data. Flavor SU(3)-symmetry is broken only kinematically by the masses of the baryons and the mesons. For the S=-2 channels no experimental scattering data exist, and also the information from hypernuclei is rather limited. Nevertheless, in the fit to the S=0 and S=-1 sectors information from the so called NAGARA event and the $\Xi$-well-depth has been used as constraints to determine the free parameters in the simultaneous fit of the $NN \oplus YN \oplus YY$ data. Therefore, the potentials for the S=-2 sector are determined mainly by the NN-, YN-data, and SU(3)-symmetry.Various properties of the potentials are illustrated by giving results for scattering lengths, effective ranges, bound states, elastic and inelastic phase parameters, and total cross sections. Notably is the prediction of a bound state D$^*$ in the $\Xi N(^3S_1,I=1)$-channel with a binding energy $B_E=1.56$ MeV. This state is deuteron-like, i.e. a member of the $\{10^*\}$-decuplet. As for the normal deuteron $D= pn(^3S_1-^3D_1)$ the strong tensor force is responsible for this state. The features of $\Xi$ hypernuclei predicted by ESC08c are studied on the basis of the G-matrix approch. The well-depth $U_\Xi= -7.0$ MeV, and the $\Xi -\Lambda\Lambda$ conversion width is $\Gamma_\Xi^c= 4.5$ MeV.
I. INTRODUCTION
This paper is the third in a series, following [1,2] (henceforth referred to as I and II respectively), on the results and predictions of the Extended-Soft-Core (ESC) model for low-energy baryon-baryon interactions. It presents the next phase in the development of the ESC-models and is the follow-up of the ESC04-models [3][4][5] and the ESC08a,b-models [6] for S = 0, −1, −2. In [7] the Nijmegen soft-core one-boson-exchange (OBE) interactions NSC97a-f for baryon-baryon (BB) systems for S = −2, −3, −4 were presented.
For the S=-2 YY and YN channels hardly any experimental scattering information is available, and also the information from hypernuclei is very limited. There are data on double ΛΛ-hypernuclei, which recently became very much improved by the observation of the Nagara-event [8]. This event indicates that the ΛΛ-interaction is rather weak, in contrast to the estimates based on the older experimental observations [9,10].
In the absense of experimental scattering information, we assume that the potentials obey (broken) flavor SU(3) symmetry. As in I and II, the potentials are parametrized in terms of meson-baryon-baryon, and meson-pair-baryon-baryon couplings and gaussian form factors as well as diffractive couplings. This enables us to include in the interaction one-boson-exchange (OBE), two-pseudoscalar-exchange (TME), and mesonpair-exchange (MPE), and diffractive contributions without any new parameters. All parameters have been fixed by a simultaneous fit to the NN and YN data, with the constraints imposed (i) for ΛΛ( 1 S 0 ) from the NAGARA event, and (ii) for the well-depth U Ξ . The latter is assumed to be attractive and is the main reason for the occurrence of the "deuteron-like" state in the ESC08-model. For the procedure see the description in I and II. This way, each NN ⊕ YN -model leads to a YY-model in a well defined way, and the predictions for the ΛΛ-and ΞN -channels contain no ad hoc free parameters. We have choosen for ESC08c the options: SU(3)-symmetry for of coupling constants, and pseudovector coupling for the pseudoscalar mesons. (In ESC04 also alternative options were investigated, but it appeared that there is no reason to choose any of these.) Then, SU(3)-symmetry allows us to define all coupling constants needed to describe the multi-strange interactions in the baryon-baryon channels occurring in {8} ⊗ {8}. Quantum-chromodynamics (QCD) is, as is generally accepted now, the physical basis of the strong interactions. Since in QCD the gluons are flavor blind, SU(3)-symmetry is a basic symmetry, which is broken by the chiral-symmetry-breaking at low energies. This picture supports our assumptions, stated above, on SU(3)-symmetry. As is shown in [3,4] the coupling constants and the F/(F + D)-ratio's used in the ESC04-models follow the predictions of the 3 P 0 -pair creation model (QPC) [11] rather closely. The same is the case for the ESC08-models, see paper I and Ref. [12] for details. Now, it has been shown that in the strong-coupling Hamiltonian lattice formulation of QCD, the flux-tube model, that this is indeed the dominant picture in flux-tube breaking [13]. Therefore, since the ESC-models are very much in line with the Quark-model and QCD, the predictions for the S = −2-channels should be rather realistic.
The material in this paper is organized by the following considerations: Most of the details of the SU(3) description are well known. In particular for baryon-baryon scattering the details can be found in papers I, II, and e.g. [7,14,15]. Here we restrict ourselves to a minimal exposition of these matters that is necessary for the readability of this paper. Therefore, in Sec. II we first review for S = −2 the baryon-baryon multi-channel description, and present the SU(3)-symmetric interaction Hamiltonian describing the interaction vertices between mesons and members of the J P = (1/2) + baryon octet, and define their coupling constants. We then identify the various channels which occur in the S = −2 baryonbaryon systems. In appendix A the potentials on the isospin basis are given in terms of the SU(3)-irreps. In most cases, the interaction is a multi-channel interaction, characterized by transition potentials and thresholds. Details were given in [7,14]. For the details on the pair-interactions, we refer to paper I and II [1,2]. In Sec. III we give a general treatment of the problem of flavor-exchange forces, which is very helpful to understand the proper treatment of exchange forces and the treatment of baryon-baryon channels with identical particles. In Sec. V we describe briefly the treatment of the multi-channel threshods in the potentials. In Sec. VI we present the results of the ESC08c potentials for all the sectors with total strangeness S = −2. We give the couplings and F/(F + D)-ratio's for OBE-exchanges of ESC08c. Similarly, tables with the pair-couplings are shown in appendix B. We give the S-wave scattering lengths, discuss the possibility of bound states in these partial waves. Also, we give the S-matrix information for the elastic channels in terms of the Bryan-Klarsfeld-Sprung (BKS) phase parameters [16][17][18], or in the Kabir-Kermode (KK) [19] format. Tables with the BKS-phase parameters are displayed in appendix C. Such information is very useful for example for the construction of the Λ-, Σ-, and Ξ-nucleus potentials. We also give results for the total cross sections for all leading channels.
Important differences among the different versions of the ESC-models appear in the ΞN sectors. Table XXV in Ref. [4] demonstrates that ESC04a,b (ESC04c,d) lead to repulsive (attractive) Ξ potentials in nuclear matter. In ESC08c, and also ESC08a/b/a" [20], the ΞN interactions is attractive enough to produce various Ξ hypernuclei. A notable advantage of ESC08 over ESC04 is the occurrence of a "deuteron-like" bound state in the I=1 channel, which is accessible in a K − K + -transition Ξ-production experiment at JPARC. Therefore, it is very interesting to study the ESC08 interactions in the G-matrix approach to baryonic matter. In Sec. VII, we represent the ΞN G-matrix interactions derived from ESC08c as densitydependent local potentials. Here, structure calculations for Ξ hypernuclei are performed with use of Ξ-nucleus folding potentials obtained from the G-matrix interactions. It is discussed how the features of ESC08c appear in the level structure of Ξ hypernuclei. We conclude the paper with a summary and some final remarks in Sec. VIII.
II. CHANNELS, POTENTIALS, AND SU(3) SYMMETRY
A. Multi-channel Formalism
In this paper we consider the baryon-baryon reactions with total strangeness S = −2. Like in Refs. [14,15], we will for the YN-channels also refer to A1 and A2 as particles 1 and 3, and to B1 and B2 as particles 2 and 4. For the kinematics and the definition of the amplitudes, we refer to paper I [3] of this series. Similar material can be found in [15]. Also, in paper I the derivation of the Lippmann-Schwinger equation in the context of the relativistic two-body equation is described.
On the physical particle basis there are five charge channels, with total charge q = +2, +1, 0, −1, −2. Like in [14,15], the potentials are calculated on the isospin basis. For the S = −2 systems there are three isospin channels: (ΛΛ, ΞN, ΣΣ) with I = 0, (ΞN, ΣΛ, ΣΣ) with I = 1, and ΣΣ with I = 2. For the kinematics of the reactions and the various thresholds, see [14]. In this work we do not solve the Lippmann-Schwinger equation, but the multi-channel Schrödinger equation in configuration space, completely analogous to [15]. The multi-channel Schrödinger equation for the configuration-space potential is derived from the Lippmann-Schwinger equation through the standard Fourier transform, and the equation for the radial wave function is of the form given in [15], where A_{i,j} contains the potential, nonlocal contributions, and the centrifugal barrier, while B_{i,j} is only present when non-local contributions are included. The solution in the presence of open and closed channels is given, for example, in Ref. [21]. The inclusion of the Coulomb interaction in the configuration-space equation is well known and included in the evaluation of the scattering matrix.
Obviously, the potentials on the particle basis for the q = 2 and q = −2 channels are given by the I = 2 ΣΣ potential on the isospin basis. For q = 0 and q = ±1, the potentials are related to the potentials on the isospin basis by an isospin rotation. Using the indices a, b, c, d for ΛΛ, ΞN, ΣΛ, and ΣΣ respectively, we have the rotation relations (2.5) for q = 0 and (2.6) for q = +1 [22]. Here, when necessary, an isospin label is added in parentheses.
The momentum-space and configuration-space potentials for the ESC-model have been described in [3] for baryon-baryon in general. Therefore, they apply also to hyperon-nucleon and we can refer for that part of the potential to paper I. Also, in the ESC-model the potentials are of such a form that they are exactly equivalent in both momentum space and configuration space. The treatment of the mass differences among the baryons is handled exactly as is done in [14,15]. Also, exchange potentials related to strange-meson exchanges K, K*, etc., can be found in these references.
The baryon mass differences in the intermediate states for the TME- and MPE-potentials have been neglected for YN-scattering. This, although possible in principle, becomes rather laborious and is not expected to change the characteristics of the baryon-baryon potentials.
B. Potentials and SU(3) Symmetry
We consider all possible baryon-baryon interaction channels, where the baryons are the members of the The baryon masses, used in this paper, are given in Table V. The meson nonets can be written as where the singlet matrix P sin has elements η 0 / √ 3 on the diagonal, and the octet matrix P oct is given by and where we took the pseudoscalar mesons with J P = 0 + as a specific example. For the other mesons the octet matrix is obtained by the following substitutions: Introducing the following notation for the isodoublets, where we again took the pseudoscalar mesons as an example, dropped the Lorentz character of the interaction vertices, and introduced the charged-pion mass to make the pseudovector coupling constant f dimensionless. All coupling constants can be expressed in terms of only four parameters. The explicit expressions can be found in Ref. [14]. The Σ-hyperon is an isovector with phase chosen such [23] that (2.12) This definition for Σ + differs from the standard Condon and Shortley phase convention [24] by a minus sign. This means that, in working out the isospin multiplet for each coupling constant in Eq. (2.11), each Σ + entering or leaving an interaction vertex has to be assigned an extra minus sign. However, if the potential is first evaluated on the isospin basis and then, via an isospin rotation, transformed to the potential on the physical particle basis (see below), this extra minus sign will be automatically accounted for. In appendix A, Table XVI and Table XVII we give the relation between the potentials on the isospin-basis, see (2.5)-(2.6), and the SU(3)-irreps.
Given the interaction Hamiltonian (2.11) and a theoretical scheme for deriving the potential representing a particular Feynman diagram, it is now straightforward to derive the one-meson-exchange baryon-baryon potentials. We follow the Thompson approach [25][26][27][28] and expressions for the potential in momentum space can be found in Ref. [15]. Since the nucleons have strangeness S = 0, the hyperons S = −1, and the cascades S = −2, the possible baryon-baryon interaction channels can be classified according to their total strangeness, ranging from S = 0 for NN to S = −4 for ΞΞ. Apart from the wealth of accurate NN scattering data for the total strangeness S = 0 sector, there are only a few, and not very accurate, YN scattering data for the S = −1 sector, while there are no data at all for the S < −1 sectors. We therefore believe that at this stage it is not yet worthwhile to explicitly account for the small mass differences between the specific charge states of the baryons and mesons; i.e., we use average masses, isospin is a good quantum number, and the potentials are calculated on the isospin basis. The possible channels on the isospin basis are given in (2.3).
However, the Lippmann-Schwinger or Schrödinger equation is solved for the physical particle channels, and so scattering observables are calculated using the proper physical baryon masses. The possible channels on the physical particle basis can be classified according to the total charge Q; these are given in (2.2). The corresponding potentials are obtained from the potential on the isospin basis by making the appropriate isospin rotation. The matrix elements of the isospin rotation matrices are nothing else but the Clebsch-Gordan coefficients for the two baryon isospins making up the total isospin. (Note that this is the reason why the potential on the particle basis, obtained from applying an isospin rotation to the potential on the isospin basis, will have the correct sign for any coupling constant on a vertex which involves a Σ + .) In order to construct the potentials on the isospin basis, we need first the matrix elements of the various OBE exchanges between particular isospin states. Using the iso-multiplets (2.9) and the Hamiltonian (2.11) the isospin factors can be calculated. The results are given in Table I, where we use the pseudoscalar mesons as a specific example. The entries contain the flavor-exchange operator P f , which is +1 for a flavor symmetric and −1 for a flavor anti-symmetric two-baryon state. Since two-baryons states are totally anti-symmetric, one has P f = −P x P σ . Therefore, the exchange operator P f has the value P f = +1 for even-L singlet and odd-L triplet partial waves, and P f = −1 for odd-L singlet and even-L triplet partial waves. In order to understand Table I fully, we have given in the following section Sec. III a general treatment of exchange forces. This treatment shows also how to deal with the case where the initial/final state involves identical particles and the final/inition state does not.
Second, we need to evaluate the TME and the MPE exchanges. The method we used for these is the same as for hyperon-nucleon, and is described in [4], Sec. IID.
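As an illustration of the statement above that the isospin rotation matrices are built from Clebsch-Gordan coefficients, the short sympy sketch below constructs the q = 0 ΞN rotation. The particle ordering and the explicit I3 assignments are assumptions of this example, not a transcription of Eqs. (2.5)-(2.6).

```python
# Hedged sketch: the q = 0 ΞN isospin rotation from Clebsch-Gordan coefficients.
# Ξ0 carries i3 = +1/2, Ξ- carries i3 = -1/2; p carries +1/2, n carries -1/2.
from sympy import S
from sympy.physics.quantum.cg import CG

half = S(1) / 2
# columns: |Ξ0 n>, |Ξ- p>; rows: total isospin I = 0, 1 (both with I3 = 0)
particle_states = [(half, -half), (-half, half)]       # (i3 of Ξ, i3 of N)
rotation = [[CG(half, i3_xi, half, i3_n, I, 0).doit()
             for (i3_xi, i3_n) in particle_states]
            for I in (0, 1)]
for I, row in zip((0, 1), rotation):
    print(f"|Xi N, I={I}> =", row[0], "|Xi0 n> +", row[1], "|Xi- p>")
# A potential on the particle basis then follows as V_particle = U^T V_isospin U,
# with U the orthogonal matrix whose rows are printed above.
```

For I = 0 this reproduces the combination (|Ξ0 n> − |Ξ- p>)/√2 used later in the text.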
III. EXCHANGE FORCES
The proper treatment of the flavor-exchange forces is for the S ≤ −2-channels more difficult than for the S = 0, −1-channels. The extra complication is the occurrence of couplings between channels with identical and channels with non-identical particles. In order to understand the several √ 2-factors, see [7], we give here a systematic treatment of the flavor-exchange potentials. The method followed is using a multi-channel framework, which starts starts by ordering the two-particle states by assigning A i and B i for the channel labeled with the index i, like in eq. (2.1). The particles A i and B i have CM-momenta p i and −p i , spin components s A,i and s B,i . The two-baryon states |A i B i and |B i A i are considered to be distinct, leading to distinct two-baryon channels. The 'direct' and the 'exchange' T-amplitudes are given by the T-matrix elements and similarly for the direct and flavor-exchange potentials V d and V e . It is obvious from rotation invariance that A similar definition (3.1) and relation (3.2) apply for the direct and flavor-exchange potentials V d and V e . We no- tice that in interchanging A and B there is no exchange of momenta or spin-components, see Fig. 2. This is necessary for the application of Lippmann-Schwinger type of integral equations, which can produce only one type of spectral function e.g. ρ(s, t). So, the momentum transfer for V d and for V e is the same. Viewed from the coupledchannel scheme this is the standard situation. The integral equations with two-baryon unitarity, e.g. the Thompson-, Lippmann-Schwinger-equation etc., read for the T d -and T e -operator These coupled equations can be diagonalized by introducing the T ± -and V ± -operators which, as follows from (3.3), satisfy separate integral equations Notice that on the basis of states with definite flavor symmetry the T ± and V ± matrix elements are also given by A. Identical Particles Sofar, we considered the general case where A i = B i for all channels. In the case that A i = B i for some i, one has B i A i |V e |A i B i = 0, because there is no distinct physical state corresponding to the 'flavor exchange-state'. For example for a flavor single channel like pp one deduces from (3.3) that then also T e = 0, and one has in this case the integral equation where the labels i and j now denote e.g. the spincomponents.
B. Coupled ΛΛ and ΞN system
This multi-channel system represents the case where there is a mixture of channels with identical and with non-identical particles. The three states we distinguish are |ΛΛ⟩, |ΞN⟩, and |N Ξ⟩. Choosing this ordering, the potential, written as a 3 × 3 matrix, is given in (3.9). With a similar notation for the T-matrix, the Lippmann-Schwinger equation can be written compactly as a 3 × 3 matrix equation (3.10). Next, we make a transformation to states which are either symmetric or anti-symmetric under particle interchange. Then, according to the discussion above, we can separate them in the Lippmann-Schwinger equation. This is achieved by a transformation to the symmetric/antisymmetric basis; in the transformed basis one obtains the potential (3.12), and of course a similar form is obtained for the T-matrix on the transformed basis. Now, obviously we have that V ΛΛ;ΞN = V ΛΛ;N Ξ and V ΞN ;ΛΛ = V N Ξ;ΛΛ . Therefore, one sees that the even and odd states under particle exchange are decoupled in (3.12). Also (V ΞN ;ΛΛ + V N Ξ;ΛΛ )/ √ 2 = √ 2 V ΞN ;ΛΛ , etc., showing the appearance of the √ 2-factors mentioned before. Indeed, they appear in a systematic way in the multi-channel framework.
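The following minimal sketch, using generic placeholder symbols for the matrix elements, shows how the symmetric/antisymmetric transformation decouples the odd state and produces the √2 enhancement of the ΛΛ-ΞN coupling; it assumes only the symmetries of the 3 × 3 potential quoted above.

```python
# Minimal sketch of the symmetric/antisymmetric basis transformation for the
# coupled {LambdaLambda, XiN, NXi} system, showing how the sqrt(2) factors
# arise.  The symbols are generic placeholders for the potential matrix
# elements, with the symmetries quoted in the text imposed by hand.
from sympy import Matrix, simplify, sqrt, symbols

vLL, vLX, vXX, vXXe = symbols('V_LL V_LX V_XX V_XXe')

# Ordering: |LambdaLambda>, |Xi N>, |N Xi>.  V_LL;XiN = V_LL;NXi, and the
# XiN <-> NXi block contains a direct (vXX) and a flavor-exchange (vXXe) part.
V = Matrix([[vLL,  vLX,  vLX],
            [vLX,  vXX,  vXXe],
            [vLX,  vXXe, vXX]])

# Transformation to {|LL>, (|XiN>+|NXi>)/sqrt(2), (|XiN>-|NXi>)/sqrt(2)}
s = 1 / sqrt(2)
A = Matrix([[1, 0, 0],
            [0, s, s],
            [0, s, -s]])

V_sym = simplify(A * V * A.T)
print(V_sym)
# The coupling of the even (symmetric) state to LambdaLambda comes out as
# sqrt(2)*V_LX, while the odd (antisymmetric) state decouples entirely.
```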
Analyzing the 1 S 0 -state, one has, because of the antisymmetry of the two-fermion state w.r.t. the exchange of all quantum labels, P f = −P σ P x = +1, where P f denotes the flavor symmetry. Taking here the K(495) as a generic example, and using (2.10) and (2.11), one finds the relevant coupling combinations. Then, since |ΞN (I = 0)⟩ = (|Ξ 0 n⟩ − |Ξ − p⟩)/ √ 2, one obtains for the 'direct' potential the coupling (3.16). The same result is found for the 'exchange' potential V N Ξ;ΛΛ . Therefore the total result has indeed the (1+P f )-factor given in Table I, and is identical to Table IV in [7], for (ΛΛ|K|ΞN ). For (ΞN |K|ΣΛ) the entry for I = 1 consists of two parts. These correspond to V d ∝ g ΛN K g ΞΣK and V e ∝ g ΣN K g ΞΛK respectively, i.e. the direct and exchange contributions involve different couplings. Therefore, they are not added together.
D. The η- and π-exchange Potentials
Next, we discuss briefly the calculation of the entries for η- and π-exchange in Table I. First, the entries marked with a long dash indicate that the corresponding physical state does not exist.
TABLE I (caption): Isospin factors for the various meson exchanges in the different total strangeness and isospin channels. P f is the flavor-exchange operator. The I = 2 case only contributes to S = −2 ΣΣ scattering, where the isospin factors can collectively be given by (ΣΣ|η, η ′ , π|ΣΣ) = (1/2)(1 + P f ), and so they are not separately displayed in the table. Non-existing channels are marked by a long dash.
Next we give further specific remarks and calculations:
a. For η, η ′ -exchange one has that V e = 0. The matrix elements for the ΛΛ- and ΞN -states are easily seen to be correct. For the ΣΣ-states one has P f = 1 for I ΣΣ = 0, 2, and P f = −1 for I ΣΣ = 1. This explains the ΣΣ matrix element.
b. For ΞN |π|ΞN the calculation is identical to that for NN, in particular pn.
c. For ΣΣ|π|ΣΣ consider the I = 0, I 3 = 0 and I = 1, I 3 = 0 matrix elements. In these cases one has V e = 0, as one can easily check. Then, using the Cartesian basis, one obtains the results in Table I. With the ingredients given above one can easily check the other entries in Table I.
IV. SHORT-RANGE PHENOMENOLOGY
For a detailed discussion and description of the short-range region we refer to paper II [2].
There, the meson- and diffractive-exchange contributions and the quark-core effects in the ESC08c modeling have been described. In this section we give the quark-core phenomenology for the S=-2 baryon-baryon channels.
A. Relation between S=-2 YN,YY-states and SU fs (6)-irreps
The relation between the SU f (3)-irreps and SU fs (6)-irreps has been derived in paper II [2]. In Appendix A the S=-2 BB-potentials are given in terms of the SU f (3)-irreps. Combining these two results gives the representation of the S=-2 potentials in terms of the SU fs (6)-irreps, as displayed in the accompanying tables.
TABLE II (caption fragment): SU(6) fs -contents of the spin-space odd ( 1 S0, 3 P, 1 D2, ...) potentials on the spin-isospin basis.
B. Parametrization of Quark-core effects
As introduced in II, the repulsive short-range Pomeron-like YN,YY potential is split linearly into a diffractive (Pomeron) and a quark-core component by writing V P BB = V BB (P OM ) + V BB (P B), where V BB (P OM ) represents the genuine Pomeron and V BB (P B) the structural effects of the quark-core forbidden [51]-configuration, i.e. a Pauli-blocking (PB) effect. Since the Pomeron is a unitary singlet its contribution is the same for all BB-channels (apart from some small baryon mass breaking effects), i.e. V BB (P OM ) = V N N (P OM ). Furthermore the PB-effect for the BB-channels is assumed to be proportional to the relative weight of the forbidden [51]-configuration compared to its weight in NN, where a P B denotes the quark-core fraction w.r.t. the Pomeron potential for the NN-channel, i.e. V N N (P B) = a P B V P N N . This fixes the quark-core component in each BB channel, Eq. (4.3). A subtle treatment of all BB channels according to this linear scheme is characteristic of the ESC08c-model. The value of the PB factor a P B is determined in the fit to the NN- and YN-data. The parameter a P B turns out to be about 27.5%. This means that the quark-core repulsion is roughly 34% of the genuine Pomeron repulsion. The PB effects in the S=-2 channels are then entirely determined. From Eqn. (4.3) the ratio V P BB /V P N N is given by the weights of the [51]-irrep and a P B . In Table IV B we give this ratio for the various S=-2 BB channels in the ESC08c model. With only one exception, the effective Pomeron repulsion is stronger than in the NN-channels.
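A minimal sketch of the scaling described above is given below; the [51]-configuration weights entered here are hypothetical placeholders (the actual weights follow from the SU fs (6) decomposition), and the function name is ours.

```python
# Minimal sketch of the Pauli-blocking scaling described above: the quark-core
# part of the short-range repulsion in a BB channel is taken proportional to
# the relative weight of the forbidden [51]-configuration compared to NN.
# The [51] weights below are placeholder numbers, not the ESC08c values.
def pomeron_plus_quark_core(v_pom_nn, a_pb, w51_bb, w51_nn):
    """Return (genuine Pomeron part, quark-core part) for a BB channel.

    v_pom_nn : total Pomeron-like NN repulsion V^P_NN
    a_pb     : quark-core fraction in NN (fitted to be about 0.275 in the text)
    w51_bb   : weight of the forbidden [51] configuration in the BB channel
    w51_nn   : the same weight in NN
    """
    v_pb_nn = a_pb * v_pom_nn              # V_NN(PB) = a_PB * V^P_NN
    v_pom = v_pom_nn - v_pb_nn             # V_BB(POM) = V_NN(POM), unitary singlet
    v_pb_bb = (w51_bb / w51_nn) * v_pb_nn  # proportionality assumption
    return v_pom, v_pb_bb

# Hypothetical example: a channel whose [51] weight is 1.5 times that of NN
v_pom, v_pb = pomeron_plus_quark_core(v_pom_nn=1.0, a_pb=0.275, w51_bb=1.5, w51_nn=1.0)
print(v_pom + v_pb)   # effective repulsion in units of V^P_NN
```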
A. Thresholds
Clearly, the S = −2 two-baryon channels represent a number of separate coupled-channel systems, separated by the charge, see (2.2). A further subdivision is according to the total isospin. The different thresholds have been discussed in detail in [7], and we show these thresholds here in Fig. 1 for the purpose of general orientation. Their presence turns the Lippmann-Schwinger and Schrödinger equation into a coupled-channel matrix equation, where the different channels open up at different energies. In general one has a combination of 'open' and 'closed' channels. For a discussion of the solution of such a mixed system, we refer to [29].
B. Threshold- and Meson-mass corrections in Potentials
As discussed in [7], the one-meson-exchange Feynman graph actually consists of two three-dimensional time-ordered graphs. The energy denominator from these two diagrams is given in (5.1), where √s is the total energy and ω 2 = k 2 + m 2 , with m the meson mass and k = p ′ − p the momentum transfer. From (5.1) it is clear that the potential is energy dependent. We use the static approximation E i → M i and W → M 0 1 +M 0 2 , where the superscript 0 refers to the masses of the lowest threshold of the particular coupled-channel system q, see (2.2). They are in general not equal to the masses M 1 and M 2 occurring in the time-ordered diagrams. For example, for the ΣΣ contribution in the coupled-channel ΛΛ system the lowest threshold is the ΛΛ one, so that W is replaced by 2M Λ , whereas the baryon masses in the time-ordered diagrams are the Σ masses.
For a < 0 there is the extra term +2θ(−a)/(ω 2 − a 2 ) on the r.h.s. in (5.2). This integral representation makes it possible to deal with the energy dependence numerically rather exactly. However, we think that such a sophistication is unnecessary at present, both for a description of the S = −1 scattering data and for S = −2, where there are virtually no data at all. Therefore, we handle this energy dependence approximately as follows: 1. Elastic potentials: In this case we use (5.2), which holds for 0 < a < m. Because of this condition we cannot apply this to the pseudoscalars, but it is possible for the vector-, scalar-, and axial-mesons. The largest effect is for ΛΛ-scattering, where the ΣΣ-channel potential is somewhat reduced by this effect. This is because the ΣΣ-channel is rather far away from the others. In this paper we neglect the effects of a finite a in all elastic channels and for all mesons.
2. Inelastic potentials: In this case, like in [7] and all other papers on the Nijmegen potentials, we use the approximation of [30], exploiting the fact that M 0 1 +M 0 2 is mostly rather close to the average of the initial- and final-state baryon masses. Then the propagator can be rewritten in a form which amounts to introducing an effective meson mass. For more details of this effect on the exchanged meson masses, we refer to [7]. The baryon masses used are about the same as in [7], and are given in Table V. The meson masses used are the same as in paper II [2], as are the cut-off masses.
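To illustrate what 'introducing an effective meson mass' amounts to, a minimal algebraic sketch: assuming the inelastic energy denominator reduces to the 1/(ω 2 − a 2 ) form that appears in the a < 0 remark above (the precise definition of a and the approximation of [30] are contained in Eqs. (5.1)-(5.2), not reproduced here), and using ω 2 = k 2 + m 2 , one has

1/(ω 2 − a 2 ) = 1/(k 2 + m 2 − a 2 ) ≡ 1/(k 2 + m̃ 2 ), with m̃ 2 = m 2 − a 2 ,

so for 0 < a < m the exchanged meson effectively propagates with a reduced mass m̃ < m.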
VI. RESULTS
The main purpose of this paper is to present the properties of the ESC08c potentials for the S = −2 sector. As described above, the free parameters in each model are fitted mainly to the NN and YN scattering data for the S = 0 and S = −1 sectors, respectively. Given the expressions for the coupling constants in terms of the octet and singlet parameters and their values for the six different models as presented in Ref. [14], it is straightforward to evaluate all possible baryon-baryon-meson coupling constants needed for the S ≤ −2 potentials. A complete set of coupling constants for model ESC08c is given in Table VI. In Figs. 3 and 4 we display the OBE potentials for the individual pseudoscalar, vector, scalar, and axial mesons in the case of model ESC08c. In the following we will present the model predictions for scattering lengths, bound states, and cross sections. For I = 1 we have for ESC08c: the scattering lengths are found to be larger in absolute value than in the NSC97 models [7], indicating a more attractive ΛΛ interaction. The old experimental information seemed to indicate a separation energy of ∆B ΛΛ = 4 − 5 MeV, corresponding to a rather strong attractive ΛΛ interaction. As a matter of fact, an estimate for the ΛΛ 1 S 0 scattering length, based on such a value for ∆B ΛΛ , gives a ΛΛ ( 1 S 0 ) ≈ −2.0 fm [31,32]. However, in recent years the experimental information and interpretation of the ground state levels of 6 ΛΛ He, 10 ΛΛ Be, and 13 ΛΛ B [33] have changed drastically. This is because of the Nagara event [8], identified uniquely as 6 ΛΛ He [8], which established that the ΛΛ-interaction is weaker (∆B ΛΛ ≈ 0.7 MeV).
In NSC97 [14] it was only possible to increase the attraction in the ΛΛ channel by modifying the scalar-exchange potential. If the scalar mesons are viewed as being mainly qq states, one finds that the (attractive) scalar-exchange part of the interaction in the various channels satisfies a hierarchy suggesting indeed a rather weak ΛΛ-potential. The NSC97 fits to the YN scattering data [14] give values for the scalar-meson mixing angle which seem to point to almost ideal mixing for the scalars as qq states. We found that an increased attraction in the ΛΛ channel would give rise to (experimentally unobserved) deeply bound states in the ΛN channel. On the other hand, in the ESC-models there are in principle more possibilities because of the presence of meson-pair potentials. As one sees from the values of the a ΛΛ ( 1 S 0 ) in the ESC08c model of this paper, we can produce the apparently required attraction in the ΛΛ interaction without giving rise to ΛN bound states. Notice that also in ESC08 we have ideal scalar mixings, akin to NSC97, as given in [7]. As in [7], for a general orientation, we list in Table IX the bound-state information. In ESC04d a bound state occurred in a {8 a }-state, which was a little bit surprising, because one expects the OBE-potential to be rather repulsive in the irrep {8 a }, see [15]. In the ESC04 models this occurrence was ascribed to the inclusion of the potentials of the axial-vector mesons and the meson pairs. Since ESC04a-c did not show such a bound state it is considered to be accidental. However, the situation in ESC08c is completely different. Here the bound state is in the deuteron-like states where strong tensor forces are present, which cause the binding similarly to the np deuteron. In Fig. 17 the tensor potentials are shown, where it appears that also the ΣΛ tensor potential is important. This is similar to the situation in ΛN below the ΣN -threshold, where a large cusp occurs. The calculated binding energy is B E (D * ) = 1.56 MeV.
C. Partial Wave Phase Parameters
For the BB-channels below the inelastic threshold we use the standard nuclear-bar phase shifts [34] for the parametrization of the amplitudes. The information on the elastic amplitudes above thresholds is most conveniently given using the BKS-phases [16][17][18]. For uncoupled partial waves, the elastic BB S-matrix element is parametrized as S = η e 2iδ , with η = cos(2ρ). In Figs. 13-16 the BKS-phases and coupling parameters (α, β, ϕ) for ESC08c are shown. In Figs. 13 and 15 we also show the 1 S 0 -phases (n.c.) for the case with no coupling to the other two-particle channels. For ΛΛ the n.c.-curve shows that the potential is repulsive, which is mainly due to the {1}-irrep. The attraction comes in particular from the coupling to the ΞN -channel.
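A minimal numerical sketch of this parametrization (function name and the example value are ours) recovers δ and ρ from a given uncoupled S-matrix element:

```python
# Minimal sketch of the uncoupled-wave parametrization quoted above,
# S = eta * exp(2i*delta) with eta = cos(2*rho): recover (delta, rho) from a
# given S-matrix element and check the unitarity bound.  Purely illustrative.
import numpy as np

def bar_phase_and_inelasticity(s_element):
    """Return (delta, rho) in radians for an uncoupled partial wave."""
    eta = abs(s_element)                      # inelasticity, 0 <= eta <= 1
    if eta > 1.0 + 1e-12:
        raise ValueError("non-unitary S-matrix element")
    delta = 0.5 * np.angle(s_element)         # phase shift
    rho = 0.5 * np.arccos(np.clip(eta, -1.0, 1.0))
    return delta, rho

# Hypothetical example: 20 degrees of phase shift with eta = 0.9
S = 0.9 * np.exp(2j * np.deg2rad(20.0))
delta, rho = bar_phase_and_inelasticity(S)
print(np.rad2deg(delta), np.rad2deg(rho))     # ~20 deg, ~12.9 deg
```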
In the Tables XIX-XXV, we give for ESC08c the phases and inelasticity parameters ρ and η 11 , η 12 , η 22 , which enable the reader to construct the N -matrix most directly.
D. Total cross sections
We next present the predictions for the total cross section for several channels. We always assume that both the beam and the target are unpolarized. Therefore, we included the statistical factors, which are 1/4 for the spin-singlet and 3/4 for the spin-triplet case.
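As a trivial but explicit sketch of this spin averaging (the numbers are placeholders, not model predictions):

```python
# Minimal sketch of the spin averaging used for the unpolarized total cross
# sections: weight the spin-singlet contribution by 1/4 and the spin-triplet
# contribution by 3/4.  Input values are placeholders, not model output.
def unpolarized_cross_section(sigma_singlet_mb, sigma_triplet_mb):
    return 0.25 * sigma_singlet_mb + 0.75 * sigma_triplet_mb

print(unpolarized_cross_section(sigma_singlet_mb=40.0, sigma_triplet_mb=20.0))  # 25.0 mb
```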
For those cases where both baryons are charged, we do not include the purely Coulomb contribution to the total cross section, nor do we include the Coulomb interference with the nuclear amplitude. The cross section is calculated by summing the contributions from partial waves with orbital angular momentum up to and including L = 2. We find this to be sufficient for all the S = 0 sectors; inclusion of any higher partial waves has no significant effect. Inclusion of higher partial waves will shift the total cross section to slightly higher values without changing the qualitative picture. In Table X we show the ΛΛ → ΛΛ, ΞN total cross sections as a function of the laboratory momentum p Λ . Being dominantly S-wave, there is in principle a (sharp) cusp at the ΞN -threshold, i.e. p Λ = 344.4 MeV/c, which indeed is visible in the table. In Table X we also show the ΞN → ΞN, ΛΛ total cross sections as a function of the laboratory momentum p Ξ . In Table XI we show the total cross sections for the ΞN → ΞN, ΣΛ and the I = 1, L = 0 ΣΛ → ΣΛ, ΞN, ΣΣ reactions as a function of the laboratory momentum p Ξ . A comparison with the lattice QCD results of [35,36] shows qualitatively very similar results. The exception is the SU(3)-singlet {1}-irrep. Here the LQCD potential is attractive for 0 < r < ∞, whereas in ESC08c there is an attractive pocket for r ≤ 0.5 fm and the potential is repulsive for r > 0.5 fm. This shape is due to the behavior of the spin-spin potentials from pseudoscalar and vector exchange, which have zero volume integrals. In the {1}-irrep for the SU(3)-broken potential (solid line) there is no bound state, i.e. no H-particle [37]. This is in agreement with the recent experimental result studying Υ(1S, 2S)-decay [38]. Next we turn to the G-matrix results for ESC08c. G-matrix calculations are performed with the continuous (CON) choice, where off-shell potentials are taken into account continuously from on-shell ones in intermediate propagations of correlated pairs. A two-body state is specified by spin S, isospin T, and orbital and total angular momenta L and J, respectively. The imaginary parts of G-matrices appear due to energy-conserving transitions from ΞN to ΛΛ channels in the T = 0 1 S 0 and 3 P J states. The conversion width Γ c Ξ is obtained by multiplying the imaginary part of U Ξ by −2. Table XII shows the potential energy U Ξ and its partial-wave contributions at normal density ρ 0 . The U Ξ values turn out to be given by the strong cancellation between attractive contributions in 3 S 1 (T = 0, 1) states and repulsive contributions in 1 S 0 (T = 0, 1) states. Eventually, the values of U Ξ become far less attractive than those of U Λ . The calculated value of Γ c Ξ (ρ 0 ) is also given in Table XII. In Fig. 22, the U Ξ values and partial-wave contributions are drawn as a function of k F . Here, U Ξ (k F ) is shown by a bold curve, and the contributions in the 33 S 1 , 31 S 1 , 13 S 1 and 11 S 1 states are shown by thin curves. The P-state contribution, summed over (S, T, J) states, is shown by a dashed curve.
As in Table XII, we see here the cancellation between attractive contributions in spin-triplet S states and repulsive ones in spin-singlet S states. In particular, the attraction in the 3 S 1 T = 1 state is due to the ΞN -ΛΣ-ΣΣ tensor-coupling interactions in this state. If these tensor parts in this channel are switched off, the value of U Ξ becomes strongly repulsive. On the other hand, the P -state contributions are small.
It should be noted that the U Ξ curve becomes substantially attractive in the low density region due to the strong density dependence. This feature works favorably for Ξ binding energies in light systems.
For applications to finite Ξ systems, the ΞN -ΞN central parts of the complex G-matrix interactions for ESC08c are represented in Gaussian forms, whose coefficients are given as a function of k F . The determined parameters are given in Table XIII. As demonstrated in Ref. [39], the observed spectra of Λ hypernuclei are described successfully with the Λ-nucleus folding potentials derived from the ΛN G-matrix interactions. Here, the same method is applied to Ξ − -nucleus systems. A Ξ-nucleus folding potential in a finite system is obtained by folding G T S (±) (r; k F ) with the core-nucleus density, where (±) denote parity quantum numbers. Here, core nuclei are assumed to be spherical, and the densities ρ(r) and mixed densities ρ(r, r ′ ) are obtained from Skyrme-HF wave functions. The isospin dependence of G T S (±) (r; k F ) leads to the Lane term. In this work, only the diagonal parts of the t Ξ · T c term are taken into account.
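The sketch below illustrates the folding construction for a spherical density and a single real Gaussian form factor; the density parameters, strength, and range are hypothetical placeholders, and the kF dependence, mixed densities, imaginary parts, and Lane term of the actual G-matrix interaction are omitted.

```python
# Minimal sketch of a local folding potential
#   U(r) = integral d^3r' rho(r') G(|r - r'|)
# for a spherical core density and a single Gaussian form factor.
# All parameter values below are hypothetical placeholders.
import numpy as np

def folding_potential(r_pts, rho, g_strength, g_range):
    """Fold a spherical density rho(r') with G(s) = g_strength*exp(-(s/g_range)**2)."""
    u = np.zeros_like(r_pts)
    cos_t = np.linspace(-1.0, 1.0, 201)
    for i, r in enumerate(r_pts):
        # distance |r - r'| on a (cos theta, r') grid, then the angular integral
        s = np.sqrt(r**2 + r_pts[None, :]**2 - 2.0 * r * r_pts[None, :] * cos_t[:, None])
        ang = np.trapz(g_strength * np.exp(-(s / g_range) ** 2), cos_t, axis=0)
        u[i] = 2.0 * np.pi * np.trapz(r_pts**2 * rho * ang, r_pts)
    return u

# Placeholder 12C-like Gaussian density normalized to 12 nucleons
r = np.linspace(0.0, 10.0, 200)
a_den = 1.9                                                   # fm, hypothetical size
rho = 12.0 * np.exp(-(r / a_den) ** 2) / (np.pi ** 1.5 * a_den ** 3)
U = folding_potential(r, rho, g_strength=-30.0, g_range=1.0)  # MeV, fm (hypothetical)
print(U[0])   # central depth in MeV for this toy input
```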
For the k F included in G(r; k F ), we use the averaged-density approximation (ADA): an averaged value of ρ(r) is defined by ρ̄ = ⟨φ Ξ (r)|ρ(r)|φ Ξ (r)⟩, using a Ξ-state wave function φ Ξ (r). Then, an averaged value of k F is given by k̄ F = (1 + α)(1.5π 2 ρ̄) 1/3 . This value k̄ F is put into G(r; k F ) and determined self-consistently for each Ξ state, and α is a parameter fixed by fine tuning to the experimental data. Hereafter, we investigate the two cases α = 0.0 and 0.1. Table XIV shows the results for 1S and 2P bound states in the Ξ − + 12 C and Ξ − + 14 N systems, where Coulomb interactions between Ξ − and 12 C ( 14 N) are taken into account. B Ξ − and ⟨r 2 ⟩ 1/2 are the binding energy and r.m.s. radius of the Ξ − , respectively. Conversion widths Γ c Ξ − come from the imaginary parts included in the T = 0 1 S 0 and 3 P states. The obtained 2P states become unbound when the Coulomb interactions between Ξ − and 12 C ( 14 N) are switched off. Namely, these 2P states are so-called Coulomb-assisted bound states. They are characterized by the fact that the values of ⟨r 2 ⟩ 1/2 are large due to their weak binding, but far smaller than those in Ξ − atomic states. For instance, we have B Ξ − = 0.175 MeV and ⟨r 2 ⟩ 1/2 = 36 fm for the Ξ − + 14 N 3D state. Experimental information on ΞN interactions can be obtained from emulsion events of simultaneous emission of two Λ hypernuclei (twin Λ hypernuclei) from a Ξ − absorption point. The Ξ − produced by the (K − , K + ) reaction is absorbed into a nucleus ( 12 C, 14 N or 16 O in emulsion) from some atomic orbit, and by the subsequent Ξ − p → ΛΛ process two Λ hypernuclei are produced. Then, the energy difference between the initial Ξ − state and the final twin-Λ state gives the binding energy B Ξ − between the Ξ − and the nucleus.
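A minimal sketch of the ADA iteration is given below. The Ξ wave function is replaced by a fixed Gaussian stand-in (so the loop converges immediately); in the actual calculation φ Ξ is obtained by solving the bound-state problem in the k̄ F -dependent folding potential, which provides the feedback that makes the iteration nontrivial. All numerical values are placeholders.

```python
# Minimal sketch of the averaged-density approximation (ADA) loop:
#   rho_bar = <phi_Xi | rho | phi_Xi>,  kF_bar = (1+alpha)*(1.5*pi^2*rho_bar)**(1/3),
# iterated to self-consistency.  The Xi wave function is a fixed Gaussian
# stand-in; a real calculation would recompute it from the kF_bar-dependent
# folding potential at every step.
import numpy as np

r = np.linspace(0.0, 10.0, 400)
a_den = 1.9
rho = 12.0 * np.exp(-(r / a_den) ** 2) / (np.pi ** 1.5 * a_den ** 3)   # placeholder density

def xi_wavefunction(kf_bar, width=2.5):
    """Stand-in for the Xi 1S wave function (normalized); ignores kf_bar."""
    phi = np.exp(-(r / width) ** 2)
    norm = 4.0 * np.pi * np.trapz(r ** 2 * phi ** 2, r)
    return phi / np.sqrt(norm)

alpha = 0.1
kf_bar = 1.0                      # fm^-1, initial guess
for _ in range(50):
    phi = xi_wavefunction(kf_bar)
    rho_bar = 4.0 * np.pi * np.trapz(r ** 2 * phi ** 2 * rho, r)
    kf_new = (1.0 + alpha) * (1.5 * np.pi ** 2 * rho_bar) ** (1.0 / 3.0)
    if abs(kf_new - kf_bar) < 1e-6:
        break
    kf_bar = kf_new
print(kf_bar)   # self-consistent averaged Fermi momentum (fm^-1)
```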
Two events of twin Λ hypernuclei, (I) [40] and (II) [41], were observed in the KEK E-176 experiment, and recently the new event (III) [42] has been observed in the KEK E373 experiment. In the cases of (I) and (II), each event has no unique interpretation of its reaction process. However, it is possible to find a consistent understanding of these two events as follows: The events (I) and (II) were interpreted to be reactions of a Ξ − captured by 12 C. Assuming that the Ξ − is absorbed from the 2P orbit in each case, we obtain consistent reaction assignments for both events. Assuming that the Ξ − is captured from a 2P state, the calculated values of B Ξ − (2P ) in the Ξ − + 12 C system (1.10 and 0.68 MeV for α = 0.0 and 0.1, respectively) turn out to be consistent with the values of 0.65 ∼ 1.00 MeV given by these data. In Ref. [43], this result was used to fit the strength of the ΞN interaction. For these two events, however, the possibility cannot be ruled out that the Ξ − is captured from 3D states. The event (III) is uniquely identified, with B Ξ − = 4.38 ± 0.25 MeV, which is the first clear evidence of a deeply bound state of the Ξ − - 14 N system. This value can be reproduced by taking α = 0.14, assuming the Ξ − is captured from the 1S state. However, it is far more probable that 10 Λ Be is produced in some excited state. In Ref. [42], the excitation energies are taken from the theoretical calculations [44,45], while the ground-state value of B Λ is taken from the emulsion data. Their estimated values of B Ξ − are 1.8 ∼ 2.0 MeV and 1.1 ∼ 1.3 MeV, when 10 Λ Be is in the first and second excited state, respectively. Our calculated values of B Ξ − (2P ), 1.85 and 1.22 MeV for α = 0.0 and 0.1, are within the former and the latter regions, respectively. A similar value of B Ξ − (2P ) was predicted in Ref. [43]. Thus, assuming that the Ξ − is in a 2P state, the experimental values of B Ξ − can be explained reasonably by small tuning of our G-matrix interaction for both possible excitations of 10 Λ Be. It should be noted that, assuming Ξ − captures from 2P states, we obtain a consistent interpretation of the three emulsion events (I), (II) and (III) with use of the G-matrix interaction derived from ESC08c. It is well known that capture probabilities of Ξ − from 2P states are far smaller than those from 3D states. In spite of this fact, twin Λ hypernuclei are produced dominantly after 2P Ξ − captures, as discussed in Ref. [46]. Hereafter, calculations are performed using the G-matrix interactions with α = 0.1. Let us demonstrate the results for the heavier systems 28 Ξ − Mg and 89 Ξ − Sr, produced by p(K − , K + )Ξ − reactions on 28 Si and 89 Y targets, respectively; the results are given in Table XV. The BNL-E885 experiment [47] suggests that the Ξ − single-particle potential in 11 Ξ − Be is given by an attractive Woods-Saxon potential with depth ∼ −14 MeV (called WS14). In this case, the calculated value of B Ξ − (2P ) is 0.41 (0.79) MeV for the Ξ − + 12 C ( 14 N) system, WS14 being slightly less attractive than the above Ξ-nucleus potentials suited to the emulsion events of twin Λ hypernuclei.
In order to investigate the possibility of observing Ξ − hypernuclear states, we calculate the K + spectra of (K − , K + ) reactions on some targets with use of our G-matrix folding potentials. Calculations are performed with the Green's function method in DWIA [48]. In Fig. 23, we show the obtained K + spectra for 12 C and 28 Si targets at forward angles with an incident momentum of 1.65 GeV/c. We can see clearly the peaks of the p- and d-bound states, respectively, in the cases of the 12 C and 28 Si targets. Here, the experimental resolution is assumed to be 2 MeV. Solid and dotted curves are for ESC08c and WS14, respectively. The strong enhancement of the highest-L state in the ESC08c case is due to the k F -dependent effects of the G-matrix interactions. When the k̄ F values for the p and d states are taken to be the same as those for the s states, the obtained spectra for ESC08c become similar to those for WS14. We conclude this section by making some remarks on the inclusion of the three-body repulsive (TBR) and attractive (TBA) interactions for S=-2 systems. In the case of the Λ-hypernuclei in paper II [2] an important conclusion from the G-matrix analysis is that the experimental B Λ values and excited spectra can be reproduced in a natural way by ESC08c. Although the multi-pomeron (MPP) repulsive contributions are decisively important in the high-density region, they should be almost canceled by the three-body attractions (TBA) in the normal-density region.
In the case of the Ξ-hypernuclei it is shown here that the ΞN attraction in ESC08c is consistent with the Ξ-nucleus binding energies given by the emulsion data of the twin Λ-hypernuclei. As in the case of the Λ-hypernuclei, we can expect some role of the MPP+TBA contribution. For a clear analysis, however, the experimental data on B Ξ are too scarce. On the other hand, MPP contributions are essential in the problem of Ξ-mixing in neutron star matter. We defer the discussion and inclusion of the three-body interactions in the S=-2 system, i.e. the ESC08c+ model, to a future paper.
VIII. SUMMARY AND CONCLUSION
The ESC08c model potentials presented here are a major step in constructing the baryon-baryon interactions for scattering and hypernuclei in the context of broken SU(3)-symmetry, using, apart from the Gaussian repulsion from the Pomeron and the inclusion of systematic quark-core effects for all baryon-baryon channels, generalized Yukawa meson exchange for the dynamics. The potentials are based on (i) One-boson-exchanges, where the coupling constants at the baryon-baryon-meson vertices are restricted by the broken SU(3) symmetry, (ii) Two-pseudoscalar exchanges, (iii) Meson-Pair exchanges. Each type of meson exchange (pseudoscalar, vector, axial-vector, scalar) contains five free parameters: a singlet coupling constant, an octet coupling constant, the F/(F +D) ratio α, and a meson-mixing angle. The potentials are regularized with Gaussian cut-off parameters, which provide a few additional free parameters. Although we performed truly simultaneous fits to the NN and YN data, effectively most of these parameters are determined in fitting the rich and accurate NN scattering data, while the remaining ones are fixed by fitting also the (few) YN scattering data. This still leaves enough flexibility to accommodate the imposition of a few extra constraints. As demonstrated here, the assumption of SU(3) symmetry for the couplings then allows us to extend these models to the higher strangeness channels (i.e., YY and all interactions involving cascades), without the need to introduce additional free parameters. Like the NSC97 models, the ESC04 and ESC08 models are very powerful models of this kind, and the very first realistic ones. The most striking prediction of ESC08c is the existence of the S=-2 deuteron D * , below the ΞN -threshold. The width is expected to be small since the decay must be isospin breaking and is electromagnetic and/or weak. The experimental search for baryon-baryon bound states by the Rome-Saclay-Vanderbilt collaboration [49] in the mass range 2.1-2.5 GeV/c 2 was negative. It could be that the resolution in this experiment was insufficient to detect a very narrow state near the ΞN -threshold.
[Figure caption fragment: attractive contributions in spin-triplet states ( 33 S1 and 13 S1) and repulsive ones in spin-singlet states ( 31 S1 and 11 S1) are shown by thin curves; the P-state contribution, summed over (T, S, J), is shown by a dashed curve.]
It is important to emphasize that the existence of the
D * -state is strongly connected to the Ξ-nucleus attraction as indicated by experiments, see [47] and the recent emulsion-experiment results [42]. In one of the ESC04 models, ESC04d, the S=-2 bound state occurred in the ΞN ( 3 S 1 , I = 0)-channel, which is a member of an SU(3) octet {8 a }-irrep. The ESC08c result is much more natural, fitting nicely with the existence of a {10 * } SU(3)-deuteron multiplet.
In order to illustrate the basic properties of these potentials, we have presented results for scattering lengths, possible bound states in S-waves, and total cross sections. Although the different versions ESC04 and ESC08 reproduce the NN and YN data well, there are considerable differences. In the NN-sector the quality of the fit to the NN-data of the ESC08-models is superior to that of the ESC04-models. Also, they lead to notable differences in the hypernuclear structures, especially in S = −2 systems. A typical example can be seen in their ΞN sectors: the derived Ξ-nucleus potentials are different from each other even qualitatively. It is quite important that the ESC04d and ESC08a,b,c solutions predict the existence of Ξ-hypernuclei consistently with the indication given by the BNL-E885 experiment. For a discussion of the Ξ-nucleus attraction in the case of ESC04 and ESC08a,b we refer to [5] and [20], respectively. The Ξ-nucleus attraction derived from ESC08c is due to the fact that the ΞN interaction in the 3 S 1 ( 33 S 1 ) state is substantially attractive. This feature is intimately related to the tensor potential giving a strong Lane term. We finally mention that these ESC08 potentials also provide an excellent starting point for calculations and predictions of multi-strange systems. The extension of this work to the S = −3, −4 systems, i.e. comprising all {8} ⊗ {8} baryon-baryon states, will be the topic of the last paper (IV) in this series.
[Table fragment: space-spin symmetric states 3 S1, 1 P1, 3 D, ...] | 2015-04-10T10:56:41.000Z | 2015-04-10T00:00:00.000 |
"year": 2015,
"sha1": "1dfa1725094d7e8fca0da9334c164b15d52fb37b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1dfa1725094d7e8fca0da9334c164b15d52fb37b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3054791 | pes2o/s2orc | v3-fos-license | Off-label indications for antidepressants in primary care: descriptive study of prescriptions from an indication based electronic prescribing system
Objective To examine off-label indications for antidepressants in primary care and determine the level of scientific support for off-label prescribing. Design Descriptive study of antidepressant prescriptions written by primary care physicians using an indication based electronic prescribing system. Setting Primary care practices in and around two major urban centres in Quebec, Canada. Participants Patients aged 18 years or older who visited a study physician between 1 January 2003 and 30 September 2015 and were prescribed an antidepressant through the electronic prescribing system. Main outcome measures Prevalence of off-label indications for antidepressant prescriptions by class and by individual drug. Among off-label antidepressant prescriptions, the proportion of prescriptions in each of the following categories was measured: strong evidence supporting use of the prescribed drug for the respective indication; no strong evidence for the prescribed drug but strong evidence supporting use of another drug in the same class for the indication; or no strong evidence supporting use of the prescribed drug and all other drugs in the same class for the indication. Results 106 850 antidepressant prescriptions were written by 174 physicians for 20 920 adults. By class, tricyclic antidepressants had the highest prevalence of off-label indications (81.4%, 95% confidence interval, 77.3% to 85.5%), largely due to a high off-label prescribing rate for amitriptyline (93%, 89.6% to 95.7%). Trazodone use for insomnia was the most common off-label use for antidepressants, accounting for 26.2% (21.9% to 30.4%) of all off-label prescriptions. For only 15.9% (13.0% to 19.3%) of all off-label prescriptions, the prescribed drug had strong scientific evidence for the respective indication. For 39.6% (35.7% to 43.2%) of off-label prescriptions, the prescribed drug did not have strong evidence but another antidepressant in the same class had strong evidence for the respective indication. For the remaining 44.6% (40.2% to 49.0%) of off-label prescriptions, neither the prescribed drug nor any other drugs in the class had strong evidence for the indication. Conclusions When primary care physicians prescribed antidepressants for off-label indications, these indications were usually not supported by strong scientific evidence, yet often another antidepressant in the same class existed that had strong evidence for the respective indication. There is an important need to generate and provide physicians with evidence on off-label antidepressant use to optimise prescribing decisions.
Introduction
Antidepressant use has increased substantially in the UK 1 2 and in other western countries such as Canada 3 and the USA. 4 In fact, the number of antidepressants dispensed in England increased by 3.9 million (6.8%) between 2014 and 2015-more than any other therapeutic class of prescription drugs. 2 One suspected factor underlying the widespread use of antidepressants is an expanding array of indications for these drugs, many of which are unapproved (off-label) for certain antidepressants. 5 There is a lack of epidemiological evidence on the extent to which physicians prescribe antidepressants for off-label indications because treatment indications are not documented for most prescriptions. 6 With the advent of electronic prescribing (e-prescribing) systems, however, formal documentation of treatment indications linked to prescriptions (that is, indication based prescribing) is possible. Although indication based prescribing is not broadly used at the moment, it represents a valuable means for studying off-label prescribing. 7 We recently used data from a unique, indication based e-prescribing system to describe treatment indications for antidepressants in primary care. 8
What is already known on this topic
Off-label drug use without strong scientific evidence is associated with an increased risk of adverse drug events
About a third of all antidepressants in primary care are prescribed for off-label indications
The degree to which off-label antidepressant prescriptions are supported by strong scientific evidence is unknown
What this study adds
Most off-label antidepressant prescriptions lack strong scientific evidence, but another evidence based antidepressant from the same class could often be considered as an alternative
There is an important need to produce more evidence evaluating the clinical outcomes associated with off-label antidepressant use
Indication based electronic prescribing systems represent an effective means to study off-label antidepressant use and communicate evidence back to physicians to optimise prescribing decisions
We found that over the past decade, primary care physicians commonly and increasingly prescribed antidepressants for non-depressive indications. Moreover, when antidepressants were not prescribed for depression, two of three prescriptions were for an off-label indication.
Off-label prescribing warrants particular attention and oversight when the drug use is not supported by scientific evidence showing greater benefits relative to risk. 9 10 Inefficacious antidepressant use is a concern because it creates unnecessary costs and puts patients at risk of experiencing burdensome side effects and serious adverse events that could be avoided. For example, even though newer generation antidepressants such as selective serotonin reuptake inhibitors (SSRIs) are considered safer and more tolerable than the older generation tricyclic antidepressants (TCAs), they are costly and have still been associated with notable side effects and safety concerns. These side effects include sexual dysfunction, drowsiness, insomnia, weight gain, and fatigue, [11][12][13] and safety concerns include an increased risk of fractures 14 and upper gastrointestinal bleeds. 15 16 Off-label antidepressant use could also expose patients to unknown health risks if their clinical characteristics differ from the patient populations studied in pre-market clinical trials. 17 Indeed, the risk of adverse drug events has been found to be 54% higher when drugs are used off-label without strong scientific evidence than when drugs are used on-label. 18 Although an estimated 29% of antidepressants are prescribed for off-label indications, 8 it is unknown to what extent these off-label prescriptions are supported by scientific evidence. Thus, the objective of this study was to examine off-label indications for antidepressants in primary care and assess the level of scientific evidence supporting these off-label prescriptions.
Study design and setting
This descriptive study took place in the Canadian province of Quebec, where a universal health insurance programme covers the cost of essential medical care for all residents. By law, all residents must be covered for prescription drugs through either private plans (that is, group or employee benefit plans) or the public drug insurance plan. About 50% of residents are registered in the public drug insurance plan, including those older than 65, welfare recipients, and those not insured through an employer. At a minimum, all private plans must provide the same formulary for insured drugs as the public drug insurance plan. 19
Data source and study population
The Medical Office of the XXIst Century (MOXXI) is an electronic prescription and drug management system used by consenting primary care physicians in community based, fee-for-service practices around two major urban centres in Quebec. 20 Since 2003, 207 physicians (25% of eligible physicians) and over 100 000 patients (26% of all who visited a MOXXI physician) have consented to participate in the MOXXI programme and have their information used for research purposes.
The e-prescribing tool in the MOXXI system requires physicians to explicitly record at least one treatment indication per prescription by either using a dropdown menu that lists on-label and off-label indications (without distinction) or typing the indication(s) into a free text field. In a validation study, 21 these physician documented indications had excellent sensitivity (98.5%) and high positive predictive value (97.0%) when compared with a blinded, post hoc, physician facilitated chart review. The MOXXI system also provides physicians with access to professional drug monographs that are maintained by a commercial vendor 22 and produces automated drug alerts about potential prescribing problems. Alerts are generated when potential dosing errors or drug-drug, drug-disease, drug-age, or drug-allergy contraindications are identified; however, alerts are not generated when drugs are prescribed for off-label indications. This study was approved by the McGill institutional review board.
Inclusion and exclusion criteria
This study included prescriptions of drugs approved for depression that were written by MOXXI physicians between 1 January 2003 and 30 September 2015 for patients aged 18 years or older. The antidepressant prescription was the unit of analysis. We excluded drugs with fewer than 150 prescriptions during the study period (roughly corresponding to a prescribing frequency of fewer than once per month). This resulted in the exclusion of all monoamine oxidase inhibitors (phenelzine, tranylcypromine, moclobemide, and isocarboxazid), nefazodone, maprotiline, and vortioxetine.
Measurements
On-label versus off-label indications
Treatment indications were first categorised by use of ICD-10 (international classification of diseases, 10th revision). Each prescription-representing a drug-indication pair-was then classified as on-label or off-label, depending on whether the drug had been approved for the indication by Health Canada or the US Food and Drug Administration as of September 2015 (the end of the study period). Approved indications were determined at the end of the study period rather than the year in which the prescription was written so that all prescriptions would be classified using the same benchmark. If a physician recorded multiple indications for the drug (n=1922, 1.8% of all antidepressant prescriptions), the prescription was classified as off-label only if all the indications were not approved.
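A minimal sketch of this classification rule is given below; the approved-indication table is a tiny hypothetical excerpt, not the Health Canada/FDA list used in the study.

```python
# Minimal sketch of the on-/off-label classification rule described above: a
# prescription (one drug, one or more documented indications) is off-label
# only if none of its indications is approved for that drug.
# The approved-indication table is a tiny hypothetical excerpt.
APPROVED = {
    "trazodone": {"depression"},
    "amitriptyline": {"depression"},
    "duloxetine": {"depression", "generalised anxiety disorder", "neuropathic pain"},
}

def is_off_label(drug, indications):
    approved = APPROVED.get(drug, set())
    return all(ind not in approved for ind in indications)

print(is_off_label("trazodone", ["insomnia"]))                # True
print(is_off_label("amitriptyline", ["pain", "depression"]))  # False: one approved indication
```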
Level of scientific evidence for off-label prescriptions
Off-label prescriptions were further analysed according to the level of scientific evidence supporting the drug's use for the off-label indication. Off-label prescriptions were assigned to one of three categories: strong evidence for the prescribed drug, no strong evidence for the prescribed drug but strong evidence for another drug in the same class, or no strong evidence for the prescribed drug and all other drugs in the same class.
To determine whether off-label prescriptions had strong evidence for the prescribed drug, we used the DRUGDEX compendium (Thomson Micromedex), 23 which is a reputable and authoritative reference used by the US Centers for Medicare and Medicaid Services to determine coverage for off-label drug uses. 24 The compendium contains evaluations of drug efficacy, strength of recommendation, and strength of evidence for off-label drug-indication pairs.
Using the same criteria as in previous studies, 7 18 25 we classified off-label prescriptions as having strong evidence for the prescribed drug if evidence showed that the drug was effective or favoured efficacy for the indication, the drug was recommended for all or most patients with the indication, and at least one randomised clinical trial was included among the studies used to evaluate the drug's efficacy for the indication. If an off-label prescription did not have strong evidence for the prescribed drug, we then determined whether there was strong evidence for another drug in the same class. This condition was satisfied if another drug in the same class was either on-label or off-label with strong evidence for the indication. If an off-label prescription still did not have strong evidence for another drug in the class, then the prescription was classified as having no strong evidence for the prescribed drug and all other drugs in the same class.
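A minimal sketch of the three-way classification is given below; the lookup tables are illustrative placeholders, not DRUGDEX content.

```python
# Minimal sketch of the three-way evidence classification applied to each
# off-label prescription: (1) strong evidence for the prescribed drug,
# (2) no strong evidence for the drug but approval or strong evidence for
# another drug in the same class, (3) no strong evidence for any drug in the
# class.  The lookup tables below are illustrative placeholders.
DRUG_CLASS = {"citalopram": "SSRI", "escitalopram": "SSRI", "fluoxetine": "SSRI"}
APPROVED = {("escitalopram", "generalised anxiety disorder")}
STRONG_EVIDENCE = {("fluoxetine", "panic disorder")}

def classify_off_label(drug, indication):
    if (drug, indication) in STRONG_EVIDENCE:
        return "strong evidence for prescribed drug"
    same_class = [d for d, c in DRUG_CLASS.items()
                  if c == DRUG_CLASS.get(drug) and d != drug]
    if any((d, indication) in STRONG_EVIDENCE or (d, indication) in APPROVED
           for d in same_class):
        return "strong evidence for another drug in the class"
    return "no strong evidence for the class"

print(classify_off_label("citalopram", "generalised anxiety disorder"))
# -> "strong evidence for another drug in the class" (via escitalopram in this toy table)
```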
Statistical analysis
Patient and physician characteristics were summarised by use of descriptive statistics. The prevalence of off-label indications was estimated as the number of off-label prescriptions divided by the total number of antidepressant prescriptions overall, in the class, or for the individual drug. We estimated the level of scientific evidence for off-label prescriptions as the number of off-label prescriptions in each evidence category divided by the total number of off-label antidepressant prescriptions overall or in the class. The prevalence of different treatment indications for each drug was estimated as a proportion, using the total number of prescriptions for the drug as the denominator. For all proportions, we calculated 95% confidence intervals using a cluster bootstrap approach 26 to account for within-cluster correlation among prescriptions for the same patient and from the same physician. The reported 95% confidence intervals correspond to the values of the 2.5th and 97.5th percentiles of the distribution of the respective estimates across 1000 bootstrap re-samples. 26 All analyses were conducted using SAS (SAS Institute) software, version 9.4.
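A minimal sketch of a cluster bootstrap percentile interval for a proportion is given below; it resamples one level of clustering (physicians) on simulated data, whereas the study accounted for clustering by both physician and patient.

```python
# Minimal sketch of a cluster bootstrap percentile interval for a proportion:
# resample clusters (here, physicians) with replacement, recompute the
# off-label proportion from all prescriptions of the sampled clusters, and
# take the 2.5th/97.5th percentiles.  Data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_physicians = 50
# Simulated data: for each physician, (number of prescriptions, number off-label)
n_rx = rng.integers(20, 200, size=n_physicians)
n_off = rng.binomial(n_rx, 0.3)

def cluster_bootstrap_ci(n_rx, n_off, n_boot=1000):
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(n_rx), size=len(n_rx))   # resample physicians
        estimates.append(n_off[idx].sum() / n_rx[idx].sum())
    return np.percentile(estimates, [2.5, 97.5])

point = n_off.sum() / n_rx.sum()
lo, hi = cluster_bootstrap_ci(n_rx, n_off)
print(f"{point:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```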
Patient involvement
No patients were involved in setting the research question or the study measures, nor were they involved in developing plans for the design or implementation of the study. No patients were asked to advise on interpretation or writing up of results. The study findings will be disseminated to study participants through physician newsletters and patient-friendly handouts.
Results
During the study period, 106 850 antidepressant prescriptions (5.8% of 1.83 million prescriptions for any drug) were written by 174 primary care physicians for 20 920 adults. There was roughly an equal number of male (n=90; 52%) and female (n=84; 48%) physicians, most of whom had been trained in North America (n=160; 92%) and practicing for at least 15 years (n=131; 75%). Two thirds of patients were female (n=13 990; 66.9%), most patients were middle aged at the time of their earliest antidepressant prescription (median 53 years, interquartile range 43-65), and patients were equally likely to have public (n=10 875; 52.0%) or private (n=10 045; 48.0%) drug insurance. Over the study period, patients had a median of three (interquartile range 1-7) antidepressant prescriptions and were prescribed a median of one (1-2) type of antidepressant drug.
Prevalence of off-label indications
Overall, 29.3% (95% confidence interval 26.6% to 32.3%) of all antidepressant prescriptions were written for an off-label indication (table 1). By class, TCAs had the highest prevalence of off-label indications (81.4%, 77.3% to 85.5%), followed by other antidepressants (trazodone, bupropion, and mirtazapine; 42.4%, 37.1% to 47.7%) and SSRIs (21.8%, 19.0% to 25.0%). By contrast, the prevalence of off-label indications was much lower for serotonin-norepinephrine (noradrenaline) reuptake inhibitors (SNRIs; 6.1%, 4.8% to 7.5%). The high prevalence of off-label indications for TCAs was mostly due to amitriptyline, which was only approved for depression but was almost exclusively prescribed for off-label indications (93.0%, 89.6% to 95.7%)-most commonly pain (48.4%, 39.7% to 57.8%), insomnia (22.5%, 13.6% to 31.3%), and migraine (16.7%, 12.2% to 21.9%; table 2). The high prevalence of off-label indications among other antidepressants (trazodone, bupropion, and mirtazapine) was largely due to trazodone, which was mostly prescribed for insomnia (82.5%, 74.5% to 88.1%) even though it was not approved for this indication. SSRIs and SNRIs had a lower prevalence of off-label indications because they were more frequently prescribed for depression than TCAs, which by definition was an approved indication for all antidepressants (table 2).
Level of scientific evidence for off-label indications
Among all off-label antidepressant prescriptions, there were 143 unique drug-indication pairs-the most common of which were trazodone for insomnia (representing 26.2%, 95% confidence interval 21.9% to 30.4%, of all off-label prescriptions), citalopram for anxiety (17.8%, 14.8% to 21.3%), amitriptyline for pain (13.8%, 11.0% to 16.9%), and amitriptyline for insomnia (6.4%, 3.9% to 9.5%; data not shown). Only three of these 143 off-label drug-indication pairs met the predefined criteria 7 18 25 for having strong scientific evidence: amitriptyline (a TCA) for pain, escitalopram (an SSRI) for panic disorders, and venlafaxine (an SNRI) for obsessive compulsive disorder.
Off-label antidepressant prescriptions had strong evidence for another drug in the same class-but not the prescribed drug-in 39.6% (95% confidence interval 35.7% to 43.2%) of all cases (table 1). This proportion was highest among off-label SSRI prescriptions (92.0%, 89.2% to 94.4%), and lower among off-label prescriptions for SNRIs (35.4%, 25.0% to 46.7%) and TCAs (28.3%, 20.5% to 36.6%). This proportion was not assessed for other antidepressants because trazodone, bupropion, and mirtazapine were not considered as part of the same class.
For the remaining 44.6% (95% confidence interval 40.2% to 49.0%) of off-label antidepressant prescriptions, neither the prescribed drug nor any other drug in the same class had strong evidence for the indication (table 1). All off-label prescriptions for other antidepressants (trazodone, bupropion, and mirtazapine) were classified in this evidence category. The proportion of off-label prescriptions with no scientific support for any drug in the class was also quite high for SNRIs (53.7%, 40.6% to 66.6%) and TCAs (26.0%, 21.2% to 31.1%), but was much lower for SSRIs (3.3%, 2.0% to 4.8%).
Discussion
This study provides evidence on the level of scientific support for off-label antidepressants prescriptions, the prevalence of off-label indications for individual antidepressants, and the most common off-label uses for antidepressants. Nearly a third (29%) of all antidepressants in this study were prescribed for an off-label indication, as found previously. 8 Among all off-label antidepressant prescriptions, only one in six prescriptions was supported by strong scientific evidence, but there was often another antidepressant in the same class with strong evidence that could have been considered instead, especially among off-label SSRI prescriptions. Still, nearly half of all off-label antidepressant prescriptions did not have strong evidence for the prescribed drug and all other antidepressants in the same class. Among the many off-label uses for antidepressants, physicians most frequently prescribed trazodone for insomnia even though this use was not evidence based.
Comparison with other studies
Few published studies exist on off-label prescribing, owing to challenges associated with measuring diagnoses (indications) for prescriptions. Compared with our findings where 29% of antidepressant prescriptions were off-label, Chen and colleagues 27 found that 75% of people enrolled in Georgia Medicaid who were being treated with antidepressants received at least one antidepressant off-label. The rate of off-label antidepressant use was notably higher in this study because the authors classified prescriptions as off-label if the patient did not have a diagnostic code for an approved indication recorded in administrative claims data during the same year. This methodology most likely overestimated the off-label prescribing rate because diagnostic codes in administrative data are often incomplete or inaccurate, especially for psychiatric conditions. 28 Only three studies (one Canadian 7 and two US 25 29 ) have used documented treatment indications to study off-label prescribing, none of which focused specifically on antidepressants. Eguale and colleagues 7 combined antidepressants with other central nervous system drugs but reported fairly comparable results, with 26% of prescriptions for off-label indications, 18% of which were supported by strong evidence.
[Table 1 footnotes: SSRI, SNRI, and TCA abbreviations; 'other antidepressants' comprises trazodone, bupropion, and mirtazapine; denominators by class; cluster bootstrap 95% confidence intervals (2.5th and 97.5th percentiles of 1000 re-samples); DRUGDEX-based criteria for strong evidence; definition of prescriptions with strong evidence for another drug in the class.]
[Table 2 footnotes: OCD and ADHD abbreviations; percentages calculated with the total number of prescriptions for the drug as the denominator; cluster bootstrap 95% confidence intervals; approved indications are those listed by Health Canada or the US FDA as of September 2015; the anxiety category includes generalised anxiety disorder and other anxiety disorders but excludes panic disorder, phobias, OCD, and post-traumatic stress disorder.]
Radley and colleagues 29 combined antidepressants with anxiolytic and antipsychotic drugs, but again reported a similar off-label prescribing rate of 31%. However, the proportion of off-label prescriptions with strong scientific support in this study was notably lower than ours at only 6%, possibly due to the inclusion of other psychiatric drugs or because evidence to support some off-label antidepressant uses had not been generated at the time of the analysis. Finally, Walton and colleagues 25 presented results for only five antidepressants but similarly found that amitriptyline and trazodone were the antidepressants most frequently prescribed for off-label indications. However, their off-label prescribing rate was notably lower for amitriptyline (69%) and trazodone (43%) than our rates, possibly reflecting inter-country differences in the use of antidepressants versus other drugs to treat pain and insomnia.
In all of these studies, none of the authors assessed the proportion of off-label antidepressant prescriptions where the prescribed drug did not have strong evidence but another antidepressant from the same class existed that had strong evidence for the respective indication.
Potential explanations for off-label prescribing
Several contextual factors could contribute to physicians prescribing antidepressants for off-label indications. Firstly, the vast and increasing number of drugs on the market makes it challenging for physicians to keep track of which indications are approved for specific products, 30 especially when pharmaceutical companies have been known to promote drug use for off-label indications. 31 Secondly, constraints such as the list of drugs included on patients' health plan formularies could influence which drugs physicians prescribe, especially if physicians presume that drugs in the same class are interchangeable. 32 33 For example, in our setting, escitalopram was not covered for patients enrolled in the public drug insurance plan. We found that when study physicians prescribed SSRIs to patients with public drug insurance, they infrequently prescribed escitalopram (4.7% of all SSRI prescriptions for patients with public drug insurance) but frequently prescribed citalopram (51.4%). However, for patients with private drug insurance, study physicians equally prescribed escitalopram and citalopram (29.3% and 31.7% of all SSRI prescriptions for patients with private drug insurance, respectively).
Thirdly, primary care physicians might prescribe antidepressants off-label because alternative treatments for a given indication are contraindicated or perceived as higher risk medications. For example, benzodiazepines and Z drugs such as zolpidem and zaleplon have been shown to be efficacious for treating insomnia. 34 However, these drugs have been labelled as potentially inappropriate treatments for older adults, and if prescribed, could even negatively affect providers' quality and performance measures. 35 Many physicians who are concerned about the health of their older patients might consequently prescribe trazodone instead because they believe it is a safer treatment.
Finally, many off-label indications for antidepressants are symptom based conditions for which few approved drug treatments exist. Primary care physicians could be struggling to find effective treatments for these conditions and thus prescribe antidepressants as a last resort, indicating a gap in needed pharmacotherapy.
implications of findings
For both primary care physicians and specialists (since specialists could initiate antidepressant treatment that is then continued by a primary care physician), our findings emphasise the importance of considering the level of evidence supporting risk-benefit when prescribing an antidepressant, especially if the drug is known to have important adverse side effects. 36 When evidence to support efficacy is lacking, physicians should exercise caution, prescribe conservatively, and inform patients of this information via a shared decision making process. 36 This ideal, however, is challenging to achieve because physicians face time constraints, the drug market and scientific literature are vast and ever-evolving, and many physicians find it challenging to critically appraise and interpret the results of epidemiological studies. 37 Indication based e-prescribing systems that are integrated with clinical decision support tools could help overcome these obstacles by notifying physicians when drugs are being prescribed off-label without supporting evidence and providing them with access to concise, up-to-date summaries of the available evidence. Providing the public with access to patient friendly resources about the level of scientific evidence supporting different treatment options for a given indication could further facilitate the decision making process between physicians and patients.
Our finding that among off-label prescriptions, 40% were for indications where the prescribed drug did not have strong evidence but another drug in the same class was approved or supported by strong evidence is clinically important. Many physicians might view this type of off-label prescribing as different from off-label prescribing without scientific evidence for the entire class because they assume that drugs within the same class are interchangeable. 38 39 However, class effects cannot be assumed because even slight differences in chemical structure between drugs can alter their pharmacodynamic and pharmacokinetic properties, leading to clinically relevant differences in efficacy and risk. 39 For example, statins have been shown to differ not only in efficacy 40 but also in safety, as demonstrated by the withdrawal of cerivastatin from the market in 2001 because the risk of rhabdomyolysis was 10 times higher for cerivastatin than other statins. 41 Clinical guidelines recommend that when physicians select a particular drug to prescribe, they should consider the level of scientific evidence supporting the specific drug. 42 It should not be assumed that all drugs within a class are likely to be efficacious for treating an indication when one member of the class has proven efficacy.
Finally, more evidence is needed on the clinical outcomes associated with antidepressant use for off-label indications. However, within a context of limited resources, it is unlikely that randomised clinical trials will be conducted for each off-label drug-indication pair, especially for older drugs that are no longer owned by an innovator company. 9 Thus, in addition to randomised clinical trials, post-market drug surveillance systems represent valuable resources for assessing off-label antidepressant use. Such systems face challenges associated with measuring treatment indications and patient reported outcomes, but these challenges could be overcome by increasing the use of indication based e-prescribing systems and electronic health records that track patient outcomes. Indeed, this study demonstrates the benefits that indication based prescribing can have towards addressing knowledge gaps around off-label antidepressant prescribing.
strengths and limitations
The key strength of this study is that it included more than 12 years of antidepressant prescriptions from an e-prescribing system where physicians systematically documented treatment indications at the point of prescribing. However, study participants were from one Canadian province where prescribers were generally younger and patients were generally older with more health complexities. 43 These characteristics could influence the generalisability of our findings, because younger physicians are more likely to prescribe drugs off-label without scientific evidence, and patients with more health complexities are less likely to receive off-label prescriptions. 7 Another study strength is that physicians were unlikely to have altered their true responses when recording indications in the e-prescribing system because the dropdown menu did not distinguish between on-label and off-label indications for a drug. On the other hand, we could not identify when physicians consciously prescribed antidepressants off-label. Indeed, a portion of antidepressants in this study might have been prescribed off-label for a specific reason (eg, patient experienced side effects to another drug in the same class, or formulary restrictions).
study considerations
Firstly, our estimates of off-label antidepressant prescribing were conservative because we did not consider other aspects of off-label drug use (eg, dose, frequency, duration of treatment), and we used the approved indications and available evidence at the end of the study period. Secondly, we presumed that approved indications for drugs were backed by strong scientific evidence, which might not have been true in some cases given that the quality of clinical trial evidence used by regulatory agencies as the basis for approving new therapeutics and supplemental indications has been shown to vary widely. 44 45 Thirdly, to identify evidence based off-label uses for antidepressants, we used pre-established criteria that have been used in other studies. 7 18 25 However, our list of evidence based antidepressants for each indication might not always be identical to the recommendations from clinical guidelines. For example, recommendations from two national guidelines for managing anxiety related disorders 42 46 are similar but slightly more inclusive than ours. Finally, because regulatory bodies in North America and Europe are not entirely harmonised in their list of approved indications for drugs, slight discrepancies in the rate of off-label antidepressant use could exist between North America and Europe.
Conclusions
By using information from an indication based e-prescribing system, we found that when primary care physicians prescribed antidepressants for off-label indications, the prescribed drug was usually not supported by strong evidence for the respective indication. However, there was often another drug in the same class with strong evidence that could have been considered. These findings highlight an urgent need to produce more evidence on the risks and benefits of off-label antidepressant use and to provide physicians with this evidence at the point of prescribing. Technologies such as indication based e-prescribing systems and electronic health records have the potential to become essential components of effective post-market drug surveillance systems for monitoring and evaluating off-label antidepressant use. By integrating these technologies with knowledge databases and clinical decision support tools, they could also provide an effective means for communicating evidence back to physicians to optimise prescribing decisions.
We thank Claude Dagenais (academic adviser, Faculty of Pharmacy, University of Montréal) for reviewing the manuscript and providing substantive comments.
Contributors: JW extracted and had full access to all of the study data and takes responsibility for the integrity of the data and the accuracy of the data analysis. JW contributed to the study design; analysis and interpretation of the data; drafting of the manuscript; and critical revision of the manuscript for important intellectual content. AM and RT contributed to the study design; analysis and interpretation of the data; and critical revision of the manuscript for important intellectual content. MA and TE contributed to the analysis and interpretation of the data; and critical revision of the manuscript for important intellectual content. DB contributed to the interpretation of the data and critical revision of the manuscript for important intellectual content. All authors read and approved the final manuscript. JW is the guarantor. Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: support from the Canadian Institutes of Health Research for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.
Ethical approval: This study was approved by the McGill institutional review board.
Data sharing: No additional data available.
The lead author affirms that the manuscript is an honest, accurate, and transparent account of the study being reported, and that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained. This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work | 2017-09-08T15:59:22.736Z | 2017-02-21T00:00:00.000 | {
"year": 2017,
"sha1": "edff87bc2f027f8ccbb8fea3b4e4cfbcda385851",
"oa_license": "CCBYNC",
"oa_url": "https://www.bmj.com/content/bmj/356/bmj.j603.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ea3e8bb06a7e98593a88133a9861543286a2b5d4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257532687 | pes2o/s2orc | v3-fos-license | ES-ENAS: Efficient Evolutionary Optimization for Large Hybrid Search Spaces
In this paper, we approach the problem of optimizing blackbox functions over large hybrid search spaces consisting of both combinatorial and continuous parameters. We demonstrate that previous evolutionary algorithms which rely on mutation-based approaches, while flexible over combinatorial spaces, suffer from a curse of dimensionality in high dimensional continuous spaces both theoretically and empirically, which thus limits their scope over hybrid search spaces as well. In order to combat this curse, we propose ES-ENAS, a simple and modular joint optimization procedure combining the class of sample-efficient smoothed gradient techniques, commonly known as Evolutionary Strategies (ES), with combinatorial optimizers in a highly scalable and intuitive way, inspired by the one-shot or supernet paradigm introduced in Efficient Neural Architecture Search (ENAS). By doing so, we achieve significantly more sample efficiency, which we empirically demonstrate over synthetic benchmarks, and are further able to apply ES-ENAS for architecture search over popular RL benchmarks.
Introduction and Related Work
We consider the problem of optimizing an expensive blackbox function f : (M, R^d) → R, where M is a combinatorial search space consisting of potentially multiple layers of categorical and discrete variables, and R^d is a high dimensional continuous search space, consisting of potentially hundreds to thousands of parameters. Such scenarios broadly encompass the space of large non-differentiable networks, particularly useful in the thriving field of Automated Reinforcement Learning (AutoRL) (Parker-Holder et al., 2022), where m ∈ M represents an architecture specification and θ ∈ R^d represents a collection of possible neural network weights, together forming a policy π_{m,θ} : S → A mapping from state space S to action space A, in which the goal is to maximize total reward in a given environment.
There have been a flurry of previous methods for approaching complex, combinatorial search spaces, especially in the evolutionary algorithm domain, including the well-known NEAT (Stanley and Miikkulainen, 2002). More recently, the neural architecture search (NAS) community has also adopted a multitude of blackbox optimization methods for dealing with NAS search spaces, including policy gradients via Pointer Networks (Vinyals et al., 2015) and more recently Regularized Evolution (Real et al., 2018). Such methods have been successfully applied to applications ranging from image classification (Zoph and Le, 2017) to language modeling (So et al., 2019), and even algorithm search/genetic programming (Co-Reyes et al., 2021). Combinatorial algorithms allow huge flexibility in the search space definition, which allows optimization over generic spaces such as graphs, but many techniques rely on the notion of zeroth-order mutation, which can be inappropriate in high dimensional continuous spaces due to large sample complexity (Nesterov and Spokoiny, 2017).
Code: github.com/google-research/google-research/tree/master/es_enas
Figure 1: Representation of the ES-ENAS aggregator-worker pipeline, where the aggregator proposes models m_i in addition to a perturbed input θ + σg_i, and the worker then computes the objective f(m_i, θ + σg_i), which is sent back to the aggregator. Both the training of the weights θ and of the model-proposing controller p_φ rely on the number of worker samples to improve performance.
On the other hand, there is also a completely separate set of algorithms for attacking high dimensional continuous spaces R^d. These include global optimization techniques such as the Cross-Entropy method (de Boer et al., 2005) and metaheuristic methods such as swarm algorithms (Mavrovouniotis et al., 2017). More local-search based techniques include the class of methods based on Evolution Strategies (ES) (Salimans et al., 2017), such as CMA-ES (Hansen et al., 2003; Krause et al., 2016; Varelas et al., 2018) and Augmented Random Search (ARS) (Mania et al., 2018a). ES has been shown to perform well for reinforcement learning policy optimization, especially in continuous control (Salimans et al., 2017) and robotics (Song et al., 2020a). Even though such methods are also zeroth-order, they have been shown to scale better than previously believed (Conti et al., 2018; Liu et al., 2019a; Rowland et al., 2018) on even millions of parameters due to advancements in heuristics (Choromanski et al., 2019a) and Monte Carlo gradient estimation techniques (Choromanski et al., 2019b; Yu et al., 2016). Unfortunately, these analytical techniques are limited only to continuous spaces and at best, basic categorical spaces via softmax reparameterization.
One may thus wonder whether it is possible to combine the two paradigms in an efficient manner. For example, in AutoRL and NAS applications, it would be extremely wasteful to run an end-to-end ES-based training loop for every architecture proposed by the combinatorial algorithm. At the same time, two practical design choices we must strive towards are also simplicity and modularity, in which a user may easily setup our method and arbitrarily swap in continuous algorithms like CMA-ES (Hansen et al., 2003) or combinatorial algorithms like Policy Gradients (Vinyals et al., 2015) and Regularized Evolution (Real et al., 2018), for specific scenarios. Generality is also an important aspect as well, in which our method should be applicable to generic hybrid spaces. For instance, HyperNEAT (Stanley et al., 2009) addresses the issue of high dimensional neural network weights by applying NEAT to evolve a smaller hypernetwork (Ha et al., 2017) for weight generation, but such a solution is domain specific and is not applicable to broader blackbox optimization problems. Similarly restrictive, Weight Agnostic Neural Networks (Gaier and Ha, 2019) do not train any continuous parameters and apply NEAT to only the combinatorial spaces of network structures, and other works (Moriguchi and Honiden, 2012;Miikkulainen et al., 2017) similarly mainly target neural networks specifically. Works that do address blackbox hybrid spaces include Bayesian Optimization (Deshwal et al., 2021) or Population Based Training (Parker-Holder et al., 2021), but only in hyperparameter tuning settings whose search spaces are significantly smaller.
One of the first cases of combining differentiable continuous optimization with combinatorial optimization was from Efficient NAS (ENAS) (Pham et al., 2018), which introduces the notion of weight sharing to build a maximal supernet containing all possible weights θ s where each child model m only utilizes certain subcomponents and their corresponding weights from this supernet. Child models m are sampled from a controller p φ , parameterized by some state φ. The core idea is to perform separate updates to θ s and φ in order to respectively, improve both neural network weights and architecture selection at the same time. However, ENAS and followup variants (Akimoto et al., 2019) were originally proposed in the setting of using a GPU worker with autodifferentiation over θ s in mind for efficient NAS training.
In order to adopt ENAS's joint optimization into the fully blackbox (and potentially nondifferentiable) scenario involving hundreds/thousands of CPU-only workers, we introduce the ES-ENAS algorithm, which is practically implemented as a simple add-on to a standard synchronous optimization scheme commonly found in ES, shown in Fig. 1. We explain the approach formally below.
ES-ENAS Method
Preliminaries
In defining notation, let M be a combinatorial search space from which m is drawn, and let θ ∈ R^d be the continuous parameters or "weights". For scenarios such as NAS, one may define M's representation to be the superset of all possible child models m. Let φ represent the state of our combinatorial algorithm or "controller", and let p_φ be its current output distribution over M.
Algorithm
Algorithm 1: Default ES-ENAS Algorithm, with the few additional modifications to allow ENAS from ES shown in blue.
Data: Initial weights θ, weight step size η_w, precision parameter σ, number of perturbations n, controller p_φ.
while not done do
    Sample i.i.d. vectors g_1, ..., g_n ∼ N(0, I);
    …
We concisely summarize our ES-ENAS method in Algorithm 1. Below, we provide ES-ENAS's derivation and conceptual simplicity of combining the updates for φ and θ into a joint optimization procedure.
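To make the flow of one aggregator iteration concrete, the following is a minimal Python sketch consistent with Algorithm 1 and Figure 1; it is not the original implementation, and the generic controller and objective_fn objects, as well as all names, are illustrative assumptions.

    import numpy as np

    def es_enas_iteration(theta, controller, objective_fn, n, sigma, eta_w):
        # One aggregator iteration: sample Gaussian directions and child models,
        # dispatch evaluations to workers, then update the weights and the controller.
        gs, evals = [], []
        for _ in range(n):
            g = np.random.randn(theta.size)
            m_plus, m_minus = controller.sample(), controller.sample()
            f_plus = objective_fn(m_plus, theta + sigma * g)    # worker evaluation
            f_minus = objective_fn(m_minus, theta - sigma * g)  # antithetic evaluation
            gs.append(g)
            evals.append((m_plus, f_plus, m_minus, f_minus))
        # ES step on the shared weights (derived in "Updating the Weights").
        grad = sum((fp - fm) * g for (_, fp, _, fm), g in zip(evals, gs)) / (2.0 * sigma * n)
        theta = theta + eta_w * grad
        # The controller reuses the same objective values ("Updating the Controller").
        controller.update([(m, f) for m, f, _, _ in evals] + [(m, f) for _, _, m, f in evals])
        return theta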
The optimization problem we are interested in is max_{m ∈ M, θ ∈ R^d} f(m, θ). In order to make this problem tractable, consider instead optimization of the smoothed objective
f_σ(φ, θ) = E_{m ∼ p_φ} E_{g ∼ N(0, I)} [f(m, θ + σg)].
Note that this smoothing defines a particular distribution P_{m,θ} across (M, R^d), and can be more generalized to the rich literature on Information-Geometric Optimization (Ollivier et al., 2017), which can be used to derive different variants and update rules of our approach, such as using CMA-ES or other ES variants (Wierstra et al., 2014; Heidrich-Meisner and Igel, 2009; Krause, 2019) to optimize θ. For simplicity, we use vanilla ES as it suffices for common problems such as continuous control. Our particular update rule is to use samples from m ∼ p_φ, g ∼ N(0, I) for updating both algorithm components in an unbiased manner, as it efficiently reuses evaluations to reduce the sample complexity of both the controller p_φ and the variance of the estimated gradient ∇_θ f_σ.
Updating the Weights
The goal is to improve f_σ(φ, θ) with respect to θ via one step of the gradient
∇_θ f_σ(φ, θ) = E_{m ∼ p_φ} E_{g ∼ N(0, I)} [ (f(m, θ + σg) − f(m, θ − σg)) / (2σ) · g ].
Note that by linearity, we may move the expectation E_{m ∼ p_φ} inside into the two terms f(m, θ + σg) and f(m, θ − σg), which implies that the gradient expression can be estimated by averaging singleton samples of the form
(1 / (2σ)) (f(m^+, θ + σg) − f(m^−, θ − σg)) g,
where m^+, m^− are i.i.d. samples from p_φ, and g from N(0, I).
Thus we may sample multiple i.i.d. child models m_1^+, m_1^−, ..., m_n^+, m_n^− ∼ p_φ and also multiple perturbations g_1, ..., g_n ∼ N(0, I), and update the weights θ with an approximate gradient update
θ ← θ + η_w · (1 / (2σn)) Σ_{i=1}^{n} (f(m_i^+, θ + σg_i) − f(m_i^−, θ − σg_i)) g_i.    (4)
This update forms the "ES" portion of ES-ENAS. As a sanity check, we can see that using a constant fixed m = m_1^+ = m_1^− = ... = m_n^+ = m_n^− reduces Eq. 4 to standard ES/ARS optimization.
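A vectorized sketch of this weight update is given below; the array shapes and function name are assumptions, not the paper's code.

    import numpy as np

    def es_weight_update(theta, f_plus, f_minus, gs, sigma, eta_w):
        # Antithetic ES update on the shared weights, in the spirit of Eq. 4.
        # f_plus[i] = f(m_i^+, theta + sigma * g_i), f_minus[i] = f(m_i^-, theta - sigma * g_i),
        # and gs is an (n, d) array whose rows are the Gaussian directions g_i.
        n = len(f_plus)
        grad_est = (np.asarray(f_plus) - np.asarray(f_minus)) @ gs / (2.0 * sigma * n)
        return theta + eta_w * grad_est

Setting every m_i^± to the same fixed model recovers the standard ES/ARS update, matching the sanity check above.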
Updating the Controller
For optimizing over M, we update p_φ by simply reusing the objectives f(m, θ + σg) already computed for the weight updates, as they can be viewed as unbiased estimates of E_{g ∼ N(0, I)}[f(m, θ + σg)] for a given m. Conveniently, we can use common approaches such as Policy Gradient Methods, where φ are the differentiable parameters of a distribution p_φ (usually an RNN-based controller), with the goal of optimizing the smoothed objective f_σ(φ, θ) over φ. The ES-ENAS variant can thus be seen as estimating a "simultaneous gradient" consisting of the two updates over θ and φ.
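The paper's controller is a Pointer-Network/RNN trained with PPO (or, alternatively, Regularized Evolution); as a simplified, hedged stand-in, the sketch below applies a score-function (REINFORCE) update to a controller whose distribution factorizes into independent categoricals, reusing the worker objectives as rewards. All names and the factorized parameterization are assumptions for illustration.

    import numpy as np

    def reinforce_controller_update(phi, models, objectives, lr=0.1):
        # phi[slot] holds the logits of an independent categorical for that slot.
        # `models` are sampled architectures (one category index per slot), and
        # `objectives` are the f(m, theta + sigma*g) values already computed by workers.
        baseline = float(np.mean(objectives))          # simple variance-reducing baseline
        for m, f in zip(models, objectives):
            advantage = f - baseline
            for slot, choice in enumerate(m):
                logits = phi[slot]
                probs = np.exp(logits - np.max(logits))
                probs /= probs.sum()
                grad_log = -probs
                grad_log[choice] += 1.0                # gradient of log p_phi(choice) w.r.t. logits
                phi[slot] = logits + lr * advantage * grad_log
        return phi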
One may wonder why simply using original gradientless evolutionary algorithms such as Regularized Evolution or Hill-Climbing over the entire space (M, R^d) is not sufficient. Many algorithms, such as the two mentioned, use a variant of the arg max operation for deciding the ascent direction, and only require a mutation operator (m, θ) → (m′, θ′), where the most common and natural way of continuous mutation is simple additive mutation: θ′ = θ + σ_mut g for some random Gaussian vector g.
The answer lies in efficiency: for, e.g., convex objectives, in terms of convergence rate, ES can be O(d) times more sample efficient than a mutation-based arg max procedure such as Hill-Climbing. More formally, we prove the following instructive theorem over continuous spaces (full proof in Appendix E), assuming standard concave/convex optimization settings (Boyd and Vandenberghe, 2004):
and let Δ_ES(θ) be the expected improvement of an ES update, while Δ_MUT(θ) be the expected improvement of a batched hill-climbing update, with both starting at θ and using B = o(exp(d)) parallel evaluations/workers for fairness. Then assuming optimal hyperparameter tuning, Δ_ES
From the above, to achieve the same level of 1-step improvement as ES, a mutation-based approach must use Ω(exp(d)) evaluations, effectively brute-forcing the entire R^d search space! Since the number of iterations required for convergence is inversely proportional to the improvement ratio (Boyd and Vandenberghe, 2004), this also implies that Δ_ES(θ)/Δ_MUT(θ) times more samples are required overall, which can be a factor of O(d) when B is subexponential. The above establishes the theoretical explanation for the effect of large d. However, this does not cover the case of non-convex objectives, hybrid spaces, or other types of update schemes, all of which may lack possible theoretical analysis, and thus we also experimentally verify this issue below.
BBOB Experiments
We begin by benchmarking over a simple hybridized variant of the common Black-Box Optimization Benchmark (BBOB) (Hansen et al., 2009). We define our hybrid search space as (M, R^{d_con}), where M consists of d_cat categorical parameters, each of which may take feasible values from an unordered set of equally spaced grid points. An input (m, θ) is then evaluated using the native BBOB function f, originally operating on the input space R^{d_cat + d_con}. We report the average normalized optimality, as is common in, e.g., (Müller et al., 2021), where f^* and f̂^* are the true and the algorithm's estimated optimum, respectively.
The set of original algorithms we use is: Regularized Evolution (Real et al., 2018), NEAT (Stanley and Miikkulainen, 2002), Random Search, Gradientless Descent/Batch Hill-Climbing (Golovin et al., 2020; Song et al., 2020b), and PPO (Schulman et al., 2017) as a policy gradient baseline. To remain fair and consistent, we use the same mutation (m, θ) → (m′, θ′) across all mutation-based algorithms, which consists of θ′ = θ + σ_mut g for a tuned σ_mut, and uniformly randomly mutating a single categorical parameter of m, as sketched below. All algorithms start at the same randomly sampled initial point. More hyperparameters can be found in Appendix A.3, along with continuous optimizer comparisons (e.g. CMA-ES) in Appendix B.
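The shared mutation operator described above can be written in a few lines; the function and variable names below are illustrative only.

    import numpy as np
    import random

    def mutate(m, theta, categories, sigma_mut):
        # Shared mutation used by all mutation-based baselines: Gaussian noise on the
        # continuous part, and a uniformly random re-draw of a single categorical slot.
        theta_new = theta + sigma_mut * np.random.randn(theta.size)
        m_new = list(m)
        slot = random.randrange(len(m_new))
        m_new[slot] = random.choice(categories[slot])  # resample one categorical parameter
        return m_new, theta_new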
In Figure 2, we experimentally demonstrate the severe degradation of vanilla combinatorial evolutionary algorithms compared to their ES-ENAS-modified counterparts. In the first row, when we only evaluate on the continuous space, we verify that the original ES algorithm significantly outperforms the other vanilla algorithms, as d con increases. Similarly, when the space becomes hybridized in the following rows, each ES-ENAS variant will also outperform against its corresponding original algorithm.
Neural Network Policy Experiments
In order to benchmark our method over more nested combinatorial structures, we apply our method to two combinatorial problems, Sparsification and Quantization, on standard Mujoco (Todorov et al., 2012) environments from OpenAI Gym, which are well aligned with the use of ES and also have hundreds to thousands of continuous neural network parameters. Furthermore, such problems also reduce parameter count, which can greatly improve performance and sample complexity.
Setup
We can view a feedforward neural network as a standard directed acyclic graph (DAG), with a set of vertices containing values {v_1, ..., v_k} and a set of edges, where each edge (i, j) contains a weight w_{i,j}, as shown in Figures 3a and 3b. The goals of sparsification and quantization are to maintain high environment reward while maintaining, respectively, a low target number of edges or partitions (for weight sharing). These scenarios possess very large combinatorial policy search spaces (calculated as |M| > 10^68, comparable to 10^49 from NASBench-101 (Ying et al., 2019)) that will stress test our ES-ENAS algorithm, and are also relevant to mobile robotics (Gage, 2002). Given the results in Subsection 3.1 and since this is a NAS-based problem, for ES-ENAS we use the two most domain-specific controllers, Regularized Evolution and PPO (Policy Gradient), and take the best result in each scenario. Specific details and search space size calculations can be found in Appendix A.4.
Results
As we have already demonstrated comparisons to blackbox optimization baselines in Subsection 3.1, we now focus our comparison on domain-specific baselines for the neural network. These include a DARTS-like (Liu et al., 2019b) softmax masking method (Lenc et al., 2019), which applies a trainable boolean matrix mask over weights for edge pruning. We also include strong, mathematically grounded baselines for fixed quantization patterns, such as Toeplitz and Circulant matrices. In all cases we use the same hyper-parameters, and train until convergence for three random seeds. For masking, we report the best achieved reward with > 90% of the network pruned, making the final policy comparable in size to the quantization and edge-pruning networks. All results are for feedforward nets with one hidden layer. More details can be found in Appendices C.1 and A.4.
For each class of policies, we compare various metrics, such as the number of weight parameters used, total parameter count compression with respect to unstructured networks, and total number of bits for encoding float values (since quantization and masking methods require extra bits to encode the partitioning via dictionaries). In Table 1, we see that both sparsification and quantization can be learned from scratch via optimization using ES-ENAS, which achieves competitive or better rewards against other baselines. This is especially true against hand-designed (Toeplitz/Circulant) patterns, which significantly fail at Walker2d, as well as other optimization-based reparameterizations, such as softmax masking, which underperforms on the majority of environments. The full set of numerical results over all of the mentioned methods can be found in Appendix C. In the rest of the experimental section, we provide ablation studies on the properties and extensions of our ES-ENAS method. Because of the nested combinatorial structure of the neural network space (rather than the flat space of BBOB functions), certain behaviors of the algorithm may differ. Furthermore, we also wish to highlight the similarities and differences from regular NAS in supervised learning, and thus raise the following questions:
Neural Network Policy Ablations
1. How do controllers compare in performance?
2. How does the number of workers affect the quality of optimization?
3. Can other extensions such as constrained optimization also work in ES-ENAS?
Controller Comparisons
As shown in Subsection 3.1, Regularized Evolution (Reg-Evo) was the highest performing controller when used in ES-ENAS. However, this is not always the case, as mutation-based optimization may be prone to being stuck in local optima whereas policy gradient methods (PG) such as PPO can allow better exploration.
We thus compare different ES-ENAS variants, when using Reg-Evo, PG (PPO), and random search (for sanity checking), on the edge pruning task in Fig. 4. As shown, while Reg-Evo consistently converges faster than PG at first, PG eventually may outperform Reg-Evo in asymptotic performance. Previously on NASBENCH-like benchmarks, Reg-Evo consistently outperforms PG in both sample complexity and asymptotic performance (Real et al., 2018), and thus our results on ES-ENAS are surprising, potentially due to the hybrid optimization of ES-ENAS.
Random search has been shown in supervised learning to be a surprisingly strong baseline (Li and Talwalkar, 2019), with the ability to produce even ≥ 80-90% accuracy (Pham et al., 2018; Real et al., 2018), showing that NAS-based optimization ultimately produces most of its gains at the tail end, e.g. at the 95% accuracies. In the ES-ENAS setting, this is shown to occur for easier RL environments such as Striker (Fig. 4) and Reacher (shown in Appendices C.2, C.3). However, for the majority of RL environments, a random search controller is unable to train at all, which also makes this regime different from supervised learning.
Controller Sample Complexity
We further investigate the effect of the number of objective values per batch on the controller by randomly selecting only a subset of the objectives f(m, θ) for the controller p_φ to use, but maintaining the original number of workers for updating θ_s via ES to preserve weight estimation quality and prevent confounding results. We found that this sample reduction can reduce the performance of both controllers for various tasks, especially the PG controller. Thus, we find the use of the already present ES workers highly crucial for the controller's quality of architecture search in this setting.
Constrained Optimization
Following (Tan and Le, 2019; Tan et al., 2018b) on similar techniques for constrained optimization, the controller may optimize multiple objectives (e.g. efficiency) towards a Pareto optimal solution (Deb, 2005). We apply (Tan et al., 2018b) and modify the controller's objective to be a hybrid combination f(m, θ) · (|E_m| / |E_T|)^ω of both the total reward f(m, θ) and the compression ratio |E_m| / |E_T|, where |E_m| is the number of edges in model m and |E_T| is a target number, with the search space expressed as boolean mask mappings (i, j) → {0, 1} over all possible edges. For simplicity, we use the naive setting in (Tan et al., 2018b) and set ω = −1 if |E_m| / |E_T| > 1, while ω = 0 otherwise, which strongly penalizes the controller if it proposes a model m whose edge number |E_m| breaks the threshold |E_T|; a small sketch of this rule is given below.
Figure 6: Environment reward plotted alongside the average number of edges used for proposed models. Black horizontal line corresponds to the target |E_T| = 64.
In Fig. 6, we see that the controller eventually reduces the number of edges below the target threshold set at |E_T| = 64, while still maintaining competitive training reward, demonstrating that ES-ENAS is also capable of constrained optimization techniques, potentially useful for explicitly designing efficient CPU-constrained robot policies (Unitree, 2017; Gao et al., 2020; Tan et al., 2018a).
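The penalty rule stated above can be expressed as a small helper function; the name is illustrative, and only the stated ω rule is assumed.

    def constrained_objective(reward, num_edges, target_edges):
        # Hybrid objective f(m, theta) * (|E_m| / |E_T|)**omega with the naive setting:
        # omega = -1 if the edge budget is exceeded, and omega = 0 otherwise.
        ratio = num_edges / target_edges
        omega = -1.0 if ratio > 1.0 else 0.0
        return reward * (ratio ** omega)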
Conclusions, Limitations, and Broader Impact Statement
Conclusion: We presented a scalable and flexible algorithm, ES-ENAS, for performing optimization over large hybrid spaces. ES-ENAS is efficient, simple, and modular, and can utilize many techniques from both the continuous and combinatorial evolutionary literature.
Limitations:
In certain scenarios where m specifies a model and thus the continuous parameter size d is dependent on m, there may not be an obvious way to form a global θ. This is a common issue that usually requires domain-specific knowledge (e.g. NAS) to resolve. Furthermore, due to reasons of simplicity, the joint sampling distribution P m,θ over (M, θ) was made as a product between independent distributions over M and θ in this paper. However, it may be worth studying distributions and update rules in which m and θ are sampled dependently, as it may lead to even more effective algorithms.
Broader Impact: We believe that many large-scale evolutionary projects once prohibited by the curse of continuous dimensionality may now be feasible through the efficiency of ES-ENAS, potentially reducing computation costs dramatically. For example, one may be able to extend such projects to also search over continuous parameters (e.g. neural network weights) via ES-ENAS. Furthermore, ES-ENAS is applicable to several downstream applications, such as architecture design for mobile robotics, and recently new ideas in RNNs for meta-learning and memory (Bakker, 2001; Najarro and Risi, 2020). ES-ENAS can potentially also be used for broader scenarios involving evolutionary search, such as genetic programming (Co-Reyes et al., 2021), circuit design (Ali et al., 2004), and compiler optimization (Cooper et al., 1999). Other potential applications include flight optimization (Ahmad and Thomas, 2013), protein and chemical design (Elton et al., 2019; Zhou et al., 2017; Yang et al., 2019), and program synthesis (Summers, 1977).
Policy Gradient:
We use a gradient update batch size of 64 for the Pointer Network, while using PPO as the policy gradient algorithm, with its default (recommended) hyperparameters from (Peng et al., 2020). These include a softmax temperature of 1.0, a hidden state size of 100 with 1 layer for the RNN, importance weight clipping of 0.2, and 10 update steps per weight update, with more values found in (Vinyals et al., 2015). We grid searched PPO's learning rate across {1 × 10^{-4}, 5 × 10^{-4}, 1 × 10^{-3}, 5 × 10^{-3}} and found 5 × 10^{-4} to be the best.
ARS/ES:
We always use reward normalization and state normalization (for RL benchmarks) from (Mania et al., 2018b). For BBOB functions, we use η_w = 0.5 while σ = 0.5, along with 64 Gaussian directions per batch in an ES iteration, with 8 used for evaluation. For RL benchmarks, we use η_w = 0.01 and σ = 0.1, along with 75 Gaussian directions, with 50 more used for evaluation.
Each parameter in the raw continuous input space is bounded within [−L, L] where L = 5. For discretization + categorization into a grid, we use a granularity of 1 between consecutive points, i.e. a categorical parameter is allowed to select values within {−L, −L + 1, ..., 0, ..., L − 1, L}. Note that each BBOB function is set to have its global optimum at the zero-point, and thus our hybrid spaces contain the global optimum.
Because each BBOB function may have a completely different scaling (e.g. for a fixed dimension, the average output for Sphere may be on the order of 10^2 while the average output for BentCigar may be on the order of 10^10), we normalize the output of each function when reporting results. The normalized valuation of a BBOB function f is calculated by dividing the raw value by the maximum absolute value obtained by random search.
Since for the ES component we use a step size of η_w = 0.5 and a precision parameter of σ = 0.5, we use for evolutionary mutations a Gaussian perturbation scaling σ_mut of 0.07, which equalizes the average norms between the update directions on θ, which are η_w ∇_θ f_σ and σ_mut g.
A.4 RL + Neural Network Setting
In order to allow combinatorial flexibility, our neural network consists of vertices/values V = {v_1, ..., v_k}, where the initial block of |S| values {v_1, ..., v_|S|} corresponds to the environment state, and the last block of |A| values {v_{k−|A|+1}, ..., v_k} corresponds to the action output values. Directed edges E ⊆ E_max = {e_{i,j} = (i, j) | 1 ≤ i < j ≤ k, |S| < j} are constructed with corresponding weights W = {w_{i,j} | (i, j) ∈ E}, and nonlinearities G = {σ_{|S|+1}, ..., σ_k} for the non-state vertices. Thus a forward propagation consists of for-looping in order over j ∈ {|S| + 1, ..., k} and computing output values v_j = σ_j(Σ_{(i,j)∈E} v_i w_{i,j}). By default, unless specified, we use Tanh non-linearities with 32 units for each hidden layer.
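A minimal Python sketch of this forward propagation is given below; vertex indexing and function names are assumptions chosen to mirror the description above.

    import numpy as np

    def forward(state, k, num_actions, edges, weights, nonlinearities):
        # Vertices are 1-indexed: the first |S| hold the state, the last |A| the actions.
        # `edges` is an iterable of (i, j) pairs with i < j, `weights` maps (i, j) -> w_ij,
        # and `nonlinearities` maps each non-state vertex j to its activation function.
        v = {idx + 1: s for idx, s in enumerate(state)}
        for j in range(len(state) + 1, k + 1):
            pre_activation = sum(v[i] * weights[(i, j)] for (i, jj) in edges if jj == j)
            v[j] = nonlinearities[j](pre_activation)
        return np.array([v[j] for j in range(k - num_actions + 1, k + 1)])

With np.tanh as every nonlinearity, this reproduces the default Tanh network described above.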
Edge pruning:
We group all possible edges (i, j) of the neural network into a set and select a fixed number of edges from this set. We can also further search across potentially different nonlinearities, e.g. f_i ∈ {tanh, sigmoid, sin, ...}, similarly to Weight Agnostic Neural Networks (Gaier and Ha, 2019). In terms of API, this search space can be described as pyglove.manyof(E_max, |E|) along with pyglove.oneof(σ_i, G). The search space is of size (|E_max| choose |E|) or 2^{|E_max|} when using a fixed or variable size |E|, respectively.
We collect all possible edges from a normal neural network into a pool E_max and set |E| = 64 as the number of distinct choices passed to pyglove.manyof. Similar to quantization, this choice is based on the value max(|S|, H) or max(|A|, H), where H = 32 is the number of hidden units, and is linear in proportion to the maximum number of weights |S| · H or |A| · H, respectively.
Since a hidden-layer neural network has two weight matrices, due to the hidden layer connecting to both the state and actions, we ideally have a maximum of 32 + 32 = 64 edges.
For nonlinearity search, we use the same functions found in (Gaier and Ha, 2019). These are: {Tanh, ReLU, Exp, Identity, Sin, Sigmoid, Absolute Value, Cosine, Square, Reciprocal, Step Function}.
Quantization: We assign to each edge (i, j) one color from many colors c ∈ C = {1, ..., |C|}, denoting the partition group the edge is assigned to, which defines the value w_{i,j} ← w^{(c)}. This is shown pictorially in Figs. 3a and 3b. This can also programmatically be done by concatenating primitives pyglove.oneof(e_{i,j}, C) over all edges e_{i,j} ∈ E_max. The search space is of size |C|^{|E|}.
The number of partitions (or "colors") is set to max(|S|, |A|). This both ensures a linear number of trainable parameters, compared to the quadratic number for unstructured networks, and allows sufficient parameterization to deal with the entire set of state/action values. A small illustrative computation of the resulting search-space sizes is given below.
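The following snippet evaluates the size formulas above for hypothetical dimensions; the numbers are illustrative only and do not attempt to reproduce the exact figures quoted in the paper, since |S| and |A| depend on the environment.

    from math import comb

    # Hypothetical dimensions for illustration only.
    S, A, H = 17, 6, 32                   # state size, action size, hidden units
    E_max = S * H + H * A                 # all possible edges of the one-hidden-layer net
    E = 64                                # number of selected edges for edge pruning
    C = max(S, A)                         # number of quantization partitions ("colors")

    edge_pruning_fixed = comb(E_max, E)   # fixed-size edge pruning: choose |E| of |E_max|
    edge_pruning_variable = 2 ** E_max    # variable-size edge pruning: boolean mask
    quantization = C ** E_max             # one color per edge, i.e. |C|^{|E|} with E = E_max
    print(edge_pruning_fixed, edge_pruning_variable, quantization)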
A.4.1 Environment
For all environments, we set the horizon T = 1000. We also use the reward without alive bonuses for weight training as commonly used (Mania et al., 2018a) to avoid local maximum behaviors (such as an agent simply standing still to collect a total of 1000 reward), but report the final score as the real reward with the alive bonus.
A.4.2 Baseline Details
We consider Unstructured, Toeplitz, Circulant, and a masking mechanism (Lenc et al., 2019). We introduce their details below. Notice that all baseline networks share the same general (1-hidden-layer, Tanh nonlinearity) architecture from A.4. This implies that we only have two weight matrices W_1 ∈ R^{|S|×h}, W_2 ∈ R^{h×|A|} and two bias vectors b_1 ∈ R^h, b_2 ∈ R^{|A|}, where |S|, |A| are the dimensions of the state/action spaces. These networks differ in how they parameterize the weight matrices. We have:
Unstructured: A fully-connected layer with an unstructured weight matrix W ∈ R^{a×b} has a total of ab independent parameters.
Toeplitz: A Toeplitz weight matrix W ∈ R^{a×b} has a total of a + b − 1 independent parameters. This architecture has been shown to be effective in generating good performance on benchmark tasks while compressing parameters.
Circulant:
A circulant weight matrix W ∈ R^{a×b} is defined for square matrices a = b. We generalize this definition by considering a square matrix of size n × n, where n = max{a, b}, and then applying a proper truncation. This produces n independent parameters.
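One natural construction consistent with the stated parameter counts is sketched below; the paper's exact indexing convention may differ, so treat this as an assumption.

    import numpy as np

    def toeplitz_matrix(params, a, b):
        # Build an a x b Toeplitz matrix from a + b - 1 free parameters:
        # entry (i, j) depends only on the diagonal offset j - i.
        assert len(params) == a + b - 1
        return np.array([[params[j - i + a - 1] for j in range(b)] for i in range(a)])

    def circulant_matrix(params, a, b):
        # Build an a x b (truncated) circulant matrix from n = max(a, b) parameters:
        # row i of the n x n circulant is params rolled by i, then truncated to a x b.
        n = max(a, b)
        assert len(params) == n
        full = np.array([np.roll(params, i) for i in range(n)])
        return full[:a, :b]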
Masking: One additional technique for reducing the number of independent parameters in a weight matrix is to mask out redundant parameters (Lenc et al., 2019). This slightly differs from the other aforementioned architectures, since those architectures allow for parameter sharing while the masking mechanism carries out pruning. To be concrete, we consider a fully-connected matrix W ∈ R^{a×b} with ab independent parameters. We also set up another mask weight Γ ∈ R^{a×b}. The mask is then generated from Γ via a softmax applied elementwise, where α is a constant. We set α = 0.01 so that the softmax is effectively a thresholding function which outputs near-binary masks. We then treat the entire concatenated parameter θ = [W, Γ] as trainable parameters and optimize both using ES methods. Note that this softmax method can also be seen as an instance of the continuous relaxation method from DARTS (Liu et al., 2019b). At convergence, the effective number of parameters is ab · λ, where λ is the proportion of Γ components that are non-zero. During optimization, we implement a simple heuristic that encourages sparse networks: while maximizing the true environment return f(θ) = Σ_{t=1}^{T} r_t, we also maximize the ratio 1 − λ of mask entries that are zero. The ultimate ES objective is a linear combination of the two, where β ∈ [0, 1] is a combination coefficient which we anneal as training progresses. We also properly normalize f(θ) and (1 − λ) before the linear combination to ensure that the procedure is not sensitive to reward scaling.
BBOB Benchmarks Across (d_cat, d_con)
Figure 7: Comparison when regular ES/ARS is used as the continuous algorithm in ES-ENAS, vs when CMA-ES is used as the continuous algorithm (which we name "CMA-ENAS"). We use the exact same setting as Figure 2 in the main body of the paper. We use Regularized Evolution (Reg-Evo) as the default combinatorial algorithm due to its strong performance found in Figure 2. We find that ES-ENAS usually converges faster initially, while CMA-ENAS achieves better asymptotic performance. This is aligned with the results (in the first row) when comparing vanilla ES with vanilla CMA-ES. For generally faster convergence to a sufficient threshold, however, ES/ES-ENAS usually suffices.
C Extended Neural Network Experimental Results
As standard in RL, we take the mean and standard deviation of the final rewards across 3 seeds for every setting. "L", "H" and "H, H" stand for: linear policy, policy with one hidden layer, and policy with two such hidden layers respectively.
C.1 Baseline Method Comparisons
In terms of the masking baseline, while (Lenc et al., 2019) fixes the sparsity of the mask, we instead initialize the sparsity at 50% and increasingly reward smaller networks (measured by the size of the mask |m|) during optimization to show the effect of pruning; a sketch of this objective is given below. Using this approach on several OpenAI Gym tasks, we demonstrate that the masking mechanism is capable of producing compact, effective policies up to a high level of pruning. At the same time, we show a significant decrease of performance at the 80-90% compression level, quantifying accurately its limits for RL tasks (see: Fig. 8).
Figure 8: The results from training both a mask m and weights θ of a neural network with two hidden layers. 'Usage' stands for number of edges used after filtering defined by the mask. At the beginning, the mask is initialized such that |m| is equal to 50% of the total number of parameters in the network.
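The sparsity-rewarding objective described above can be sketched as follows; the text only states that the normalized return and the pruned fraction are linearly combined with coefficient β ∈ [0, 1], so the convex combination and the normalization constant below are assumptions.

    import numpy as np

    def masking_objective(env_return, mask, beta, return_scale):
        # Combine the (normalized) environment return with the fraction of pruned entries.
        lam = np.count_nonzero(mask) / mask.size      # fraction of mask entries kept
        pruned_fraction = 1.0 - lam                   # fraction of entries that are zero
        normalized_return = env_return / return_scale # crude normalization (assumption)
        return (1.0 - beta) * normalized_return + beta * pruned_fraction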
D.2 Visualizing and Verifying Convergence
We also graphically plot aggregate statistics over the controller samples to confirm ES-ENAS's convergence. We choose the smallest environment, Swimmer, which conveniently works particularly well with linear policies (Mania et al., 2018a), to reduce visual complexity and avoid permutation invariances. We also use a boolean mask space over all possible edges (search space size |M| = 2^{|S|×|A|} = 2^{8×2}). We remarkably observe that for all 3 independently seeded runs, PG converges toward a "local maximum" architecture, demonstrated in Fig. 10, with the final architectures also presented for both PG and Reg-Evo. This suggests that there may be a few "natural architectures" optimal for the state representation.
Figure 10: Left: Final architectures that PG and Reg-Evo converged to on Swimmer with a linear (L) policy, from the above runs. Note that the controller does not select all edges even if it is allowed in the boolean search space, but also ignores some state values. Right: Edge pruning convergence over time, with samples aggregated over 3 seeds from ES-ENAS using the PG controller on Swimmer. Each edge is colored according to a spectrum, with its color value equal to 2|p − 1/2| where p is the edge frequency. We see that initially, each edge has uniform (p = 1/2) probability of being selected, but as both controllers progress, their samples converge toward a single pruning. | 2021-01-20T02:15:40.340Z | 2021-01-19T00:00:00.000 | {
"year": 2021,
"sha1": "c281f850cbf765dd7a437f74c9445798e8dbf55a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c281f850cbf765dd7a437f74c9445798e8dbf55a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
221614664 | pes2o/s2orc | v3-fos-license | Sulforaphane response on aluminum-induced oxidative stress, alterations in sperm characterization and testicular histomorphometry in Wistar rats
Abstract Background The exposure of male individuals to environmental toxicants is regarded as a channel that results in reduced sperm counts and infertility. Objective This study investigated the ameliorative response of Sulforaphane (SFN) on Aluminum trichloride (AlCl3) induced testicular toxicity in adult male Wistar rats. Materials and Methods A total of 32 adult male Wistar rats (180-200 g, aged 8-10 wk) were divided into four groups (n = 8/each). Group A) received distilled water orally as placebo; Group B) received 100 mg/kgbw AlCl3 only orally; Group C) received 100 mg/kgbw AlCl3 and 100 mg/kgbw SFN orally; and Group D) received 100 mg/kgbw SFN only orally. After 28 days of experiment, animals underwent cervical dislocation, blood serum was obtained for analysis, and testes were harvested for biochemical assays, histology, hormonal profile, and sperm characterization. Results The sperm parameters showed a significant difference in the AlCl3 only group compared with the control and SFN only groups (p = 0.02). However, AlCl3 and SFN co-treatment showed improvement in the motility, viability, and sperm count compared with the AlCl3 only group (p = 0.02). Furthermore, there was a significant decline in the levels of the hormone profile and antioxidant status in the AlCl3 only group compared to the control and SFN only groups (p = 0.02). The testicular histoarchitecture of the AlCl3 only group showed shrinkage of seminiferous tubules, spermatogenesis disruption, and empty lumens compared to the control and SFN only groups. Conclusion The present study revealed the ameliorative response of SFN on AlCl3-induced testicular toxicity on serum hormone profiles, antioxidant status, lipid peroxidation, and histomorphometric analysis through oxidative stress.
Introduction
The exposure of male individuals to environmental toxicants is regarded as a channel that results in reduced sperm counts and infertility (1,2). Aluminum is considered the most common metallic element detectable in natural waters and in animal and plant tissues (3), leading to a significant upsurge in both gastrointestinal absorption and urinary elimination of aluminum in exposed individuals (4). The affinity of aluminum to other elements stimulates free radical-mediated reproductive cytotoxicity, causing impairment of testicular tissues in both humans and animals (5). Aluminum compounds are widely used as by-products for the manufacture of several household cooking utensils and pharmaceutical drugs (such as antacids, vaccines, anti-diarrhea drugs, phosphate binders, and injections of allergy immunotherapy) (6). An increase in the level of exposure to aluminum-containing products will boost the concentration of this metallic element in different organs, thereby causing harmful effects to the well-being of humans (7).
In addition, elevated concentrations of Aluminum in human sperm and seminal plasma have been observed to decrease sperm viability and motility (8). Testicular Aluminum accumulation causes spermatocyte necrosis and triggers other reproductive toxicity through several mechanisms such as oxidative stress, which ultimately interferes with spermatogenesis and steroidogenesis, the blood-testis barrier, and endocrine function (9). The application of nutritional antioxidant supplements has increased over the years to tackle oxidative stress-induced tissue damage, since they act as defense regulators and scavengers of reactive oxygen species. Sulforaphane (SFN) is among the most active natural products found in crucifers such as broccoli sprout, cabbage, and kale, with the potential of lowering the risk of cancer, oxidative stress-induced tissue injury, and age-related diseases (10). SFN possesses antiproliferative activities and can effectively halt the initiation and progression of chemically induced tissue damage in animals (11). In addition, SFN has been suggested to have antidiabetic properties for normalizing changes in blood glucose and insulin sensitivity (12)(13)(14), and is used in cardiovascular and antihypertensive protection (15,16). It has been reported that SFN can promote elimination and detoxification of aflatoxin (17), acetaldehyde (18), methylmercury (19), acrolein (20), benzene, crotonaldehyde (21), and free radicals (22) through the Nrf2-mediated mechanism. Furthermore, some clinical studies have demonstrated the effectiveness of SFN supplements in the prevention and/or improvement of skin erythema (23), autism (24), insulin resistance (13), Helicobacter pylori infection (25), and liver abnormality (26). SFN also has the ability to cause programmed cell death (apoptosis) and cell cycle arrest, linked to its ability to regulate several proteins such as Bcl-2 and Bax family proteins, caspases, p21, cyclins, and cyclin-dependent kinases (27).
This study was therefore designed to investigate the ameliorative response of SFN on histomorphometric parameters and enzymatic antioxidants in Aluminum chloride (AlCl3)-induced testicular toxicity of adult Wistar rats.
Animals
In this prospective cohort study, a total of 32 adult male Wistar rats (Rattus norvegicus), weighing 180-200 g and aged 8-10 wk, were obtained from the animal house, Department of Human Anatomy, Ladoke Akintola University of Technology, Ogbomosho. The rats were housed in isolated cages in the experimental house of the Department of Human Anatomy, Federal University of Technology, Akure. They were maintained under a constant 12 hr light/dark cycle.
Experimental protocol
The rats were divided into four groups (n = 8/each): Group A) represented the control and received water as placebo; Group B) was administered (orally) 100 mg/kgbw AlCl3 only (in 0.5 ml of distilled water); Group C) was administered (orally) 100 mg/kgbw AlCl3 (in 0.5 ml of distilled water) and 100 mg/kgbw SFN (in 0.5 ml of distilled water); and Group D) was administered (orally) 100 mg/kgbw SFN only (in 0.5 ml of distilled water). The experiment lasted for 28 days, after which the animals were sacrificed.
All animals were observed for any behavioral anomalies, illness, and physical anomalies. The experimental procedures were conducted in accordance with the provided recommendations in the "Guide for the Care and Use of Laboratory Animals" prepared by the National Academy of Sciences. The rats were fed with standard rat chow and drinking water was supplied ad libitum. The weight of the animals was recorded at procurement, during acclimatization, at commencement of the experiment, and weekly throughout the experimental period using a CAMRY electronic scale (EK5055, Indian).
Surgical procedure
"After the last administration, the rats were administered intraperitoneal pentobarbital sodium (40 mg/kg) and their abdominal region was opened and the testes of all the animals were immediately removed. The testicular weight of each rat were recorded. The rats were decapitated and blood samples were collected for analysis. The blood samples were centrifuged at 4°C for 10 min at 250× gr and the serum obtained was stored at 20°C until assayed. The harvested testis specimens were fixed in Bouin's fluid for histological analysis" (28).
Epididymis sperm count, viability, and motility
"The spermatozoa from the cauda epididymis were obtained by cutting into 2 ml of medium (Hams F10) containing 0.5% bovine serum albumin (29). After 5 min of incubation at 37°C (with 5% CO 2 ), the cauda epididymis sperm reserves were determined using a hemocytometer". Sperm motility, viability (live spermatozoa/death spermatozoa ratio), and morphology (percentage normal spermatozoa, abnormal head defect, and abnormal tail defect) were analyzed with a microscope (Leica DM750) and reported as the mean percentage of motile sperm according to the method developed by the World Health Organization (30).
Biochemical estimations
The levels of lipid peroxidation products were estimated in accordance with the method published by Adelakun and co-workers (31). Antioxidants such as reduced glutathione (GSH) and catalase (CAT) were estimated as described by Adelakun and co-workers. The SOD activity in the testes was also determined.
Testicular histology preparation
"The testes of the rats were harvested and fixed in Bouin's fluid for 24 hr before being transferred to 70% alcohol for dehydration. The tissues passed through 90% and absolute alcohol and xylene for different durations before being transferred into molten paraffin wax for 1 hr each in an oven at 65°C for infiltration. The tissues were embedded and serial sections cut on a rotary microtome set at 5 microns were performed. The tissues were picked up with albumenized slides and allowed to dry on hot plates for 2 min. The slides were dewaxed with xylene and passed through absolute alcohol (two changes), 70% alcohol, 50% alcohol, (in that order), and then in water for 5 min. The slides were then stained with Hematoxylin and Eosin, mounted in DPX, and photomicrographs were taken at a magnification of 100 × on a Leica DM750 microscope" (31).
Morphometric studies
"Morphometric studies were carried out with modification of Akang and co-workers (32). Briefly, four sections per testis and six microscope fields per section were randomly chosen for analysis.
Fields were sampled as images captured on a Leica DM750 bright field microscope (Germany) via LAZ software. Volume densities of testicular ingredients were determined by randomly superimposing a transparent grid comprising 35 test points arranged in a quadratic array. Test points falling on a given testis and its ingredients were summed over all fields from all sections. The total number of points hitting on a given ingredient (lumen (EL), epithelium (EE), interstitium (EI)), divided by the total number of points hitting on the testis sections (ET) multiplied by 100 provided an unbiased estimate of its percentage volume density/volume fraction. The estimation of the volumes of seminiferous tubule EE (seminiferous EE) and EI in the testes was done in accordance with Howard & Reed (33) and Baines and coworkers" (34).
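In symbols, the point-counting estimator described above can be written (using P to denote summed point counts, a notation introduced here for clarity) as:
V_V(component) = (P_component / P_ET) × 100%,
where P_component ∈ {P_EE, P_EL, P_EI} is the number of test points hitting the epithelium, lumen, or interstitium, summed over all fields and sections, and P_ET is the total number of points hitting the testis sections.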
Quantitative evaluation of germ cells
"This was carried out according to the method described by Adelakun and co-workers (31). Briefly, quantitative evaluation of spermatogonia, preleptotene, pachytene spermatocytes, and round spermatid cells was performed using 50 round tubules per group selected in stage VII/VIII of the seminiferous tubule cycle at × 400. The diameters of nuclei of various germ cell types were measured by means of an ocular micrometer and a correction factor was used to obtain the actual numerical density of germ cells" (31).
Ethical consideration
All animal handling procedures and research activities were approved by the Ethics Committee of the College of Medicine, University of Lagos, Nigeria (CM/HREC/07/19/120).
Statistical analysis
Where applicable, the data obtained were analyzed statistically using one-way ANOVA, followed by Dunnett's comparison test. Data were expressed as Mean ± SEM. The level of significance was set at p < 0.05. Data were analyzed using GraphPad Prism 5 for Windows (GraphPad Software, San Diego, California, USA).
Testes and body weight
In addition, there was a significant decline in the relative weight of the testes in the rat that received AlCl 3 compared with the control (p = 0.02; Table I). However, the co-administration of SFN and AlCl 3 showed a recovery in the relative testicular weight and was not statistically different compared with the control.
There was a significant decrease in the body weight in the rats administered with AlCl 3 compared with the control (p = 0.02; Table I). However, there were no significant differences in body weight in the group administered with SFN only and combined administration of SFN and AlCl 3 compared with the control.
Effect of AlCl 3 on sperm parameters
The spermatozoa concentration was significantly reduced in the AlCl 3 only group compared with the control (p = 0.02; Figure 1A). The group co-administered AlCl 3 and SFN showed an increase in sperm count compared with the AlCl 3 only group. However, the sperm count of the co-administered AlCl 3 and SFN group remained lower than in the control and SFN only groups, although the difference was not statistically significant.
In addition, there was a significant decrease in sperm motility in the group administered with AlCl 3 only compared with the control (p = 0.02; Figure 1B). The group administered with a combination of AlCl 3 and SFN showed improvement in the motility of the spermatozoa compared with the AlCl 3 only group (p = 0.03; Figure 1B). However, there was no significant difference between the control and SFN only group.
Furthermore, the spermatozoa viability was significantly decreased after AlCl 3 administration compared with the control (p = 0.02; Figure 1C). However, the viability of the spermatozoa in the SFN + AlCl 3 group showed significant difference compared to the control and SFN only group (p = 0.03; Figure 1C).
The AlCl 3 only group had significantly (p = 0.02) more sperm head defects compared to the control (Figure 1D). However, there was no significant difference in abnormal head defects in the groups that received SFN only or a combination of AlCl 3 and SFN compared with the control. Furthermore, the AlCl 3 only group showed a significantly higher percentage of sperm abnormalities compared to the control (p = 0.02; Figure 1D). The percentage of sperm abnormalities was drastically reduced with the combined administration of SFN and AlCl 3 , and was not statistically different from the SFN only and control groups (Figure 1D).
Serum TT, FSH, and LH
There was a significant decline in the levels of serum TT, FSH, and LH in rats treated with AlCl 3 only compared to the control (p = 0.02; Figure 2 A-C). However, the levels of serum TT, FSH, and LH were significantly improved in the group administered with a combination of SFN and AlCl 3 compared with the AlCl 3 only group. Although the hormonal recovery was only partial in comparison with the control and SFN only groups, the difference was not statistically significant.
Lipid peroxidation and antioxidant status
The rats administered with AlCl 3 only showed a significant increase in MDA levels and a corresponding decrease in SOD, CAT, and GSH levels compared with the control (p = 0.03; Figure 3 A-D). However, the group that received combined administration of SFN and AlCl 3 showed a significant improvement in lipid peroxidation and antioxidant status compared with the AlCl 3 only group, and was not statistically different from the SFN only and control groups.
Testicular histology
The testicular histoarchitecture of the AlCl 3 only group showed necrosis and degeneration with decrease in germinal EE thickness and reduction in the diameter of the seminiferous tubules when compared with the control. In addition, AlCl 3 caused distortion in the seminiferous tubules with loss of normal distribution of epithelial lining and vacuolar cytoplasm compared with the control. However, testicular photomicrograph of the control section had similar characteristics with the SFN only group showing oval or circular presentation with distinctive stratified seminiferous EE whose EL possesses spermatogenic cells and prominent Leydig cells. The testicular section of the group administered with both SFN and AlCl 3 showed restored microarchitecture of the testicular morphology showing mild distortion of the tubular architecture and disorganization of the spermatogenic cells in seminiferous tubules (Figure 4 A-D).
Stereological analysis
The volume density of the germinal EE in the AlCl 3 only group showed a significant decrease compared with the control (p = 0.02; Figure 5). However, there was no significant difference in the volume of the germinal EE after the administration of SFN and AlCl 3 compared with the SFN only and control groups, respectively. Furthermore, the EL density significantly decreased in the AlCl 3 only group compared to the control (p = 0.02; Figure 5), while the combination of SFN and AlCl 3 showed no significant difference in the EL density compared to the SFN and control groups.
Concerning the EI, the AlCl 3 only group showed a significant increase compared to the control (p = 0.02; Figure 5), while a corresponding decrease was observed in the combined SFN and AlCl 3 group but it was not statistically significant compared to the SFN and control groups, respectively.
The testicular germ cell counts, namely the spermatogonia, preleptotene and pachytene spermatocyte, and round spermatid counts in the seminiferous tubules, showed a significant decrease in the AlCl 3 only group compared to the control (p = 0.02; Figure 6 A-D). Although the germ cell counts after the administration of SFN and AlCl 3 were significantly improved compared to the AlCl 3 only group, they were not statistically different from the SFN only and control groups, respectively.
Discussion
Infertility is an emerging global public health issue after cancer and cardiovascular diseases, owing to the increase in testicular cancer (35) and to analyses of semen parameters showing reduced sperm counts and quality in various countries (36, 37). Exposure of males to environmental toxicants is regarded as a pathway that results in reduced sperm counts and infertility (1, 2). Aluminum is considered the most common metallic element detectable in natural waters and in animal and plant tissues (3). Because of its reactivity with other elements such as sulfur and chloride, Al compounds are widely used in many products such as storage utensils, household cookware, food additives, toothpaste, and pharmaceuticals (antacids, vaccines, allergy injections, and anti-diarrheals) (3). The high rate of exposure to Al increases the risk of health problems in humans owing to increased metal concentrations in various organs, thereby damaging tissues throughout the body, including the testicular tissue of animals and humans (38). Testicular weight is crucial in the evaluation of male fertility because of its close association with sperm production (39). In our study, the decrease in body and testicular weights observed after administration of AlCl 3 only could be attributed to the deleterious effect of the toxicant on body metabolism and testicular architecture, resulting in disruption of spermatogenesis. Previous research also concurs that Al intoxication causes a drastic decrease in testicular weight, resulting in germinal EE disruption and inadequate TT production (40, 41). However, SFN attenuated the body and testicular weight loss in the group given the combined administration of SFN and AlCl 3 , thereby restoring testicular function.
The seminal fluid parameters (sperm count, sperm motility, sperm viability) declined significantly in the AlCl 3 only group, indicating oligospermia due to increased oxidative stress-induced damage and decreased concentrations of scavenging enzymes (42, 43). Previous studies have also shown a similar decrease in sperm count and motility after exposure to various environmental toxicants in different experimental animal models (44-46). However, the combined administration of SFN and AlCl 3 increased the motility, concentration, and viability of the spermatozoa, thereby mitigating the effects of AlCl 3 intoxication on testicular tissue.
The process of spermatogenesis is regulated by reproductive hormones such as TT, FSH, and LH. In the present study, our results showed a decrease in these reproductive hormones after AlCl 3 administration, suggesting a decline in the function of the anterior pituitary and Leydig cells. Previous studies have also observed decreases in TT, FSH, and LH levels in adult rats caused by several environmental agents (47-49). In addition, previous research deduced that the decrease in TT synthesis could be due to the deleterious effects of testicular toxicants (such as NO and AlCl 3 ) on the Leydig cells, and to reduced conversion of androsterone to TT owing to decreased activity of the 17-ketosteroid reductase enzyme (50, 51). However, the group treated with combined SFN and AlCl 3 showed a significant improvement in serum FSH, LH, and TT levels, which can be linked to the ameliorative potential of SFN on AlCl 3 testicular toxicity through the release of gonadotrophin-releasing hormone (GnRH) in the hypothalamus (52). The antioxidant defense system protects the cells of the body against the injurious effects of reactive oxygen species (ROS) produced after exposure to environmental toxicants, which induce toxicity in the reproductive system by perturbing the pro-oxidant balance and thereby leading to oxidative stress (53). Our study showed that exposure to AlCl 3 decreased the antioxidants SOD, CAT, and GSH and correspondingly increased the MDA level. The decline in antioxidant activities observed in this study indicates that the antioxidant system was impaired, thereby inducing oxidative stress-induced testicular toxicity. Previous research has shown that oxidative stress due to metal exposure decreases the enzymatic defense mechanism, thereby causing spermatozoa cytotoxicity (54). In addition, inhibition of sperm function and male infertility have also been reported to occur through the toxicity of lipid peroxides generated by reactive oxygen species (54, 55). However, the co-administration of SFN and AlCl 3 in this study showed ameliorative effects against oxidative injury by increasing the levels of antioxidants (SOD, CAT, GSH) with a corresponding decrease in lipid peroxidation. It can be deduced that SFN decreases free radical levels, especially oxygen radicals, via its free radical scavenging activity, and modulates the release of several cytokines and the activities of testicular enzymes.
The histomorphological features of the testis are critical and are usually referred to as the endpoint in the evaluation of male fertility and reproductive toxicity (56). In our study, histological observation of animals that received AlCl 3 only showed various distortions such as shrunken seminiferous tubules, degeneration of Leydig cells, a thinner germinal EE, disruption of spermatogenesis, and absence of spermatozoa in the EL. Previous studies have also reported similar changes in the histoarchitecture of the testis after exposure to different environmental toxicants (49, 57). The alteration of testicular histomorphology by metallic toxicants might be due to oxidative stress, which distorts the steroidogenic activity of the Leydig cells after penetration through the blood-testis barrier. However, the administration of SFN in AlCl 3 -induced testicular damage showed a protective effect on spermatogenesis and tubular atrophy, confirmed in our study by histomorphological observations showing a distinct increase in seminiferous tubule diameter and the presence of spermatozoa in the EL. Previous studies have shown that normal spermatogenesis can be restored in oxidative stress-induced testicular toxicity caused by environmental toxicants using several antioxidant-rich agents, thereby increasing the endocrine activity of the Leydig and Sertoli cells (58-60). Furthermore, the histomorphological observations in our study corroborate the decrease in the number of spermatogonia, preleptotene and pachytene spermatocytes, and round spermatids in the AlCl 3 only group, suggesting reduced spermatogenic activity. Increased oxidative stress and lipid peroxidation could increase apoptosis of the germ cells. Previous studies have also shown that apoptosis of spermatogonia and primary spermatocytes can occur via microtubule targeting and mitotic arrest after exposure to environmental toxicants (61), and a decreased diameter of the seminiferous tubules can also be an indicator of defective spermatogenesis (62, 63).
Conclusion
The present study revealed the ameliorative effect of SFN on AlCl 3 -induced testicular toxicity, reflected in blood LH, FSH, and TT levels and mediated through oxidative stress. The protective function of SFN may preserve the functional integrity of the testis against environmental toxicity. | 2020-08-27T09:01:49.412Z | 2020-08-01T00:00:00.000 | {
"year": 2020,
"sha1": "f30642abc8176dfefb08f62a5a9cc2942d60f09d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.18502/ijrm.v13i8.7503",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "78883dae9551ca6130e0ffa7490cbe210c789c20",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
237553231 | pes2o/s2orc | v3-fos-license | Estimation of Aerodynamic Load on Radome Structure Mounted on Top of a Surveillance Aircraft
The design and development of a radome external structure requires knowledge of the aerodynamic forces acting on it and their distribution. This paper discusses the wind tunnel studies carried out to estimate the incremental effects due to the installation of a large ellipsoidal radome, along with its support structure (pylons), on the dorsal side of the fuselage. The effect of locating the radome at 36 m and at 31.5 m from the nose of the fuselage is discussed. Further, using scan-valve pressure transducers, the pressure distribution on the radome measured at different aerodynamic angles, required for the structural design of the radome, is also presented. A flow visualization study, useful as a qualitative check of the effect of installing the radome and its support structure on the effectiveness of the empennage, is also attempted.
I. INTRODUCTION
The development of any surveillance aircraft involves modifications both external to the airframe and inside the aircraft cabin. The external modification comprises the installation of a large antenna inside an oblate ellipsoidal radome with the necessary support structure (pylons), which completely alters the aerodynamic characteristics of the aircraft, whereas the internal modification affects only the weight distribution, which can be configured in such a way that the center of gravity remains within the limits of the basic aircraft. It is imperative to know the incremental effects of the modification on the aerodynamic characteristics of the aircraft, as well as the aerodynamic loads acting on the radome, for the design of the radome and its support structure. The surveillance aircraft is one of the strategic force-multiplier assets of the defence forces of every nation. A suitable platform is identified such that it can be modified and used for surveillance. It is imperative to check the feasibility of the modification on the identified platform from the flight mechanics standpoint, that is, in terms of performance, stability and control. Once it is found feasible from the initial analysis, a detailed study of the effect of the modification on aerodynamics and flight mechanics is attempted by launching multiple campaigns of wind tunnel studies, CFD studies, etc. The radome (radar and dome) is an ellipsoidal body mounted on top of the surveillance aircraft using suitable interface structures. The radome structure houses the antenna and associated electronics for surveillance purposes. Aerodynamic loads at critical design conditions are essential for the structural design of the radome. Aerodynamic studies were initiated through wind tunnel tests. The forces on the radome are obtained from the wind tunnel tests through an incremental approach. The location of the radome on the aircraft is also an important aspect, as wakes behind the radome may affect the tail plane. Wind tunnel tests were carried out in the open-circuit wind tunnel facility at the Indian Institute of Science, Bangalore, which has a 10.5 m² cross-section and the ability to generate wind speeds of 50 m/s. The Reynolds number per foot of chord is about 1 million.
II. WIND TUNNEL MODEL CONFIGURATION
A 1:18 scale model of the surveillance aircraft with a suitable interface structure was designed and fabricated [1] in a modular way such that the incremental forces could be measured. The model had provisions for mounting a six-component strain gauge force balance inside the fuselage. The model is held in the test section using a sting that is attached to the sector.
Fig. 1. Basic aircraft
The model also had provision for two locations of the radome on the fuselage. The wing effect is neglected, as the primary objective of the test is to estimate the pressure distribution on the radome for the structural design; the effect of the wing on the radome would be minimal as the radome is positioned 6 m above the wing. The forward and rear portions of the radome are populated with a larger number of pressure ports. Electronic pressure scanners were used to measure the pressure distribution. The coefficient of pressure (C P ) is computed as below:

C P = (P - P ref) / q   (1)

where P is the measured local static pressure, q is the dynamic pressure of the air, and P ref is the reference pressure.
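For illustration, the snippet below evaluates C P for a few hypothetical scanner readings; the air density, tunnel speed and port pressures are assumed values, not measurements from this test.

```python
import numpy as np

def pressure_coefficient(p_port, p_ref, q):
    """C_P = (P - P_ref) / q, with q the free-stream dynamic pressure."""
    return (np.asarray(p_port) - p_ref) / q

rho, v = 1.225, 50.0                    # assumed sea-level air density (kg/m^3) and tunnel speed (m/s)
q = 0.5 * rho * v**2                    # dynamic pressure, about 1531 Pa
p_ref = 101325.0                        # assumed reference (static) pressure, Pa
ports = np.array([101900.0, 101200.0, 100800.0, 101500.0])   # illustrative port pressures, Pa
print(pressure_coefficient(ports, p_ref, q))
```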
III. RESULTS AND DISCUSSION
Aerodynamic coefficients were estimated from the loads measured by the balance, using the free-stream dynamic pressure, a wing reference area of 1.116 m² and a mean aerodynamic chord length of 0.333 m. The tests were carried out for two locations of the radome, at 36 m and 31.5 m from the nose of the fuselage.
The incremental forces for the different configurations are shown below. The variation of the lift coefficient for the different configurations is shown in Fig. 5. The lift generated by the radome alone is calculated from the difference between the coefficients obtained for configuration 3 (C3) and configuration 2 (C2):

ΔC L (radome) = C L (C3) - C L (C2)   (2)

where ΔC L is the increment in lift coefficient. The contribution of the pylon alone is very marginal and is obtained as the difference between the coefficients of configuration 2 and configuration 1:

ΔC L (pylon) = C L (C2) - C L (C1)   (3)
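The incremental build-up can be written compactly in code. The configuration labels below follow the assumed build-up order (C1 fuselage alone, C2 fuselage with pylons, C3 fuselage with pylons and radome), and the coefficient values are illustrative only.

```python
# Illustrative lift coefficients for the three configurations (not measured data).
CL = {"C1": 0.012, "C2": 0.015, "C3": 0.042}

dCL_radome = CL["C3"] - CL["C2"]   # Eq. (2): increment attributable to the radome
dCL_pylon  = CL["C2"] - CL["C1"]   # Eq. (3): increment attributable to the pylons
print(f"radome increment: {dCL_radome:+.3f}, pylon increment: {dCL_pylon:+.3f}")
```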
Fig. 5. Variation of lift coefficient for different surfaces at β = 0°
The effect of the dome location on the fuselage is very important, as the dome may affect the effectiveness of the vertical fin, and the location is also important from the stability point of view. The minimum drag is noticed for angles of attack of 4° to 6°. This is because the dome is inclined at -4° with respect to the fuselage reference line. It is evident that as the angle of attack increases the drag increases, because the radome presents a larger frontal area. A drag increase is also noticed as the angle of sideslip increases. This could be due to interference between the fuselage and the pylon.
Fig. 8. Variation of drag coefficient with alpha for different angles of sideslip
The side force coefficient (Cy) variation with respect to sideslip angle for different angles of attack, for the two different locations on the fuselage, is shown in Fig. 9. β is the sideslip angle. Fig. 10 shows the variation of the side force coefficient for the different surfaces at an angle of attack of 0°. Due to the symmetry of the model, the coefficients pass through the origin. The contributions of the radome and pylon are small, and the main contribution to the side force is from the fuselage. Figs. 11 to 13 show the variation of pressure for different angles of attack. The red and blue lines of the figures indicate negative and positive relative pressures, respectively. The stagnation pressure is seen on the top part of the radome due to the negative incidence of the radome. A cusp is seen on the bottom part of the dome, which is due to the presence of the pylon. As the angle of attack increases, the stagnation point shifts to the bottom side of the radome. The C P variation for α = 0°, 4° and 12° for the top and bottom surfaces of the radome is shown in Figs. 14 and 15, respectively. A cusp is seen on the bottom portion of the dome, which is due to the presence of the pylon. The effect of the pylon is clearly seen for all angles of attack (Fig. 15).
Fig. 15. Variation of C P at different angles for the bottom surface
The coefficients obtained were extrapolated using the Prandtl-Glauert compressibility correction [3] for higher Mach numbers. The Prandtl-Glauert transformation is found by linearizing the potential equations associated with compressible, inviscid flow. The correction is given by

C p = C p0 / sqrt(1 - M²)   (4)

where C p is the compressible pressure coefficient, C p0 is the incompressible pressure coefficient, and M is the free-stream Mach number. This formula works up to low transonic Mach numbers, up to about M = 0.7.
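A minimal implementation of the correction is sketched below; the incompressible C p value is illustrative.

```python
import math

def prandtl_glauert(cp_incompressible, mach):
    """Scale an incompressible pressure coefficient to subsonic compressible flow."""
    if not 0.0 <= mach < 1.0:
        raise ValueError("correction is only meaningful for subsonic Mach numbers")
    return cp_incompressible / math.sqrt(1.0 - mach ** 2)

# Example: extrapolating a low-speed value of Cp0 = -0.85 to M = 0.7 gives about -1.19
print(prandtl_glauert(-0.85, 0.7))
```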
Fig. 16. Flow visualization using tuft technique
Flow visualization using the tuft technique [4] was carried out. The model was painted black, and around 850 tufts were stuck on to it. The tufts selected were cotton number 60 sewing threads cut into 25 mm lengths. Tests were carried out for different aerodynamic angles, and the visualization was video recorded and analyzed. The wakes behind the radome affect the vertical fin and hence the effectiveness of the tail plane. This may require additional surfaces to restore directional stability. Further study on this aspect will provide more insight.
IV. CONCLUSION
Low-speed wind tunnel tests have been carried out for a range of aerodynamic angles. All the plots display the expected general aerodynamic trends. The data generated are useful for the structural design of the radome structure of the surveillance aircraft. The radome location on the fuselage does not significantly affect the aerodynamic characteristics. The flow visualization study using the tuft technique reveals that the wakes behind the radome affect the vertical fin and hence the effectiveness of the tail plane. Positive aerodynamic pressure values act from outside towards inside, and negative values act in the opposite direction. In the absence of critical design cases, as a conservative measure, the extreme (maximum and minimum) pressure values generated over the range of aerodynamic angles are considered for the finite element structural analysis of the radome. | 2020-03-12T10:46:28.703Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "ef8364abbee3fc5b05585efaa743e381b442e363",
"oa_license": null,
"oa_url": "https://doi.org/10.35940/ijeat.c5652.029320",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5973b6b315bd9bfa2f90b55358d06799fd84ae63",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
258872393 | pes2o/s2orc | v3-fos-license | Recovery of the Ratio of Closure Time during Blink Time in Lacrimal Passage Intubation
(1) Background: We aim to find a novel blink parameter in nasolacrimal duct obstruction (NDO) patients and analyze parameters that could reflect subjective symptoms and objective indicators at the same time through a blink dynamic analysis. (2) Methods A retrospective study was conducted with 34 patients (48 eyes) who underwent lacrimal passage intubation (LPI) and 24 control groups (48 eyes). All patients’ blink patterns were measured using an ocular surface interferometer before and after LPI, including total blink (TB) and partial blink (PB) and the blink indices blink time (BT), lid closing time (LCT), closure time (CT), lid opening time (LOT), interblink time (IBT), closing speed (CS) and opening speed (OS). The tear meniscus height (TMH) was measured, and the questionnaire “Epiphora Patient’s Quality of Life (E-QOL),” which includes daily activity restriction as well as static and dynamic activities, was completed. (3) Results: Compared to CT and the ratio of CT during BT (CT/BT) in control (89.4 ± 20.0 msec, 13.16%), those in NDOs were longer (140.3 ± 92.0 msec, 20.20%) and were also related to TMH. After LPI, CT and CT/BT were recovered to 85.4 ± 22.07 msec, 13.29% (p < 0.001). CT and CT/BT showed a positive correlation with the E-QOL questionnaire score, particularly with dynamic activities. (4) Conclusions: CT and CT/BT, which are objective indicators associated with subjective symptoms of patients, are considered new blink indices for the evaluation of NDO patients with Munk’s score.
Introduction
There are two types of causes for epiphora: primary and secondary. Sung et al. developed the punctal reserve (PR) as a novel and clinically beneficial index for evaluating punctum parameters in nasolacrimal duct obstruction (NDO) patients using spectralis anterior segment optical coherence tomography (AS-OCT) scans [1].
There are various subjective ways for measuring the severity of epiphora, including Munk's score, the watery eye quality of life (WEQOL) questionnaire, and the TEARS (Times wiping eyes, Effects, Activities affected, Reflex tearing, Success) score [2,3]. The widely renowned Munk's score ranges from 0 to 5 and depicts the subjective discomfort of patients; however, there is a limit to just checking the daily numbers of wiping. To circumvent this constraint, the WEQOL questionnaire has been developed to assess the quality of life of epiphora patients [3]. The TEARS score gives a short and simple summary of the subjective and objective clinical severity of epiphora in patients, which can be utilized in a busy clinical setting [2]. TMH and PR are used to perform an objective evaluation of the amount of collected tears. Depending on the patient, the TMH and PR may objectively indicate the amount of collected tears, but further studies are necessary to investigate the subjective experiences of discomfort.
It is difficult to immediately detect subjective symptoms and objective indicators of tears due to the complexity of epiphora. Indicators that may simultaneously assess the subjective symptoms of patients and the objective parameters of epiphora are required for more precise diagnosis and evaluation. NDO patients have a peculiar blinking pattern in which they squeeze their eyelids, and we attempted to identify novel indices by analyzing their blinking patterns. When NDO patients close their eyelids, they typically squeeze them as well. Thus, the pattern of their blinking is peculiar. By analyzing the blink pattern, we aimed to determine whether it could serve as an additional diagnostic method for NDO.
A blink is an accurate reflection of the ocular surface and is related to the tear film's stability and the health of the ocular surface [4]. During a 20 s blink imaging analysis, blink patterns and blink dynamic indices have been identified in blepharospasm and facial nerve palsy [5,6]. In this study, we analyzed the blink dynamic indices of NDO patients using an ocular surface interferometer. In addition, we intended to identify important blink characteristics in NDO patients and assess how effectively subjective symptoms and objective indicators in NDO patients are reflected by these indices.
Participants
From July 2019 to January 2022, a retrospective study was performed on 34 patients (48 cases) who had undergone lacrimal passage intubation (LPI), as well as 24 age- and gender-matched controls (48 cases). To rule out other influences on eyelid blinking, patients with a history of eyelid surgery, ocular disorders, or related treatment were excluded from the study. Although the relationship between dry eye disease symptoms and signs is weak and variable [7], individuals with dry eye disease were excluded. Because age affects blinking [8], the control group comprised 24 subjects (48 cases) of similar age and gender, with a mean age of 59.5 years, drawn from patients in their 40s to 60s who visited the clinic for early cataract screening.
Quality of Life in Epiphora Patients (E-QOL) Questionnaire
This is an E-QOL questionnaire utilized in this study that focuses on a patient's symptoms in addition to the Munk scale for patients who are currently visiting the outpatient clinic for epiphora. We identified subjective indicators that are more specific with regard to limitations on everyday living, static activities, and dynamic activities.
The total E-QOL questionnaire score was 60. The degree to which epiphora interferes with daily living was rated out of 10 points, whilst the degree to which it interferes with activities was rated out of 5 points. The daily activity limitation questionnaire has a score of 20 and consists of four questions: to what extent does tearing limit your daily activities? (0 = no limitations, 10 = severe limitations) To what extent does epiphora restrict your interpersonal relationships? (0 = no difficulty, 5 = constant difficulty) and how irritated are you owing to your tears? (0 = no trouble whatsoever, 5 = constant trouble). The static activity restriction questionnaire has a score of 25 and consists of five questions: how much difficulty do you have with reading, driving during the day, driving at night, using a computer, and watching television due to tearing? (0 = no trouble whatsoever, 5 = constant trouble). The dynamic activity restriction questionnaire has a score of 15 and consists of three questions: how much difficulty do you have with the following tasks at work, at home, and outside due to tearing?
We define the blink cycle as the blink time (BT), which is the duration of the upper eyelid closing action, and the interblink time (IBT), which is the duration of the upper eyelid remaining open. BT was defined as the sum of the lid closing time (LCT-time taken by the interpalpebral fissure (IPF) to reach the maximum closure from the minimum closure), lid opening time (LOT-time taken by the upper eyelid to change from the minimum to maximum IPF), and closure time (CT-time that the upper eyelid remained completely closed).
Using these data, the blink index curve could be derived. For both the LCT and LOT periods, at least two time points were selected and calculated to determine the blink dynamic curve's curvature. Closing speed (CS) and opening speed (OS) were defined as the dynamic index of an eyelid's closure and opening, respectively. The CS and OS were computed using IPF per LCT (mm/s) and IPF per LOT (mm/s), respectively. The measurements were taken on all blinks that occurred within 20 s, and the results were averaged. The adjusted IPF versus time graph was used to generate each blink curve ( Figure 1). For data gathering and analysis, a desktop computer running Windows 11 and video applications were utilized. Every measurement was taken by a single examiner (Y.K.).
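The following Python sketch illustrates how the blink indices could be derived from a single blink's IPF-versus-time trace. The closed-eye threshold, sampling rate and trace values are assumptions chosen for illustration; they do not reproduce the interferometer software's actual processing.

```python
import numpy as np

def blink_indices(t, ipf, closed_threshold=0.5):
    """Return LCT, CT, LOT (ms) and CS, OS (mm/s) from one blink's IPF-vs-time trace."""
    t, ipf = np.asarray(t, float), np.asarray(ipf, float)
    closed = np.where(ipf <= closed_threshold)[0]   # samples with the lid treated as fully closed
    i_close, i_open = closed[0], closed[-1]
    i_start, i_end = 0, len(ipf) - 1                # trace assumed to start and end fully open

    lct = (t[i_close] - t[i_start]) * 1000.0        # lid closing time
    ct  = (t[i_open]  - t[i_close]) * 1000.0        # closure time
    lot = (t[i_end]   - t[i_open])  * 1000.0        # lid opening time
    cs  = (ipf[i_start] - ipf[i_close]) / (t[i_close] - t[i_start])   # closing speed
    os_ = (ipf[i_end]   - ipf[i_open])  / (t[i_end]  - t[i_open])     # opening speed
    return lct, ct, lot, cs, os_                    # note BT = LCT + CT + LOT

# Toy trace sampled at 30 frames/s (IPF values in mm; illustrative, not study data)
t = np.arange(21) / 30.0
ipf = np.array([8.3, 8.3, 6.0, 3.0, 0.2, 0.1, 0.1, 0.2, 2.5, 5.0, 7.0,
                8.0, 8.2, 8.3, 8.3, 8.3, 8.3, 8.3, 8.3, 8.3, 8.3])
print(blink_indices(t, ipf))
```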
The subjective satisfaction of patients was assessed following LPI; patients whose TMH was less than 300 µm and who passed the irrigation test were deemed successful.
Surgical Technique
As previously described, a single surgeon (H.L.) performed all of the procedures [9]. Using dacryoendoscopy and the insertion of a silicone tube, surgical treatment was performed under general or local anesthesia. After extending the punctum with the punctum dilator and spring scissors and inserting the 0.9 mm diameter probe tip and bent type dacryoendoscope (RUIDO Fiberscope, FiberTech Co., Tokyo, Japan) through the punctum, the internal conditions of the lacrimal duct system were evaluated by passing saline through the upper and lower canaliculus, lacrimal sac, lacrimal duct, and inferior meatus. The obstructive lesion was dislodged by a sheath guided by endoscopy and perfusion solution pressure with a syringe attached to a probe. Under visual guidance, a 0.94 mm-diameter bicanalicular silicone tube (Yoowon Meditec, Seoul, Republic of Korea) was inserted into the sheath. The sheath and tube were retrieved, and both ends of the tube were locked and fixed near the inferior meatus. The tube was removed six months later. According to the clinical course after surgery, levofloxacin 0.5% (Cravit®; Santen, Osaka, Japan) and fluorometholone 0.1% (Flumetholone®; Santen, Osaka, Japan) were prescribed.
Statistical Analysis
SPSS for Windows, version 27.0, was used for all statistical analyses (IBM Corp., Armonk, NY, USA). Parameters were compared using the paired t-test, Mann-Whitney test, Kruskal-Wallis test, and Chi-square test. A p-value of less than 0.05 was considered statistically significant. Using the Chi-square test, we reported dichotomous outcomes as odds ratios (ORs) and continuous outcomes as the mean and their respective 95% confidence intervals (CIs). Using correlation analysis, the relationships between the categorical variables were examined.
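As an illustration of the paired and unpaired comparisons reported, the snippet below runs a paired t-test on made-up pre/post closure times and a Mann-Whitney test against an independent control sample; it is a sketch of the type of test used, not a re-analysis of the study data.

```python
import numpy as np
from scipy import stats

# Illustrative closure-time (CT) values in msec; not data from the study.
ct_pre  = np.array([150, 210, 95, 300, 120, 140, 180, 110])   # same eyes before LPI
ct_post = np.array([ 90, 100, 80, 110,  85,  95,  88,  82])   # same eyes after LPI
ct_ctrl = np.array([ 85,  92, 70, 105,  95,  88,  91,  80])   # independent controls

t_stat, p_paired = stats.ttest_rel(ct_pre, ct_post)       # paired t-test, pre vs post LPI
u_stat, p_mwu = stats.mannwhitneyu(ct_pre, ct_ctrl)       # Mann-Whitney, NDO vs control
print(f"paired t-test: t = {t_stat:.2f}, p = {p_paired:.4f}")
print(f"Mann-Whitney:  U = {u_stat:.1f}, p = {p_mwu:.4f}")
```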
Results
There were more women than men in the study (male:female = 8:26). The mean age was 59.50 ± 9.53 years, and the direction of occurrence was comparable. The mean Munk scale was 4.04 ± 1.35, the mean Schirmer test value was 34.77 ± 4.62 mm, and the mean BT was 7.74 ± 2.80 s. The average score on the E-QOL questionnaire was 34.81 ± 17.09 out of 60, the score for daily activity restriction was 13.67 ± 4.50, the score for static activity limit was 13.65 ± 7.58, and the score for dynamic activity limit was 8.92 ± 4.50 (Table 1). Compared with normal controls, individuals with NDO had a longer CT, and in the case of unilateral NDO, they had a CT that was approximately 60% longer than that of the opposite eye. After LPI, this decreased to near-normal levels. Comparing CT measurements before and after LPI reveals that it reduced by around 60%, leading to an overall decline in BT. After LPI, the TMH had fallen by more than half in all groups. The preoperative CT was longer in the experimental group than in the control group (89.4 ± 20.0 msec versus 140.3 ± 92.0 msec, p = 0.001), but there were no other differences. After surgery, the BT (694.4 ± 135.5 msec, 642.4 ± 116.0 msec, p = 0.032) and the CT decreased (140.3 ± 92.0 msec, 85.4 ± 22.7 msec, p < 0.001), the TMH decreased significantly from 451.5 ± 254.3 µm to 213.7 ± 112.3 µm, the tear film lipid layer thickness remained unchanged, and the BT, CT, CS and OS values were similar to those of the control group (Table 2).
According to the graph of blink dynamics prior to and after surgery, the IPF along the Y-axis fell to 7.61 mm from 8.26 mm. CT reduced from 140.3 msec to 85.4 msec, which is near normal, and BT decreased from 694.4 msec to 642.4 msec, which is also close to normal (Figure 1). The normal control group had a CT during BT ratio of 13.3%, while patients with NDO had a ratio of 19%. After surgery, the CT during BT ratio returned to normal levels (Figure 2A). Before and after surgery, both unilateral and bilateral patients had the same difference, and it can be seen that they returned to normal after surgery; therefore, the CT during BT is significant (Figure 2B,C). Interestingly, as reported by Su et al. in the 2018 dry eye blink research, CT was also extended in dry eye disease [10]. Secondary epiphora due to dry eye disease similarly lengthened the CT, but when assessed by the ratio in the blink time, it was 7.3% in dry eye patients and 19.1% in patients with NDO, allowing the distinction between the two groups. Via an ocular surface-reflecting blink movement study, we sought to identify new indicators that can simultaneously reflect subjective and objective indicators. The blink parameters associated with subjective indicators were BT, CT, and CT during BT; the longer BT and CT were, the greater the ratio of CT during BT, and the stronger the correlation between the E-QOL questionnaire and the objective blink parameters. Objective criteria TMH and punctual reserve showed a positive correlation with the E-QOL questionnaire (Figure 3).
Discussion
Several attempts have been undertaken to objectively quantify the quantity of tears shed by individuals who complain of various tear symptoms. TMH, punctal diameter and PR are used to objectively measure the number of stagnant tears [1] as well as the irrigation test, dacryocystography, and dacryoendoscopy, which are used to figure out the precise etiology of NDO. A blink accurately reflects the ocular surface and is associated with the stability of the tear film and the health of the ocular surface [4]. Tse et al. tried to evaluate the clinical-anatomical assessment of NDO patients, including blink dynamics. They suggest the BLICK mnemonic as a useful adjunct to the workup of epiphora (Blink dynamics, Lid malposition, Imbrication, Conjunctivochalasis, and Kissing puncta) [11]. Here, we mainly focus on eyelid blinking in NDO patients because it is believed that the discomfort of retained tears has an effect on eyelid blinking.
The authors' blink profile analysis revealed that NDO patients had a much longer CT and CT during BT than normal controls, and even if only one eye had NDO, the other eye's indices were significantly longer, too. In other words, the eyes were closed for an extended period of time due to greater tear retention caused by NDO, and it returned to normal after LPI. The TMH had decreased after LPI, and the patient's CT and CT during BT normalized after surgery. During eyelid closure or opening, positive/negative pressure spikes are formed throughout the lacrimal duct system, and tears flow via the canaliculus to the lacrimal sac and lacrimal duct, according to Sato et al. [12]. It is believed that longer CT and CT during BT in patients with NDO close eyelids more tightly than usual because tears did not drain under the pressure of normal blinking. They have to squeeze their eyelids to drain the tears. Moreover, CT and CT during BT have demonstrated a positive connection with objective markers (TMH, PR) used to evaluate the absolute amount of tearing in NDO patients (Figure 3, p < 0.05). Hence, CT and CT during BT are new measures that can establish the severity of tear retention in patients with NDO.
After LPI, patients' subjective satisfaction may have increased as a result of higher contrast sensitivity, improved vision, decreased blink frequency, and enhanced optical quality [13,14]. Decreased IPF was noted after surgery alongside CT and CT during BT. The exact explanation of this is uncertain; however, it is assumed that the eyes open less due to relative dryness once the tear retention is resolved by LPI.
For a long time, Munk's scale has been used to measure the level of subjective discomfort. Unfortunately, the Munk scale now in use only represents the number of tears wiped away on a scale of 0 to 5 points, making it difficult to depict the level of discomfort caused by tears in the patient's everyday life. More recently, the TEARS scale and WE-QOL questionnaire [2,3] have been developed. However, in clinical settings, there are a surprising variety of cases in which the objective quantity of stagnant tears (e.g., TMH) and the patient's subjective tear-induced symptoms (e.g., subjective satisfaction, decreased vision) are unrelated. For more precise diagnosis, evaluation and proper treatment of NDO patients, parameters that might reflect subjective symptoms and objectively measured tear amount are becoming necessary. Additionally, the WEQOL questionnaire screened for irritated skin due to tears and primarily focused on unpleasant emotions [3]. On the other hand, there is a distinction in that the E-QOL questionnaire distinguished between static and dynamic activities when confirming the discomfort caused by tears during diverse activities with better specificity. The E-QOL questionnaire also showed a positive correlation with objective parameters, TMH and punctual reserve. Additionally, it also has strong correlation with the other objective parameter, CT, CT during BT (Figure 3, p < 0.05).
In this study, 33.3% of patients complained of subjective discomfort due to tears despite their TMH being below 300 µm. Their CT and CT during BT were higher (175.00 msec) than in the normal group (89.4 msec). In addition, among patients with no subjective discomfort, 12.5% showed a TMH of 300 µm or above. There can therefore be a considerable gap between objective indicators and subjective symptoms of tearing, with complicated interactions among the multiple contributing elements. Since TMH is only the sum of tear secretion and excretion, patients suffering from both dry eye disease and NDO may not demonstrate a high TMH. Dry eye syndrome is a typical example of a disorder that causes a significant amount of tear evaporation; when NDO is combined with this condition, the TMH may appear normal, being lower than would be expected in NDO patients with normal tear secretion, and thus mask the obstruction. This is why we investigated clinical parameters of subjective discomfort (the E-QOL questionnaire) together with objective parameters (TMH, CT, CT during BT).
Therefore, considering that CT and CT during BT demonstrated a favorable connection with the E-QOL questionnaire scores, CT and CT during BT could be considered as a complementary tool to TMH or the E-QOL questionnaire scores of epiphora patients in diagnosis based on this study. Among them, this was strongly associated with dynamic activities, particularly subjective discomfort, and the correlation between the index value and the CT during BT value showing that patients had poorer quality of life due to tears was significant (p = 0.001, R = 0.489). Unexpectedly, there was no correlation between Munk's scale and the blink index and E-QOL questionnaire score measured by the investigators. Considering that CT and CT during BT have a correlation with subjective symptoms, it is expected that further studies comparing Munk's scale and blink index with the E-QOL questionnaire will be conducted on a larger number of patients. In addition, CT and CT during BT were correlated with TMH and PR but not with the irrigation test, which could indicate the severity of epiphora symptoms. Future applications may include analyzing instantaneous acceleration and electrode signals during blinking and analyzing them in greater detail than 20 s and 600 frames using a method of tracking eyelid blinking by utilizing continuous blink tracking methods (e.g., a wearable blink tracking device) [4].
CT also demonstrated a protracted pattern of secondary tears resulting from dry eye disease; however, CT during BT was distinct from 19.3% in NDO patients and 7.3% in dry eye patients. According to Su et al., patients with dry eye had longer LCT, LOT, CT, PB, and IBT. Hence, patients with dry eyes blink frequently and slowly. In contrast, LCT, LOT, PB, and IBT did not differ significantly between NDO patients. Hence, it may be noticed that NDO patients close their eyes longer than those with dry eyes, but other blinking parameters remain the same. This allows us to differentiate this from CT that has been prolonged by secondary tears. Furthermore, additional studies with a larger sample size are anticipated to distinguish between NDO and dry eyes, as well as tears caused by eyelid malposition and tears caused by external stimuli. In addition to blepharospasm, facial palsy, and NDO, it is anticipated that 20 s of blinking image analysis will substantiate the blink pattern of other ophthalmic diseases.
Conclusions
CT and CT during BT, which are objective parameters that can also be associated with patients' subjective epiphora symptoms, are considered to be useful parameters for evaluating NDO patients in clinical settings. Informed Consent Statement: Patient consent was waived because this study reported on the results of an observational study and complied with the STROBE guidelines.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to privacy.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-05-25T15:05:35.682Z | 2023-05-23T00:00:00.000 | {
"year": 2023,
"sha1": "39607035071de84e8eed50928b800866ed32fc27",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/jcm12113631",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e87f28a5f058eafe7391378f9fcea0c4be94e55",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218972956 | pes2o/s2orc | v3-fos-license | Physical violence during pregnancy in Peru: Proportion, geographical distribution, and associated factors
To estimate the proportion, geographical distribution and sociodemographic factors associated with physical violence during pregnancy between 2016 and 2018. Secondary analysis of the Demographic and Family Health Survey, which asked respondents whether they had experienced physical violence during pregnancy in the last 12 months. The proportion of physical violence was 9.9% [95% CI: 9.6-10.4%] in 2016, 9.2% [95% CI: 8.8-9.6%] in 2017 and 8.6% [95% CI: 8.3-8.9%] in 2018. The regions with the highest proportion were Puno, Arequipa and Apurímac during the 3 years. Among the associated factors, residence in rural areas (PR: 0.49; p = 0.011) and being "very rich" (PR: 0.63; p = 0.029) were protective, while having no education (PR: 1.87; p = 0.014), cohabiting (PR: 1.51; p = 0.001) or separated (PR: 3.56; p < 0.001) marital status, age between 40 and 49 years (PR: 1.79; p = 0.012) and alcohol consumption by the partner (PR: 1.61; p < 0.001) were risk factors. The proportion of violence in Peru has been decreasing. The factors that predispose to this phenomenon are the wealth index, educational level, marital status, and the age of the pregnant woman.
Introduction
Violence during pregnancy is a global health problem that usually occurs in poverty settings, resulting in not only physical damage, but also psychiatric morbidity, with a higher proportion of disorders such as post-traumatic stress and postpartum depression 1-3. The proportion of this type of violence differs between countries, reaching 30% in European countries 4-7 and 36% in African countries 8. In Peru, few studies have focused on aggression during pregnancy; some report that in hospitals in the cities of Moquegua 9 and Ica 10 the proportion reaches 9.5% and 28%, respectively; however, there is no report estimating these values at the national level.
Due to the period in which violence occurs, there are related obstetric repercussions, such as the risk of preterm delivery, low birth weight, small-for-gestational-age newborns and insufficient prenatal controls 11. These outcomes are precisely those that could be avoided with timely and specific interventions in high-risk populations, which, according to previous studies, are not yet effective 12. Therefore, the objective of this study is to identify the proportion and geographical distribution of cases of violence during pregnancy in Peru and their associated sociodemographic factors during the period 2016 to 2018.
The sample, as specified in the technical sheet, was probabilistic, stratified, three-stage and self-weighted.
The proportion of "physical violence during pregnancy" was evaluated using the affirmative or negative answer to question 1019 of the family violence survey. Likewise, it was also evaluated whether the participant sought help based on her answer to the question "When they have mistreated you, have you asked for help from people close to you?" (question 1022 of the questionnaire, variable D119Y). The following were included as factors: the region of origin (variable HV023), maximum level of education approved (HV106), area of residence (HV025), place of residence (HV026), marital status (HV115), wealth index (V190), alcohol consumption by the couple (D113) and age (HV105), the latter being categorized in periods of 10 years. All variables were available in the 3 years evaluated.
The analysis was carried out in the STATA version 14 software, where the stratification, clusters and weighting factors established by ENDES were taken into account in order to estimate all the results according to the complex sampling, using the svy commands for the analysis. Database merging was performed using the linking variables HHID, HVIDX and CASEID. Proportions with their respective confidence intervals were estimated for each of the years evaluated; likewise, when analysing the proportion of violence by region, the confidence intervals and the coefficient of variation (CV) were considered, treating estimates with CV > 15% as referential.
Associated factors were evaluated using a generalized linear model of the Poisson family for complex samples with a log link, reporting the direction of association using the adjusted prevalence ratio (aPR) and its respective 95% confidence interval. The geographical distribution was evaluated using the QGIS software, taking the geographic reference of the National Institute of Statistics and Informatics and establishing cut points for the proportions generated from the maximum and minimum values found in the 3 years of study.
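A simplified sketch of such a model in Python is shown below. It uses toy data and plain (unweighted) records, ignoring the stratification, clustering and weighting of the ENDES complex design that the original Stata svy analysis accounts for; it only illustrates how exponentiated coefficients of a Poisson GLM with a log link yield prevalence ratios.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy records standing in for ENDES data; variable names are illustrative, not the ENDES codes.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "violence": rng.binomial(1, 0.1, n),   # physical violence during pregnancy (0/1)
    "rural":    rng.binomial(1, 0.3, n),   # rural residence (0/1)
    "no_educ":  rng.binomial(1, 0.1, n),   # no formal education (0/1)
})

# Poisson GLM with log link and robust (HC0) standard errors;
# exponentiated coefficients approximate adjusted prevalence ratios.
model = smf.glm("violence ~ rural + no_educ", data=df,
                family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(model.params))              # adjusted prevalence ratios
print(model.conf_int().apply(np.exp))    # 95% confidence intervals on the PR scale
```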
The analysed databases are freely accessible, and the records did not contain identifiable information on the participants, so the approval of an ethics committee was not required. Since the present investigation derives from an undergraduate thesis 13, it was reviewed and approved prior to its execution by the research committee of the obstetrics school of the Universidad Nacional Mayor de San Marcos, as part of the university thesis evaluation process.
Table 2 reports the factors associated with violence during pregnancy.
Regarding the highest level of education attained, the proportion of violence increased with lower academic level: pregnant women without any education were up to 87% more likely to experience violence compared with those with higher education. Also, residing in rural areas reduced the probability by 51%. Another factor with associated categories was marital status: taking married status as the reference, being a cohabiting partner or being separated increased the probability of violence.
Regarding age, it was found that those between 40 and 49 years old have a greater probability (79%) of violence. When evaluating the wealth index, it was found that those who are categorized as "very rich" have a 37% less probability of violence, while alcohol consumption increases this probability (61%).
Finally, the proportion of violence during pregnancy decreased fairly homogeneously across most of the evaluated variables, except for the area of residence, where there was a greater reduction among those living in urban settings (-1.58%) than in rural settings (-0.76%), and for the wealth index, where those categorized as "very poor" showed an increase in violence during this period (+0.04%).
Discussion
The proportion of violence during pregnancy found here is similar to that reported in previous studies 4, although studies that have developed specific screening strategies during prenatal care have found values close to 30% 5,7. In Peru, the proportion of violence during pregnancy has been decreasing, which may be due to the policies implemented during 2015 and 2016 against gender violence. These sought to modify sociocultural patterns that legitimize violence (strengthening capacities in schools, promoting information in the media, empowering community agents and involving new actors), to guarantee access to services (generating a comprehensive process for prevention, care and protection of assaulted persons, strengthening the capacities of operators, re-educating aggressors and establishing an information system) and to establish legal sanctions for those who commit violence against women 14.
The proportion of physical violence during pregnancy was also evaluated in each region, although a direct comparison is difficult to establish owing to the lack of information in this regard. However, the national observatory reports that the highest proportions of gender violence are found in Apurímac, Junín, Puno and Cusco, regions that in the present study showed a high proportion of violence during pregnancy across the 3 years evaluated 15. It was also observed that women from rural areas and those with an extremely poor wealth index showed only small reductions in the proportion of violence, or even increases. This is because low-income participants are less likely to receive optimal prenatal care, either because of the distance to the facility or because of the control that the partner usually exerts in situations of economic dependency, thereby preventing timely screening and intervention 16,17. Regarding the associated factors, the pregnant woman's education proved decisive in the presence of violence, with a higher academic degree reducing the probability of physical abuse, which coincides with previous national studies 18.
Likewise, being divorced or separated proved to be a strong risk factor. In this regard, a systematic review indicates that much of the violence begins in the period of separation 19; therefore, given the cross-sectional nature of the study, it is possible that the rupture of the relationship occurred after the end of the pregnancy. On the other hand, women who were pregnant at over 40 years of age showed a higher proportion of violence, an age range in which there is a low probability of requesting help from the family environment, which increases the risk of repeated episodes of violence 7.
Finally, the present study identifies populations susceptible to violence during pregnancy, for whom it is essential to strengthen screening for violence; although such screening is contemplated in the perinatal card, in practice it amounts to a superficial inquiry, so it is also necessary to evaluate the effectiveness of the tools used 20. Likewise, it is recommended that the consultation on violence address previous years 6 and that, where violence is identified, work with family members be strengthened 7.
Regarding the limitations of the study, information was collected by self-report through a single question, which may lead to an underestimation of the proportion found; this could be compounded by social desirability bias, whereby participants respond according to what is socially acceptable. Likewise, the absence of temporal records of the violent events and/or recall bias among participants does not allow causality between the variables to be determined. However, we consider that the findings provide an approximation of the situation of violence during pregnancy at the national level, allow at-risk populations to be estimated, and show how these indicators have varied in recent years.
We conclude that the proportion of violence during pregnancy has been decreasing and reached 8.64% in 2018, although there are regions in the south of the country that maintain values above 10%. Among the factors consistently associated over time are the wealth index, the educational level of the pregnant woman, marital status, and the age range of the pregnant woman.
14. Bellido PC, Alegre MH, Chanduví JS, Valdivia AV, Ríos AV. Plan nacional contra la violencia de género 2016-2021. Ministerio de la Mujer y Poblaciones Vulnerables.
| 2020-05-29T13:05:49.738Z | 2020-05-29T00:00:00.000 | {
"year": 2020,
"sha1": "879ad6e6ee7dc92008032cfa1ab353c0c8820378",
"oa_license": "CCBY",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2020/05/29/2020.05.27.20115030.full.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "2578f51e1ad86b93f67e89a3e3369ca9d1cb3d8c",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226068232 | pes2o/s2orc | v3-fos-license | Issues bordering the French language in Sub-Saharan Africa: Nigeria as a case study
Aim: The study examined issues surrounding the French language in Sub-Saharan Africa, using Nigeria as a case study. We look at some explanations of why the French language has or has not been adopted in some African countries, particularly Nigeria. Method: A qualitative approach was used to achieve the above objective. Findings: The results suggest that the best way to tackle the issues raised in this paper is to pay attention to psycho-pedagogic issues while also considering the other issues. To secure the survival of the French language in Sub-Saharan Africa, especially in Anglophone Africa, French should be made compulsory at all levels, especially in public schools in Nigeria. This paper has called for a re-examination of the French language at all education levels, especially in Nigeria. Implications/Novel Contribution: This paper discusses the issues surrounding the French language in Sub-Saharan Africa, in particular West Africa, using Nigeria as a case study. It shows that in Sub-Saharan Africa, particularly in the Anglophone countries, lack of recognition of the language's value is one of the main factors militating against it.
INTRODUCTION
Sub-Saharan African countries are those located south of the Sahara Desert. According to the United Nations, Africa has 54 countries. North Africa has 5 countries, Egypt, Libya, Tunisia, Algeria and Morocco, which are not in the Sub-Sahara. The remaining 49 countries lie south of the Sahara and are termed Sub-Saharan African countries. They are Angola, Benin, Botswana, Burkina Faso, Burundi, Cape Verde, Cameroon, Central African Republic, Chad, Comoros, Democratic Republic of the Congo, Republic of the Congo, Cote d'Ivoire, Djibouti, Equatorial Guinea, Eritrea, Ethiopia, Gabon, Gambia, Ghana, Guinea, Guinea-Bissau, Kenya, Lesotho, Liberia, Madagascar, Malawi, Mali, Mauritania, Mauritius, Mozambique, Nigeria, Niger, São Tomé and Príncipe, Senegal, Seychelles, Sierra Leone, Somalia, South Africa, South Sudan, Sudan, Swaziland, Tanzania, Togo, Uganda, Zambia and Zimbabwe (Abimbola, 2016; Danladi, 2013; Hilao, 2016).
In this paper we also look at Sub-Saharan Francophone African countries, where French is taught in schools and spoken as an official language, and Sub-Saharan Anglophone African countries, where French is taught in schools as a foreign language or considered a second official language. The advent of French in Sub-Saharan African countries dates from the late 19th century to the mid-20th century, during the colonial era. The long history of interaction between the French and Africans led Africans to use French, and even after colonization French remains relevant in such African nations today. French has played a very significant role in the development of these African countries. The introduction of French in Anglophone countries is the result of numerous factors, among them political and geographical considerations (Danladi, 2013; Wu, 2017).
LITERATURE REVIEW
It is not possible to consider the issues bordering French in Sub-Saharan African countries, using Nigeria as a case study, without taking into account the educational systems in Francophone and Anglophone countries. In Francophone countries, the attitude of the French was to create French citizens rather than French-speaking Africans, which led to French being taught in those countries as the official language (Aladekomo, 2004; Boonyarattanasoontorn, 2017). In Anglophone African countries, the British saw English as a world language, not necessarily geared to any one part of the English-speaking world. In Francophone Sub-Saharan African countries the school system is divided into two parts: the primary section consists of 6 years (two cycles of 3 years each) ending in the Certificat d'Etudes Primaires Elementaires (CEPE) examination, while the secondary section consists of 4 years ending in the Brevet d'Etudes du Premier Cycle (BEPC) examination, or 7 years ending in the baccalaureat examination, which is a prerequisite for entrance into universities. In Anglophone Sub-Saharan Africa there are various types of school system (Araromi, 1996). It is divided into a primary section of 5 to 9 years and a secondary section of 5 to 6 years (Junior Secondary School (JSS), Senior Secondary School (SSS), or Technical/Vocational Education) leading to Ordinary Level (O/L) passes in the Senior School Certificate Examination (SSCE) or General Certificate Examination (GCE). After this come advanced examinations such as the Higher School Certificate or GCE at Advanced Level (A/L), diplomas of various kinds, and the Nigeria Certificate in Education (NCE), leading to 4-7 years of university education. Two Anglophone countries, Sierra Leone and Liberia, ravaged by war or epidemics, are restructuring their educational systems to fit a standardized form. Sierra Leone follows a 6:3:3:4 system, that is, 6 years of primary school, 3 years of Junior Secondary School, 3 years of Senior Secondary School and 4 years of university. In Liberia, the system of education is 6:3:3, that is, 6 years of primary school, 3 years of junior school and 3 years of senior/high school, followed by vocational (teacher training), vocational/technical education, or college (university). Ghana runs a 6:3:3:4 system, that is, 6 years of primary school, 3 years of Junior High School or middle education (where French is taught), 3 years of Senior Secondary School (SSS), and 4 years of tertiary education (though some tertiary programmes last 3 to 4 years and some courses can last up to 5 or 7 years); the secondary stage can also take the form of technical education.
The explanation above is given in order to establish when French was introduced into the educational systems of Sub-Saharan Francophone and Anglophone Africa as an official language, second language or second official language (Igboanusi & Pütz, 2008). In Sub-Saharan Francophone African countries, French is an official language and is taught in schools as a second language (L2), with pupils moving from L1 to L2. In Anglophone countries like Nigeria, Kenya, Ghana, Gambia and Sierra Leone, the mother tongue or English is used as the medium of instruction in primary schools because pupils speak a wide variety of mother tongues. In Nigeria especially, French is not taught at all in government primary schools (Federal and State), only in a few private schools. It is introduced in only a few government secondary schools and private schools because of the non-availability of teachers of French. This is one of the issues bordering the language in Nigeria (Ihenacho, 1981).
Some people believe that a large proportion of Africans speak French, but the number of Africans who speak, read and write French is not up to 20%; at present only a tiny percentage of the millions of people in these countries can read, write and speak it. Africa may in the future be the continent with the greatest number of French speakers, because as more people become educated, the number of French speakers will grow.
Issues bordering the French Language in Sub-Saharan Africa
Teaching methods
In Francophone Sub-Saharan African countries, the method of teaching French varies from kindergarten to university level. In some countries, such as Togo, Chad, Senegal, Benin, Gabon and Cote d'Ivoire, the mother tongue or local language is taught at an early stage; French is taught much later, and the policy may not be fully implemented. Children are immersed in their mother tongues or native languages, which makes it difficult to learn French. In countries such as Burkina Faso, Cameroon, Chad, Eritrea, Burundi, Guinea-Bissau, Mali, Malawi, the Democratic Republic of Congo (North Kivu), Mozambique and some parts of Northern Nigeria, teaching suffers for several reasons, such as poverty, declining population growth, low school enrolment, age, internal instability, conflict and war, weak governance and corruption, and the methods of teaching French are poor (Nunan, 2004). At the advanced level, the audio-visual or laboratory method is used, though students may still find it difficult to comprehend the lessons. The laboratory method is based on repetition. In Ghana, for instance, the method of teaching French is based on repetition and translation of words.
Teaching facilities
Some schools in Francophone and Anglophone West Africa have no facilities, or inadequate facilities such as satellite, television, projectors, VCD players and radio, for teaching French. The textbooks used in some schools are either outdated or difficult to find in bookshops. Even when French textbooks are available in school libraries, students may not have access to them, or the books may be pitched too high for the students' level, so that students find them difficult to understand, either because the teacher does not teach the language well or because the culture is too foreign to them.
Societal acceptance
French is an L2 in Sub-Saharan West Africa because it is either a second or a foreign language. French is more widely accepted in Francophone West Africa (in parastatals, establishments, the banking sector and so on) because the citizens were colonized by France and it is the official language. In the markets and at the lower levels of society, African dialects or African French (French mixed with African words) are spoken.
In Ghana, according to Nunan (2004), "... parents and students recognize the opportunities created by sufficient knowledge of French. Most students, teachers and other stakeholders recognized the importance of the French language ..." The authors further stress that even though there is a growing demand for the French language, limited access to quality French teaching and learning and the non-availability and poor deployment of teachers hinder higher levels of participation by students at all levels in taking French as a subject or course in the public education system.
The Case of Nigeria
Nigeria is an English-speaking country because she was a British colony before gaining independence in 1960. Nigeria is a country in Sub-Saharan West Africa surrounded by French-speaking West African countries: the Republic of Benin to the west, Cameroon to the east and the Republic of Niger to the north. French is a foreign language in Nigeria, foreign because it comes from France. It is a language that the majority of the Nigerian population, educated and uneducated alike, do not speak or know anything about. French is a language learnt in classrooms in Nigeria. It is spoken mostly by: i. only qualified teachers in higher institutions; ii. some secondary school teachers who have travelled abroad or rubbed shoulders with native speakers;
iii. students or pupils who were born and bred in Francophone countries but have settled in Nigeria; iv. people who have travelled to France or to Francophone countries (the third and fourth categories can speak French but cannot write a single correct sentence in it); v. some students studying in higher institutions.
Some speakers in the fifth category exhibit language interference while speaking; for example, a child will naturally pronounce "table" in French as in English.
French is vital to Nigerians because of the country's geographical location. In Nigeria, the method of teaching French from private kindergartens and primary schools to private and public junior secondary schools is at the discretion of the French teacher, either because existing syllabuses are not detailed enough to enable students at those levels to communicate or because the teacher is not aware of any syllabus; it also depends on the availability of a French teacher. In schools where French is taught, the translation method is used. Students offer French only because it is compulsory, though French is not taught in all junior schools. According to Anyanwu, Ezejiofor, Igweze, and Orisakwe (2018), students translate whole passages without grasping an iota of their meaning. The teacher translates words while teaching, and most of the time pupils and students are given exercises and assignments that are either comprehension passages to translate or material unrelated to the lesson. At the tertiary level, French is taught in French departments; methods vary but are geared towards one goal: exposing students to the socio-cultural and political lives of French-speaking people and equipping them with adequate knowledge to communicate with the Francophone world.
On 14 December 1996, the late Nigerian Military Head of State, General Sani Abacha, announced through his Minister of Education at the Nigerian Institute of International Affairs that French would soon be introduced as a second official language in the country, to be studied compulsorily at the primary, secondary and post-secondary levels of the educational system. The policy, according to the Minister, would in a short time make Nigeria bilingual (Igboanusi & Pütz, 2008). They further note that Abacha's speech and subsequent decision led to the formal recognition of French, for the first time, as a second official language in Nigeria. Immediately after that declaration, recognition and adoption of the policy, elaborate preparations were made for its commencement: eminent French scholars from Nigerian universities were assembled and mandated to write the curriculum for the subject in primary and post-primary institutions nationwide. Unfortunately, after Abacha's demise, the policy was completely abandoned and has remained only on paper ever since.
On 30 January 2016, the Minister of State for Education, Professor Anthony Anwukah, announced again that the Federal Government would soon reintroduce French as the nation's second official language. The reason French was introduced in Nigeria by the Federal Government is to enable the country to interact more easily and effectively with its Francophone neighbouring countries and to participate effectively in international conferences and seminars. The declaration led some policy makers to include it in the National Policy on Education (NPE) as a teaching subject.
It is worth noting that the status and value accorded to French in the Nigerian educational system by educational policy makers has not been very satisfactory compared with other subjects or courses offered at the three levels of education. On the status of French language education in Nigeria as stated in the NPE, Aladekomo (2004) observes that it is mere paperwork; to him, it was only partially defined. He opines that French is not only optional at secondary schools but also a non-vocational subject. Many primary and secondary schools (public and private) in Nigeria do not offer French. Even those Nigerians who study French and hold academic or professional degrees in the language usually do not find many people to speak it with. In secondary schools only a few pupils take an interest in French, owing to the fact that teachers are either not available to teach it or are available but do not teach it well. Araromi (1996), cited by Igboanusi and Pütz (2008), accurately describes the plight of French students and of French as a subject in Nigerian secondary schools:
Quite often, a class of between 35 and 40 students begin the learning of French in the secondary school only to thin out to 4 or 5 by the time they get to senior secondary classes where they have to choose their examination subjects.
In Nigeria, the French language is still looked down upon. Some parents prefer their children to study medicine, law, engineering and so on rather than French. The number of speakers is small compared to Nigeria's population. One would expect that, as a second official language, a third or nearly half of the population would be able to express themselves in French, but this is not the case; French has not developed to such a state. In schools where French is taught, some students leave the classes because they find the subject boring, especially when it comes to French verbs and a culture different from theirs. Some French teachers did not receive adequate training in the methodology of teaching the language. They are supposed to be models for their students to imitate and must know how to speak the language before they can teach it; as a result of inadequacy on their part, their students tend to perform poorly. Finally, the idea of declaring French the second official language in Nigeria is not a bad one. The State and Federal Governments have a large role to play in ensuring that French is established as Nigeria's second official language.
CONCLUSION, RECOMMENDATIONS AND IMPLICATIONS
Language is central to the development of an equitable educational program that leads to self-fulfillment and societal transformation. French is widely spoken in Francophone Sub-Saharan Africa and less spoken in Anglophone Sub-Saharan Africa. This paper was designed to bring out the issues bordering the French language in Sub-Saharan Africa, especially in West Africa, using Nigeria as a case study. It revealed that lack of recognition of the language's importance is one of the contributory factors militating against the language in Sub-Saharan Africa, especially in the Anglophone countries, where French is not widely spoken. The best way to tackle the issues raised in this paper is to give great attention to psycho-pedagogic issues while taking cognizance of the other issues. To ensure the survival of the French language in Sub-Saharan Africa, especially in Anglophone Africa, French should be made compulsory at all levels, especially in public schools in Nigeria. This paper has called for a re-examination of the French language at all levels of education, especially in Nigeria. What is needed is to avoid rigidity and the use of outdated methods that are not relevant to current approaches to teaching. Nigerians will inevitably mix with their French-speaking West African neighbors, and should expand their horizons to those environments and to other Francophone environments and cultures in the world.
This paper recommends the teaching of French in Sub-Saharan Africa, especially in Nigerian schools. If Nigeria is to develop quickly in science and technology, it will be necessary to encourage solid teaching and learning of French in schools; no country can speak of technological transfer without paying attention to the instrument of learning, which is language. For French to be an effective subject at the primary and secondary school levels, the Federal and State Governments and the owners of private schools should work hand in hand with the French government in supplying current teaching materials to schools. A committee should be set up to review, from time to time, the books and materials used in schools. Government should set up bodies to monitor the full implementation of compulsory teaching and learning of French from primary to senior secondary school. The State and Federal Governments should also put in place policies that sensitize and give adequate orientation to parents, students and the entire Nigerian citizenry on the importance of French to the country, because when Nigerians know the importance of French, they will fully embrace the language.
It is worrisome that teachers teach using the grammar-translation method. Teachers should be exposed to pedagogical workshops so that they know how to use the communicative approach in class; by doing so, they will know how to create tasks for their students to engage in, which will lead to competence in the language (Nunan, 2004). French experts in Nigerian tertiary institutions should be invited to Abuja to restructure the secondary school curriculum, basing it on the communicative abilities of students in the four language skills: listening, speaking, reading and writing. French should be taught in public primary schools and examined in the National Common Entrance Examination for primary six pupils, just like English and other subjects (Sanni, 2015).
Workshops on French teaching through the communicative approach should be organized for primary and secondary school teachers in both public and private schools. If such teachers undergo Center for French Teaching and Documentation (CFTD) workshops, they will be taught how to plan their lessons in French. When the steps highlighted above are taken into consideration and implemented, parents, students and Nigerian society will begin to take the French language seriously; this will go a long way towards enhancing the French language, and French teaching will ultimately be rewarding. | 2020-07-09T09:09:11.267Z | 2020-04-13T00:00:00.000 | {
"year": 2020,
"sha1": "33405a22b5efbae2c28aa8b08bbd710396f1897f",
"oa_license": null,
"oa_url": "https://doi.org/10.26500/jarssh-05-2020-0204",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "378d08f84bc3a220c421b5eab929d55a31a017b4",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Geography"
]
} |
220297699 | pes2o/s2orc | v3-fos-license | Integrating Global Proteomic and Genomic Expression Profiles Generated from Islet α Cells
Systematic profiling of expressed gene products represents a promising research strategy for elucidating the molecular phenotypes of islet cells. To this end, we have combined complementary genomic and proteomic methods to better assess the molecular composition of murine pancreatic islet glucagon-producing αTC-1 cells as a model system, with the expectation of bypassing limitations inherent to either technology alone. Gene expression was measured with an Affymetrix MG_U74Av2 oligonucleotide array, while protein expression was examined by performing high-resolution gel-free shotgun MS/MS on a nuclear-enriched cell extract. Both analyses were carried out in triplicate to control for experimental variability. Using a stringent detection p value cutoff of 0.04, 48% of all potential mRNA transcripts were predicted to be expressed (probes classified as present in at least two of three replicates), while 1,651 proteins were identified with high-confidence using rigorous database searching. Although 762 of 888 cross-referenced cognate mRNA-protein pairs were jointly detected by both platforms, a sizeable number (126) of gene products was detected exclusively by MS alone. Conversely, marginal protein identifications often had convincing microarray support. Based on these findings, we present an operational framework for both interpreting and integrating dual genomic and proteomic datasets so as to obtain a more reliable perspective into islet α cell function.
The proglucagon gene is expressed in selected brainstem neurons, gut endocrine L cells, and α (alpha) cells within the endocrine pancreas. Because glucagon excess contributes to the metabolic derangements of diabetes, while acquired defects in glucagon secretion are associated with an increased risk of hypoglycemia in diabetic subjects, understanding the molecular features of the normal and dysregulated α cell is of considerable relevance for strategies designed to optimize α cell function in diabetic patients.
Although glucagon-producing α cells are the second most abundant islet cell type within the pancreatic islets of Langerhans, a majority of studies aimed at identifying islet gene products have used either a mixture of islet cells, or purified β cells or β cell lines (1-5). The current paucity of data on gene and protein expression in islet α cells is due in part to the almost insurmountable technical challenges in isolating sufficient numbers of pure primary α cells for exhaustive molecular studies. Our group (6-8) and others (9, 10) have therefore studied several transformed rodent α cell-derived cell lines as model systems suitable for systematic experimental analyses aimed at determining the genetic, biochemical, and physiological mechanisms that underlie islet α cell function.
While several laboratories have reported investigations of whole islet or islet β cell gene expression profiles (1, 2), only limited information is currently available concerning gene expression and proteomic profiles linked to the specification of the differentiated phenotypes associated with α cells. To this end, we have initiated extensive molecular analyses of gene expression patterns in mature α cells using an islet α cell line as a model system, with the goal of deciphering key biochemical, regulatory, and cell biological properties. The αTC-1 cell line, in particular, was chosen as a focus of this study because it is well differentiated, preserving normal endocrine characteristics such as correct hormone processing and secretion of glucagon in response to stimulation (11).
Several types of high-density DNA microarrays suited to global analysis of gene expression profiles are readily available to researchers today. These vary in terms of the technology used in their manufacture and use, as well as in terms of price, ease of use and analysis, and overall sensitivity and reproducibility. The main types of high-throughput gene expression arrays are spotted cDNA arrays (12-14), short oligonucleotide arrays (15), long oligonucleotide arrays (Agilent: www.agilent.com) (16), and fiber optic arrays (Illumina: www.illumina.com) (17). An informative web resource outlining the various platforms, as well as sources of commercial arrays, is available online at ihome.cuhk.edu.hk/~b400559/array.html.
A particularly popular commercial research platform is the Affymetrix GeneChip™ short oligonucleotide array platform, which consists of panels of ~25-nucleotide-long probes synthesized in situ on the surface of a microscope slide. Each probe is built up consecutively on a predefined area (square element) on the array (typically called a cell or feature). The probe sequence is complementary to a unique target sequence present in a putative mRNA transcript, allowing specific hybridization (15). Affymetrix chip densities have been growing quickly in recent years, with newer arrays commonly capable of probing tens of thousands of gene transcripts, often representing the entire genome complement of an organism, simultaneously (18). Between 1998 and 2004, for instance, the average number of probes packed onto a single Affymetrix array shot up from ~2,000 to >50,000, due to significant reductions in probe feature dimensions, which have now decreased from >50 to ~11 µm on average (18).
To interrogate the expression level of a single gene, a typical "probe set"-generally consisting of 11-20 distinct 25-mer oligonucleotide probes-is designed to hybridize to discrete sequences in a particular target gene transcript. These so-called "perfect match probes" (PM) 1 are aimed at detecting a specific mRNA presumed to be present in a biological sample and are usually optimized to prevent fortuitous similarity to off-target sequences resulting in confounding cross-hybridization to other gene products (19). In addition to the PM probes, Affymetrix chips also contain adjacent "mismatch probes" (MM), which are virtually identical to PM probes except for a single-nucleotide substitution at the 13th nucleotide of the sequence so as to perturb hybridization to the intended target. The PM and MM probe cells from the same probe set are usually evenly distributed over the entire chip area to minimize the effects of systematic variabilitysuch as experimental artifacts due to localized hybridization defects-which often lead to inconsistencies in the inferences that can be drawn from a study.
The MM features were initially designed to measure nonspecific binding so as to control for spurious background signal, but their use for estimating and removing noise has been criticized in recent years (20, 21). Indeed, it has been shown that more accurate estimates of relative transcript abundance between different biological samples can be obtained from interrogation of respective mRNA profiles using data derived from PM probes alone (22). Nevertheless, the MM probe data are used by the native Affymetrix software to determine if a target transcript is in fact present in a given sample. It has been shown that for targets at low concentrations (<8 pM) the combination of PM-MM probe pairs can offer significantly higher sensitivity as compared with PM probes alone (19).
Because of steady improvements in technology, chip design, industrial standardization, and analysis methods, DNA microarrays have become a widespread experimental medium in the endocrine research community. This popularity, combined with the increasing density of microarray readouts, has resulted in a corresponding exponential increase in data production (23), a trend that is putting increasing strain on effective data analysis and biological interpretation (24). This is compounded by the many sources of irrelevant obscuring variance that also need to be carefully considered when dealing with microarray data, including irrelevant fluctuations due to scanner perturbations, changes in hybridization conditions such as variable humidity, different array batches, or even different personnel performing the experiments. Because these artifacts can substantially affect the results (and conclusions) obtained from a study (25)(26)(27), they need to be systematically identified, corrected for, or removed to ensure the reliability of an experiment. In certain platforms (e.g. cDNA arrays), the use of internal reference mRNA populations and reciprocal chromo/fluorophore label swaps can be used to minimize spurious measurements (24). Alternatively, with the Affymetrix array platform, multiple repeat analyses can be performed to increase confidence in distinguishing genuine signal from spurious deviances and baseline noise (28).
Ongoing challenges in evaluating the integrity of microarray data are spurring the development of novel computational methods for methodical data interpretation (29-31). Thanks to a concerted effort by the entire bioinformatics community, both academic and commercial algorithms for improving the reliability of quantification of chip signal and global comparative analyses have been steadily improving. For instance, seemingly effective methods for data quantification and normalization have been introduced in recent years (20, 24, 32-34). Indeed, there is now a bewildering array of open-source programs and proprietary software tools available for automated data preprocessing and analysis (35) (see ihome.cuhk.edu.hk/~b400559/array.html for a comprehensive list).
Many of the more commonly used tools are tailored specifically to the Affymetrix data format. Most of these employ some of the now standard statistical and machine learning methods, such as various types of clustering algorithms (e.g. SOM, k-means, hierarchical) and/or dimension reduction methods (e.g. PCA and many others), to extract meaningful information from typically complex, feature-rich gene expression datasets (36), but the list of interesting new approaches to the data analysis challenge is steadily growing.
Despite this progress, preliminary studies of gene expression patterns in endocrine cell lines have revealed a number of unforeseen obstacles that confound effective interpretation of microarray data, 2 several of which are the focus of this report. The most noteworthy of these concerns the problem of deciding on optimal transcript detection p values threshold cut-offs to faithfully determine whether an mRNA target is indeed present or absent in a sample of interest. It is now broadly accepted that all microarray results should be treated as preliminary, and therefore putative mRNA expression patterns of special interest must necessarily be validated using an independent screening method, such as Northern blot analysis, RT-PCR, or real-time RT-PCR. Proper detection calls are an especially important factor for selecting optimal data points for use as features in pattern recognition and machine learning analyses aimed at defining robust molecular signatures (such as biomarkers). Detection p values can also be useful for assessing the reliability of extracted gene expression values for cross-sample comparative quantification analyses. 3 Considering that proteins represent the ultimate effectors of gene function, and given that previous proteomic studies have suggested a relatively weak concordance between mRNA and protein levels (37), it may, in fact, be preferable to measure protein expression in islet ␣ cells directly. Effective methods for large-scale protein identification and quantitation have recently been developed (38,39). Of these, highthroughput protein shotgun sequencing procedures coupling high-resolution multidimensional liquid chromatographic separation of proteolytic peptide digests to ultra-sensitive MS/MS (LC-MS) appear to be among the more powerful new emerging methods for examining complex protein mixtures like cell extracts (38,39). By eliminating the need to first separate proteins on polyacrylamide gels, these gel-free profiling methods circumvent most limitations associated with gel-based procedures.
Although rapidly increasing in popularity, current LC-MSbased shotgun profiling methods are by no means nearly as robust, simple to execute, or as widely accessible as microarray technology. Moreover, gel-free proteomic screening methodologies still suffer from significant detection bias, leading to preferential detection of higher-abundance housekeeping proteins that are rarely of particular biological interest (38,39) and generally do not achieve the same extent of global coverage as compared with that typically obtainable by microarray-based analysis (37). Despite these caveats, shotgun protein profiling methods can potentially be used to both guide the optimal interpretation of microarray data and to confirm the results of a gene expression study, and vice-versa.
To evaluate the advantages and challenges of combining proteomic and functional genomic screening platforms for deducing functional adaptations associated with enteroendocrine cells, we have performed pilot parallel large-scale analyses of gene and protein expression in mitotically active asynchronous cultures of murine ␣TC-1 cells (11). We used an Affymetrix MG_U74Av2 GeneChip, containing 12,488 distinct murine gene probe sets, to measure global mRNA levels, and the multidimensional protein identification technology (Mud-PIT) procedure developed by Yates and colleagues (40,41) to examine the global protein profile of nuclear-enriched cell extracts. Here, we summarize our key findings to date, outlining both the advantages and challenges of comparative cross-platform analyses so as to obtain a more complete, and rigorous, insight into the biochemical makeup of enteroendocrine cells. We compare and contrast the relative merits and limitations associated with each of the two platforms and list factors that influence the sensitivity and specificity of analysis with a particular emphasis on the effects of p value thresholds on the rate of false discovery and overall detection coverage. We also describe empirically derived criteria regarding optimal experimental design, together with simple guidelines that we believe allow for more reliable (and comprehensive) interpretation of complex global molecular profiling datasets. While centered on an experimental setting directly relevant to ␣ islet cell biology, the analytical approach reported here are broadly applicable to a range of analogous endocrine-related research problems and hence should be relevant to any researcher currently using (or contemplating) large-scale expression profiling at the mRNA and/or protein levels as a means of investigating the molecular makeup of endocrine cells and tissues of interest.
EXPERIMENTAL PROCEDURES
Cell Culture-The immortalized rodent islet αTC-1 cell line (11) was passaged in Dulbecco's modified essential medium containing 25 mM glucose (DMEM, high glucose; Hyclone, Logan, UT) supplemented with 15% heat-inactivated FCS (Invitrogen, San Diego, CA) and Pen/Strep (Sigma-Aldrich, St. Louis, MO) as described previously (6, 11). Cell cultures were grown in triplicate to 80% confluence, with the tissue culture media replaced daily. After harvesting, the cells were snap-frozen and stored at -80°C prior to protein and RNA extraction.
Microarray/Statistical Analysis-Total RNA was prepared using Trizol extraction (Invitrogen) and RNAeasy kit (Qiagen, Valencia, CA) according to the manufacturers' instructions. Profiling was performed in triplicate for three distinct preparations of total RNA, using the Affymetrix GeneChip MG_U74Av2 microarray. Each experiment was performed separately on different days using chips derived from different fabrication batches. Hybridization and washing were performed using standard conditions. Image processing was performed using an Affymetrix GeneArray 2500 scanner, with 570-nm, 3-m laser parameter settings. The data file was processed using Affymetrix Microarray Suite 5.0 (MAS5.0) using default parameter settings, and the average intensity of all probes on the array was scaled to target intensity of 150. Detection p value alone, or together with signal intensity, was used to make a prediction (present, marginal, or absent) as to gene expression. A freeware version of the Robust Multichip Average (RMA) algorithm was obtained from Bioconductor (www.bioconductor.org). Freely available statistical software "R" (www.r-project.org) and proprietary Matlab 7.0, Statistical and Bioinformatics Toolboxes from MathWorks TM (www.mathworks.com) were also used for the analysis.
Validation RT-PCR-First-strand cDNA synthesis was generated from total RNA using the SuperScript Pre-amplification System (Fermentas, Hanover, MD) following the manufacturer's recommended protocol. Target cDNAs were amplified by PCR using gene-specific primer pairs. PCR products were loaded onto a 1% agarose gel, electrophoresed in Tris-acetate-EDTA buffer, transferred onto nylon membranes, and hybridized using internal oligonucleotide probes labeled by T4 kinase reaction with [γ-32P]ATP (Amersham Bioscience, Piscataway, NJ). Visualization was performed by standard film-based autoradiography. Primer sequences and detailed conditions used for RT-PCR are available upon request.
3 P. Hu, personal communication.
Extracts-Nuclear-enriched soluble protein extracts were prepared using a commercial protocol (Nu-CLEAR protein extraction kit; Sigma-Aldrich). Briefly, three separate cell pellets were thawed, resuspended in 5 volumes of hypotonic lysis buffer (10 mM HEPES, pH 7.9, 1.5 mM MgCl2, 10 mM KCl), and incubated on ice for 15 min. After repelleting by centrifugation for 5 min at 420 × g, the nuclei were rinsed twice in 400 µl of lysis buffer, and resuspended in high-salt extraction buffer (1 M HEPES, pH 7.9, 1 M MgCl2, 5 M NaCl, 0.5 M EDTA, pH 8.0, 25% (v/v) glycerol). Soluble proteins were salt-extracted from the isolated nuclei by incubation on ice for 15 min with occasional vigorous vortexing. After addition of Nonidet P-40 to a final concentration of 0.04% (v/v), the nuclei were disrupted with a glass homogenizer. Insoluble debris was removed by centrifugation at 13,000 rpm for 30 min, and the supernatants were retained for proteomic analysis (final protein concentration ~2 mg/ml).
Proteolytic Digestion and Sample Preparation-Equivalent amounts of total protein (100 µg) from each processed cell batch were precipitated with 5 volumes of ice-cold acetone, followed by centrifugation at 13,000 × g for 15 min. The pellets were resolubilized in 40 µl of 8 M urea, 50 mM Tris-HCl, pH 8.5, 1 mM DTT, for 1 h at 37°C, followed by dilution to 4 M [urea] using an equal volume of 100 mM ammonium bicarbonate, pH 8.5 (AmBic; Sigma-Aldrich). The denatured proteins were digested with a 1:150-fold ratio of endoproteinase Lys-C (Roche, Basel, Switzerland) at 37°C overnight. The next day, the samples were further diluted to 2 M urea with an equal volume of AmBic supplemented with 1 mM CaCl2. Digestion was continued by adding poroszyme trypsin beads (Applied Biosystems) followed by incubation at 30°C with rotation. The resulting peptide mixtures were solid phase-extracted with a SPEC-Plus PT C18 cartridge (Ansys Diagnostics, Lake Forest, CA). The eluates were concentrated by Speedvac to near dryness and stored at -80°C until analysis.
MudPIT Analysis-A fully automated 12-cycle, 20-h long MudPIT procedure was set up as described previously (42). Briefly, an HPLC quaternary pump was interfaced with an LCQ DECA XP quadrupole ion trap mass spectrometer (Thermo Finnigan, Woburn, MA). A P-2000 laser puller (Sutter Instruments, Novato, CA) was used to pull a fine tip on one end of a 100-µm inner diameter fused silica capillary microcolumn (Polymicro Technologies, Phoenix, AZ). The column was packed first with 6 cm of 5-µm Zorbax Eclipse XDB-C18 resin (Agilent Technologies, Palo Alto, CA), followed by 6 cm of 5-µm Partisphere strong cation exchange resin (Whatman, Middlesex, United Kingdom). The extracts were loaded separately onto the column using a pressure vessel and analyzed sequentially using a fully automated 12-step, four-solvent, 24-h chromatographic cycle. A detailed description of the exact chromatographic conditions is provided in the supplementary information. Data-dependent MS/MS acquisition was performed in real time, with the ion trap instrument operated using dynamic-exclusion lists.
Protein Identification and Statistical Validation-The SEQUEST database search algorithm (43) was used to match the acquired tandem mass spectra to a minimally redundant database of mouse and human Swiss-Prot/TrEMBL protein sequences downloaded from the European Bioinformatics Institute (www.ebi.ac.uk). To further evaluate, and thereby minimize, the number of incorrect (false-positive) identifications, the spectra were searched against protein sequences in both the normal (forward) and fully inverted (reverse) amino acid orientations. The STATQUEST algorithm was used to compute a confidence score (indicating the percentage likelihood or probability of being correct) for every candidate match. The Contrast and DTA-Select software tools were used to manage, organize, and filter the resulting dataset (44) (fields.scripps.edu). Spectral counts were summed across the replicates and used as a semiquantitative measure of protein abundance, as described by Liu et al. (45).
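As an illustration of how the forward/reverse (target-decoy) search described above can be used to bound the false-positive rate, the following minimal Python sketch counts decoy matches among confident identifications; the "REV_" prefix and the input format are assumptions made for the example and are not the actual SEQUEST/STATQUEST output format.

```python
# Minimal sketch: estimate the false-discovery proportion from a target-decoy search.
def decoy_fdr(accessions, decoy_prefix="REV_"):
    decoys = sum(1 for a in accessions if a.startswith(decoy_prefix))
    targets = len(accessions) - decoys
    # Each decoy hit is assumed to be mirrored by roughly one spurious target hit.
    return decoys / targets if targets else 0.0

hits = ["P12345", "REV_Q67890", "P54321", "P11111"]   # toy example, not real data
print(f"estimated FDR = {decoy_fdr(hits):.2%}")
```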
Dataset Cross-referencing-Matches between sequences identified by the proteomic analysis and the complete Affymetrix probe sets were cross-mapped via annotation. Accession numbers corresponding to the MG_U74Av2 GeneChip were extracted from the annotated gene table "MG_U74Av2_annot.csv" (Oct. 12, 2004) from the Affymetrix website: www.affymetrix.com/support/technical/. The probe IDs were cross-referenced to UniGene IDs, which were then mapped to the corresponding Swiss-Prot/TrEMBL accessions obtained for the identified proteins. Proteins matching multiple genes on the Affymetrix array were removed, as were matches where multiple MudPIT-derived proteins matched to one or more probe sets.
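The probe-to-protein mapping step can be pictured with a small pandas sketch like the one below; the CSV exports and column names are hypothetical, but the logic (join on shared accessions, then discard ambiguous one-to-many matches) follows the procedure described above.

```python
# Minimal sketch of the cross-referencing step; file names and column headers are assumed.
import pandas as pd

annot = pd.read_csv("MG_U74Av2_annot_mapped.csv")   # probe_id -> unigene_id -> swissprot_accession
proteins = pd.read_csv("mudpit_proteins.csv")       # swissprot_accession of identified proteins

merged = annot.merge(proteins, on="swissprot_accession", how="inner")

# Drop ambiguous mappings: probe sets hit by several proteins and proteins hitting several probe sets.
merged = merged[~merged.duplicated("probe_id", keep=False)]
merged = merged[~merged.duplicated("swissprot_accession", keep=False)]
print(f"{len(merged)} unambiguous mRNA-protein pairs")
```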
Functional Classification-Gene Ontology (GO) categorical annotations (www.ebi.ac.uk/GOA) were used as indicators of biological function and other properties, where annotations were available (~70% of the proteins examined in this study). Statistical testing for enrichment of functional categories among the set of identified proteins and the mRNA probe sets was based on a hypergeometric distribution model using the method of Hughes and co-workers (46). This method returns the probability (p value) that the intersection of a given protein list with any given annotation class occurs by chance. A Bonferroni scaling factor was applied to correct for spurious significance due to repeat testing (multiple hypotheses) over the many GO terms, adjusting the preliminary p values for the number of tests conducted. A threshold cutoff p value of 10^-3 was used as a final selection criterion to highlight statistically significant, potentially biologically interesting clusters.
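A minimal Python sketch of a hypergeometric enrichment test with Bonferroni-style scaling, in the spirit of the method cited above, is shown below; the data structures are assumptions for illustration, and the published method of Hughes and co-workers differs in implementation detail.

```python
# Minimal sketch: GO term enrichment by hypergeometric test with Bonferroni scaling.
# hypergeom.sf(k - 1, M, n, N) = P(X >= k), the chance of k or more annotated hits.
from scipy.stats import hypergeom

def go_enrichment(hit_set, background, go_terms):
    """go_terms maps each GO term to the set of background genes annotated with it."""
    M, N = len(background), len(hit_set)
    results = {}
    for term, members in go_terms.items():
        n = len(members & background)          # annotated genes in the background
        k = len(members & hit_set)             # annotated genes among the hits
        p = hypergeom.sf(k - 1, M, n, N)
        results[term] = min(1.0, p * len(go_terms))   # Bonferroni-scaled p value
    return results

# Only terms with corrected p < 1e-3 would be reported, per the threshold used above.
```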
RESULTS
Microarray Analysis-Mouse-derived islet αTC-1 cells were cultured under standard conditions and analyzed for both global gene and protein expression (see "Experimental Procedures"). Because it has been persuasively argued that biological noise is far more of a concern than technical artifacts (24, 36), we performed both the proteomic and the microarray screening in triplicate, using three independently processed cell batches grown under identical tissue culture conditions. In practice, experimental reproducibility, as assessed by relative quantification (see further below), proved to be very good at the mRNA level, and respectable at the protein level, indicating modest and acceptable levels of biological and experimental variability.
For the gene expression analysis, the total RNA sample was extracted and separately hybridized to Affymetrix Murine Genome U74Av2 GeneChip microarrays (MG_U74Av2). The chip represents ~6,000 oligonucleotide sequences obtained from the Mouse UniGene database (build 74) that have been functionally characterized. The remaining ~6,000 probes represent ESTs. The chip has 16 probe pairs per probe set, with a 20-µm feature size and a putative sensitivity rated at 1:100,000. Extensive gene annotation files are available (www.affymetrix.com) (47), providing detailed information for each probe, such as sequence source, transcript ID, target description, UniGene ID, alignments, gene title and symbol, chromosomal location, identifiers for Ensembl, LocusLink, MGI, and Swiss-Prot/TrEMBL databanks, as well as GO functional annotations, just to name a few.
Data Processing-The result of the microarray profiling experiments was a series of image files, corresponding to the intensities recorded for each feature, which needed to be processed and analyzed. Probe set quantification and data normalization was performed based on Absolute Analysis, using the Statistical Algorithm in the Affymetrix-supplied analytical software (Microarray Suite 5.0, MAS5.0). Signal was scaled to a target intensity of 150 across all probes ( Fig. 1). At this step, the information extracted represented the absolute signal intensity for each probe set, together with a calculated detection p value, and an overall detection call ("present," "marginal," or "absent"), which is in turn based on a predefined default p value threshold. For this study, the default Affymetrix default p value threshold of 0.04 was used (that is, a gene was considered present if its calculated expression detection p value was equal to or below 0.04 in two of three experiments). The suitability of this detection p value threshold is examined below.
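The replicate consensus rule used here (detection p value of 0.04 or lower in at least two of the three chips) can be expressed compactly; the following Python sketch is only illustrative and assumes the MAS5.0 detection p values have already been exported per probe set and replicate.

```python
# Minimal sketch: a probe set is treated as expressed if its detection p value
# is <= alpha in at least min_replicates of the replicate chips.
import numpy as np

def consensus_present(pvals, alpha=0.04, min_replicates=2):
    pvals = np.asarray(pvals)                       # shape: (n_probesets, n_replicates)
    return (pvals <= alpha).sum(axis=1) >= min_replicates

detection_p = np.array([[0.01, 0.03, 0.20],         # present (2 of 3 below threshold)
                        [0.30, 0.50, 0.02]])        # absent
print(consensus_present(detection_p))               # [ True False ]
```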
Additionally, to better account for systematic artifacts, the data files were next quantified using the RMA algorithm (20). This open-source algorithm corrects for spurious variations in background by compensating for nonspecific binding as determined based on the distribution of PM values, as opposed to PM-MM ratios as in the case of the Affymetrix-supplied algorithm. The algorithm performs probe-level quantile normalization (48-50) across all chips to unify their PM distributions, and lastly it summarizes the probe-set signals by median polishing. The output of RMA is log-transformed signal intensity for each probe (21). Supplemental Fig. 1 provides a comparison of the distributions of probe signal intensities obtained in two repeat experiments (chip A versus B), both before and after data normalization using either the Affymetrix Statistical Algorithm (MAS5.0) or RMA. The effects of normalization were also apparent by comparing a scatter plot of the average probe intensity versus the log-transformed ratio of signals on respective chips (so-called M versus A plot) before (Supplemental Fig. 1, left insets) and after RMA normalization (right insets). If the data were reproduced perfectly, the expected ratio (r) of signals produced by scanning the two chips would be exactly 1 for all probes, resulting in a tight distribution around y = r = 1. Any deviation from this indicates spurious differences in signal readings between experiments. Clearly, there is a lot of variability seen in the unprocessed data, particularly in the low-intensity end. The quantile normalization procedure used by the RMA algorithm corrected this type of variability very well, with the RMA-processed data being much more closely reproduced across the entire intensity range.
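For readers unfamiliar with these steps, the sketch below illustrates probe-level quantile normalization and the M and A quantities behind the scatter plots discussed above; it is a simplified stand-in for the Bioconductor RMA implementation (it ignores tie handling, background correction and median polishing) and uses simulated intensities rather than real chip data.

```python
# Minimal sketch: quantile normalization of a probes x chips intensity matrix,
# plus the M (log ratio) and A (average log intensity) values for an M-A plot.
import numpy as np

def quantile_normalize(X):
    """X: probes x chips matrix; each column is forced onto the mean sorted distribution."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)    # per-column rank of each probe
    mean_sorted = np.sort(X, axis=0).mean(axis=1)        # reference distribution
    return mean_sorted[ranks]

chips = np.random.lognormal(mean=5, sigma=1, size=(1000, 2))   # simulated two-chip data
norm = quantile_normalize(chips)

A = 0.5 * (np.log2(norm[:, 0]) + np.log2(norm[:, 1]))   # average log intensity
M = np.log2(norm[:, 0]) - np.log2(norm[:, 1])           # log ratio; ideally centred on 0
```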
Data Analysis and Detection of Gene Expression-Despite the effectiveness of normalization, a key, and frequently underestimated, step in Affymetrix microarray signal processing is the determination of a probe detection call, wherein transcript signal of a probe set (and by inference, the corresponding gene product) is deemed to be present or absent. There are two pieces of information available per probe set on an Affymetrix chip that can be used to determine whether a particular transcript is indeed present (or absent) in a sample, signal intensity and the detection p value. The aim is to derive a rigorous, statistically based, and reliable measure of genuine signal over background (baseline), with a sufficiently high signal-to-noise ratio to unambiguously confirm the existence of a specific target mRNA species in the sample of interest.
The simplest method for deciding whether a transcript is expressed is to look at the "detection call" made by the Affymetrix algorithm, which is based on the calculated detection p value. The detection p value is a statistically derived confidence measure or indicator of whether the signal generated from a probe set is sufficiently above background (or not) to be deemed present. Using the default software settings, any transcript detected with a detection p value threshold cutoff defined as α1 (or lower) will be defined as "present," while those between α1 and a second threshold α2 will be called "marginal" and those with p values greater than α2 deemed "absent." Default values for α1 and α2 can differ between chip types and are defined by Affymetrix based on extensive in-house analysis of test arrays (51).
The detection p value is determined in two steps. First, a discrimination score, R, is calculated based on the signal intensities of the group of PM probes relative to the corresponding set of MM probes per probe set. Then, the scores from each of the probe pairs are tested against a user-defined threshold (tau), which specifies the minimum R that "votes" for presence, to calculate a detection p value for the entire probe set. These steps are described in more detail below.
From a set of intensity values for each group of PM and MM probes defining a probe set, R is calculated for each PM and MM pair according to the following formula: R = (PM - MM)/(PM + MM). This can be rewritten as: R = (PM/MM - 1)/(PM/MM + 1). In other words, one can see that R depends strictly on the ratio of PM/MM and varies between -1 and 1. The bigger the calculated ratio, the closer R will be to 1. This would happen in cases of high signal-to-noise. Conversely, if the ratio is close to 1 (signal marginally above background), R will be close to 0.
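A minimal sketch of this per-probe-pair calculation (using hypothetical PM/MM intensities and ignoring the saturation handling of the production algorithm) might be:

```python
def discrimination_score(pm, mm):
    """Discrimination score R = (PM - MM) / (PM + MM); approaches 1 when
    PM dominates MM and 0 when the two intensities are comparable."""
    return (pm - mm) / (pm + mm)

# Hypothetical (PM, MM) intensity pairs for one probe set.
probe_pairs = [(1500.0, 200.0), (320.0, 280.0), (90.0, 110.0)]
r_scores = [discrimination_score(pm, mm) for pm, mm in probe_pairs]
print(r_scores)  # high, near zero, slightly negative
```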
The R scores are rank-sorted and evaluated using the one-sided Wilcoxon's signed rank test (see Refs. 48 and 49 for details). For every probe set, the algorithm ranks the probe pairs based on how far their discrimination score is from τ and uses the information to calculate the detection p value. The Wilcoxon's signed rank test is a commonly used, simple nonparametric statistical method that has several desirable properties and certain limitations. Two notable advantages are that it is insensitive to spurious outliers and does not assume a normal distribution of the data (49). It is not necessarily the optimal test for class discrimination, however, and more advanced statistical measures (including methods better suited to the principles of machine learning) are likely to be far more effective for determining true probabilities, resulting in fewer incorrect or marginal classification calls. 3 The calculated R scores are compared with τ, and if the R score is higher than τ, a "present" call is made for a given probe pair, while those with R scores falling below this value result in an "absent" call. The votes across all probe pairs comprising a complete probe set are tabulated and summarized in the detection p value, which is a confidence measure reflecting the detection call. The higher the R score is above τ, the closer the detection p value is to 0, and hence the greater confidence that a transcript is indeed expressed. Conversely, the lower the R score is below τ, the closer the p value is to 1 and hence a lower confidence is placed on the detection call and, by inference, on the possible existence of the corresponding mRNA in the sample.
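The detection p value computation can be approximated as shown below. This is only a simplified stand-in for the actual MAS5.0 implementation: it applies SciPy's one-sided Wilcoxon signed rank test to the differences R - τ, with hypothetical R scores and a τ of 0.015 (one of the values discussed below) used purely for illustration.

```python
from scipy.stats import wilcoxon

def detection_p_value(r_scores, tau=0.015):
    """Approximate a probe-set detection p value by testing whether the
    discrimination scores are significantly greater than tau
    (one-sided Wilcoxon signed rank test on R - tau)."""
    shifted = [r - tau for r in r_scores]
    _, p = wilcoxon(shifted, alternative="greater")
    return p

# Hypothetical R scores for the probe pairs of one probe set.
print(detection_p_value([0.45, 0.30, 0.52, 0.28, 0.61, 0.33,
                         0.40, 0.25, 0.37, 0.48, 0.55]))
```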
Increasing τ increases the stringency of the test by decreasing the rate of incorrect predictions or false-positives. Although this effectively increases the specificity and reliability of the assay, in doing so the overall sensitivity of an experiment (namely, the detection of bona fide mRNA transcripts) is reduced. For example, increasing τ from 0.015 to 0.15 decreases the proportion of present calls on a Rat Genome U34A array by 25% (4). Conversely, lowering τ decreases the specificity of an analysis, thereby increasing the sensitivity. There is a default value of τ for each particular type of chip, which is based on extensive controlled experimentation. In general, Affymetrix recommends that this default not be changed (52).
Redefining the τ threshold has a somewhat different effect on the detection call than modifying the detection p value threshold. The former influences the votes of individual probe pairs for presence or absence, which in turn may drastically affect the final detection p value of a probe set-in other words, τ controls the number of probe pairs that vote for presence. The detection p value, on the other hand, considers the probe set as a whole. Ultimately, however, the effect of changing either threshold leads to a similar result-both modify the percentage of present calls made per dataset. Modifying the detection call by changing the detection p value threshold is much easier to do in practice, however, because it does not require reanalysis of the data outputted by Affymetrix analysis software, which can be very time consuming for large datasets. For our purposes, we decided not to modify τ (or the statistical measure for calculating R), but rather work with detection p value thresholds in order to optimize the detection call for our αTC-1 dataset.
We also considered the differences in detection p values obtained in the three replicate arrays and found them to be very precisely reproduced between replicates, especially for probes with very low detection p values (<0.1). In other words, we found that most probes with a detection p value below the default detection call threshold (0.04 in the case of the MG_U74Av2 array) in a single experiment were likewise similarly detected across the other replicates. Indeed, 78% of probes deemed present in any one of the three replicates were likewise present in all three, while 12% were present in exactly two of three replicates, with the remaining 10% detected in just one dataset. However, because roughly 50% of all probes on the 12,000-gene array were called present using the default p value cutoff (Supplementary Table I), the decision as to whether some 600 or so transcripts are indeed expressed remains unclear. One may need to consider additional information when deciding detection in these cases, such as absolute signal intensity.
A detection p value close to 0.5 indicates that there is no significant difference in the relative abundance of the PM probes with respect to the MM probes. There are several possible causes for this: one is that the transcript is indeed not detected, so the signal will be very low (close to background) for both the PM and MM probes. In this case, the absent call is correct. Another reason may be nonspecific binding to the MM probe and possibly also the PM probe. In this case, the signal recorded for both may be very high, but the difference in signal between PM and MM may not be significant, resulting in a high detection p value and an absent detection call. This is problematic, because it can be difficult to decide whether the probe is actually absent and what is detected is truly nonspecific binding, or that the mRNA is very abundant and so binds significantly even to the MM probes. Indeed, we have observed that highly abundant mRNAs often have detection p values as high as 0.8 (and hence are deemed "absent" by the Affymetrix analysis software) only to confirm these as truly highly expressed gene products using RT-PCR (as discussed below).
Signal Intensity-Signal is a quantitative metric summarizing the intensity of the probe set and representing the relative abundance of expression of transcript. Signal calculation takes into consideration the absolute and relative intensities of PM and MM probes in each probe pair and uses them to calculate an "idealized mismatch" (IM) signal which is then used instead of MM. If MM < PM, then IM = MM, otherwise IM is calculated from PM and depends on various other factors such as background intensity around the probe cell. The Affymetrix-supplied software uses a One-Step Tukey's Biweight Estimate (a standard statistical-weighted average method that is relatively unaffected by outlier probe values) to first integrate the individual probe signals to obtain a robust mean of signal across the entire probe set. The algorithm starts by finding the median, and then the distance between each data point from this median is calculated. Based on these distances, the algorithm assigns a weight such that data points far away from the median will have low or even zero influence on the final value. The median is then recalculated once per probe set by incorporating these weights (see Ref. 49 for details).
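A one-step Tukey biweight can be sketched as follows. The tuning constant c = 5 and the small ε term reflect commonly cited MAS5.0 settings but should be treated as illustrative, and the per-probe-pair values are hypothetical.

```python
import numpy as np

def one_step_tukey_biweight(values, c=5.0, epsilon=1e-4):
    """One-step Tukey biweight: a robust weighted mean in which points far
    from the median (in MAD units) receive little or no weight."""
    x = np.asarray(values, dtype=float)
    median = np.median(x)
    mad = np.median(np.abs(x - median))      # median absolute deviation
    u = (x - median) / (c * mad + epsilon)   # scaled distance from the median
    weights = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
    return np.sum(weights * x) / np.sum(weights)

# Hypothetical per-probe-pair log signals for one probe set, with one outlier.
print(one_step_tukey_biweight([7.1, 7.3, 6.9, 7.0, 7.2, 11.5, 7.1]))
```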
In this manner, each probe pair contributes to the final estimate of signal intensity. The significance of this contribution is greater if PM signal is higher than MM and if the probe pair signal value is close to the mean of all probe signals. If the PM signal in a probe pair is higher than MM, the MM signal is considered to be informative and is often used as an estimate of stray nonspecific (background) signal. If MM signal for a given probe pair is higher than PM (which may occur in cases of significant cross-hybridization or when transcript levels are below detection limits), the MM signal is considered to be uninformative and is generally not used as a measure of noise. In this alternate scenario, an imputed value, IM, is used instead of MM signal. This IM value is usually deemed to be slightly smaller than PM so as to prevent negative signal values being generated for a given probe pair. If there are many such probe pairs present in a probe set, the detection algorithm will usually call the probe absent, even if its absolute signal intensity is high (53).
In order to increase the sensitivity of the transcript detection procedure, we now routinely consider probe signal intensity as well as the detection p value for probes with detection p values greater than the default cutoff. 2 But first, we performed independent measurements of transcript levels using the RT-PCR method (and, as discussed further below, proteomics) to guide our analyses.
RT-PCR Validation-RT-PCR validation experiments are considered to be the "gold-standard" technique for transcript detection, both in terms of sensitivity and especially specificity. For this reason, we designed primers to amplify a selected set of 67 disparate transcription factors that were deemed either absent or present in all three replicate experiments to interrogate using RT-PCR. Based on our results, we found the detection p value threshold to be overly conservative-many gene products predicted to be absent (some with p values as high as 0.8) were in fact readily detectable by RT-PCR (Fig. 2). Although the specificity of the Affymetrix detection call (i.e. p value cutoff 0.04) proved to be good, with few false-positives, the sensitivity was unsatisfactory (excessive false-negatives). This is especially true for low-abundance transcripts (Fig. 2). These data question the suitability of the default p value thresholds (and the algorithms currently used to calculate these) and have forced us to reconsider the entire issue, for example by increasing the default p value cutoff for the present detection call or by evaluating signal intensity together with the detection p value when deciding on the presence of a probe.
Based on our interpretation of the RT-PCR experiments, we hypothesize that if a probe has a sufficiently elevated signal, the transcript is likely to be present even if the detection p value is deemed higher than the default cutoff. Fig. 3 shows a schematic cartoon representation of more flexible, heuristic detection "cutoffs" (lines) that incorporate knowledge of both the typical distribution (scatter plot) of the p values and the signal intensities obtained for endocrine cell lines like αTC-1.
FIG. 2. RT-PCR validation. Scatter plot of microarray detection p values versus log2 probe signal intensity obtained for 67 select gene products that were independently assessed for expression by RT-PCR. Probes with detection p value >0.04 (dotted line) were deemed "absent" using the default Affymetrix detection call. While the expression of all probes predicted to be "present" by microarray was confirmed, many transcripts with predicted probe detection p values significantly above the default threshold were also identified as expressed by RT-PCR.

The assumption here is that if a probe is detected with a reasonable signal (practically defined as >35th percentile) and has a relatively modest detection p value (<0.1 being considered reasonable, but the exact value will likely depend on the specific dataset (54)), then the gene product is most likely expressed. Visual inspection of hundreds of probe images indicated that mRNAs with a low signal (<25th percentile) and high detection p value are indeed most likely absent, 4 but one has to be especially careful with probes exhibiting intermediate signal intensity, regardless of the detection p value.
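This heuristic translates directly into code. The percentile and p value cutoffs below mirror the values suggested above but, as noted, would need to be tuned to the specific dataset, and the chip-wide signal distribution here is simulated purely for illustration.

```python
import numpy as np

def heuristic_call(signal, p_value, all_signals, high_pct=35, low_pct=25, relaxed_p=0.1):
    """Combine probe signal rank and detection p value into a three-way
    heuristic call: 'likely present', 'likely absent', or 'uncertain'."""
    high_cutoff = np.percentile(all_signals, high_pct)
    low_cutoff = np.percentile(all_signals, low_pct)
    if signal >= high_cutoff and p_value < relaxed_p:
        return "likely present"
    if signal <= low_cutoff and p_value > relaxed_p:
        return "likely absent"
    return "uncertain"

# Simulated chip-wide signal distribution standing in for real probe intensities.
signals = np.random.lognormal(mean=5, sigma=1, size=10000)
print(heuristic_call(signal=400.0, p_value=0.07, all_signals=signals))
```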
Global-scale Proteomic Screening Using Multidimensional LC-MS-Given that one cannot perform confirmation RT-PCR on a genome-wide scale, and considering the fact that RT-PCR may also be subject to artifacts when applied outside its useful dynamic linear range, we opted to measure protein levels directly. While classical biochemical methods for examining protein expression, such as Western blotting or ELISA, are quite effective, they are tedious and generally also limited in scope. Hence, we decided to perform large-scale proteomic profiling studies using high-throughput protein MS. The main objective was to identify as many proteins in αTC-1 cells as possible and to compare this pattern to the microarray profile. To this end, we used an ultra-sensitive gel-free LC-MS-based shotgun microsequencing procedure to systematically identify proteins exhibiting detectable expression (see "Experimental Procedures" for details).
The MudPIT technique employed in this study (40-42, 55) couples high-resolution fractionation of protein enzymatic (i.e. tryptic) digests using multidimensional capillary-scale HPLC to real-time high-efficiency data-dependent MS/MS via ESI (reviewed in Ref. 39). The principal advantage of this approach stems from the very good sample separation achieved by jointly performing ion exchange and reverse-phase chromatography at the peptide level combined with automated procedures for peptide selection and fragmentation. This technique yields very large collections of richly informative spectra, allowing for extensive sequence determination (55,56).
To improve the odds of detecting lower-abundance proteins of special biological interest, such as transcription factors, we first performed a simple subcellular fractionation procedure to enrich for lower-abundance nuclear factors prior to analysis (42,57). This is a particularly important consideration given the sizeable overall dynamic range in protein abundances (commonly predicted to be over five orders of magnitude), which contrasts with the much more limited effective detection range of current instrumentation (i.e. two to three orders of magnitude). Although the resulting protein fraction was not pure and contained sizeable levels of cytoplasmic cross-contaminants (42,57), this easily implemented procedure substantively reduced the levels of higher-abundance cytosolic proteins (e.g. housekeeping enzymes), which usually tend to dominate a proteomic analysis. In practice, this step considerably improves the overall comprehensiveness of proteomic detection (Refs. 42 and 57; data not shown).
For the shotgun analysis, the proteins were concentrated by precipitation, denatured using urea, and digested extensively and sequentially using two proteases; first with endoproteinase Lys-C, which functions under denaturing conditions, then with trypsin (after dilution of the urea), which is a more processive enzyme. The resulting peptide mixture was de-salted by solid-phase extraction and analyzed using the basic MudPIT procedure (see "Experimental Procedures"). Although this approach proved to be very effective, resulting in the identification of many proteins (see below), we have found that multiple repeat MudPIT analyses are generally needed to achieve blanket coverage (that is, a saturating level of detection), even when investigating simplified organellar fractions. Hence, for this study, we performed three analyses on independently isolated cell extracts.
Database Searching and Statistical Validation-To identify the proteins, the entire collection of ~100,000 acquired tandem mass spectra was searched against a minimally redundant database of curated human and mouse Swiss-Prot/TrEMBL protein sequences using the SEQUEST search algorithm (43). Because the candidate spectral matches and search scores are open to interpretation and are often inconclusive (56), it was critical to estimate both the accuracy of an individual putative identification and the overall rate of false discovery (proportion of incorrect identifications or false-positives) when performing this analysis. Several heuristic guidelines have been developed to address these important (and related) concerns (56). More rigorous statistical measures have also been introduced recently to facilitate data interpretation (reviewed in Ref. 58).

4 M. Maziarz, unpublished observations.

FIG. 3. Schematic guide for interpreting microarray-based gene expression profiles. This plot provides a graphical representation of a suggested "rule" for evaluating transcript detection patterns. The gray arrow represents confidence in detection. If probe signal is high and the detection p value is low, a gene transcript is highly likely to be expressed and hence detected, while confidence decreases with decreasing signal and increasing detection p value. Low-intensity probes with elevated detection p values indicate an mRNA species is unlikely to be expressed. However, the status of gene products with intermediate probe signals and detection p values is more difficult to assign with confidence. Moreover, optimal thresholds for probe signal and p value cutoff values will depend on the actual experimental data and preprocessing, on how many replicates are available, and so on. The general assumption that 30-40% of mammalian genes will be expressed in any one tissue (70) can also be used as a second guide when deciding on suitable thresholds.
For this study, each candidate database match was first evaluated using a statistical algorithm, STATQUEST (57), in order to calculate a probability score (likelihood a prediction is accurate) based on the preliminary SEQUEST results. An initial subset of higher-confidence proteins (with a minimum of 85% or greater predicted probability of being correctly identified in at least one of the three replicates) was selected for further consideration. It is important to note that this initial confidence filter threshold applies to individual sequence matches. In practice, due to the nonlinear distribution of database confidence scores (57), the majority of the filtered predictions actually have a very high likelihood of being correct. Indeed, as seen in Fig. 4, most of these putative identifications were predicted with a >99% likelihood of being correctly identified (p values <0.01; that is, far more stringent than the STATQUEST p value threshold of 0.15).
Because spurious database matches usually have limited supporting evidence, often corresponding to only a single spectral match (58), as an additional measure of stringency, we accepted only those candidate proteins for which at least two or more spectra were detected across the three datasets for further analysis. This additional filtering measure resulted in a final set of 1,651 high-confidence proteins (Supplemental Table II, A and B). Although significant variability in spectral counts was seen between the three repeat MudPIT runs, especially for proteins exhibiting lower cumulative spectral counts, the overall reproducibility in terms of the overlap or fraction of shared proteins after filtering was approximately 45%, as illustrated by the Venn diagram shown in Supplemental Fig. 2.
As an independent measure to assess the effectiveness of this filtering, we empirically calculated the preponderance of incorrect identifications in our set of putative high-confidence proteins. This is especially important with repeat experimental datasets, which can accumulate false-positives due to the sheer number of spectra examined. This was done by first populating the reference protein sequence database with an equal number of mock proteins, created by inverting the amino acid orientation of the normal Swiss-Prot/TrEMBL protein sequences. Because these "reverse" sequences are not expected to occur naturally, any matches to these decoy proteins represent spurious false-positives. The final proportion of identifications mapping to reverse relative to normal (or "forward") proteins therefore provided an objective criterion for estimating the false-discovery rate. As mentioned, spurious database matches are often supported by minimal spectral evidence. The plot provided in Supplementary Fig. 3 shows the decreasing ratio of reverse-to-forward sequence database matches typically observed with increasing number of observed spectral counts. Because the vast majority of incorrect matches were detected with single spectra alone, they can be readily discarded from further analysis by imposing a two-spectrum minimum as a final measure of accuracy. Indeed, after filtering the data using this and the initial criteria outlined above (e.g. STATQUEST), fewer than 5% false-positives (reverse proteins) were detected in the final dataset (data not shown).
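A minimal sketch of this reverse-database (decoy) estimate is given below, assuming a hypothetical list of identifications already labeled as matching forward or reverse sequences together with their cumulative spectral counts.

```python
def false_positive_rate(identifications, min_spectra=2):
    """Estimate the proportion of spurious identifications from the fraction
    of reverse (decoy) hits remaining after a minimum spectral-count filter."""
    kept = [(acc, is_reverse) for acc, is_reverse, spectra in identifications
            if spectra >= min_spectra]
    if not kept:
        return 0.0
    n_reverse = sum(1 for _, is_reverse in kept if is_reverse)
    return n_reverse / len(kept)

# (accession, matches_reverse_sequence, cumulative_spectral_count) - hypothetical entries.
ids = [("P12345", False, 14), ("Q99999", False, 2), ("rev_P54321", True, 1),
       ("P67890", False, 1), ("rev_Q11111", True, 1), ("P11111", False, 6)]
print(false_positive_rate(ids, min_spectra=1))  # before the two-spectrum filter
print(false_positive_rate(ids, min_spectra=2))  # after the filter
```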
Regardless of which method of data validation is employed, the filtering process should aim to balance the trade-off between specificity (precision), which reflects the proportion of incorrect identifications or false-positives, and sensitivity (recall), which indicates the proportion of missed identifications or false-negatives. Receiver-operating characteristic (ROC) plots are often used to assess the effects of varying filter stringency on precision and recall (59). Although we do not have knowledge of the correct class labels (because, in most proteomic studies, one usually does not know a priori which proteins are in fact present in a sample), in practice, one can estimate this trade-off empirically based on the fraction of database matches to forward and reverse sequences after various filters are applied. The ROC-like plot shown in Fig. 5 shows the effects of applying different confidence filters to the αTC-1 dataset, based either on the preliminary STATQUEST probability scores or the reproducibility of detection (detected in one or more repeat analyses) or the cumulative spectral counts. The relationship between the number of credible candidates (namely, matches to normal forward protein sequences) and false-positives (represented by matches to reverse sequences) passing each filter is complex. Of course, as one applies a more-stringent filter, the overall false-positive rate decreases (as evidenced by the reduced proportion of reverse proteins), whereas the false-negative rate increases (as represented by the decrease in the number of forward proteins detected). But this trade-off is clearly nonlinear. For the most stringent filters, such as only considering proteins identified with high confidence in all three repeat samples or else proteins detected with a minimum of 10 or more spectra, the false-positive rate can be reduced to virtually zero, but at the expense of severely limited sensitivity (i.e. a very high false-negative rate).

FIG. 4. The graph shows the distribution of initial protein identification (database search) confidence scores, as determined by the STATQUEST probability algorithm, for all candidate proteins identified by MudPIT with a minimum predicted likelihood of 85% or greater. As can be seen from the markedly skewed distribution, the majority of the preliminary predictions are of very high confidence (>95% probability of being correct), with ~93% of all putative matches predicted with 90% or greater confidence and >65% of all proteins predicted with >99% confidence.
Depending on the desired balance of specificity/sensitivity, one may select a mixture of different filter criteria. In practice, the two-stage filtering chosen here ensures reliable identifications (limiting false-positives to <5%) without overly penalizing overall proteomic detection coverage. However, as discussed below, this issue can and should be revisited as other supporting data become available.
Reproducibility and Variance-Despite the effectiveness of statistical filtering, MudPIT-based profiling is still affected by many other sources of error that should be considered. One of the most troubling is the problem of under-sampling. Shotgun MS/MS fragmentation involves a somewhat stochastic peptide selection step, which is generally biased toward higher-intensity peptides even when data-driven exclusion lists are used (45,56). Given that the duty cycle (scan rate) of current instrumentation is limited, only a fraction of all eluting peptides will be selected for fragmentation, while many others will be missed. This trend results in preferential detection of higher-abundance proteins (because these often give rise to higher-intensity peptide ions), while lower-abundance proteins frequently go undetected (45). This limitation is compounded by significant variability in the peptide spectral quality run-to-run (60). 5 Due to this, MudPIT experiments must generally be repeated several times to achieve adequate overall detection coverage (i.e. saturation) (61). The number of runs needed to reach a useful saturation point will depend on the complexity of the sample under study. In our experience, three repeat analyses are usually sufficient to detect >80% of all detectable proteins present in a sample. 5 Saturation of detection is a particularly important consideration if one wishes to compare proteomic profiles between different samples (e.g. different cell lines). Relative protein abundance between samples can be estimated based on the ratio of median cumulative spectral count detected for each protein in the respective samples (45). This method is far simpler to implement and interpret than ones relying on sophisticated chemistries or based on internal standards labeled with stable isotopes (62), but requires a sufficient number of repeat analyses to be robust to outlier effects (45). 5 Spurious variance (experimental noise) can be assessed by calculating the standard deviation and overall reproducibility of protein detection run to run (data not shown).
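The spectral-count-based abundance comparison described above can be sketched as follows, assuming hypothetical per-run counts for one protein in two samples.

```python
from statistics import median

def relative_abundance(counts_sample_a, counts_sample_b):
    """Estimate relative protein abundance between two samples as the ratio
    of median cumulative spectral counts across repeat MudPIT runs."""
    med_a = median(counts_sample_a)
    med_b = median(counts_sample_b)
    return med_a / med_b if med_b else float("inf")

# Hypothetical spectral counts for one protein in three runs of each sample.
print(relative_abundance([12, 9, 15], [4, 5, 3]))
```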
Additional subcellular fractionation and proteomic enrichment procedures aimed at sample simplification can also help surmount the under-sampling problem (42). Alternatively, further advances in instrumentation resulting in higher scan rates, more-efficient modes of peptide sampling, or better spectral analysis, may effectively bypass this concern.
Data Cross-referencing-In addition to the RT-PCR validation, we sought to combine the results from the MudPIT profiling experiments with the microarray datasets to validate our hypothesis that if a probe has high enough signal (raw intensity), then it is likely to be present even if the detection p value is higher than the default cutoff. This first required cross-referencing the MudPIT and microarray datasets using their respective gene product identifiers (IDs or accession numbers).

5 C. Chung and A. Emili, unpublished observations.

FIG. 5. The interplay between precision and recall in database searches. ROC-like plot of the predicted specificity (precision) and sensitivity (coverage or recall) obtained by applying various quality filters to the preliminary proteomic (database search) results. The presumed true-positive fraction is calculated based on the proportion of normal "forward" proteins passing the filter criteria, while the false-positive fraction is calculated based on the proportion of "reverse" proteins passing the same criteria. The data points represent the number of proteins identified with either a given minimum recorded spectral count (with the data also subdivided to highlight run-to-run detection reproducibility) or based on the preliminary STATQUEST database confidence cutoff score. Note that while the false-positive (reverse protein) fraction decreases as one increases the stringency of the STATQUEST quality filter (i.e. moving from right to left), the decrease is marginal as compared with the much sharper decrease in the positive fraction (detection sensitivity or coverage). A filter based on a STATQUEST cutoff of 85% or greater confidence and a minimum of two spectra was chosen as it provided reasonable overall detection coverage for a single experimental dataset without excessive predicted false-positives.
When working with commercial microarrays, as we have done here, it is often straightforward to obtain the relevant cross-reference mappings between probe IDs and Swiss-Prot/TrEMBL protein accessions. Affymetrix, for example, provides current mappings between its probe IDs to various knowledge databases for all its chip designs, which can be readily downloaded through their website (www.affymetrix.com). As for the proteomic dataset, we needed to obtain the latest Swiss-Prot accession numbers in order to most efficiently map these to the corresponding gene probes. The ExPASy website (us.expasy.org/sprot/) provides a convenient tool, Swiss-Prot ID tracker (us.expasy.org/cgi-bin/idtracker), to retrieve up-to-date Swiss-Prot IDs, including primary accession numbers for any deleted gene product IDs. Also within ExPASy is a Swiss-Prot/TrEMBL entry retrieval list tool (us.expasy.org/sprot/sprot-retrieve-list.html), which allows a quick retrieval of the primary accession number (latest unique alphanumeric protein identifier) and secondary accession number (older accession numbers retained after sequence merger) for a given input list of proteins.
To ensure completeness for this study, we obtained all the primary and secondary accession numbers for each of the protein IDs before mapping these to the microarray probes. We were able to map 72% of the reliably identified proteins to probe IDs on the microarray (Supplemental Table III). Presumably, we were not able to map the remaining proteins either because no corresponding gene probes were present on the microarray or because a human ortholog was preferentially identified (a possibility, because both human and mouse protein sequences were used in the database search).
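The accession-based cross-referencing step can be sketched as follows; the probe IDs, accession numbers, and protein labels are all hypothetical, and a real mapping table would be built from the downloaded Affymetrix annotation files.

```python
# Hypothetical lookup table of Affymetrix probe IDs keyed by Swiss-Prot accession,
# and the primary plus secondary accessions retrieved for each identified protein.
probe_by_accession = {"P01308": "101234_at", "P06213": "102345_at"}
protein_accessions = {
    "protein_1": ["Q9XYZ1", "P01308"],   # primary accession followed by secondary accessions
    "protein_2": ["P99999"],
}

def map_protein_to_probe(accessions, probe_table):
    """Return the first probe ID found for any of a protein's accession numbers."""
    for acc in accessions:
        if acc in probe_table:
            return probe_table[acc]
    return None

for protein, accs in protein_accessions.items():
    print(protein, map_protein_to_probe(accs, probe_by_accession))
```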
One challenge working with disparate proteomic and functional genomic datasets is the constant updates to the relevant annotation (knowledge) databases, such as Swiss-Prot/TrEMBL in the case of proteins and UniGene in the case of mRNA, from which one derives the respective IDs. To be comprehensive, these tables need to be up-to-date, but without creating legacy issues. Hence, an alternative, and perhaps more rigorous, approach to linking the proteins and microarray probes is to identify matches using a sequence alignment tool like BLAST (www.ncbi.nlm.nih.gov/BLAST/). One can utilize a bulk sequence retrieval tool, like the ExPASy Swiss-Prot/TrEMBL entry retrieval list tool, to obtain both protein and gene sequence information using accession IDs in the correct format for alignment.
When interpreting a BLAST result, one needs to consider more than just the raw score or expectation (e) value as match criteria. This stems from the fact that BLAST is a local alignment algorithm, and even highly significant e-values approaching 0 do not always indicate a perfect alignment. Therefore, additional criteria, such as percent identity, should be used to define an acceptable threshold cutoff. In our study, we opted for an e-value <10^-20 and >95% identity as thresholds. We do note, though, that a reciprocal validation search, in which BLAST is repeated after swapping the queries and subjects, can help to identify spurious matches between paralogous gene products. But again, one needs to be cautious selecting appropriate cutoff scores in order to maintain a balance between sensitivity and specificity. Finally, it is worth considering that the reference source of the gene and protein sequence information, and whether the sequences were retrieved from the same database, can markedly affect the stringency of a test.
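These thresholds can be applied straightforwardly to tabular BLAST output. The 12-column (outfmt 6-style) layout assumed below, with percent identity in the third column and the e-value in the eleventh, is the standard tabular format, and the file name is hypothetical.

```python
def filter_blast_hits(path, max_evalue=1e-20, min_identity=95.0):
    """Yield (query, subject) pairs from 12-column tabular BLAST output that
    pass both the e-value and percent-identity thresholds."""
    with open(path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            query, subject, identity = fields[0], fields[1], float(fields[2])
            evalue = float(fields[10])
            if evalue <= max_evalue and identity >= min_identity:
                yield query, subject

# Usage (hypothetical file of protein-versus-probe-target alignments):
# for pair in filter_blast_hits("protein_vs_probe_targets.blast.tsv"):
#     print(pair)
```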
Comparison of the mRNA and Protein Profiles-After cross-referencing, we could evaluate the overlap and relative sensitivity, specificity, and biases of the proteomic and genomic approaches. Using the same detection call procedure as for the RT-PCR analysis, which is based on a predefined default threshold detection p value of 0.04, we analyzed the protein distribution relative to the microarray-derived detection p values and raw probe signal intensities.
A total of 5,945 gene products were detected by the gene chips alone (in at least two of the three repeat experiments, using a p value cutoff of 0.04), reflecting the high coverage attainable by microarray. A subset of the cognate proteins encoded by these transcripts may have been missed by MudPIT because we selectively analyzed a nuclear fraction, or because the corresponding sequences were not represented in the reference database used for the spectral search. Of the 888 gene products that could be unambiguously cross-mapped between the platforms, 762 were deemed to be expressed by microarray in addition to being identified by our proteomic method. Conversely, 126 gene products were detected (i.e. identified with high confidence) exclusively by MudPIT, being somehow missed by the gene chip platform using the default p value filter (p value >0.04; therefore deemed "absent"). Taken at face value, the fact that the genomic and proteomic data are in agreement for the majority of cross-mapped gene product pairs (in that both the gene and corresponding protein were detected) validates the reliability of the two platforms. As seen in Fig. 6A, there was a marked tendency for the MudPIT screening to detect the putative translation end-products of transcripts detected with low p values (that is, at or below the default cutoff). These data further confirm the stringency of the default threshold.
Although no clear correlation existed between the predicted detection p value and the corresponding proteomic evidence, as seen in the scatter plot shown in Supplemental Fig. 4, many of the orphan proteins were identified with substantial spectral counts (and, by inference, with very high confidence). Although we cannot exclude experimental error, these data could be taken as evidence that the detection p value filter cutoffs used for the microarray (and even the proteomic analyses) are too stringent. Alternatively, these data may point to an unexpected biological uncoupling between the corresponding levels of the respective mRNA and protein species.
Considering that probe signal and the detection p value are intrinsically inversely related due to the nature of the Affymetrix statistical algorithms used to calculate the two, it was not too surprising to see that microarray probes with low detection p values (≤0.04, or "present") tended to have higher signal intensity than those corresponding to transcripts deemed "absent" (Fig. 1). Likewise, and reassuringly, the probes for transcripts encoding the proteins identified by MS also tended to have higher average signal intensities (Fig. 1). In other words, the average mRNA signal for gene products confirmed to be expressed by MudPIT is much closer to probes detected by microarray analysis than those that were deemed absent (by microarray). These results are in alignment with the heuristic rule summarized above (Fig. 3).
Improving Proteomic Coverage by Data Integration-Using the relatively stringent two-stage quality criteria, false-negatives are an inevitable consequence of proteomic dataset filtering (i.e. Fig. 5). On the other hand, one generally prefers to avoid compromising the integrity of the data by loosening the filters too much. Hypothetically, one could seek to incorporate alternate supporting evidence, such as gene expression data, both as a guide to assess the effectiveness of the proteomic filtering criteria (specificity versus sensitivity) and to validate marginal protein identifications. To address this possibility, we examined the suitability of using microarray results to eliminate false-positives in the subset of protein identifications with lower-confidence database scores, considering only those proteins with solid preliminary probability scores (>85% confidence according to STATQUEST) but that were nonetheless removed because the identifications were based on only a single spectrum (a commonly accepted quality criterion) (56). Fig. 6B shows the ratio of "present" and "absent" calls made for the microarray probe pairs corresponding to transcripts predicted to encode these marginal proteins using a default detection p value ≤0.04. Because roughly half of all genes are predicted to be expressed, a false-positive database match has an equal chance of mapping to an expressed gene (hence, the ratio of present-to-absent calls should be ~1). However, the probes corresponding to even the most marginal protein identifications exhibited a far greater chance of being detected as "present" by microarray (even using the overly stringent p value threshold) than one would expect by chance alone. This implies that one could apply the results of a parallel gene expression study to reduce the false-negative rate resulting from stringent proteomic data filtering, without increasing the number of contaminating false-positives.
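The comparison underlying Fig. 6B can be sketched as follows, assuming hypothetical microarray detection p values for the probes matching the single-spectrum identifications.

```python
def present_absent_ratio(p_values, alpha=0.04):
    """Ratio of 'present' to 'absent' microarray calls among the probes that
    correspond to marginal (single-spectrum) protein identifications; values
    well above ~1 argue that many of these identifications are genuine."""
    present = sum(p <= alpha for p in p_values)
    absent = len(p_values) - present
    return present / absent if absent else float("inf")

# Hypothetical detection p values for probes matching single-spectrum identifications.
marginal_probe_pvalues = [0.01, 0.02, 0.30, 0.003, 0.05, 0.01, 0.70, 0.02]
print(present_absent_ratio(marginal_probe_pvalues))
```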
FIG. 6. Comparison of the proteomic and microarray data. A shows the cumulative fraction of high-confidence proteins detected by MudPIT profiling versus the corresponding microarray probe detection p values. For comparison, the complete cumulative profile of all probe p values is shown. The default detection call cutoff value is indicated with a dotted line. B shows a bar graph of the ratio of microarray probe detection "present" to "absent" calls, based on the default detection p value (≤0.04), made for proteins identified by MS based on only a single medium- to high-confidence spectrum (singleton peptide) but excluded from further consideration due to application of a stringent proteomic filter. As can be seen, considerable supporting evidence of gene expression is evident for these marginal protein identifications. This suggests one can use parallel gene expression data to validate inconclusive results from a shotgun proteomic profiling study.

Annotation and Functional Inference-To obtain a holistic sense of the global similarities and differences between the two datasets, functional annotation was used to compare the transcriptome and proteomic datasets. Obtaining simple gene product descriptions from a reference annotation resource like ExPASy is often a good starting point for interpreting the biological significance of expression profiles and a logical first place for deducing the functions of proteins of special note (e.g. those with interesting expression patterns), which can then be individually followed up. Alternatively, one can look for general patterns of functional enrichment using a more-generalized annotation resource such as the GO database (www.geneontology.org), which reports on the functional properties of gene products using a more-standardized, computer-friendly schema. The idea is to look for underlying trends in an input list of gene products, by calculating the relative membership enrichment (versus chance alone) in various select functional categories (e.g. GO terms) using a suitable statistical measure. While the GO curation is far from complete, and potentially error prone, it is currently the most comprehensive resource of its type. Moreover, one can use a dedicated computer program or web application, such as GOMiner (discover.nci.nih.gov/gominer/), to automate the analysis.
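The enrichment calculation itself typically reduces to a hypergeometric (or Fisher's exact) test per GO term; a minimal sketch with made-up counts is shown below.

```python
from scipy.stats import hypergeom

def go_term_enrichment_p(total_genes, term_genes, selected_genes, selected_in_term):
    """Probability of observing at least selected_in_term genes annotated to a
    GO term in a selection of selected_genes drawn from total_genes, of which
    term_genes carry the annotation (hypergeometric upper tail)."""
    return hypergeom.sf(selected_in_term - 1, total_genes, term_genes, selected_genes)

# Hypothetical counts: 12,000 probes on the array, 300 annotated to "mRNA splicing",
# 1,600 gene products also seen by LC-MS, 85 of which carry the annotation.
print(go_term_enrichment_p(12000, 300, 1600, 85))
```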
As might be expected, gene transcripts expressed in αTC-1 were enriched for cell communication, regulatory, and signaling proteins known to act as key determinants of differentiation, tissue-specificity, and endocrine cell function (data not shown). As shown in Supplemental Table IV, a disproportionate fraction of the gene products also identified by LC-MS were found to be involved in nucleic acid binding, transcription, mRNA splicing, and genome maintenance (e.g. DNA replication, cell-cycle control, etc.), consistent with the subcellular fractionation and enrichment procedure used in this study. Hence, based on the observed subcellular localization and concordant expression patterns, the potential function of at least some of these proteins in α-cell physiology can be reasonably predicted.
DISCUSSION
The introduction of high-throughput, high-resolution experimental technologies over the past few years, especially DNA microarray gene chips and protein MS, has raised optimism that the molecular underpinnings of endocrine cell function can now be elucidated on a systems-wide level. Our expectation is that systematic molecular profiling studies, when performed rigorously, will also help address fundamental biomedical research questions relating to the basis for common metabolic disorders, such as insulin resistance and overt diabetes. Nevertheless, there is growing recognition of the serious limitations and biases associated with most expression profiling technology in their current forms, particularly with regards to the reliability of the biological inferences that can be made (24,36).
In this pilot study, we have attempted to combine large-scale gene and protein expression technology platforms together with the aim of making more reliable, and more comprehensive, inferences regarding endocrine cell function. We have outlined some of the less well-appreciated, but nevertheless important, technical and analytical issues associated with practical implementation of global profiling methods, while offering potential solutions to the most pressing problems. Although the overlap between the gene chip and proteomic patterns reported here represents only a small portion of the total predicted genetic complement of mouse, our data strongly suggest that integration of multiple types of experimental techniques truly does allow one to gain a more complete, statistically sound, and biologically meaningful picture of the sample under study.
It is important to stringently filter genomic datasets using proven, rigorous statistical measures of reliability (24,36,56,58) and to regard preliminary results with caution. Detection calls are an important first step for determining the reliability of probe values. Accurate predictions, and their associated p values, are proving to be important metrics as microarray datasets are increasingly being used as the basis for disease or sample classification systems (e.g. see Ref. 63). The robustness of probe features should be carefully investigated prior to their use in machine-learning and classification studies so as to avoid spurious biomarker discovery. In this study, we have found the default microarray p value threshold, as suggested by Affymetrix, to be overly conservative with respect to detection specificity at the expense of sensitivity. Using RT-PCR, often considered to be the "gold-standard" technique for transcript detection, we have validated the expression of a number of mRNAs with high predicted probe p values (>0.04; therefore, classified as marginal or absent by the Affymetrix detection call). Moreover, we have confirmed these initial findings using high-stringency global proteomic profiling. Viewed together, these results indicate that a significant number of genuine transcripts are likely commonly missed in most microarray screens, presumably as false-negatives due to overly restrictive filtering criteria.
Based on this study, we propose, as a heuristic guideline, that researchers incorporate other information, like probe signal intensity as well as other orthogonal experimental data, such as RT-PCR and (ideally) protein MS, to define suitable detection filters for microarray data. Although the exact nature of these criteria would differ from experiment to experiment, this approach is generalizable. By considering additional biologically pertinent factors, such as related properties in gene function detected across a global dataset, one might better discern the significance of a (noisy) expression profile. This is particularly the case when taking into account biological variation on top of experimental noise.
Making a reliable detection call seems like a far simpler problem as compared with determining differential gene expression, for example. In the latter, one is usually dealing with data from several different samples-either a time series or treatment-control-response experiments, or both-that first need to be normalized so that the signals are comparable. Then, thresholds for deciding upon "significant change" in transcript levels must be decided on. A "yes or no" answer for deciding whether an mRNA was detected requires only a single chip (although the confidence and power of a result can be increased by repeat analysis, if required), and normalization is not necessary because there is no need for comparing signals. However, detection of genuine gene expression is a very important and interesting research problem in and of itself, though not nearly as alluring as the issue of differential expression, which is possibly a reason why it has received relatively little attention to date.
For starters, it is important to know if a gene is in fact expressed (or not) before one attempts to establish if transcript levels change under different experimental conditions. That is because low-abundance or marginal transcripts are frequently misidentified as "differentially expressed" by standard analysis software, because the absolute signal is often not taken into account if merely looking at fold change. Small fluctuations in background signal, due to technical variability, can sometimes override the signal intensity of an otherwise blank probe, resulting in classification as "differentially expressed" when in fact the transcript is absent altogether. Normalization tries to account for this, but not all normalization algorithms work equally well at dampening or removing variability in low-intensity signal. Hence, one must carefully consider both the absolute signal and detection call to reliably judge the significance and amplitude of log ratios when searching for differentially expressed probes. This consideration has also been recognized by Affymetrix, as their new proprietary expression analysis algorithm, PLIER (Probe Logarithmic Intensity ERror estimation), takes probe hybridization affinity information into account and uses a sophisticated error model to calculate background and nonspecific hybridization, accepting >10% more probes compared with the Statistical Algorithm (based on the Rat 230 2.0 array) (18). Other related algorithms are continuously being developed to improve the overall sensitivity and specificity of mRNA detection, especially for lower-abundance transcripts that are often presumed to be of particular biological relevance. 2

For quantitative multisample comparisons, microarray data is usually normalized to minimize technical variability, such as variation in manufacture and processing of the arrays, scanning (and scanner calibration), differences in detection efficiency between the fluorescent dyes, and systematic spatial biases in measured expression levels. Normalization also aims to minimize unwanted variability due to unequal quantities of starting RNA, variable sample preparation, differences in labeling and hybridization efficiencies, the time of day the experiment is performed, and even the technician performing the experiment (20,21). Ultimately, however, biological variation remains the major issue of concern.
The techniques used to normalize Affymetrix data differ in two major aspects: whether they use probe-level data or summarized expression intensity, and whether they perform normalization across the entire dataset or normalize each chip individually. For example Li and Wong's model-based expression index (MBEI) measure uses probe-level data of the entire dataset, excluding outliers, to fit their statistical model, which is then used to calculate gene expression (64). This is a nonlinear normalization algorithm, which takes into account information specific to each probe.
Quantile normalization, used as the normalization method in the widely accepted RMA expression quantitation method (20,21), uses probe-level data and does not require a baseline array. An RMA expression measure that uses sequence information (GCRMA) is a related, though more sophisticated, expression quantitation method (34). The difference is that in GCRMA the background adjustment algorithm takes sequence information into account (such as GC content) and uses a stochastic model for binding affinities. For normalization the quantile normalization is used, and median-polish is used as a summary method, as is the case with RMA. This probe-specific background adjustment can also be used to improve the accuracy of other methods that summarize or normalize probe-level data (65,66).
Because most microarray studies are ultimately aimed at predicting corresponding protein levels, in most scenarios it would probably be advantageous to measure global protein expression patterns directly. The proteome (or, possibly more correctly, the translatome or population of expressed proteins), is conceptually analogous to the transcriptome (population of mRNA transcripts) and is defined as the entire set of proteins produced by a cell or tissue at any given time point (67). MS-based protein expression profiling represents an increasingly attractive, albeit still very challenging, approach for investigating the global biochemical properties of cells and tissues (38,62). When combined with biochemical prefractionation and enrichment procedures, proteomic screening also offers the potential for extensive functional discovery, such as by determining both the subcellular location and possible interacting partners of proteins of special interest. Nevertheless, much remains to be accomplished in this regard with respect to endocrine cell biology.
Despite recent advances, large-scale proteomic measurement of endocrine cells and tissues represents a daunting experimental challenge. Protein abundance, subcellular localization, and turnover are highly dynamic, while biological patterns are dictated by overlapping developmental signals, physiological cues, environmental constraints, and disease perturbations. Classical gel-based proteomic screening techniques, such as two-dimensional PAGE, have proven to be ineffective for monitoring the proteome, particularly lower-abundance proteins, and are biased against the identification of small, basic, or membrane-bound proteins (39,67). In contrast, gel-free shotgun protein profiling strategies, such as the one used in this study, allow for far more comprehensive characterization of complex biological samples (39,41,67). Since its introduction (39,41,67), the MudPIT technique has proven to be a very capable method (reviewed in Refs. 39, 40, 68). Our group routinely applies MudPIT to monitor the protein patterns of various organelles isolated from rodent tissues (i.e. mouse organs) and cultured mammalian cell lines with the objective of elucidating the biochemical properties associated with these various cell types (42,57). Although the results of our pilot efforts to define the proteome of endocrine cell lines such as islet αTC-1 are still quite limited (3,5), based on the results reported here we would argue that proteomic and functional genomic approaches are highly complementary and indeed synergistic when combined. Although a broad correlation was observed between the mRNA and protein patterns, the correspondence proved to be lower than might be expected.
Of course, assuming analytical error is not a cause, genes whose expressed cognate transcription and translation products do not correlate are of special interest because they may be indicative of protein regulation by posttranslational mechanisms, for instance by targeted protein degradation. 6 On the other hand, outliers whose protein expression was significantly higher than expected based on the corresponding mRNA levels (i.e. not detected) might include long-lived proteins and, as a class, might be expected to be less common than outliers whose mRNA expression was higher (due to incomplete proteomic sampling). This latter class may contain proteins subject to proteolytic regulation, or perhaps regulation by transport into or out of the nucleus, to another organelle or alternatively to the plasma membrane (insoluble membrane proteins were largely missed by the MudPIT profiling procedure used in this study). These subsets merit additional scrutiny because they may represent factors important in determining or regulating cell-type-specific biological functions.
The issue of proteomic detection coverage is an important consideration. Part of the current limitations associated with shotgun profiling stems from a failure to properly (and optimally) interpret the vast collection of acquired spectra, leading to both many false-negatives (missed identifications) and false-positives (incorrect identifications). To this end, several groups have been trying to develop appropriate computational and statistical tools and methods for evaluating, validating, and mining large-scale protein expression datasets (45,57,58,69). We have argued here that prior knowledge of gene expression patterns can, in fact, be used as supporting confirmatory evidence in favor of tentative (marginal preliminary scoring) protein identifications. Based on our preliminary findings, we propose, as a second heuristic rule (albeit one still requiring definitive formal proof), that any tentatively identified protein whose gene is deemed to be expressed by microarray analysis and that has a high confidence score (≥85% likelihood by STATQUEST or a similarly derived statistical measure) should be accepted as correctly identified regardless of the total number of supporting spectra.
Data integration is also complementary to alternate biochemical methods aiming for sample simplification or enrichment to improve proteomic detection, such as subcellular fractionation and/or nondenaturing conventional chromatography, which have proven to be fruitful for extending detection limits as well as providing a more biologically informative context for interpreting profiles (37,42,57). Of course, the introduction of new generations of high-performance instruments with much greater sensitivity, dynamic range, and scan speeds will also surely help to surmount the under-sampling problem.
"year": 2005,
"sha1": "ebc4611b6ef5fa72b5278b9ff1c8f84d1b41d946",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/mcp.r500011-mcp200",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "e1aced10004bf762be1e96f7536790f833d0f6ff",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Apolipoprotein E deletion has no effect on copper-induced oxidative stress in the mice brain
The current study was designed to investigate the effect of copper administration on oxidative damage to the brain in ApoE−/− mice and to explore the putative neuroprotective effects rendered by apolipoprotein E (ApoE). Male C57BL/6 ApoE−/− and wild-type mice were randomly assigned into four groups: ApoE−/− mice and wild-type mice treated with either copper or saline. Copper sulphate pentahydrate or saline (200 µl) was administered intragastrically daily for 12 weeks. Malondialdehyde levels and the expression of superoxide dismutase (SOD), heme oxygenase 1 (HO-1), and NAD(P)H:quinone oxidoreductase 1 (NQO1) were determined by a combination of biochemical assays. The concentration of copper in the brain of C57BL/6 mice and ApoE−/− mice treated with copper significantly increased compared with mice treated with saline (P=0.0099 and P=0.0443, respectively). Compared with the C57BL/6 mice treated with copper, the brain copper level of the ApoE−/− mice treated with copper was higher (P=0.018). TBARS levels and SOD activities, as well as the expression of NQO1 and HO-1 in the brain, were not significantly different amongst the four experimental groups of mice. The relative value of NQO1/β-actin expression in the brain of the ApoE−/− mice was similar in the saline and copper administration groups. However, Western blot analysis showed that NQO1 expression was significantly higher in the brain of saline-treated ApoE−/− mice compared with saline-treated wild-type mice (P=0.0449). ApoE does not function in protecting the brain from oxidative damage resulting from copper build-up in Wilson's disease, but may play a role in regulating copper accumulation in the brain.
Introduction
Hepatolenticular degeneration, also termed Wilson's disease, is an autosomal recessive genetic disorder characterized by ATP7B gene mutations, leading to defective copper metabolism [1]. Wilson's disease occurs in approximately 1 in 30,000 live births [2]. The H1069Q mutation in European populations and the R778L mutation in Asian populations are the most common ATP7B mutations observed in patients with Wilson's disease [2][3][4][5].
Hepatic damage to various extents, neuropsychiatric symptoms, Kayser-Fleischer rings, and damage to the kidney and skeletal muscle are classical clinical hallmarks of Wilson's disease. Interestingly, liver damage is generally the major clinical manifestation at disease onset in children, whereas neurological symptoms tend to occur in those aged in their twenties or older [6]. There are, however, some reports of pediatric cases in which the primary symptoms were neurological, either isolated or along with mild hepatic injury. The diversity of symptoms and differences in age at disease onset suggest that the features of this disease may not be determined exclusively by the ATP7B gene mutations and that other genetic factors may be involved in the pathogenesis and perhaps progression of the disease.
Recently, some studies have shown that age of onset as well as diversity of clinical presentation are correlated with variant apolipoprotein E (ApoE) genotypes [7][8][9]. There are three alleles of the ApoE gene -ε2, ε3, and ε4 -encoding three different ApoE isoforms, each with biological functions different from the others [10]. Several lines of evidence have established that ApoE genotypes are associated with the occurrence and clinical outcome of neurodegenerative diseases [10][11][12][13][14]. Whereas the ApoE 3 protein has a neuroprotective function, the ApoE 4 protein increases vulnerability to CNS damage [15][16][17]. Because ApoE proteins can exert an antioxidant effect [18], and the high copper build-up in Wilson's disease induces tissue damage via generation of free oxygen radicals [1], it is speculated that ApoE may render a neuroprotective effect in patients with Wilson's disease.
We have earlier examined the role of ApoE in copper administration-induced hepatic toxicity using ApoE−/− mice. We confirmed that copper accumulation can induce hepatic oxidative damage and that ApoE may protect the liver from this oxidative injury [18]. Given that copper accumulation also occurs in the brain, the objective of the current study was to explore whether ApoE exerts a neuroprotective effect against oxidative damage induced by copper accumulation in the brain.
Animals
Animal experiments were performed after obtaining approval from the Institutional Animal Care and Use Committee of Hebei Medical University, Hebei, China, as described before [20]. Male C57BL/6 ApoE−/− and wild-type mice (15.1 ± 1.38 g; Animal Care Center of Beijing Medical University, China) were used in the present study. Animals were maintained as described before [20]. Twenty-four wild-type or ApoE−/− mice were randomly assigned into two groups and treated with either saline or copper. Copper sulphate pentahydrate (200 μl; 200 ppm; lot# BCBG7381V, Sigma, U.S.A.) or 200 μl saline was intragastrically administered daily for 12 weeks. After 12 weeks, mice (n=8 per group) were killed, and blood was obtained by orbital bleeding. The brains were removed, snap-frozen, and stored in liquid nitrogen until further use. The remaining four mice from each group were processed for immunohistochemistry as described below.
Preparation of tissue samples
Brain tissue homogenates were prepared in ice-cold PBS containing protease inhibitor cocktail (Sigma-Aldrich, U.S.A.) using a homogenizer. Homogenates were centrifuged at 8000 g for 10 min. TBARS and superoxide dismutase (SOD) activities were determined in the supernatants as described in the next section.
Determination of copper concentration in the brain
The concentration of copper was determined according to the method described in a previous study [20]. Two hundred milligrams of brain sample was digested using 5 ml of concentrated nitric acid and 1 ml of 30% H2O2 in a microwave digestion system (CEM, U.S.A.). All samples were heated to 140°C until the volume declined to 2 ml and were then allowed to cool. Each sample was washed three times using 1% nitric acid. An atomic absorption spectrometer was used to determine the concentration of copper.
Determination of TBARS and SOD activities in the brain
TBARS activities were determined using the thiobarbituric acid (TBA) method (Nanjing Jiancheng Bioengineering Institute, China) as described before [20]. The concentration of TBARS was expressed as nM of TBARS per mg of protein. SOD activities were determined by the xanthine oxidase method as described previously [20], and the results were presented as units per milligram of protein.
Immunohistochemistry
After copper or saline administration for 12 weeks, four mice were randomly selected from each group for immunohistochemical experiments using an SABC three-step kit (Boster, Wuhan, China). Intraperitoneal injection of 10% chloral hydrate (300 mg/kg) was used for anesthesia. After anesthesia, mice were transcardially perfused with 200 ml of normal saline followed by 4% paraformaldehyde in PBS. The brains were removed, fixed for 24 h in 4% paraformaldehyde, and embedded in paraffin. Tissue sections (5 μm) were obtained using a microtome. Sections were treated with 3% H2O2 at room temperature for 15 min to block endogenous peroxidase activity. Sections were then processed as described previously [20] and incubated with primary antibodies against hemeoxygenase 1 (HO-1; 1:100 dilution, Epitomics, U.S.A.) or NAD(P)H: quinone oxidoreductase 1 (NQO1; 1:100 dilution, Epitomics, U.S.A.) overnight at 4°C. Sections were developed using DAB staining and counterstained with hematoxylin. Sections incubated without primary antibodies served as negative controls. A light microscope was used to image the immunostained sections.
Western blot
Immunoblot analysis of proteins expressed in the brain was performed on lysates prepared from brain tissue as described before [20].
Quantitative real-time PCR
Total RNA was isolated with TRIzol reagent (Thermo Fisher Scientific, Shanghai, China) and treated with DNase to remove any contaminating genomic DNA. cDNA synthesis and quantitative real-time PCR (qRT-PCR) analysis were performed as described before [20]. Primers used for NQO1 were forward primer 5′-TATGCTGCCATGTACGACAACGG-3′ and reverse primer 3′-AAGACCTGGAAGCCACAGAAACG-5′. Data analysis was performed using the Sequence Detection Systems interface (Thermo Fisher Scientific).
Statistical analysis
Data were expressed as mean ± S.D. Two-way ANOVA (genotype × treatment) was used to compare continuous data (malondialdehyde, SOD, HO-1, and NQO1 measurements, determined by a combination of biochemical assays) between the male C57BL/6 ApoE−/− and wild-type groups. If the interaction was statistically significant, the least-squares means test was used for pairwise comparisons. All statistical analyses were performed with SPSS 21.0 software, and P<0.05 was considered statistically significant.
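For readers who prefer a concrete view of the 2 × 2 factorial design described above, the following Python sketch (using pandas and statsmodels rather than the authors' SPSS workflow) fits genotype, treatment, and their interaction; the column names and the toy TBARS values are invented.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "genotype":  ["WT"] * 4 + ["ApoE_KO"] * 4,
    "treatment": ["saline", "saline", "copper", "copper"] * 2,
    "tbars":     [2.1, 2.3, 2.6, 2.4, 2.2, 2.5, 2.7, 2.9],   # toy values, nM/mg protein
})

model = smf.ols("tbars ~ C(genotype) * C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects plus the genotype x treatment interaction
# Only if the interaction term were significant would pairwise comparisons of the
# least-squares (estimated marginal) means follow, as stated in the text.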
Results
The concentration of copper in the brain of ApoE−/− mice
To verify the concentration of copper in the brain of the four groups, we determined the copper concentration using the atomic absorption spectrometer. The results showed that ApoE−/− mice treated with saline exhibited significantly higher copper concentrations in the brain compared with the wild-type mice treated with saline (4.3 ± 0.52 compared with 2.5 ± 0.43; P=0.0099). The copper concentration in the brain tissue of C57BL/6 mice treated with copper was significantly higher compared with the C57BL/6 mice treated with saline.
The expression of NQO1 mRNA in the brain
To determine the expression of NQO1, we also used qRT-PCR to detect the level of NQO1 mRNA in ApoE−/− mice. Even though the relative ratio of NQO1 to ACTB (encoding β-actin) in the brain of ApoE−/− mice treated with copper was slightly higher than in the ApoE−/− mice treated with saline, the difference did not attain statistical significance (1.4 ± 1 compared with 0.8 ± 0.12; P=0.3605; Figure 4).
Discussion
Clinical outcomes in vascular and neurodegenerative disorders seem to be influenced by ApoE genotype [14,19,20]. In Alzheimer's disease, the ApoE 3 protein functions as a neuroprotector; in comparison, the ApoE 4 protein promotes CNS injury [15][16][17]. Several studies have shown that an APOE ε4-positive genotype is associated with earlier symptomatic onset of Wilson's disease, particularly amongst female patients homozygous for the ATP7B p.H1069Q mutation [7][8][9]. However, whether ApoE facilitates neuroprotection in Wilson's disease was not known.
In an earlier study [20], we examined the role of ApoE in copper buildup-induced oxidative injury in the liver by administering copper for 12 weeks at a high dosage of 200 mg/kg/day to ensure effectiveness of copper toxicity. The animals treated with copper exhibited a significantly higher copper concentration in the liver than the animals treated with saline, suggesting that excessive copper accumulates in the liver. Furthermore, copper treatment also resulted in excessive copper accumulation in other organs including the brain and kidney. Thus, this animal model exhibited copper build-up in the brain, kidney, and liver with wide-ranging hepatic damage, which mimics the presentation features of Wilson's disease. We found that copper induced an elevation in TBARS activities and an inhibition of SOD activities in the serum and liver irrespective of ApoE expression, accompanied by an elevation in the expression of NQO1 and HO-1, indicating that elevation of the copper concentration can induce oxidative damage in the liver.
Copper functions as a cofactor of various enzymes participating in a diverse array of cellular processes. Therefore, it is likely that pleiotropic mechanisms regulate copper toxicity. High levels of copper have been associated with neural degeneration, reducing the number of nerve cells [21], and even small quantities of copper induced neurotoxicity in cholesterol-fed mice via oxidative stress-induced apoptosis [22].
In this study, the mice were also exposed to copper administration for 12 weeks at the high dosage of 200 mg/kg/day to ensure effectiveness of copper toxicity. In toxicologic studies, copper is usually administered at doses of 50 mg/kg/day for low toxicity, 100 mg/kg/day for moderate toxicity, and 200 mg/kg/day for high toxicity [23]. On the other hand, it has been reported that hepatic injury might occur after a period of 12 weeks of feeding with a standard chow diet [24,25]. Over these 12 weeks, copper accumulated in the brain while the hepatic injury was not very serious. In light of the above, 12 weeks of high-dose copper administration was chosen. We found that the ApoE−/− mice treated with saline exhibited significantly higher copper concentrations in the brain compared with the wild-type mice treated with saline, and the ApoE−/− mice treated with copper sulphate pentahydrate also exhibited significantly higher copper concentrations in the brain compared with the wild-type mice treated with copper sulphate pentahydrate, suggesting that ApoE may participate in regulating copper accumulation in the brain. However, it remains unknown how ApoE knockout results in copper accumulation in the brain. The blood-brain barrier is disrupted in ApoE−/− mice [26], raising the possibility that increased permeability of the blood-brain barrier to copper leads to increased brain copper concentrations in ApoE−/− mice. However, although brain copper concentrations were significantly higher in the wild-type mice treated with copper and in the ApoE−/− mice treated with saline or copper compared with the wild-type mice treated with saline, no significant differences in TBARS and SOD activities or the expression of NQO1 and HO-1 were found amongst the saline- or copper-treated wild-type and ApoE−/− mice.
Even though copper deposited in the liver is thought to initiate Wilson's disease, the disease presentation does not match the degree of copper accumulation in the liver, indicating that a multitude of factors may contribute significantly to Wilson's disease pathology [27]. These factors include the exposure time to copper, the intracellular copper distribution, the presenting form of copper in the liver, and the involvement of additional protein regulators. Because ATP7B is also expressed in the CNS, CNS abnormalities may accompany liver damage, and the delay of neurological symptoms may arise from partial compensation of ATP7B malfunction by another copper-transporting ATPase, ATP7A, which is expressed in the brain but not in liver cells.
To further investigate the impact of ApoE on the brain, we detected both the expression of NQO1 mRNA and the relative value of NQO1/β-actin protein in the brain. The results of the two tests were in accordance, showing no differences between the copper- and saline-treated ApoE−/− mice with either measure, further suggesting that ApoE may not play a neuroprotective role in copper-induced oxidative damage.
In summary, our results showed that knockout of ApoE potentiated copper build-up in the brain. However, although the brain copper concentrations increased in the ApoE−/− mice, no significant differences in the TBARS and SOD activities or the expression of NQO1 and HO-1 were found in the copper-treated ApoE−/− mice. Our findings suggest that ApoE may not protect the brain from copper-induced oxidative damage but might play a yet undefined role in regulating copper accumulation in the brain.
Funding
This work was supported by the Science and Technology Program Project of Hebei Province [grant number 162777197].
"year": 2018,
"sha1": "abc02ed3c57e3d238b5aed3970b4a77d2e379c0b",
"oa_license": "CCBY",
"oa_url": "https://portlandpress.com/bioscirep/article-pdf/38/5/BSR20180719/810715/bsr-2018-0719.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "abc02ed3c57e3d238b5aed3970b4a77d2e379c0b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Systemic inflammatory markers for distinguishing uncomplicated and complicated acute appendicitis in adult patients
OBJECTIVE This study aimed to investigate the predictive power of serum systemic inflammatory markers including neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), monocyte-eosinophil ratio (MER), and C-reactive protein (CRP) levels for distinguishing uncomplicated and complicated acute appendicitis in adult patients admitted to the emergency department (ED). METHODS This retrospective, cross-sectional, observational, and single-center study enrolled 212 consecutive adult patients with acute appendicitis who were admitted to the ED of our tertiary care university hospital between January 1, 2019 and December 31, 2021. Patients were divided into two groups (Group I, uncomplicated acute appendicitis; Group II, complicated appendicitis) according to their surgical findings and histopathological examination. Systemic inflammatory markers measured on admission were compared among patients to identify factors associated with complicated acute appendicitis. RESULTS A total of 132 patients, 83 male (62.9%) and 49 female (37.1%), were included in the study. The mean age was 34.7±13.40 years. Based on the histopathological examination, the number of patients in Group I was 103 (78.03%) and 29 (21.96%) in Group II. Laboratory findings on admission revealed no significant differences between Group I and II patients in terms of mean serum NLR, MER, and CRP values (p=0.096, p=0.248, and p=0.297, respectively). However, the mean serum PLR in Group II patients was statistically significantly higher than that in Group I (p=0.032). The mean serum monocyte and monocyte fraction (%) values were significantly lower, and the mean serum neutrophil fraction (%) value was higher in Group II patients compared to those in Group I. Receiver operating characteristic (ROC) analysis identified a serum PLR cutoff value of ≥133.73 for distinguishing uncomplicated and complicated acute appendicitis in adult patients, with 60% sensitivity and 58.4% specificity. In addition, ROC analysis revealed a cutoff monocyte fraction (%) level of ≤6, with 72% sensitivity and 64% specificity, for distinguishing uncomplicated and complicated acute appendicitis in adult patients. CONCLUSION Our findings indicate that the mean serum NLR, MER, and CRP values measured on admission to the ED in adult patients with acute appendicitis could not predict complicated acute appendicitis. However, mean serum PLR and neutrophil and monocyte counts can be useful in distinguishing complicated cases.
Among the systemic inflammatory parameters, the neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), monocyte-eosinophil ratio (MER), and C-reactive protein (CRP) value were found to be associated with disease severity in critical illness [4][5][6]. In a retrospective study of 162 patients who underwent an appendectomy, Eren et al. [7] noted that NLR can be used together with the physical examination as a diagnostic marker for acute appendicitis and to predict perforated appendicitis. In addition, in another study of 520 patients who were operated on due to appendicitis, Kapci et al. [8] reported that hemogram parameters such as leukocyte count, neutrophil count, and NLR are beneficial in the diagnosis of acute appendicitis when used with the physical examination. However, in a study of 101 patients with acute appendicitis, Guler et al. [9] observed no significant difference in terms of leukocyte counts and CRP values between perforated and non-perforated acute appendicitis patients.
In this study, we investigated the predictive power of serum systemic inflammatory markers including NLR, PLR, MER, and CRP levels for distinguishing uncomplicated and complicated acute appendicitis in adult patients admitted to the ED.
Ethics Committee Approval and Patient Consent
This study was conducted in accordance with the 1989 Declaration of Helsinki and was approved by the Institutional Review Board (IRB) of Haseki Research and Training Hospital on March 09, 2022 (no. 49/2022). Patient consent to review their medical records was not required by the IRB because there were no potentially identifying marks and no patient identifiers in the images or accompanying text.
Study Design and Setting
This retrospective, cross-sectional, observational, and single-center study included 212 consecutive adult patients aged ≥18 years who were diagnosed with acute appendicitis in the ED and underwent an appendectomy at the surgical clinic between January 1, 2019 and December 31, 2021. Data were collected by searching for K35 International Classification of Diseases codes in the hospital's automation system and hospital archive.
Study Population and Sampling
We enrolled a total of 212 patients who were diagnosed with acute appendicitis in the ED and underwent an appendectomy at the surgical clinic. Of these patients, 4 were excluded because their post-operative pathology report was noted as negative appendicitis. Eight patients were excluded because they had unusual findings in their appendectomy specimen (e.g., mucoceles of the appendix, neoplasm, and lymphadenopathy). Thirty-five patients were excluded because their data could not be accessed through the automation system or hospital archive. Two patients under the age of 18 were excluded from the study. Eighteen patients were excluded because they had a history of hematological disease. Thirteen other patients who had taken medications within the last week that may affect the levels of serum systemic inflammatory markers including NLR, PLR, MER values, and CRP levels (anti-inflammatory drugs, antibiotics, and statins) were excluded from the study. Finally, 132 patients with positive appendicitis confirmed with post-operative reports were included in the study.
Patients were divided into two groups according to their surgical findings and histopathological examination.
Highlight key points
• The incidence of complicated acute appendicitis increases with aging.
• The systemic inflammatory markers including mean serum NLR and MER values and CRP levels could not predict complicated acute appendicitis.
• PLR can be useful in distinguishing complicated appendicitis from uncomplicated appendicitis.
• To identify the patient with complicated appendicitis, a monocyte fraction (%) level of ≤6 was found to be a cut-off with 72% sensitivity and 64% specificity.
• Moreover, to identify the patient with complicated appendicitis, a PLR value of ≥133.73 was found to be a cut-off with 60% sensitivity and 58.4% specificity.
Group I included patients with uncomplicated acute appendicitis (cases presenting without surgical or histopathological signs of perforation) and Group II included patients with complicated appendicitis (cases with perforation, necrosis, abscess, or generalized peritonitis). Systemic inflammatory markers measured in serum on admission were compared among patient groups to identify factors associated with complicated acute appendicitis.
Outcome Definition
We evaluated the systemic inflammatory parameters for distinguishing patients with complicated acute appendicitis from those with uncomplicated acute appendicitis.
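For clarity, the ratio markers examined in this study are simple quotients of routine complete-blood-count values; the short Python sketch below shows the arithmetic with hypothetical counts that are not drawn from the study data.

def inflammatory_ratios(neutrophils, lymphocytes, platelets, monocytes, eosinophils):
    # NLR, PLR, and MER as used in this study; guard against a zero eosinophil count
    return {
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
        "MER": monocytes / eosinophils if eosinophils > 0 else float("inf"),
    }

print(inflammatory_ratios(neutrophils=9.8, lymphocytes=1.6, platelets=265,
                          monocytes=0.7, eosinophils=0.1))
# e.g. NLR ~ 6.1, PLR ~ 166, MER = 7 for this invented hemogram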
Statistical Analysis
All data analyses were conducted using SPSS statistical software (version 15.0 for Windows; SPSS Inc., Chicago, IL, USA). Categorical variables (sex and age) were expressed as numbers of patients (n) and percentages (%). Numerical data were expressed as mean, standard deviation, minimum, maximum, and median values. Intergroup comparisons (Group I vs. Group II) were conducted using the Chi-squared test and Student's independent t-test for categorical and normally distributed data (e.g., gender and age) and the Mann-Whitney U test for non-normally distributed data (e.g., leukocyte, hemoglobin, neutrophil, lymphocyte, and thrombocyte counts; CRP). The numerical variables in the patient groups were compared using the Mann-Whitney U test when the data did not conform to a normal distribution. Receiver operating characteristic (ROC) analysis was used to determine the cutoff values for PLR, monocyte count, and monocyte fraction. The alpha significance level was set at p<0.05.
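The cutoffs reported below can be obtained from a ROC curve; one common (though not the only) criterion is the Youden index, illustrated in this hedged Python sketch with scikit-learn and placeholder data rather than the study dataset.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

y   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 0, 1])                       # 1 = complicated appendicitis (toy labels)
plr = np.array([90, 110, 120, 150, 100, 140, 180, 135, 125, 200])    # toy PLR values

fpr, tpr, thresholds = roc_curve(y, plr)
j = tpr - fpr                                   # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print("AUC:", roc_auc_score(y, plr))
print("cutoff:", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])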
RESULTS
Table 1 presents the demographic characteristics of the patients in this study. A total of 132 patients, 83 male (62.9%) and 49 female (37.1%), were included in the study. The mean age was 34.7±13.40 years. We categorized the patients into two groups based on the surgical findings and histopathological examination. Group I included patients with uncomplicated acute appendicitis and Group II included patients with complicated appendicitis. The number of patients in Group I was 103 (78.03%) and 29 (21.96%) in Group II. The mean age of Group II patients was statistically significantly higher than that of Group I (41.80±16.40 vs. 32.70±12.30; p=0.005). However, no statistically significant difference was found between Group I and II patients in terms of gender (p=0.297). Laboratory findings on admission revealed no significant differences between Group I and II patients in terms of mean serum NLR, MER, and CRP values (p=0.096, p=0.248, and p=0.297, respectively; Table 2). However, we observed that the mean serum PLR in Group II patients was statistically significantly higher than that in Group I (162.80±208.30 vs. 196.40±111.60, p=0.032; Table 2). In addition, mean serum monocyte and monocyte fraction (%) values were found to be significantly lower in Group II patients compared to Group I (p=0.032 and p=0.012, respectively; Table 3). Furthermore, the mean serum neutrophil fraction (%) value was found to be statistically significantly higher in the Group II patients (p=0.047). Finally, no significant difference was found between the patient groups in terms of mean serum leukocyte, neutrophil, lymphocyte, platelet, and eosinophil counts (p=0.881, p=0.377, p=0.100, p=0.307, and p=0.174, respectively; Table 3).
DISCUSSION
This study investigated the role of systemic inflammatory markers such as NLR, PLR, MER, and CRP levels in the preoperative diagnosis of complicated appendicitis in adult patients admitted to the ED and operated on for acute appendicitis.
The key findings were as follows. First, 78% of patients had uncomplicated appendicitis and 22% had complicated appendicitis. Second, the mean age of the patients with complicated appendicitis was significantly higher than that of those with uncomplicated appendicitis. However, no significant difference was observed between the groups in terms of gender. Third, there were no statistically significant differences between the patient groups in terms of mean NLR, MER, and CRP values. However, the mean serum PLR value was statistically significantly higher in the patients with complicated appendicitis. Fourth, mean serum monocyte and monocyte fraction (%) values were significantly lower and the mean serum neutrophil fraction (%) value was higher in patients with complicated appendicitis. Finally, in identifying the patient with complicated appendicitis, a PLR value of ≥133.73 was found to be the cutoff with 60% sensitivity and 58.4% specificity, and a monocyte fraction (%) value of ≤6 was determined as the cutoff with 72% sensitivity and 64% specificity.
Acute appendicitis, which occurs as a result of obstruction of the appendiceal lumen, is the most common cause of acute abdomen in patients admitted to the ED worldwide. Surgery should be performed quickly and urgently. Delayed intervention causes complications and also increases morbidity and mortality rates [2,3]. In particular, the detection of complicated appendicitis in the early stage of the disease is critical to improving the patient's prognosis. Therefore, fast, simple, inexpensive, and widely available indicators are required that can easily predict the patient's prognosis on admission to the ED. There are clinical studies examining various imaging modalities and biomarkers to distinguish preoperatively between uncomplicated and complicated cases in patients with acute appendicitis [10][11][12]. However, many of these imaging modalities and biomarkers have limitations in actual application since they are not effective and easily accessible during hospitalization at the current time. In their retrospective study of appendicitis cases, Borushok et al. [13] stated that the specificity of any ultrasonography finding alone could not exceed 59% in determining the complicated cases. In the same study, it was reported that the specificity of ultrasonography could reach 86% by evaluating different findings such as pericecal fluid, appendiceal wall thickness, and contamination of the fatty planes, which indicate appendicitis [13]. In another study of 94 cases with appendicitis, Horrow et al. [14] reported that computed tomography has a specificity of up to 94.4% in diagnosing complicated and perforated appendicitis. Since ultrasound results are operator-dependent, CT causes radiation exposure, and both imaging methods are not always easily accessible, we need other markers to diagnose complicated cases preoperatively. The hemogram is always an important component of diagnosis in patients with acute abdominal pain in the ED. Researchers note that some hemogram parameters such as leukocyte and neutrophil counts or NLR can predict critical illness [4][5][6].
In a retrospective study of 162 patients who underwent appendectomy, Eren et al. [7] recommended NLR, together with physical examination and other diagnostic methods, in the diagnosis of acute appendicitis and in predicting perforated cases. Similarly, Kapci et al. [8] retrospectively evaluated the data of 520 patients operated on for appendicitis and stated that examining the leukocyte counts, neutrophil counts, and NLR from routine hemogram parameters accompanied by physical examination findings is beneficial in the diagnosis of acute appendicitis. Likewise, in our study, the leukocyte and neutrophil counts and NLR were higher in patients with complicated appendicitis than in those with uncomplicated appendicitis. However, there was no statistical difference between the groups in terms of the leukocyte and neutrophil counts and NLR. Our findings indicate that the leukocyte and neutrophil counts and NLR could not predict complicated appendicitis.
Acute leukocytosis occurs in the majority of patients with acute appendicitis. Studies have stated that 80% of patients diagnosed with appendicitis have an increased neutrophil fraction along with elevated leukocyte counts [15]. Coleman et al. [15] reported the sensitivity and specificity of leukocyte counts in acute appendicitis as 92% and 100%, respectively, and 69% and 75% for CRP values. However, in a study that included 101 operated appendicitis patients, Guler et al. [9] reported that leukocyte counts and CRP values were not effective in distinguishing between perforated and non-perforated cases. Similar to Guler et al. [9], we found no significant difference in terms of leukocyte counts and CRP values between the patients with uncomplicated and complicated appendicitis.
In another retrospective study of 1,067 patients who underwent an appendectomy, Kahramanca et al. [16] categorized the patients into two groups according to histopathological examination as uncomplicated and complicated appendicitis. In that study, the cutoff value of NLR was found to be 5.74 (sensitivity 70.8%; specificity 48.5%) in identifying the patient with complicated appendicitis. In our study, we found no statistically significant difference between the patient groups with uncomplicated and complicated appendicitis in terms of mean serum NLR, MER, and CRP values. However, a significant difference was observed in terms of the PLR value. In the ROC curve analysis performed for identifying the patient with complicated appendicitis, the cutoff value for PLR was found to be ≥133.73, with a sensitivity of 60% and a specificity of 58.4%. In addition, a monocyte fraction value of ≤6 was observed to be the cutoff value with 72% sensitivity and 64% specificity. Finally, the cutoff value for monocyte count was ≤0.74 with 60% sensitivity and 58.4% specificity.
This study had some limitations, the most important of which were the small sample size and the retrospective, single-center design. In addition, we may have overlooked the possibility that negative explorations, which were specified in the exclusion criteria, may alter false-positive rates and thus the apparent strength of systemic inflammatory markers including NLR, PLR, MER values, and CRP levels in the present study. Thus, a larger prospective, multicenter study involving patients with complicated and non-complicated appendicitis and including a negative appendicitis group based on post-operative pathology reports is needed to overcome these issues.
Conclusion
Our findings indicate that the mean serum NLR, MER, and CRP values measured on admission to the ED in adult patients with acute appendicitis could not predict complicated acute appendicitis. However, mean serum PLR and neutrophil and monocyte counts can be useful in distinguishing complicated cases. In addition, it can be said that the incidence of complicated acute appendicitis increases with aging.
In identifying the patient with complicated appendicitis, a PLR value of ≥133.73 was found to be the cutoff with 60% sensitivity and 58.4% specificity, and a monocyte fraction (%) value of ≤6 was determined as the cutoff with 72% sensitivity and 64% specificity. The mean serum PLR and the neutrophil and monocyte counts are simple, inexpensive, and readily available diagnostic tools for distinguishing complicated from uncomplicated appendicitis in the ED. Additional randomized controlled trials with larger sample sizes are required to confirm the current findings.
Figure 1. Specificity and sensitivity of serum PLR value for distinguishing patients with complicated appendicitis from those with noncomplicated appendicitis using receiver operating characteristic curves (area under the curve, 0.639; 95% confidence interval 0.513-0.764).
Ethics Committee Approval: The Haseki Research and Training Hospital Clinical Research Ethics Committee granted approval for this study (date: 09.03.2022, number: 49-2022).
Conflict of Interest: No conflict of interest was declared by the authors.
Financial Disclosure: The authors declared that this study has received no financial support.
Authorship Contributions: Concept - SY, AA, OS; Design - SY, AA, OS; Supervision - OS, HE; Materials - HE, ID; Data collection and/or processing - SY, AA, ID; Analysis and/or interpretation - AA, OS; Literature review - SY, AA, HE; Writing - AA, OS; Critical review - OS, HE, ID.
Table 1. Comparison of demographic characteristics (age, gender) among patient groups separated based on histopathological examination. *: Subgroup analyses (noncomplicated appendicitis vs. complicated appendicitis) were conducted using Chi-squared and Student's independent t-tests, as appropriate. Group I: patients with uncomplicated appendicitis; Group II: patients with complicated appendicitis; SD: standard deviation; Min: minimum; Max: maximum.
Table 2. Comparison of laboratory findings among patient groups separated based on histopathological examination.
Table 3. Inflammatory markers in identifying the patient with complicated appendicitis.
"year": 2023,
"sha1": "484b72b932fe2049df9e721048c8a00139f0346a",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.14744/nci.2022.79027",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e80d52652b2c8654de0c149de23511404ac21601",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Giant myxoma causing heart failure symptoms
Summary Background: Myxomas arising from the eustachian valve are exceedingly rare. Case Report: A 72-year-old Jamaican-Chinese woman was evaluated for worsening dyspnea. Two-dimensional and real-time 3-dimensional transesophageal echocardiography showed a 75 mm (length) × 44 mm (width), multilobulated, mobile mass arising from the eustachian valve, occupying the entire right atrial and right ventricular cavities and extending into the coronary sinus, right ventricular outflow tract, and proximal inferior vena cava. The patient underwent successful resection of the mass and replacement of the tricuspid valve. Histopathologic examination confirmed the diagnosis of atrial myxoma. Conclusions: This is the largest reported myxoma found on a eustachian valve.
Background
Atrial myxoma is the most common benign cardiac tumor [1]. Primary cardiac tumors are extremely rare with a reported incidence ranging from 0.0017% to 0.28% at the time of autopsy [2,3]. Atrial myxomas comprise approximately 30 to 50 percent of these cases with right atrial myxomas being only a quarter of those cases [4]. Right-sided tumors can present with right-sided heart failure symptoms or even fatal complications such as embolization or obstruction of the outflow tract [1]. Atrial myxomas are the most important cardiac tumors to diagnose, as they have an excellent prognosis following surgical excision [5][6][7]. We report a giant myxoma (largest reported size) arising from the eustachian valve, a rare location [8]. To the best of our knowledge, there have been only three reported cases [9][10][11].
case report
A 72-year-old Jamaican-Chinese woman was transferred to our hospital for further evaluation of a mass seen on the right side of the heart on transthoracic echocardiography. She reported worsening dyspnea for the past year. She had an unremarkable past medical history and denied any chest pain, syncope, or constitutional symptoms. On admission, her heart rate was regular, and her blood pressure was 130/85 mm Hg. Her physical exam was normal except for distended jugular veins, bilateral lower extremity edema, and a systolic murmur heard at the mid to lower left parasternal area.
Her hemogram revealed an elevated leukocyte count of 13,200/cu mm. The comprehensive metabolic panel was within normal limits. Blood cultures did not reveal any growth after 48 hours of incubation. The 12-lead electrocardiogram demonstrated normal sinus rhythm, q waves in leads III and aVF, left ventricular hypertrophy, and a normal axis. Her chest radiograph was within normal limits.
Transthoracic echocardiography (TTE) revealed a large lobular mobile mass originating from the right atrium. To further characterize the mass, we performed transesophageal echocardiography (TEE). Two-dimensional and real time three-dimensional (RT-3D) TEE showed a massive, 75 mm length × 44 mm width, multilobulated, mobile mass occupying the entire right atrial and right ventricular cavities and extending into the coronary sinus, right ventricular outflow tract, and proximal inferior vena cava (Figures 1 and 2). It appeared to be attached at the site of the eustachian valve. The right ventricular cavity size was dilated with normal wall thickness. Right ventricular systolic function could not be assessed as a result of the occupation of its entire volume with the tumor mass. A moderate to large loculated pericardial effusion was identified anterior to the heart and surrounding the left atrial appendage.
A diagnostic left heart catheterization prior to planned surgery revealed normal coronary arteries. The patient underwent a successful operation for the right atrial mass, described as a gelatinous, friable, multi-lobulated tumor attached to the atrial wall at the level of the inferior vena cava and coronary sinus (Figure 3). The gross specimen measured 94×74 mm. The base of the mass was then excised by removing a large wedge of the right atrium, as well as the anterior wall and superior vena cava, which were reconstructed with an autologous pericardial patch. Intraoperative transesophageal examination with color flow Doppler immediately following tumor resection revealed severe tricuspid insufficiency. The leaflets appeared retracted, restricted, and atrophic, probably secondary to long-term pressure effects of the mass on the leaflet architecture. A porcine bioprosthetic valve was inserted to replace the tricuspid valve. The tricuspid insufficiency was completely resolved with no perivalvular leak. Histopathologic examination confirmed the diagnosis of myxoma (Figure 4).
discussion
The primary modality of investigation of myxoma is TTE, which can provide the size, mobility, and possibly the site of origin [12,13]. Both 2-dimensional and RT-3D TEE aid in surgical planning with superior visualization of tumor attachment sites [14]. Surgical removal significantly decreases the risk of embolic events as well as complete right ventricular outflow obstruction [5,15]. In addition, intraoperative TEE provides significant information regarding tricuspid regurgitation and can assess the need to perform tricuspid annuloplasty.
conclusions
The use of 2-dimensional and 3-dimensional transesophageal echocardiography along with color flow and Doppler imaging aids in the preoperative and intraoperative diagnosis as well as the surgical management of atrial myxomas.
Disclosure
The authors state that they do not have a significant financial interest or other relationship with any product manufacturer or provider of services discussed in this article. The authors do not discuss the use of off-label products, which includes unlabeled, unapproved, or investigative products or devices.
"year": 2012,
"sha1": "ce78dc8c404de512a6c609599c87f7f65146f1ea",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc3616051?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce78dc8c404de512a6c609599c87f7f65146f1ea",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Penumbral imaging with multi-penumbral-apertures and its heuristic reconstruction for nuclear reaction region diagnostics
Imaging of the nuclear reaction region is important to clarify the heating mechanism in a fast-ignition plasma. The nuclear reaction region can be identified by hard x-ray and neutron images, which emanate from the heated region. We propose a novel penumbral imaging technique that is suitable for imaging quanta having strong penetrating power, such as hard x rays and neutrons. Using multiple penumbral apertures arranged according to an M-sequence leads to two orders of magnitude higher detection efficiency than that with a single aperture. In addition, a heuristic method was introduced into the image reconstruction procedure to reduce artifacts caused by noise in a penumbral image. A proof-of-principle experiment indicates that the proposed imaging is superior to the conventional one.
Introduction
The fast ignition scheme is investigated to achieve fusion ignition and burn with a relatively small laser facility. The Institute of Laser Engineering of Osaka University launched the FIREX (Fast Ignition Realization Experiment) project 1 to demonstrate fusion ignition with this scheme. A high-intensity short-pulse laser must locally heat a fusion fuel to achieve a high gain of fusion energy with the fast ignition scheme. The physical processes that dominate the local heating can be understood by identifying the nuclear reaction region in a fast-ignition plasma. An advanced penumbral imaging technique has been proposed to obtain two-dimensional images of hard x-rays and/or neutrons emitted from the nuclear reaction region. Penumbral imaging 2 is suitable for imaging quanta having strong penetrating power, such as hard x-rays and neutrons. Penumbral imaging is one of the encoded imaging techniques. A reconstruction, or decoding, process is required to obtain an objective image from the penumbral image. In conventional penumbral imaging, the reconstructed objective image is distorted mainly due to noise and aperture shape uncertainties. A heuristic method is introduced to the image reconstruction procedure for reducing the image distortion. It was found in a previous study 3 that the yield required for imaging is two orders of magnitude larger than that predicted for the FIREX project. We therefore used multiple apertures for the penumbral imaging. Signal intensity increases with multi-penumbral-apertures arranged in an M-matrix compared to that with a single one; the signal increment is proportional to the number of apertures. We carried out a proof-of-principle experiment for the proposed imaging technique, which is a combination of the heuristic reconstruction method and the multi-penumbral-apertures.
Heuristic image reconstruction of penumbral image
Penumbral imaging 4, 5, 6 uses an aperture whose diameter is larger than the objective size. The penumbral image consists of a uniformly bright region surrounded by a penumbra. Spatial information of the objective is contained in the penumbral region. A deconvolution operation is used for the image reconstruction in conventional penumbral imaging. A Wiener filter is used to reduce the amplitude of noise. 6 In the case that the amplitude of noise is much lower than that of the signal, the use of a Wiener filter is a simple and straightforward technique to reduce noise; however, in the contrary case, noise is difficult to remove with the Wiener filter. Noise remaining in a penumbral image is amplified by the deconvolution process.
A heuristic method 7,8,9 , which has high tolerance to noise, has been adopted for the reconstruction process. In the heuristic method, the optimal objective image is estimated by minimizing the mean square difference between the obtained penumbral image and an estimated one. The heuristic method was compared with the conventional one. Figure 1(a) is the penumbral image obtained in an experiment. The signal-to-noise ratio (SNR) of the penumbral image is about 17.7. The objective image reconstructed by the conventional method is shown in Fig. 1(b). Although the SNR of the penumbral image is sufficiently high, artifacts are overlapped on the image reconstructed by the deconvolution process. A clear image without artifacts was obtained with the heuristic method, as shown in Fig. 1(c). The heuristic method clearly excludes the artifacts that appear in the conventionally reconstructed image.
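The least-squares idea behind the heuristic reconstruction can be sketched as follows; this is a generic projected-gradient-descent illustration of minimizing || P_measured − O ∗ A ||², where A is the known aperture response and ∗ denotes 2-D convolution, and it is not the authors' actual algorithm or regularization.

import numpy as np
from scipy.signal import fftconvolve

def reconstruct(p_measured, aperture, n_iter=500, lr=None):
    """Estimate the object O minimizing the mean square difference to the penumbral image."""
    if lr is None:
        lr = 1.0 / (aperture.sum() ** 2 + 1e-12)           # crude step-size bound for stability
    obj = np.zeros(p_measured.shape, dtype=float)
    a_flip = aperture[::-1, ::-1]                          # flipped kernel implements correlation
    for _ in range(n_iter):
        residual = fftconvolve(obj, aperture, mode="same") - p_measured
        grad = fftconvolve(residual, a_flip, mode="same")  # gradient of the squared error
        obj = np.clip(obj - lr * grad, 0.0, None)          # keep the estimate non-negative
    return obj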
Proof-of-principle experiment of the uniformly redundant array of penumbral apertures
The uniformly redundant array of penumbral apertures (URPA) was proposed to increase image intensity. 10 Penumbral apertures are arrayed in a two-dimensional M-matrix. There is a G-matrix corresponding to each M-matrix, and the convolution of the M- and G-matrices generates a perfect delta function. Multiple penumbral images arrayed in an M-matrix are therefore unified automatically by convolving with the G-matrix, and the signal intensity is easily enhanced. The concept of the URPA imaging technique is shown in Fig. 2. The multiple penumbral images arrayed in the M-matrix are recorded on a detector as shown in Fig. 2(c). The unified penumbral image (Fig. 2(e)) is obtained by convolving the multiple penumbral images (Fig. 2(c)) with the G-matrix (Fig. 2(d)), and the object image (Fig. 2(f)) is reconstructed from the unified penumbral image (Fig. 2(e)).
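The delta-function property that makes this unification work is easiest to see in one dimension; the toy Python check below builds a length-15 maximal (M-) sequence and correlates the 0/1 aperture pattern with the corresponding ±1 decoding pattern, which collapses to a single spike. The 2-D M-matrices used in the experiment are constructed from such sequences in an analogous way, so this snippet only illustrates the principle and does not reproduce the actual aperture layout.

import numpy as np

def m_sequence_15():
    """Length-15 maximal sequence from s[k] = s[k-3] XOR s[k-4] (primitive polynomial x^4 + x + 1)."""
    s = [1, 0, 0, 0]                 # any non-zero seed works
    while len(s) < 15:
        s.append(s[-3] ^ s[-4])
    return np.array(s)

s = m_sequence_15()                  # aperture pattern: 1 = open penumbral aperture, 0 = closed
g = 2 * s - 1                        # decoding pattern: +1 at open positions, -1 at closed ones
corr = np.array([np.sum(s * np.roll(g, -k)) for k in range(15)])
print(s)                             # e.g. [1 0 0 0 1 0 0 1 1 0 1 0 1 1 1]
print(corr)                          # [8 0 0 ... 0]: a (scaled) delta function, as required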
The proposed scheme was applied to imaging 2-3 keV x-rays. A Nd:YAG laser was used in this experiment. The energy, pulse duration, and wavelength of the laser were, respectively, 1.5 J, 3.5 ns, and 1064 nm. Penumbral apertures (400 μm in diameter) were fabricated on a 25 μm-thick tantalum substrate. The separation distance between apertures was 500 μm. The distance between the aperture and the object was 211 mm and the distance between the aperture and the detector was 916 mm; thus the camera magnification was 4.3. A 100 μm-thick beryllium foil was put in front of the aperture as an x-ray filter to exclude low-energy photons whose energy is less than 2 keV. A planar gold foil was irradiated by a laser pulse, and x-ray images from the laser-produced plasma were recorded with the URPA. Three kinds of apertures were used in the experiment: (1) a single penumbral aperture, (2) a 3 × 5 URPA, and (3) a 7 × 9 URPA. The penumbral image with the single aperture is shown in Fig. 3(a). Unified penumbral images obtained with the 3 × 5 URPA and the 7 × 9 URPA are shown in Figs. 3(b) and (c), and corresponding line profiles are shown below them. SNRs of the unified penumbral images were evaluated to be 2.7, 6.3, and 17.7 for the single aperture, 3 × 5 URPA, and 7 × 9 URPA, respectively. The SNRs of the unified penumbral images increased in proportion to the square root of the number of apertures.
Combination of heuristic image reconstruction and URPA imaging
The proposed imaging technique was compared with the conventional penumbral one. The reconstructed image from conventional penumbral imaging is shown in Fig. 4(a). A single aperture and the reconstruction procedure based on deconvolution and a Wiener filter were used for Fig. 4(a). The reconstructed image is strongly distorted by artifacts because of the low SNR. The reconstructed image from the proposed imaging technique is shown in Fig. 4(b). A high-quality reconstructed image was obtained with the proposed imaging technique from an object whose signal intensity was not sufficient.
We evaluated the spatial resolution of the proposed imaging technique. For this evaluation, an experiment was performed at the Gekko-XII laser facility, Osaka University. The configuration of the imaging system was the same as that in the proof-of-principle experiment. A cross slit (slit width 100 μm) was backlit by x rays, and the backlit slit was imaged with the URPA. Chlorinated plastic was irradiated by Gekko-XII laser beams to produce a backlight x-ray source, which emits 2.7 keV x rays. The reconstructed image is shown in Fig. 5(a). The modulation transfer function (MTF) of the imaging system was evaluated from the slit image as shown in Fig. 5(b). The spatial resolution was 25 μm in the present system, where the spatial resolution is defined as the wavelength corresponding to an MTF of 0.1. Spatial resolution is limited by several factors, i.e., x-ray diffraction, unroundness of the apertures, errors in the aperture positions, detector resolution, and SNR of the penumbral image. The diffraction spread (Δx) on the detector caused by x-ray diffraction depends on λ, the wavelength of the backlight x-ray source, D, the aperture diameter, and d, the distance between the target and the aperture. Unroundness of the aperture distorts the reconstructed image, and error in the aperture position blurs the reconstructed image. The spatial resolution limited by x-ray diffraction was only 0.6 μm in the present system. The standard deviation of the aperture unroundness was 1.47 μm, by which the spatial resolution was limited to 1.8 μm. The standard deviation of the aperture position errors was 2.6 μm, by which the spatial resolution is limited to 1.8 μm. The spatial resolution of the detector was 10.5 μm. As a result of the above evaluation, the SNR of the penumbral image dominantly limits the spatial resolution in the present experiment.
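A possible way to reproduce this kind of MTF evaluation, shown here only as a hedged sketch and not as the authors' exact procedure, is to Fourier transform the line-spread function taken across the reconstructed slit and read off the frequency at which the normalized modulus drops to 0.1; the pixel size and the Gaussian toy profile below are invented.

import numpy as np

def mtf_from_lsf(lsf, pixel_um):
    lsf = lsf - lsf.min()
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                     # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_um)     # spatial frequency in cycles per micrometre
    return freqs, mtf

def resolution_at(freqs, mtf, level=0.1):
    idx = np.argmax(mtf < level)                      # first frequency where MTF falls below the level
    return 1.0 / freqs[idx]                           # quoted resolution = corresponding wavelength

x = np.arange(-200.0, 200.0, 5.0)                     # detector-plane coordinate, 5 um pixels (made up)
lsf = np.exp(-0.5 * (x / 15.0) ** 2)                  # toy line-spread function
freqs, mtf = mtf_from_lsf(lsf, pixel_um=5.0)
print("resolution ~", round(resolution_at(freqs, mtf), 1), "um at MTF = 0.1")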
"year": 2010,
"sha1": "397811a2a8cb76b4fcea4f7367fadb34e425c99b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/244/3/032061",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "397811a2a8cb76b4fcea4f7367fadb34e425c99b",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
Fetal pulmonary hypertension: dysregulated microRNA-34c-Notch1 axis contributes to impaired angiogenesis in an ovine model
Background: Persistent pulmonary hypertension of the newborn (PPHN) occurs when pulmonary vascular resistance (PVR) fails to decrease at birth. Decreased angiogenesis in the lung contributes to persistence of high PVR at birth. MicroRNAs (miRNAs) regulate gene expression through transcript binding and degradation. They have been implicated in dysregulated angiogenesis in cancer and cardiovascular disease. Methods: We investigated whether altered miRNA levels contribute to impaired angiogenesis in PPHN. We used a fetal lamb model of PPHN induced by prenatal ductus arteriosus constriction, with sham ligation as the control. We performed RNA-sequencing of pulmonary artery endothelial cells (PAECs) isolated from control and PPHN lambs. Results: We observed a differentially expressed miRNA profile in PPHN relating to organ development, cell-cell signaling, and cardiovascular function. MiR-34c was upregulated in PPHN PAECs compared to controls. An exogenous miR-34c mimic decreased angiogenesis by control PAECs and anti-miR-34c improved angiogenesis of PPHN PAECs in vitro. Notch1, a predicted target of miR-34c by bioinformatics, was decreased in PPHN PAECs, along with the Notch1 downstream targets Hey1 and Hes1. Exogenous miR-34c decreased Notch1 expression in control PAECs and anti-miR-34c restored Notch1 and Hes1 expression in PPHN PAECs. Conclusion: We conclude that increased miR-34c in PPHN contributes to impaired angiogenesis by decreasing Notch1 expression in PAECs.
INTRODUCTION:
Persistent pulmonary hypertension of the newborn (PPHN) is characterized by dysregulation of angiogenesis in the lung, resulting in failure of postnatal decrease in pulmonary vascular resistance (PVR) [1][2][3][4] . In-utero, blood from the right ventricle crosses from the pulmonary artery (PA) to the aorta (Ao) across the patent ductus arteriosus (PDA) due to elevated PVR. Failure of postnatal decrease in PVR leads to persistence of the fetal circulation leading to hypoxemia, cyanosis and need for supplemental oxygen, mechanical ventilation or possibly extracorporeal life support (ECLS) 5 . Currently, inhaled nitric oxide (iNO) is the only Food and Drug Administration (FDA) approved therapy for neonates with hypoxemic respiratory failure secondary to PPHN 6 . Animal models of PPHN have shown impaired angiogenesis to be a key factor contributing to elevated PVR [1][2][3][4] . Angiogenesis is the process by which endothelial cells (ECs) lining existing blood vessels sprout and form new vessels 5 . Dysregulation in either EC signaling or downstream pathways or both leads to decreased angiogenesis, resulting in elevated PVR in PPHN. We previously reported that in fetal lambs, in utero ligation of the PDA for 7 -8 days results in a phenotype resembling human PPHN with significantly increased PVR, pruning of the pulmonary vasculature and decreased angiogenesis 3,4 . Pulmonary artery ECs (PAECs) isolated from these fetal lambs show decreased angiogenesis in vitro [2][3][4] . We and others reported downregulation of several key mediators, including endothelial nitric oxide synthase and AMPK (5'adenosine monophosphate-activated protein kinase) and increased reactive oxygen species (ROS) in this model 2,6,7 . The role of non-coding ribonucleic acids (RNAs) and post-transcriptional modification of gene expression in the pathogenesis of altered angiogenesis in PPHN is unknown.
MicroRNAs (miRNAs) are 22-nucleotide-long, non-coding RNAs which are mostly conserved across species 8 . MiRNAs are synthesized as double-stranded sequences in the nucleus. They have a seed region, spanning bases 2-8, which is complementary to and binds the 3' untranslated region (3'UTR) of messenger RNAs (mRNAs), leading to transcript degradation or repression of protein translation 9 . Since their discovery, they have been implicated in a multitude of diseases where angiogenesis is impaired, including cancer, atherosclerotic disease, vasculopathies, and pulmonary hypertension 8,10,11 . MiRNAs are synthesized in the nucleus by RNA polymerase II as a long double-stranded transcript known as the primary miRNA transcript (pri-miRNA), which is subsequently cleaved into a ~70-nucleotide precursor miRNA (pre-miRNA) by an RNase III-type protein, Drosha 9 . The pre-miRNA is exported out of the nucleus by Exportin-5 into the cytoplasm, where it undergoes further cleavage by Dicer to form the double-stranded RNA complex containing the mature miRNA strand and its complementary sequence. This is bound by transactivating RNA-binding protein (TRBP) and Argonaute protein to form the RNA-induced silencing complex (RISC). The mature miRNA strand guides the RISC to the 3'UTR of a specific mRNA based on the degree of complementarity between the seed region of the miRNA and the 3'UTR. RISC subsequently inhibits translation, highlighting the importance of miRNAs in post-transcriptional modification.
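The seed-based recognition described above can be illustrated with a short, purely didactic Python sketch: the reverse complement of miRNA nucleotides 2-8 is searched for in a 3'UTR. The miR-34c-5p-like sequence is quoted from memory of the mature human sequence and the 3'UTR fragment is invented, so neither should be read as study data.

def revcomp(seq):
    return seq.translate(str.maketrans("AUCG", "UAGC"))[::-1]

def seed_sites(mirna, utr):
    seed = mirna[1:8]                       # nucleotides 2-8 of the mature miRNA (0-based slice)
    site = revcomp(seed)                    # the complementary 7-mer expected in the target 3'UTR
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mir34c_5p = "AGGCAGUGUAGUUAGCUGAUUGC"        # miR-34c-5p-like mature sequence (assumed, for illustration)
toy_utr   = "UUUACACUGCCAAAGGCACUGCCUA"     # invented 3'UTR fragment containing two seed matches
print(seed_sites(mir34c_5p, toy_utr))       # positions of predicted seed-match sites, e.g. [4, 16]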
MiRNAs have been implicated in several animal models of pulmonary hypertension 12,13 . Those belonging to the miR-17-92 cluster, specifically miR-17 and miR-92a, contribute to dysregulated EC function and angiogenesis 14,15 . Rat hypoxia models of PH have shown that increased miR-126a contributes to the hypoxia-induced endothelial-to-mesenchymal transition in neonatal PH 16 . MiR-126a has also been shown to be involved in Klf-2a-mediated activation of vascular endothelial growth factor (VEGF) signaling 17 . MiR-21 was found to be a likely contributor to hypoxia-induced pulmonary vascular remodeling and targets bone morphogenetic protein receptor (BMPR)-2 18 . Other miRNAs such as miR-145 and miR-210 are also upregulated in experimental models and contribute to pulmonary artery smooth muscle cell (PASMC) migration and proliferation and to PAEC proliferation and resistance to apoptosis. Microarray analysis of postmortem lung tissue from infants who died from PPHN identified miR-379, miR-7977, and miR-455 as significantly downregulated 19 . Changes in the miRNA profile in neonatal PPHN, and whether these serve as biomarkers of dysregulated angiogenesis or are involved in post-transcriptional modification of key signaling pathways in PPHN, remain unknown. The objective of this study was to identify differential miRNA expression between control and PPHN lamb PAECs and to study the in vitro effects of altering the expression of these miRNAs on angiogenesis.
Generation of PPHN lamb model and extraction of PAECs:
Animal studies were approved by the Medical College of Wisconsin Institutional Animal Care and Use Committee (IACUC) and conformed to current National Institutes of Health (NIH) guidelines for care and use of laboratory animals. Fetal ductus arteriosus constriction was performed at 128 ± 2 days of gestation (full term being ~ 142 days) to generate PPHN phenotype as described previously 20,21 . A sham surgery without ductal ligation was performed to generate control phenotype. After 8 days of ductal constriction, pregnant ewes were euthanized, fetuses were delivered, and lungs removed en bloc from them.
PAECs were isolated from the main pulmonary arteries using 0.1% collagenase type A and immunopurified using CD31 antibody-coated magnetic beads. Their identity was verified by staining for factor VIII antigen and by acetylated low-density lipoprotein uptake 22,23 .
Next generation sequencing:
Total RNA was extracted from passage 2 PAECs harvested from six control and six PPHN lambs (3 male and 3 female in each group) using the TRIzol (Thermo Fisher Scientific Inc., Waltham, MA) method as described previously 24 . Small RNA libraries were prepared using the Illumina TruSeq Small RNA Sample Preparation kit. Amplified cDNA constructs were separated on 6% polyacrylamide gel electrophoresis, purified, and used for small RNA sequencing on an Illumina HiSeq 4000. Data were analyzed as described previously 25,26 . The cleaned sequences were mapped against miRBase v19 to identify known miRNAs (sheep miRNAs and homologs of miRNAs known in species other than sheep) using miRanalyzer 27 . Differential expression of miRNAs was detected using DESeq2 29 . The Benjamini-Hochberg method was used to control the false discovery rate (FDR) in all statistical tests 28 . An FDR of 0.05 was deemed significant.
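As an illustration of the multiple-testing step, the minimal Python sketch below applies the Benjamini-Hochberg adjustment to a hypothetical list of p-values and flags those passing the 0.05 FDR threshold used here; in the actual pipeline DESeq2 reports the adjusted p-values directly, so the code is only meant to make the criterion explicit.

```python
# Minimal sketch of Benjamini-Hochberg FDR adjustment; the p-values are
# hypothetical examples, not values from the sequencing analysis.
def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values (q-values) in the original order."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        running_min = min(pvalues[i] * n / rank, running_min)
        adjusted[i] = running_min
    return adjusted

pvals = [0.0002, 0.009, 0.04, 0.51, 0.73]          # illustrative only
qvals = benjamini_hochberg(pvals)
calls = [q <= 0.05 for q in qvals]                 # FDR cutoff used in this study
print(list(zip(pvals, [round(q, 4) for q in qvals], calls)))
```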
Identification of microRNAs for study:
RNA-sequencing data were analyzed to find miRNAs with a significant fold change in PPHN PAECs as compared to control PAECs. Ingenuity pathway analysis (IPA, Qiagen, Germantown, MD) was used to identify the molecular and cellular functions affected and the top analysis-ready upregulated and downregulated miRNAs. These were input into the open-access software TargetScan to identify predicted mRNA targets for each miRNA based on the complementarity between the miRNA seed region and the mRNA 3'UTR 29 . MiRNAs with targets in pathways known to be involved in angiogenesis, cell signaling, and cell proliferation, based on data from IPA and TargetScan, were selected for further study.
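The target-prediction step rests on seed complementarity; the sketch below illustrates the underlying 7mer-m8 matching rule (a perfect match in the 3'UTR to positions 2-8 of the mature miRNA). The miR-34c-5p sequence shown is the human miRBase annotation and the 3'UTR string is a toy example, not the ovine Notch1 sequence, so the output positions are purely illustrative.

```python
# Minimal sketch of 7mer-m8 seed matching as used by TargetScan-style predictions.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site_7mer_m8(mirna):
    """Return the mRNA site (5'->3', RNA alphabet) pairing with miRNA positions 2-8."""
    seed = mirna[1:8]                                   # positions 2-8 (1-based)
    return "".join(COMPLEMENT[base] for base in reversed(seed))

def find_sites(utr, mirna):
    """Return 0-based start positions of 7mer-m8 sites in a 3'UTR sequence."""
    site = seed_site_7mer_m8(mirna)
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mir34c_5p = "AGGCAGUGUAGUUAGCUGAUUGC"    # mature miR-34c-5p (human annotation, for illustration)
toy_utr = "AAGCACUGCCAUAGCACUGCCUU"      # toy 3'UTR containing two seed-matched sites
print(find_sites(toy_utr, mir34c_5p))    # -> [3, 14]
```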
Cell culture and microRNA transfection:
Fetal lamb PAECs between passages 2-5 were cultured in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 20% fetal bovine serum in a humidified incubator at 37 °C with 5% CO2 and 95% room air. Transfection was carried out at 60-70% confluency using miRNA mimic or inhibitor with Lipofectamine 2000 (Thermo Fisher Scientific, Waltham, MA). Transfection was carried out in the same direction as the miRNA alteration in the PPHN PAECs: since miR-34c was found to be upregulated, its expression was increased in control PAECs using a miRNA mimic (gain-of-function) and inhibited in PPHN PAECs using a miRNA inhibitor (loss-of-function). MiR-34c mimic and inhibitor (miR-34c-5p mirVana miRNA mimic, Catalog # MC11039, and miR-34c-5p mirVana miRNA inhibitor, Catalog # MH11039, Thermo Fisher) with Lipofectamine 2000 (Thermo Fisher) were used for transfection of cultured PAECs to study the effects of increasing and decreasing miR-34c expression, respectively, in vitro. For experimental controls, PAECs were transfected at the same concentration using the same protocol with the company-provided control miRNA mimic (Thermo Fisher Catalog #4464058) and inhibitor (Thermo Fisher Catalog #4464076), which have been shown to have no physiologic effect in vitro. Real-time polymerase chain reaction (RT-PCR) for miR-34c was performed to verify differences in miR-34c expression between control and PPHN PAECs and the efficacy of transfection with mimic and inhibitor (Supplement Figures 1-3). Cells were either harvested for RNA and protein extraction or used for cell count, migration studies and tube formation assays after 48 h of transfection, as described below.
Polymerase chain reaction (PCR) for microRNA and mRNA:
Total RNA was extracted using the chloroform extraction method from cells suspended in TRIzol, as described previously 24 . This RNA was used to prepare complementary DNA (cDNA) using the Qiagen miScript II RT-PCR kit for amplification of miRNA sequences and a SuperScript kit (Thermo Fisher) for mRNA sequences, following the manufacturers' protocols. RT-PCR was performed using the SYBR Green method. The ΔΔCt method was used for analysis of transcript abundance. RT-PCR was done on RNA from control and PPHN samples for comparison of miR-34c levels and to verify the transfection with miR-34c mimic and inhibitor (Supplement Figures 1-3).
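To make the quantification explicit, the sketch below shows the comparative Ct calculation behind the reported relative transcript abundances; the Ct values and the use of β-actin as the reference are illustrative assumptions rather than data from this study.

```python
# Minimal sketch of the comparative Ct (2^-ΔΔCt) calculation.
def relative_expression(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of the target in the sample group relative to the control group."""
    delta_ct_sample = ct_target_sample - ct_ref_sample        # normalize to reference gene
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Hypothetical example: Notch1 in mimic-transfected vs control-mimic PAECs.
fold_change = relative_expression(ct_target_sample=26.8, ct_ref_sample=17.1,
                                  ct_target_control=24.9, ct_ref_control=17.0)
print(f"Notch1 relative expression: {fold_change:.2f}-fold of control")   # ~0.29-fold
```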
Immunoblotting:
Comparison of control and PPHN PAECs for Notch1 expression was done in 3 control (one male and two female) and 3 PPHN (two male and one female) samples. Each sample lysate was run in duplicate on the same gel; a sample size of 3 each was used for comparison of the control and PPHN groups for the Notch1/β-actin ratio. Transfection of control PAECs with miR-34c mimic and PPHN PAECs with miR-34c inhibitor was done in PAECs from lambs with the same sex distribution. For comparison of control and PPHN PAECs for Hey1 and Hes1 expression, lysates from 4 control and 4 PPHN PAECs (two each of male and female sex) were used. After aspirating media from culture dishes, adherent cells were washed with Hank's Balanced Salt Solution (HBSS) and scraped using a cell scraper. Cell lysates were prepared in radioimmunoprecipitation assay (RIPA) buffer and heat stabilized after adding Laemmli buffer at 95 °C for 10 minutes. 20 µg of protein lysate was resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). Gels were run at 100 V, transferred onto nitrocellulose membranes and blocked using 3% bovine serum albumin (BSA) blocking buffer. Membranes were probed with primary antibodies overnight at 4 °C, washed with Tris-buffered saline with 0.1% Tween-20 (TBST), incubated with horseradish peroxidase (HRP)-conjugated secondary antibodies (goat anti-rabbit or anti-mouse, 1:5000, matched to the primary antibody) for 1 hour at room temperature, and washed with TBST again. For signal generation, HRP enzyme activity was detected using Pierce ECL Western Blotting Substrate (Thermo Fisher Scientific Inc.) and recorded on an iBright FL1000 Imaging System (Thermo Fisher Scientific Inc.). Images were analyzed using ImageJ and normalized to β-actin. Primary antibodies used were rabbit Notch1 (1:1000, Abcam #ab52627), rabbit Hey1 (1:1000, Abcam catalog # 154077) and rabbit Hes1 (1:1000, Abcam catalog # ab71559).
Angiogenesis assays:
Since the primary cellular processes affected by miR-34c, the focus of our study, include the angiogenesis-related functions of cell proliferation, migration, and differentiation, we studied these functions of PAECs in vitro. For cell count and viability, adherent cells were detached from culture dishes using TrypLE (Thermo Fisher), centrifuged and resuspended in DMEM. 10 µL of resuspended cells were stained with 0.4% Trypan blue and counted for the number of live cells as well as viability using an EVE Automated Cell Counter (NanoEntek, Seoul, South Korea), in triplicate for each sample.
Cell migration assay -- Media was aspirated from a confluent culture dish, and a linear scratch was made in the center of the cell monolayer using a sterile 1000 µL pipette tip. Detached cells were washed away with sterile HBSS and adherent cells were incubated in reduced serum media for 5-6 h. The width of the scratch was measured at a specific location along the line of the scratch, both at the time of making the scratch and after 5-6 h of migration of cells into the gap, as reported previously [2][3][4] .
Capillary tube formation assay -- Cells were harvested at the end of the transfection period using TrypLE to detach them from the adherent monolayer and re-seeded onto 6-well plates coated with Matrigel (Corning Life Sciences, Arizona, US) at a density of 1 × 10^5 cells/well in reduced serum media. After 4-6 h, media was aspirated, cells were stained with Calcein AM dye (Thermo Fisher) and visualized under fluorescent microscopy as we reported previously 4 . Calcein AM is non-fluorescent at baseline; after uptake into live cells it undergoes intracellular ester hydrolysis to form calcein, a hydrophilic fluorochrome that is retained in live cells and emits green fluorescence, facilitating the identification of tube formation activity. Three images each from three separate parts of each well were obtained and analyzed using ImageJ for the number of nodes, number of meshes and total branching length.
Statistical Analysis: All data are shown as mean ± SD. GraphPad Prism version 9.0 (GraphPad Software, San Diego, CA) was used for statistical analysis. An unpaired t-test was used to compare 2 groups (control versus PPHN), while ANOVA was used to compare data from more than 2 groups (control and PPHN samples with miR mimic and anti-miR), with Tukey's post-hoc test to determine which means were different.
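The sketch below mirrors these comparisons in Python (unpaired t-test for two groups; one-way ANOVA followed by Tukey's test for the four transfection groups); the group labels and numbers are placeholders, not study data, and the scipy/statsmodels calls stand in for the GraphPad Prism analyses actually used.

```python
# Minimal sketch of the two statistical comparisons described above.
import numpy as np
from scipy.stats import ttest_ind, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([100.0, 112.0, 95.0])   # placeholder values
pphn = np.array([61.0, 70.0, 66.0])

t_stat, p_value = ttest_ind(control, pphn)                 # two-group comparison
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")

groups = {                                                  # four transfection groups
    "control+ctrl-miR": [100.0, 112.0, 95.0],
    "control+mimic": [58.0, 66.0, 72.0],
    "PPHN+ctrl-inh": [60.0, 68.0, 64.0],
    "PPHN+inhibitor": [88.0, 95.0, 90.0],
}
f_stat, p_anova = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

values = np.concatenate([np.asarray(v) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(endog=values, groups=labels, alpha=0.05))  # post-hoc pairwise means
```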
MiRNAs are differentially expressed between control and PPHN PAECs
Small-RNA sequencing identified 470 differentially expressed miRNAs (DE-miRNAs) between control and PPHN PAECs (Table 1). Twelve of these were significantly different, of which 7 were upregulated and 5 were downregulated. IPA showed that the top molecular and cellular functions affected by the DE-miRNAs were cellular development, cell growth and proliferation, cellular movement, cell cycle, and cell death and survival. These are key processes involved in sprouting angiogenesis and have been implicated in the pathogenesis of pulmonary hypertension. Pathway analysis also identified the top five diseases and biological functions involved to be organismal injury, cancer, respiratory disease, cellular development, and cell movement (Figure 1). Pathway analysis showed that the top analysis-ready miRNA was miR-34c-5p among upregulated and miR-331-5p among downregulated miRNAs. This led us to select miR-34c for further studies in PPHN angiogenesis.
MiR-34c alters angiogenesis in endothelial cells in vitro
MiR-34c-5p (miR-34c) was upregulated by log2 1.85-fold in PPHN PAECs as compared to control PAECs (Table 1 and Supplement Fig 1). IPA, TargetScan and mirdb.org predicted 7mer-m8 canonical binding of miR-34c to the 3'UTR of Notch1, the receptor that binds the 5 Notch ligands and coordinates the response. Notch1 has been implicated in the dysregulated angiogenesis seen in tumorigenesis as well as in pulmonary hypertension. As a result, we focused our studies on the role of miR-34c and its effects on Notch1 in PPHN.
Control lamb PAECs transfected with miR-34c mimic (Supplement Fig 2) showed significantly decreased cell proliferation, cell migration and tube formation in the Matrigel assay as compared to those transfected with control miRNA (Figure 2 and Table 2). In contrast, PPHN PAECs, which have decreased angiogenesis at baseline, showed improved cell proliferation, cell migration and tube formation when transfected with miR-34c inhibitor (Supplement Fig 3) as compared to the control miRNA inhibitor (Figure 2 and Table 2).
PPHN PAECs show decreased expression of Notch1 and its downstream targets
To determine whether Notch1 is a potential target of miR-34c, we first examined the role of Notch1 in PPHN. We found that Notch1 protein expression was significantly decreased in PPHN PAECs as compared to control PAECs (Figure 3A and 3C). We also investigated the downstream transcriptional targets of the Notch1 intracellular domain (N1 ICD), Hey1 and Hes1. We found that Hey1 and Hes1 were significantly decreased in PPHN PAECs as compared to control PAECs (Figure 3B and 3C).
MiR-34c overexpression decreases mRNA and protein expression of Notch1 and its downstream targets in control PAECs, and anti-miR-34c increases them in PPHN PAECs
RT-PCR for Notch1 mRNA (forward primer CAAGAAGTTCCGGTTCGAGGA, reverse primer TGTCTGGTCGTCCAGGTCAG) in control PAECs transfected with miR-34c mimic showed significantly decreased levels of the Notch1 transcript (n=5, p=0.0079), thereby indicating that miR-34c overexpression likely leads to degradation of Notch1 transcript.
Notch1 protein expression was significantly decreased with the addition of miR-34c mimic in control PAECs ( Figure 3A and 3B, n=3, p<0.05) and the decrease in Notch1 protein expression seen in PPHN PAECs was significantly reversed by the addition of miR-34c inhibitor ( Figure 4A and 4B, p<0.01, n=3). The decrease in Notch1 expression was also found to be dose-dependent with incremental doses of the miR-34c mimic, leading to sequential decreases in the Notch1 protein expression (see Figure 4E).
MiR-34c overexpression in control PAECs did not change the expression of the Notch1 downstream targets Hey1 and Hes1, although a trend toward lower expression was noted (Figure 4A, 4C and 4D, n=3). In contrast, inhibition of miR-34c expression in PPHN PAECs led to increased Hey1, but not Hes1, expression (Figure 4A, 4B and 4C, n=3, p<0.05 only for Hey1); both were decreased in PPHN PAECs at baseline (Fig 3).
DISCUSSION:
The primary cellular processes predicted to involve miR-34c, based on IPA prediction tools, are apoptosis, growth, migration, proliferation, senescence, G1 phase and cell cycle progression, resistance, and differentiation. Accordingly, our results indicate that miR-34c regulates angiogenesis at a cellular level, likely through modulation of the canonical Notch pathway. The Notch family consists of four transmembrane Notch receptors (Notch 1-4) and five ligands -- Delta-like ligands 1, 3 and 4 (Dll-1, 3, 4) and Jagged (Jag) 1 and 2. Notch ligands expressed on the surface of a cell can bind to the Notch receptor of an adjacent cell, leading to its proteolytic cleavage by ADAM and the gamma-secretase complex. This releases the Notch intracellular domain (NICD), which is translocated to the nucleus (Figure 5), where it interacts with the DNA-binding protein CBF1-Suppressor of Hairless-LAG1 (CSL) and the co-activator Mastermind (MAM) to promote gene transcription of targets belonging to the Hey and Hes family of basic helix-loop-helix (bHLH) proteins 30 . Notch1 is crucial for vascular development, as evidenced by embryonic lethality at E10.5 in endothelial Notch1 knockout mice, which show intact vasculogenesis but abnormal angiogenesis and secondary vascular remodeling 31 . Alterations in Notch receptor expression have been previously reported in adult models of PAH 34 . A few prior studies have found an inverse relationship between miR-34c and Notch1 expression in physiologic processes such as muscle and bone development and in tumor progression 32-34 . Notch1 is also critical for postnatal angiogenesis, and vascular endothelial growth factor (VEGF)-regulated Dll-4-Notch signaling establishes the adequate ratio between tip and stalk cells required for correct sprouting and branching patterns 35,36 . We previously reported that an altered balance of the Notch ligands Jag1 and Dll4 leads to dysregulated angiogenesis in PPHN, but to our knowledge there are no reports implicating endothelial Notch1 in PPHN 4 . In the present study, we found that Notch1 is significantly decreased in PPHN PAECs, along with its downstream targets Hey1 and Hes1. A decrease in Notch1 was also demonstrated by increasing miR-34c expression in control PAECs and was partially reversed by inhibiting the action of miR-34c in PPHN PAECs in vitro.
MiR-34c overexpression also impairs angiogenesis in control PAECs, and miR-34c inhibition improves angiogenesis in PPHN PAECs. These reciprocal effects of miR-34c indicate that the dysregulated angiogenesis seen in PPHN PAECs may be due in part to miR-34c overexpression in PPHN and that the effects may be mediated through Notch1 inhibition.
Hey1 and Hes1, downstream targets of Notch1, have been implicated in dysregulated angiogenesis. Hey1/Hey2 knockout mice fail to express arterial endothelial markers, indicating that these factors are essential for embryonic vascular development 37 . Endothelial-specific Hes1 mutant embryos exhibit defective vascular remodeling in the brain 38 . Our PPHN phenotype showed decreased expression of Hey1/Hes1 in the PAECs, which is likely the mechanism through which Notch1 regulates angiogenesis in this model. MiR-34c overexpression, which leads to decreased Notch1 expression, also showed a decreasing trend in Hey1/Hes1 expression, indicating that the miR-34c-mediated Notch1 decrease affects downstream Notch signaling. Inhibition of miR-34c expression also led to increased expression of both Hey1 and Hes1, indicating that loss-of-function of miR-34c stimulates Notch downstream targets as well. This effect was more pronounced for Hey1 than for Hes1.
Our data contrast with a recent report of increased Notch1 signaling and endothelial cell proliferation in adult PAH cells, which show downregulation of Notch2 and a corresponding increase in Notch1 expression 34 . The increase in Notch1 was associated with increased Hey1 and Hes1 levels in adult PAH cells, and siRNA-mediated knockdown of Notch2 recapitulated these effects in control PAECs. The investigators also reported an increase in cell migration in response to decreased Notch2/increased Notch1 expression. In contrast to these findings in adult PAH, endothelial cells in the PPHN model show a consistent decrease in proliferation, migration and tube formation and decreased angiogenesis in vitro and in vivo. We also found a decrease in Notch1 expression in PPHN and in response to miR-34c overexpression in control PAECs from fetal lambs. Other recently published studies have demonstrated the anti-proliferative effects of miR-34c in cancer cells and smooth muscle cells, similar to our findings in PAECs 39,40 . These differences highlight the importance of conducting such studies specifically in PPHN.
There are some limitations to our study. First, our studies focused on changes in miRNA expression in PAECs, since we addressed changes in endothelial angiogenesis in this model. Whether miR-34c influences the proliferation and other functions of PASMCs is not clear from our studies. Since miRNAs are biologically stable in the extracellular space, it is likely that miR-34c also influences other cells in the pulmonary arteries. Secondly, we did not investigate the effects of miR-34c deletion or overexpression on angiogenesis in vivo. The larger size of the fetal lamb model and the lack of genetic models preclude in vivo studies of miR-34c knockdown or overexpression in this model. We also did not study whether reversal of Notch1 downregulation in PPHN PAECs by itself, without the addition of miR-34c inhibitor, improves angiogenesis. We did, however, examine the role of miR-34c elevation at several levels, including sequencing data, angiogenesis assays, RT-PCR and immunoblotting studies, to demonstrate that miR-34c is upregulated and contributes to dysregulated angiogenesis in PPHN PAECs through the canonical Notch pathway.
In summary, we conclude that increased miR-34c-5p levels in PPHN PAECs contribute to impaired angiogenesis in this model. Whether miR-34c-5p-based therapies improve angiogenesis and restore postnatal adaptation in PPHN requires further study.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Statement of Financial Support:
This work was supported by grants 1R01 HL136597-01 from NHLBI, Multiyear Innovation Research grant and Muma Endowed Chair in Neonatology from Children's Research Institute of Children's Wisconsin (GGK).
Data Sharing Statement:
The original sequencing output will be available for free download from the NCBI Sequence Read Archive after the manuscript online publishing date under the file name "Ovine PPHN PAEC smallRNA-seq".

(A) Immunoblotting of control and PPHN PAECs (n=3 lambs each, in duplicate in consecutive lanes) for Notch1 protein normalized to β-actin. (B) Immunoblotting for Hey1 and Hes1 protein normalized to β-actin in control and PPHN lamb PAECs (n=4 lambs each). Images were analyzed with ImageJ software. Summarized data (C) are for n=3 each for control and PPHN PAECs for Notch1 and n=4 each for Hey1 and Hes1, shown as mean ± SD. White bars represent control PAECs and gray bars represent PPHN PAECs. Data were analyzed by unpaired t-test to compare expression levels for each protein in the control and PPHN PAECs. ***P< 0.001 for Notch1 and Hes1 and **** p=0.004 for Hey1, as shown in (C).

Cell count and cell viability after 48 hours of transfection of control and PPHN PAECs as shown in the table.
All transfections were carried out with PAECs obtained from 3 separate control and PPHN lambs each, and cell count and viability were assessed in triplicate for each sample and then averaged. Cell viability was measured using the EVE automated cell counter after staining cells with 0.4% Trypan blue 48 hours after transfection. Analysis was done for control PAEC/control mimic vs control PAEC/miR-34c mimic and for PPHN PAEC/control inhibitor vs PPHN PAEC/miR-34c inhibitor using a t-test. | 2022-06-18T13:25:36.558Z | 2022-06-03T00:00:00.000 | {
"year": 2022,
"sha1": "b003a98433dfd37946ba717eb36978b0903e598c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "0d548a7af47e2dfb3acc6f5601327981a573c353",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18880310 | pes2o/s2orc | v3-fos-license | University of Groningen Characterization and Crystal Structure of a Robust Cyclohexanone Monooxygenase
Cyclohexanone monooxygenase (CHMO) is a promising biocatalyst for industrial reactions owing to its broad substrate spectrum and excellent regio-, chemo-, and enantioselectivity. However, the low stability of many Baeyer–Villiger monooxygenases is an obstacle for their exploitation in industry. Characterization and crystal structure determination of a robust CHMO from Thermocrispum municipale are reported. The enzyme efficiently converts a variety of aliphatic, aromatic, and cyclic ketones, as well as prochiral sulfides. A compact substrate-binding cavity explains its preference for small rather than bulky substrates. Small-scale conversions with either purified enzyme or whole cells demonstrated the remarkable properties of this newly discovered CHMO. The exceptional solvent tolerance and thermostability make the enzyme very attractive for biotechnology.

Cyclohexanone monooxygenase (CHMO; EC 1.14.13.22) is an FAD- and NADPH-dependent Baeyer–Villiger monooxygenase (BVMO). A wide variety of ketones are converted by CHMO into esters or lactones through the insertion of an oxygen atom on one side or the other of the carbonyl group. In addition, CHMO oxidizes aldehydes and heteroatoms and carries out epoxidation reactions. In contrast to conventional Baeyer–Villiger oxidations using peracids, CHMO reactions are environmentally friendly and often proceed with excellent regio-, chemo-, and enantioselectivity. Numerous industrial applications have been suggested for CHMO. The conversion of cyclohexanone as catalyzed by CHMO is of particular interest since the product, ε-caprolactone, is a precursor of both adipic acid and ε-caprolactam, which are known polymer building blocks. One of the main barriers to exploiting CHMO as a biocatalyst is its lack of stability. To date, the best-characterized CHMOs are from Acinetobacter calcoaceticus NCIMB 9871 (AcCHMO) and Rhodococcus sp. HI-31 (RhCHMO). Similar CHMOs have been identified from various bacteria (Table S1 in the Supporting Information). However, engineering attempts to improve their stability have met with limited success. The only robust BVMO described so far, phenylacetone monooxygenase from Thermobifida fusca (TfPAMO; EC 1.14.13.92), shows no activity on cyclohexanone. Our interest in using a robust BVMO to produce ε-caprolactone and other valuable compounds prompted us to search for a stable CHMO in the available genomes of thermophiles. Among the retrieved sequences, a putative CHMO from Thermocrispum municipale DSM 44069 was found (TmCHMO; NCBI RefSeq: WP_028849141.1). This thermophilic organism was isolated from municipal waste compost. TmCHMO clusters with known CHMO sequences (Figures S1, S2 in the Supporting Information), the closest homologue being from Brachymonas petroleovorans (66% sequence identity). We identified three genes upstream of the TmCHMO gene that may participate in cyclohexanol degradation (Table S2). These sequence analyses suggested that the T. municipale enzyme may be a robust CHMO. Therefore, we purified TmCHMO and studied its substrate acceptance (Figure S3). For the sake of comparison, the same analysis was performed with AcCHMO. Small aliphatic ketones such as cyclohexanone, cyclobutanone, and bicyclo[3.2.0]hept-2-en-6-one were efficiently converted by both CHMOs, although in all cases the Km values for AcCHMO were higher than those measured for TmCHMO, despite similar kcat values (Table S3). Conversely, the bulkier molecules progesterone and cyclopentadecanone were not substrates of either enzyme.
At high substrate concentrations, substrate inhibition was observed for TmCHMO, which is not unprecedented for this enzyme class (Figure S4). To avoid decreased conversions as a result of this effect, reaction media with two phases have been successfully implemented. The uncoupling rate for both CHMOs was around 0.1 s^-1. To compare their thermostability, the ThermoFAD method was used as a first approach. This confirmed that AcCHMO is only moderately stable since it exhibits a Tm value of 37 °C (Table S4). Conversely, the Tm value for TmCHMO was found to be 11 °C higher. Next, the two CHMOs were probed for their resistance to thermal inactivation (Figure 1A). This revealed that TmCHMO still displayed 58% residual activity after 5.5 h at 45 °C. By contrast, AcCHMO lost its activity within a few minutes at 45 °C. These experiments were complemented by analyzing the effect of cosolvents on the stability, since enzymes working in organic solvents or aqueous/organic mixtures are often desired for biotechnology. While AcCHMO essentially lost its activity after 25 min incubation in 14% acetonitrile at 20 °C, TmCHMO remained highly active (>80%) for at least 20 h under these conditions (Figure 1B). These thermostability and solvent-tolerance data clearly show that TmCHMO is a substantially more robust biocatalyst than AcCHMO. Besides influencing stability, the reaction medium can also modify enzyme selectivity. The effect of cosolvents (Table S5) was determined by using 2-butanone, the conversion of which to two possible regioisomers is of industrial interest (Scheme S1). All reactions were stopped after 48 h at 17 °C. The ratio between the products methyl propanoate and ethyl acetate was found to be about 3:7 for both purified TmCHMO and AcCHMO in the absence of any cosolvent (Figure S5). The same regioisomer ratio was observed using whole cells of Escherichia coli expressing one or the other CHMO (not shown). Next, we inspected the effect of various solvents at 15% concentration. The two purified CHMOs exhibited similar results, although the yields of TmCHMO were generally higher. The strongest effect on regioselectivity was observed with 2-methyl-1,3-dioxolane, which led to almost exclusive production of ethyl acetate. 1,3-Dioxane and 1,4-dioxane had a more moderate influence, since about 40% of the total product was methyl propanoate. We also carried out reactions with 30% methanol or ethanol. These cosolvents had a negligible effect on enzyme regioselectivity.
However, they considerably decreased the 2-butanone conversion yield for AcCHMO (<4%), while that for TmCHMO remained high (96% and 56% for methanol and ethanol, respectively). From these results, it can be concluded that the robustness of TmCHMO makes it possible to modulate its regioselectivity through the use of cosolvents. Along the lines of the previous experiments, the potential of TmCHMO as a regioselective biocatalyst was further probed by using rac-bicyclo[3.2.0]hept-2-en-6-one. This compound can be converted by BVMOs into four products (Scheme S2). This reaction is widely used to study the ability of BVMOs to carry out the kinetic resolution of racemic compounds, and it is of interest for the synthesis of prostaglandins, for example. Small-scale conversions of rac-bicyclo[3.2.0]hept-2-en-6-one were carried out with TmCHMO, AcCHMO, and TfPAMO. Both enantiomers of this ketone were fully converted by both CHMOs, yielding almost exclusively one regioisomer from each enantiomer (Figure S6A and Table S6). By contrast, TfPAMO produced all four possible lactones, proving to be far less regioselective than the CHMOs. We also used the same BVMOs to produce enantiomerically pure sulfoxides, which are widely used in asymmetric synthesis and are often biologically active. The prochiral compound thioanisole was chosen as a model substrate (Scheme S3). The CHMOs exclusively produced the (R)-sulfoxide, whereas TfPAMO produced both enantiomers, leading to an ee of only 16% for the (R)-sulfoxide (Figure S6B, Table S6). Having established that TmCHMO is an appealing biocatalyst based on its thermostability, solvent tolerance, and selectivity, we performed a more in-depth characterization of its mechanistic properties. The reaction mechanism of a BVMO generally involves a C4a-peroxyflavin intermediate that forms a tetrahedral Criegee intermediate through nucleophilic attack on the substrate carbonyl carbon (Scheme S4). Rearrangement of the Criegee intermediate yields the ester or lactone product. The spectral changes for TmCHMO during its catalytic cycle were monitored using a stopped-flow spectrophotometer. Anaerobic reaction of TmCHMO with NADPH resulted in the loss of the absorbance peaks at 376 nm and 440 nm, which is consistent with the formation of the two-electron-reduced enzyme. After mixing the reduced TmCHMO with air-saturated buffer, a rapid increase in absorbance at 355 nm was observed (k = 37 s^-1), together with a small absorbance decrease at 450 nm (Figure 2A). These spectral changes are indicative of the formation of the C4a-peroxyflavin intermediate. The absorbance at 355 nm was stable for 3 s and then slowly decreased (k = 0.01 s^-1) owing to decay of the intermediate, which is consistent with hydrogen peroxide elimination to form the reoxidized enzyme (k = 0.004 s^-1).
In a second set of experiments, the anaerobically reduced TmCHMO was mixed with cyclohexanone in air-saturated buffer. The absorbance at 355 nm increased for 0.1 s and then immediately decreased, which demonstrates the low kinetic stability of the intermediate in the presence of cyclohexanone (Figure 2B and Figure S7). The rate of formation of the peroxyflavin was not influenced by the presence of cyclohexanone, while its decay rate was 80-fold higher than that measured in the absence of this ketone. Collectively, these experiments suggest that TmCHMO functions as a typical BVMO, forming a stable flavin peroxide that can effectively perform substrate oxygenation. The kinetic stability of the peroxyflavin enables the enzyme to efficiently couple NADPH and dioxygen consumption with substrate oxygenation without leakage of hydrogen peroxide, which can be harmful in the context of large-scale biotransformations. [23a-c]
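The observed rate constants quoted above come from fits to stopped-flow absorbance traces; the sketch below indicates how such a k value can be recovered by fitting a single-exponential model, using a synthetic trace rather than the actual instrument data.

```python
# Minimal sketch of extracting an observed rate constant from a stopped-flow trace
# by fitting A(t) = A_inf + dA * exp(-k_obs * t); the trace is simulated, not measured.
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, a_inf, delta_a, k_obs):
    return a_inf + delta_a * np.exp(-k_obs * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.2, 400)                         # seconds
a_true = single_exponential(t, 0.30, -0.12, 37.0)      # rising 355 nm signal, k = 37 s^-1
a_obs = a_true + rng.normal(0.0, 0.002, t.size)        # add instrument-like noise

popt, _ = curve_fit(single_exponential, t, a_obs, p0=(0.3, -0.1, 10.0))
print(f"fitted k_obs = {popt[2]:.1f} s^-1")
```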
For a more in-depth understanding of TmCHMO properties, its crystal structures in complex with FAD and NADP+ in the oxidized and reduced states were solved to a resolution of 1.22 and 1.60 Å (Table S7; Figure 3A, and Figure S8). In an attempt to rationalize the relatively high thermostability, we investigated the number of salt bridges present, since they are known to contribute to the thermostability of proteins. [24] Our computational analysis of TfPAMO, TmCHMO, and RhCHMO identified 37, 31, and 16 salt bridges, respectively (Figure S9). This finding correlates with their Tm values (61, 48, and 37 °C, respectively). Mutagenesis on these BVMOs will be carried out to confirm the role of salt bridges in their stability.
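A salt-bridge count of the kind mentioned above can be reproduced from atomic coordinates with a simple geometric rule; the sketch below assumes a 4 Å cutoff between basic side-chain nitrogens and acidic carboxylate oxygens and a generic PDB file name, which may differ from the criteria used in the cited computational analysis.

```python
# Minimal sketch of counting salt bridges in a PDB structure (geometric criterion only).
import itertools
import numpy as np
from Bio.PDB import PDBParser

ACIDIC = {"ASP": ("OD1", "OD2"), "GLU": ("OE1", "OE2")}
BASIC = {"LYS": ("NZ",), "ARG": ("NE", "NH1", "NH2"), "HIS": ("ND1", "NE2")}
CUTOFF = 4.0  # Angstrom; an assumption, not necessarily the published criterion

def charged_atoms(structure, table):
    """Yield (residue_id, side-chain atom coordinates) for residues listed in `table`."""
    for residue in structure.get_residues():
        names = table.get(residue.get_resname())
        if not names:
            continue
        coords = [atom.coord for atom in residue if atom.get_name() in names]
        if coords:
            yield residue.get_full_id(), coords

structure = PDBParser(QUIET=True).get_structure("enzyme", "structure.pdb")  # illustrative file name
acidic = list(charged_atoms(structure, ACIDIC))
basic = list(charged_atoms(structure, BASIC))

pairs = set()
for (res_a, xyz_a), (res_b, xyz_b) in itertools.product(acidic, basic):
    if any(np.linalg.norm(p - q) <= CUTOFF for p, q in itertools.product(xyz_a, xyz_b)):
        pairs.add((res_a, res_b))       # each acidic/basic residue pair counted once

print(f"salt bridges found: {len(pairs)}")
```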
The overall structures of the oxidized and reduced forms share an almost identical conformation, with a root mean square deviation of 0.20 Å for the backbone Cα atoms. However, inspection of the active sites shows distinct alterations (Figure 3B and Figure S10). In the reduced enzyme, the nicotinamide moiety of NADP+ is disordered, with no well-defined electron density. Furthermore, R329 of the oxidized enzyme is engaged in H-bonds with the carboxamide group of NADP+ and the side chain of D59. Upon enzyme reduction, R329 moves away from NADP+ and points toward the isoalloxazine moiety of the flavin ring, which favors an electrostatic interaction between the positively charged guanidinium group of R329 and the negatively charged reduced flavin. Upon formation of the flavin peroxide and concomitant loss of the negative charge on the flavin ring, R329 would shift back to the conformation interacting with the NADP+, thereby making the catalytic center accessible to a ketone substrate. [23d] These results confirm that the formation of the negatively charged reduced flavin is associated with a localized rearrangement of the central elements of the catalytic site. NADP+ is required for peroxyflavin formation and stabilization, primarily to provide essential H-bonding interactions.
Inspecting the electron density of the oxidized structure of TmCHMO revealed that the putative substrate-binding pocket is occupied by a ring-shaped ligand (Figure 4A and Figure S11). This region of electron density was putatively assigned to a molecule of nicotinamide, possibly resulting from the degradation of NADP+. Specifically, the ligand is placed right in front of the flavin ring and is in contact with residues T60, L145, L428, P430, F434, T435, and L437. This arrangement is very similar to that observed in the structures of RhCHMO-tight [23e] and TfPAMO-MES, [23d] both with a bound ligand (Figure 4B and Figure S12). The ligand-binding site is a compact cavity, which explains the general preference of TmCHMO for medium/small substrates.
To conclude, we report the discovery of a robust CHMO that shows great promise as an oxidative biocatalyst. The enzyme was found to be much more thermostable and solvent tolerant than known CHMOs. Furthermore, having established an effective recombinant production system and elucidated its crystal structure, TmCHMO provides the perfect starting point for engineering approaches to tune its properties.

Programme Research and Innovation actions H2020-LEIT BIO-2014-1 and by the MEBIO grant (053.24.105) from the NWO. We acknowledge the European Synchrotron Radiation Facility and the Swiss Light Source for the provision of beam time. We appreciate the assistance with the microspectrophotometry measurements of Dr. Antoine Royant of the European Synchrotron Radiation Facility and the "Institut de Biologie Structurale". | 2019-09-23T13:16:00.653Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "c27e68edca9f5cd7185bc48190841b5c3db63bde",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/anie.201608951",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "bcb8f39677fe7fd45e61cdc7c79f4a4618edb548",
"s2fieldsofstudy": [
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
118842021 | pes2o/s2orc | v3-fos-license | The strange transformation for point rotation coordinate frames and its experimental verification
We consider the general form of the linear transformation for point rotation coordinate frames. The frames have a rotation axis at every point. In the transformation, the frequency of one frame relative to another is not equivalent to the reverse frequency. Using the symmetry of the direct and reverse transformations as well as the symmetry of the frame coordinates, we show that two different types of the transformation are possible. The first type is a generalization of the Lorentz transformation. This case cannot be checked in optical measurements. In contrast, the second, unusual type allows us to observe consequences of the transformation in an optical experiment even though the characteristic constant inherent in the transformation is less than the "nuclear time" of the order of 10^{-23} sec. We describe the experiment.
I. INTRODUCTION
The concept of a "point rotation frame" arises in crystal optics. A distinctive feature of such a frame, in contrast to a Cartesian one, is the existence of a rotation axis at every point. In such a frame the axes are constructed on field amplitudes, and only the axis direction is essential. Similar (non-rotating) frames have been used in quantum field theory for a long time. An example of the frame is the rotating optical indicatrix (index ellipsoid). One more difference in comparison with the Cartesian frame is the absence of centrifugal forces in the point rotation frame. The frame coordinates are an angle (phase) and time; the frequency of rotation is a parameter. In electro-optical crystals the rotation is stimulated by an applied rotating electric field. In crystals with the linear (Pockels) effect the frequency equals half (and even a quarter [1]) of that of the electric field. In Kerr crystals this frequency is doubled [2,3]. The sense of rotation of a plane circularly polarized light wave moving through an electro-optical crystal with a rotating optical indicatrix is reversed and the optical frequency is shifted if the amplitude of the applied electric field equals the half-wave value. The device that produces this shift by means of electro-optical crystals is the single-sideband modulator [4,5,6]. Note that optically the rotating phase plate is equivalent to the modulator, but physically they are different, as the plate has only one axis of rotation.
For the description of a plane circularly polarized light wave in the single-sideband modulator it is convenient to use the transition to a frame in which the optical indicatrix is at rest. Apparently, the convenience of this approach was first described in the initial works on single-sideband modulation of light [4]. Such a transition results in a change of the optical frequency. The change equals the frequency of the optical indicatrix. After the polarization reversal and returning back to the initial frame the frequency deviation is doubled. Emphasize that in the frame with the resting indicatrix the modulating electric field is also at rest, in spite of the fact that both the indicatrix and the field rotate at different frequencies relative to the initial frame?! This is one further unusual and strange property of the point rotation frames.
The transition to a rotating frame is always connected with the question of the frequency superposition law: is it linear or not? A nonlinear law always corresponds to an extra frequency shift. Emphasize that a consideration within the framework of the Maxwell equations cannot give such an extra shift. The situation is analogous to a comparison between the results obtained for rectilinear motion with the help of the Lorentz transformation and with Newtonian mechanics.
In Ref. [7] the question was considered under the assumption that the combined frequency may be represented as a power series in the two other frequencies. It was also assumed that the frequency of one frame relative to another equals the reverse frequency, the only difference being the sign; the negative sign corresponds to rotation in the opposite direction. Under this condition the extra shift in the first approximation is proportional to the product of the optical frequency squared and the modulation frequency. The characteristic constant in the extra shift has the dimension of time. In Ref. [7] an optical experiment was also proposed for measurement of this term, and it was shown that a lower limit for measurements of the characteristic constant with such a form of the shift is about 10^{-17} sec.
Shortly after, it was shown that an analogy exists between light propagation in a medium with a rotating optical indicatrix and the motion of a particle in a rotating magnetic field, and both phenomena can be described in the framework of the Pauli equation [8]. In other words, a plane circularly polarized light wave propagating along the optical axis of a 3-fold electro-optical crystal under the action of an applied electric field possesses the properties of a two-component spinor. It means that measurement of the optical frequency shift in the single-sideband modulator is similar to measurement of the magnetic moment in magnetic resonance, and an anomalous magnetic moment may be associated with the nonlinear frequency shift. It was understood that the probable value of the characteristic constant is, at most, of the order of the "nuclear time" ∼ 10^{-23} sec [8,9]. Such a small value excludes the possibility to observe in optical experiments the term calculated in Ref. [7]. Meanwhile, the immediate way to determine the frequency superposition law is the transformation for the point rotation frames. Emphasize that the Maxwell equations do not contain any information about the transformation. The transformation must be postulated.
In the given paper we consider the general linear transformation for the point rotation frames. We use the symmetry of the frame coordinates and assume that the reverse frequency is a function of the direct frequency, with the same function applying vice versa. We show that two different types of the transformation exist. The first type is a generalization of the Lorentz transformation. In an experimental sense this type corresponds to the case of Ref. [7]. The second type is principally different. This type gives us a chance to measure the extra term. We describe an optical experiment for the measurement of the term. The experiment keeps the main features of that in Ref. [7].
II. GENERAL LINEAR TRANSFORMATION
The general form of the linear transformation for the transition from one frame to another can be written as in Eqs. (1) and (2), where ϕ and t are an angle (phase) and time, the tilde corresponds to the reverse transformation, and ν is the frequency of the second frame relative to the first one. It is obvious that Eq. (1) turns into Eq. (2) if the variables with tilde are interchanged with the variables without tilde and vice versa.
First of all, we exclude from consideration the Galilean transformation, i.e., the case q ≡ 1. This case, with its infinite frequencies, seems unbelievable from the viewpoint of contemporary physics.
By a suitable normalization, etc., we would arrive at the case ν̃ = −ν. However, we cannot carry out an arbitrary normalization, since usually t and ϕ are already determined and connected with the spatial Cartesian coordinates. Generally, the point rotation frames are not compatible with the Cartesian frame except when the frames are at rest. In this case, if the rotation axis coincides with some Cartesian axis, we may associate ϕ and t with the Cartesian cylindrical angle and time. The above normalization would result, in particular, in a change of the speed of light. Therefore we must consider here the general case, assuming that ν̃ is a function of ν. It means that rotations to the right and to the left are not equivalent in this approach. For the point rotation frames we do not have a general principle like the relativity principle for the Cartesian frames; however, we use a principle of symmetry instead. It means, in particular, that if ν̃(ν) = f(ν) then ν(ν̃) = f(ν̃).
The functions ν̃(ν) and q(ν) remain indeterminate except for the condition at small ν, namely, ν̃ → −ν, q → 1 as ν → 0. If the characteristic constant τ is of the order of ∼ 10^{-23} sec, then the normalized frequency τν, even in the microwave range, is about 10^{-12}. Therefore we assume that the function ν̃(ν) may be expanded in a power series in ν, Eq. (3). This expansion is compatible with the reverse expansion under certain conditions on the coefficients a_n; namely, the expansion cannot contain only odd powers of ν, and up to the term ν^{2n} it has only n independent coefficients a_n. Obviously, any expansion (3) together with the reverse expansion can be written in the symmetric form (4) with the help of a finite or infinite series, where F is a function. The results below will also be valid for an arbitrary F(νν̃) with the only condition F(0) = 0. The main problem in the given approach is the nonlinear frequency shift. However, since it is defined by the product qq̃, we do not need to know the explicit form of the function q(ν). To find the form of qq̃ we use the symmetry between ϕ and t. Transformation (1) can be written as Eq. (5), where the role of (ϕ, t, q, ν) is played by (t, ϕ, Q, Λ), respectively, with Q and Λ defined by Eq. (6). It is natural to assume that the equality in Eq. (4) would hold if Λ/σ, Λ̃/σ are substituted for ν, ν̃. Here σ is a dimensional constant. Making use of the substitution and excluding (ν + ν̃), we obtain Eq. (7) for qq̃, where Θ = (1 − 1/qq̃)/σ. In the given case σ = ±τ^2. Two types of solutions of Eq. (7) exist. The first type is the exact solution Θ = ν̃ν or, equivalently, Eq. (8). Transformation (1) for this case is a generalization of the Lorentz transformation. Without loss of generality we use here the term "the Lorentz transformation" in spite of the fact that σ may be either positive or negative. From the viewpoint of experimental checking this case is equivalent to the results of Ref. [7]. The second type of solutions may be presented as the series (9), where r is a negative root of the equation F(r) = 0. The number of such solutions equals the number of zeros of F(r). A necessary condition for the existence of these solutions is n ≥ 2 in expansion (4). The first term in the right-hand part of Eq. (9) determines the frequency superposition law in the first approximation. The characteristic constant in the given case is given by Eq. (10). For simplicity we use the same letter for the characteristic constant in both types of solutions. Note that expansion (9) is valid for small ν. However, at zeros of F(νν̃) the exact equality ν̃ = −ν holds, i.e., the values ν = ±√|r| are distinctive points. The second type of solutions adds to the list one further strange property of the point rotation frames. Normalization imparts the Lorentz shape to Eq. (1). If ν* → 0 then ϕ* → ∞. On the other hand, the non-normalized transformation at ν → 0 tends to the Galilean form, Eq. (13), in which ϕ and t switch places. The term τϕ is very small because of the small value of τ. In accordance with Eq. (13), consider the time interval ∆t̃ = τ∆ϕ + ∆t and the angle interval ∆ϕ̃ = ∆ϕ. The time interval measured at the same value of the angle (∆ϕ = 0) is quite determined, ∆t̃ = ∆t, whereas at the same time (∆t = 0) a time leap ∆t̃ = τ∆ϕ exists. The leap is the time of rotation through the angle ∆ϕ at the frequency 1/τ. Since Eq. (13) is a transformation of the frame into itself, the result may be interpreted as an uncertainty of the time determination. The maximal value of the leap is 2πτ, since at ϕ = 2π the frame also coincides with itself.
If the second type truly corresponds to physical reality then a lower limit for measurements of the characteristic constant may be drastically decreased.
III. FREQUENCY SUPERPOSITION
Consider a plane circularly polarized light wave moving through an electro-optical crystal with a rotating optical indicatrix. The light and the indicatrix, for definiteness, are assumed to rotate in the same direction with frequencies ω and ν. In correspondence with Eq. (1), the optical frequency in the frame with the resting optical indicatrix is given by Eq. (14). It is obvious that the reverse transition results in the exact equality (15). However, if the reversal of rotation occurs, then instead of ω′ we must substitute −ω′ in Eq. (15). After simplification we obtain Eq. (16) for the output frequency. For the solution of the first type, 1 − 1/qq̃ = σν̃ν. Taking into account that ν ≪ ω and ν̃ ≈ −ν for small ν, we obtain from Eq. (16), in the first approximation, Eq. (17). The extra frequency shift 2τ^2νω^2 is an equivalent of that in Ref. [7]. The shift cannot be measured optically because of the very small characteristic constant τ. Even if the modulating frequency is a powerful optical wave, ν ∼ 10^{14} Hz, then at τ ∼ 10^{-23} sec the extra shift |2σνω^2| ∼ 10^{-2} Hz cannot be picked out from laser or photodetector noise. Consider the second type of solutions, with 1 − 1/qq̃ ≈ τ√(−ν̃ν). In the first approximation we obtain Eq. (18). The extra shift 2τω^2 does not depend on ν, i.e., such a shift must be produced even by the usual half-wave plate! Emphasize that from the viewpoint of the Maxwell equations the frequency shift in the single-sideband modulator is a consequence of the phase difference between the two components of the electric field of the light wave, whereas from the viewpoint of photons it is something different. The extra shift may be interpreted as an energy of the polarization reversal. The sign difference of the energy corresponds to the assumption of inequivalence of right and left rotations. The shift of the second type may far exceed the shift of the first type. The relative value of the extra shift for τ ∼ 10^{-23} sec is 2τω ∼ 10^{-8} in the visible range. The schematic of the experiment for measuring the extra shift of the second type is shown in Fig. 1. Linearly polarized light from a laser passes through the single-sideband modulator (for example, a Lithium Niobate modulator [5]). An electric field rotating at the frequency Ω = 2ν is applied to the modulator. The evolution of the laser spectrum under changes of the amplitude and frequency of the electric field may be observed by means of the scanning interferometer. The linearly polarized light is a sum of two circularly polarized waves of frequencies ω and −ω. The modulator changes the frequencies to ω + Ω − 2τω^2 and −ω + Ω − 2τω^2, respectively. After the polarizer the light is modulated in intensity at the frequency 2Ω − 4τω^2. The extra shift 4τω^2 could be extracted by heterodyning, as it is in Ref. [7]. However, in the given case the schematic is simplified (the heterodyne and doubler are crossed out in Fig. 1), since the resonance Ω = 2τω^2 can be used, with the sign of τ matched by reversal of the applied electric field. The shift can be measured with confidence if the characteristic constant is about 10^{-23} sec. Note that if the extra shift really exists, then a similar schematic may be effectively used for precision measurements in spectroscopy.
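For orientation, the short numerical sketch below reproduces the order-of-magnitude estimates quoted above for the two candidate extra shifts; the specific values of ω and ν are illustrative choices within the stated ranges, and prefactors of order unity are not tracked.

```python
# Order-of-magnitude comparison of the two candidate extra frequency shifts.
tau = 1e-23        # characteristic constant, s
omega = 5e14       # optical frequency (visible range), Hz -- illustrative value
nu = 1e14          # modulating frequency if another optical wave is used, Hz

shift_type1 = 2 * tau**2 * nu * omega**2   # Lorentz-like (first) type: ~2*tau^2*nu*omega^2
shift_type2 = 2 * tau * omega**2           # second type: ~2*tau*omega^2, independent of nu

print(f"type-1 extra shift: {shift_type1:.1e} Hz   (about 1e-2 Hz, lost in laser noise)")
print(f"type-2 extra shift: {shift_type2:.1e} Hz   (resonance condition Omega = 2*tau*omega^2)")
print(f"type-2 relative shift 2*tau*omega: {2 * tau * omega:.1e}")
```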
V. CONCLUSION
The idea of the inequivalence of the direct and reverse frequencies, together with the symmetry principle, leads to two types of transformations for the point rotation frames. The first type is a generalization of the Lorentz transformation. This type seems more applicable to Cartesian frames. Measurement of the extra frequency shift in this case lies beyond the possibilities of optics. The second type is applicable only to the point rotation frames. An unusual and strange property of this type is the uncertainty in the determination of time. The extra shift in this case may be verified in optical measurements if the characteristic constant in the transformation is about 10⁻²³ sec (or even less).
Another question arises in the above construction: is parity violation connected with the inequivalence of the direct and reverse frequencies? | 2019-04-14T03:15:15.035Z | 2006-05-25T00:00:00.000 | {
"year": 2006,
"sha1": "38a9286e962eb5974ba53a23bf4b59672259249d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "857805393126ee511c2289736b78b71d21203d82",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
91133503 | pes2o/s2orc | v3-fos-license | Couple formation and spawning between two female Discus Fish ( Symphysodon aequifasciatus-Cichlidae ) in captivity
In an experiment performed to characterize the reproductive behavior of the discus fish in captivity, couple formation with two females was observed. The observations were carried out in captivity, based on the ad libitum methodology. Adult individuals were allowed to form couples naturally. Couple formation was considered to have occurred when individuals performed substrate-cleaning behavior. Fifteen couples were selected. The eggs of three couples did not initiate embryonic development and became inviable within three days after spawning. We found that these spawnings belonged to all-female couples. The sex of the individuals in the same-sex couples was confirmed through subsequent couple formation and spawning of fertile eggs with known males. Eggs were deposited by one or both females in the same-sex couples. Double spawnings were larger and contained eggs of two different colors. The motivations that lead individuals of the same sex to form couples in this species are still unknown.
RESUMO
In an experiment conducted to characterize the reproductive behavior of discus fish in captivity, couple formation between two females was observed. The observations were carried out under ex situ conditions, based on the ad libitum methodology. Groups of adult individuals were distributed in aquariums for the spontaneous formation of couples. A couple was considered formed when the individuals displayed substrate-cleaning behavior. Fifteen couples were selected. We observed that the eggs of three couples did not initiate embryonic development and became inviable between the second and third day after spawning. These turned out to be couples formed by two females. The sex of the individuals in these three couples was later confirmed through mating and production of viable spawns with known males. The spawns of the same-sex couples were produced by one or both females. Double spawns had two distinct colors and were much larger than those of male-female couples. The motivations that lead discus fish individuals of the same sex to form couples are still unknown. KEYWORDS: reproduction, behavior, aquaculture, Amazonian fish
Neotropical cichlids destined for ornamental purposes are in great commercial demand, but there is little relevant information about their development and reproduction (Dias and Chellappa 2003). Among the species of Neotropical cichlids sold for ornamental purposes, the discus fish, Symphysodon aequifasciatus Pellegrin, 1904, Cichlidae, which is native to the rivers of the Amazon basin in Brazil, Peru and Colombia (Wattley 1991), stands out. The main characteristics of this species are its small size, about 20 cm in length, and its high and rounded body shape, in addition to a wide range of colors.
Understanding the reproductive process is an essential part of the study of the biology of a species (Silva and Esper 1991), and the reproduction of the discus fish in captivity is one of the biggest barriers to its commercial production. Symphysodon aequifasciatus is a species with biparental care, characterized by maintaining the adhesive eggs and the larvae on the substrate. Parental care ceases only when the offspring reach independence (Wattley 1991). In an experiment conducted to characterize the reproductive behavior of the discus fish in captivity, couple formation between two females was observed.
The observations were carried out in the aquaculture sector of Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF), from March 2009 to June 2010, based on the ad libitum methodology proposed by Altmann (1974). We selected 42 adult individuals from the stock of the aquaculture sector of UENF, with average values of 13.09 ± 1.58 cm length, 10.39 ± 0.98 cm height, and 62.15 ± 1.73 g weight. They were distributed in seven experimental aquariums, totaling six fish per aquarium. Each aquarium had a capacity of 50 L working volume and was equipped with an aeration system, a foam filter for suspended solid retention, and a 30 cm plastic tube to serve as spawning substrate, given that this species spawns naturally on trunk surfaces. Since this species does not present apparent sexual dimorphism (Câmara 2004), we distributed six specimens per tank for the natural formation of couples. We considered that a couple was formed when the individuals cleaned the substrate, a characteristic behavior of different cichlid species in the moments prior to spawning. According to this criterion, 15 couples were selected.
After couple formation, spawning occurred naturally for the 15 couples, but it was observed that the eggs of three couples did not initiate embryonic development and became inviable between the second and third days after spawning. By analyzing the three couples that produced the inviable eggs, we found that they consisted of pairs of females, which we confirmed later by testing them with males. These female couples displayed parental care of the spawn, aerating the spawn with fin movements and removing the first eggs that became inviable, both common behaviors of male/female couples in this species (Mattos et al. 2016).
By comparing the spawns of male/female couples with those of the three same-sex couples, we observed that oocytes were released by either one or both females (Figure 1). Spawning by both females was evidenced by the disproportionate number of eggs in relation to the average number of eggs in male/female spawns. Also, the oocytes of the two different spawns differed in their color pattern: one spawn had a yellowish tone, while the other had an orange tone, showing clearly that they came from different individuals (Figure 1B).
After confirming the non-viability of their spawns, the six females of the same-sex couples were separated and each one was paired in an individual aquarium with a male of confirmed sex and fertility. With the change of partners, the females paired with the new male partners, showed the normal reproductive behavior characteristic of the species, and spawned normally. The spawns were viable and, after egg hatching, the couples showed normal parental care for the offspring (Mattos et al. 2016).
Previous studies on other species reported that courtship between individuals of the same sex can occur in non-reproductive individuals, but the causes that lead to such behavior are still unknown. This was not the case with our same-sex couples of S. aequifasciatus, since the females generated viable offspring when paired with males. Bailey and Zuk (2009) listed 14 species from different taxonomic groups (mammals, birds, insects and reptiles) that presented homosexual behavior in the wild and in the laboratory, performing courtship, pair bonding and copulation. For fish, information on the reproductive behavior of same-sex couples is scarce.
Attraction behavior among individuals of the same sex was also observed in males of the guppy, Poecilia reticulata Peters, 1859, where male individuals kept in aquariums only with others of the same sex showed more courtship behavior when
compared to individuals that were kept in aquariums with females (Field and Waite 2004).
Seahorses, Hippocampus reidi Ginsburg, 1933, showed a behavior similar to that found in S. aequifasciatus females, forming same-sex couples of males and of females and performing characteristic courtship movements (Silveira 2009). In Hippocampus reidi, however, the homosexual behavior was induced by the presence of only same-sex individuals in the aquarium, while in S. aequifasciatus same-sex couple formation occurred in the presence of both sexes.
The motivations that lead individuals of the same sex to form couples are still unknown, and more experimental behavioral studies are needed to understand the causes of homosexual behavior in these species, since the reproductive investment of same-sex couples does not produce offspring. Our observations contribute to the knowledge of the reproductive dynamics of S. aequifasciatus and generated useful information for breeding management in this species.
Figure 1. Spawning of only one female (A) and synchronous spawning of two females (B) of the discus fish, Symphysodon aequifasciatus. This figure is in color in the electronic version. | 2019-04-02T13:06:47.626Z | 2017-06-01T00:00:00.000 | {
"year": 2017,
"sha1": "b5f18d11c2c949e3f8be102ef32a0c4b2e595d7f",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/aa/v47n2/1809-4392-aa-47-02-00167.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "b5f18d11c2c949e3f8be102ef32a0c4b2e595d7f",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
59438227 | pes2o/s2orc | v3-fos-license | On the support of Pollicott-Ruelle resonant states for Anosov flows
We show that all generalized Pollicott-Ruelle resonant states of a topologically transitive $C^\infty$ Anosov flow with an arbitrary $C^\infty$ potential have full support.
Introduction
Let (M, g) be a smooth, compact Riemannian manifold without boundary. Let X ∈ C ∞ (M, T M ) be a smooth Anosov vector field, let us denote by ϕ t the flow on M generated by X, and let V ∈ C ∞ (M ; C) be a smooth potential function. Then we can define the following differential operator. It is a well-established approach to study the dynamical properties of Anosov flows via the discrete spectrum of the operator P, the so-called Pollicott-Ruelle resonances. The fact that for volume-preserving flows and real-valued potentials the operator P is an unbounded, essentially self-adjoint operator on L 2 (M ) might suggest that P has good spectral properties on L 2 (M ). However, due to the lack of ellipticity, P has mainly continuous spectrum, which carries little information on the dynamics of the flow. An important advance was thus to construct Banach spaces [Liv04,BL07] or Hilbert spaces [FS11,DZ13] for Anosov flows in which the operator P has discrete spectrum in a sufficiently large region (see also [Rug92,Kit99,BKL02,GL06,BT07] for analogous results for Anosov diffeomorphisms). More precisely, it has been shown that there is a family of Hilbert spaces H sG parametrized by s > 0 such that for any C 0 > 0 and for sufficiently large s the operator P acting on H sG has discrete spectrum in the region {Imλ > −C 0 }. This discrete spectrum is known to be intrinsic to the Anosov flow together with the potential function and does not depend on the sufficiently large parameter s (see Section 2.2 for a more precise statement). Accordingly we call λ 0 ∈ C a Pollicott-Ruelle resonance if it is an eigenvalue of P on H sG for sufficiently large s, and we call ker H sG (P − λ 0 ) the space of Pollicott-Ruelle resonant states. As P is no longer a normal operator on H sG , there might also exist finite-dimensional Jordan blocks; given a Ruelle resonance λ 0 we denote by J(λ 0 ) the maximal size of a corresponding Jordan block, and we call ker H sG (P − λ 0 ) J(λ0) the space of generalized Pollicott-Ruelle resonant states.
The interest in these Pollicott-Ruelle resonances and their resonant states arises from the fact that they govern the decay of correlations. In order to illustrate this in more detail, let us for the moment assume that the Anosov flow is contact, which includes for example all geodesic flows on compact manifolds with negative sectional curvature. If we denote by dw the invariant contact volume form and take two smooth functions f, g ∈ C ∞ (M ), then one is interested in the behavior of the correlation function C f,g (t). In this case the following expansion of the correlation function has been established in [Tsu10, Corollary 1.2] (see also [NZ13, Corollary 5]): (1) C f,g (t) = Σ_j Σ_k t^k e^{−itµ_j} u_{j,k}(f) v_{j,k}(g) + O(e^{−γt}).
Here µ j are the Pollicott-Ruelle resonances for a vanishing potential (V = 0) which lie in a strip 0 > Im(µ) > −γ. Furthermore u j,1 , v j,1 are the corresponding left and right generalized eigenstates, and the sum over k arises from the possible existence of Jordan blocks of size K j . As Im(µ j ) < 0, all terms except the first one vanish exponentially in the limit of large times. This effect is interpreted as an exponential convergence towards equilibrium, known as exponential mixing, and was first established in the seminal works of Dolgopyat [Dol98] and Liverani [Liv04]. The way towards equilibrium is governed by the subleading resonances and resonant states. While the resonances determine the possible decay modes, the resonant states determine the coefficients. If, for example, there were a simple Pollicott-Ruelle resonance µ j with a resonant state that vanishes on an open set U ⊂ M , then for all observables f ∈ C ∞ c (U ) this resonance would not appear in the correlation expansion (1). The main theorem of this article states that this cannot be the case.
Theorem 1. Let ϕ t be a topologically transitive C ∞ Anosov flow. Let λ 0 be a generalized Pollicott-Ruelle resonance and u ∈ ker H sG (P − λ 0 ) J(λ0) \ {0} a corresponding generalized resonant state. Then supp u = M .
Remark 1. Note that contrary to the correlation expansion (1), which we mentioned as a motivation, our result for the resonant states holds for a general C ∞ Anosov flow with no assumption on contact structures, smooth invariant measures, or even mixing properties. Taking a suspension of a topologically transitive Anosov diffeomorphism, the theorem directly implies an analogous result for the resonant states of the Anosov diffeomorphism.
Remark 2. To our knowledge, little has been known up to now about the structure of Pollicott-Ruelle eigenstates. However, in the proof of [GL08, Theorem 5.1] Gouëzel and Liverani show an analogous result for the peripheral spectrum of topologically mixing hyperbolic maps. In this case, however, there is only one simple peripheral eigenvalue, so the statement applies only to this one resonant state, which corresponds to the equilibrium measure. Furthermore, for the particular case of geodesic flows on manifolds of constant negative curvature, Dyatlov, Faure and Guillarmou [DFG15] proved an explicit relation between Pollicott-Ruelle resonant states and Laplace eigenstates. Using this relation allows one to transfer well-established support properties of Laplace eigenstates to support properties of Pollicott-Ruelle eigenstates.
The article is organized as follows: In Section 2 we introduce the basic definitions and recall some known facts about Anosov flows and the microlocal approach to Pollicott-Ruelle resonances. Section 3 is devoted to the proof of the main Theorem 1.
Acknowledgements: I would like to thank Heiko Gimperlein for enlightening discussions about the microlocal techniques in this proof. Furthermore I am grateful to Sebastién Gouëzel, Stéphane Nonnenmacher, Viviane Baladi and Gabriel Rivière for the hints concerning the dynamical systems literature, and in particular to the first two of them for the explanations about the smoothness of the conditional measures (c.f. Theorem 7). Last but not least, I thank Colin Guillarmou and Joachim Hilgert for stimulating discussions, from which this work emerged, as well as for their detailed feedback on an earlier version of the manuscript. This work has been supported by the grant DFG HI 412 12-1.
Notation and Preliminaries
2.1. Anosov flows. Let (M, g) be a smooth, compact Riemannian manifold without boundary and let us denote the Riemannian distance between two points m,m ∈ M by d(m,m). A smooth vector field X ∈ C ∞ (M, T M ), respectively the flow ϕ t on M generated by X, are called Anosov if there is a direct sum decomposition of each tangent space which depends continuously on the base point m and which is invariant under dϕ t . Furthermore the decomposition has to fulfill E 0 (m) = RX(m) and there has to exist some fixed C and θ > 0 with (2) An Anosov flow is said to be topological transitive if there exists a dense orbit.
It is known, that the bundles E 0 ⊕ E s and E 0 ⊕ E u are uniquely integrable [Ano67,HPS70] and given a point m ∈ M we denote by W wu (m) and W ws (m) the integral manifolds through the point m which are smooth immersed submanifolds of M . They are called weak stable and weak unstable manifolds through m. Also the bundles E s and E u are uniquely integrable. The corresponding integral manifolds will be denoted by W s (m), W u (m) and will be called strong stable and strong unstable manifolds. For further reference let us denote by n ws , n wu , n s , n u ∈ N the dimension of the manifolds W ws (m), W wu (m), W s (m) and W u (m), respectively. Let us recall the following well established result on the continuous dependence of the invariant manifolds on the base point.
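The displayed hyperbolicity estimate labelled (2) in the definition of an Anosov flow above did not survive extraction. A hedged reconstruction of its standard textbook form, using only the objects already introduced (the splitting E_s, E_u, the differential dϕ_t and the constants C, θ), is:

\[
\|d\varphi_t(m)\,v\| \le C\, e^{-\theta t}\, \|v\| \quad \text{for } v \in E_s(m),\ t \ge 0,
\qquad
\|d\varphi_{-t}(m)\,v\| \le C\, e^{-\theta t}\, \|v\| \quad \text{for } v \in E_u(m),\ t \ge 0.
\]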
Theorem 2 (c.f. [HP70, Theorem 3.2]). W ws is a family of n ws -dimensional immersed C ∞ -submanifolds that depends continuously on the base point w.r.t. the C ∞ -topology. More precisely, this means that for any m there is a neighborhood m ∈ U ⊂ M as well as a continuous map g : U → C ∞ (D nws , M ) such that for anym ∈ U, g(m) is a C ∞ -diffeomorphism of the n ws dimensional disk D nws onto a neighborhood ofm in W ws (m).
The analogous result holds for W wu , W s , W u .
Remark 3. The stable and unstable foliations are known to be even Hölder continuous in general, and under certain assumptions such as curvature pinching even regularity C 1 or better can be obtained (c.f. [Has02, Section 2.3]). We will, however, not explicitly need these refined regularity estimates in the sequel.
Given two points y, z ∈ W ws (m) we denote by d ws (y, z) the metric distance on W ws (m) coming from the metric inherited from the Riemannian metric on M . In the same way we can define d wu , d s and d u . We can now define the following different open balls of radius r > 0. Theorem 3 (Product Neighborhood Theorem). There exists δ 0 > 0 independent of m ∈ M such that for any m ∈ M and δ ≤ δ 0 the following maps (y) are unambiguously defined, injective and homeomorphisms onto their images.
For a proof see [PS70] (see also [Pla72] for a more detailed statement). Given a point m ∈ M and 0 < α, β < δ 0 we define the open product neighborhoods of m We will call such product neighborhoods also rectangular neighborhoods. These product neighborhoods have the important property that the defining homeomorphisms imply foliations by local invariant manifolds. In the sequel we will only need the foliations of (wu, s)-rectangles which are given by ) are the local strong stable leaves and U y := H wu,s m (B wu α (m) × {y}) the local weak unstable leaves. Note that from Theorem 2, as well the strong stable leaves S x as the weak unstable leaves U y are smooth submanifolds of the product neighborhood. Still the foliations are no smooth foliations as the trivialization map H wu,s m is only continuous, which reflects the fact from Theorem 2 that the local invariant manifolds S x (U y respectively), depend only continuously on their basepoints x (y respectively). As M is a Riemannian manifold, the leaves S x (U y respectively) inherit a Riemannian metric and thus also a Lebesgue measure which we denote by dm Sx (dm Uy respectively).
One way to introduce Sinai-Ruelle-Bowen (SRB) measures is to demand that they are absolutely continuous w.r.t. the weak unstable foliation.
Definition 1 (SRB measure). A measure µ on M which is invariant under the Anosov flow ϕ t is called an SRB measure if for any product neighborhood PN wu,s α,β (m) and for all y ∈ B s β (m) there are positive, measurable functions τ y on U y as well as a measure dσ on B s β (m) such that the restriction of µ to PN wu,s α,β (m) is given by More precisely, this equality means that for any
Remark 4. There are other equivalent possibilities to define SRB measures in terms of metric entropy and ergodic averages, see e.g. [You02] for an overview.
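The displayed formula in Definition 1 was lost in extraction. A hedged reconstruction, built only from the ingredients named in the definition (the densities τ_y on the leaves U_y and the transverse measure dσ), and not a quotation of the original, is:

\[
\int_{\mathrm{PN}^{wu,s}_{\alpha,\beta}(m)} f \, d\mu
\;=\; \int_{B^{s}_{\beta}(m)} \int_{U_y} f(u)\, \tau_y(u)\, dm_{U_y}(u)\, d\sigma(y)
\qquad \text{for all } f \in C_c\big(\mathrm{PN}^{wu,s}_{\alpha,\beta}(m)\big).
\]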
Theorem 4. For any Anosov flow there exists an SRB measure. If the Anosov flow is topologically transitive, the SRB measure is unique and the Anosov flow is ergodic w.r.t. the SRB measure.
For an original proof of this result in the context of Axiom A diffeomorphisms see [Bow70], and for generalizations to flows see [Bow73].
Microlocal approach to Pollicott-Ruelle resonances for Anosov flows.
Let us collect some facts about the spectral theory of Pollicott-Ruelle resonances. Suitable Banach spaces in which the differential operator P has discrete spectrum were first introduced by Liverani [Liv04]. In this article we will use the microlocal approach to transfer operators, which has been introduced by Faure, Roy and Sjöstrand in a series of papers [FR06,FRS08,FS11]. The following result for Anosov flows was originally shown by Faure and Sjöstrand [FS11, Theorem 1.4]. The reader might also be interested in [DZ13, Proposition 3.2], where an alternative proof, using different microlocal techniques, is given. While these two publications only mention the case of zero potential, a more general statement including arbitrary smooth potentials can be found in [DG14].
Theorem 5. For any C 0 > 0 there is an s > 0 and Hilbert spaces H sG and D sG such that in the region Here the Hilbert space H sG is an anisotropic Sobolev space that fulfills the relations where H s (M ) denotes the ordinary Sobolev space. The Hilbert spaces D sG can be defined by considering the operator P acting on D (M ) and we set , analytic Fredholm theory implies that the resolvent R(λ) := (P − λ) −1 : H sG → H sG has a meromorphic continuation to {Imλ > −C 0 } and the Pollicott-Ruelle resonances are defined to be the poles of the meromorphic continuation. Given a Pollicott-Ruelle resonance λ 0 the space ker H sG (P − λ 0 ) \ {0} is nonempty and its elements are called the corresponding Pollicott-Ruelle resonant states. Furthermore there is an integer J(λ 0 ) ≥ 1 determining the maximal size of a Jordan block with spectral value λ 0 , i.e. we have for all k ≥ J(λ 0 ) ker H sG (P − λ 0 ) k = ker H sG (P − λ 0 ) J(λ0) and we call the space ker H sG (P − λ 0 ) J(λ0) the space of generalized Pollicott-Ruelle resonant states. Note that the fact that C ∞ (M ) is densely contained in all H sG together with the uniqueness of meromorphic continuation imply that neither the position of the resonance λ 0 nor the spaces ker HsG (P − λ 0 ) and ker HsG (P − λ 0 ) J(λ0) depend on the parameter s, so they are intrinsic objects of the Anosov flow (c.f. [FS11, Theorem 1.5]).
A central ingredient in the proof of Theorem 5 and the construction of the anisotropic Sobolev spaces is to study the symplectic lift of the flow action to T * M . Faure and Sjöstrand therefore introduce the decomposition m M via the Riemannian metric this definition seems counterintuitive, as E * s (m) is identified with E u (m) and vice versa. The reason for this choice in [FS11] is that it is natural from the point of view of the symplectic lift of ϕ t to T * M (c.f. [FS11, eq. (1.13)]). We will not need
On the support of Pollicott-Ruelle eigenstates
In this section, we will prove Theorem 1. Let us start by giving a brief outline of the proof: As a first step, we will introduce suitable charts that are adapted to the foliation of M into stable and unstable manifolds. Then we study some particular distributions, denoted by ρ A , which are defined in these charts and which correspond to characteristic functions of tubes along the strong stable foliation. Proposition 6 then gives estimates on the decay of the Fourier transformation of ρ A . As a consequence, one deduces that the wavefront set of the distributions ρ A are bounded away from E * u , thus they can be paired (or multiplied) with Pollicott-Ruelle resonant states. Using the Fourier estimates in Proposition 6 we then show, that any Pollicott-Ruelle resonant state that does not have full support, has to be identically zero. Therefore we only use two properties of the resonant states: the flow invariance of their support and their microlocal regularity. A major technical difficulty in the proof comes from the fact that the stable and unstable foliations for general Anosov flows are only Hölder regular. We We can then define the open set V wu i := {x ∈ R nwu : (x, 0) ∈ V i } and directly see that ). According to Theorem 2 the S x ⊂ V i are C ∞ -submanifolds depending continuously on the basepoint x w.r.t. the C ∞ topology. Furthermore, the definition of S A allows us to associate to any Borel set Given a chart κ i we write T * V i = V i × R n and we define Γ 1 , Γ 2 ⊂ R n to be the minimal closed cones such that and that on the other side N * S 0 = {0} × (R nwu × {0}) are complementary cones. Thus from the continuity of the foliations and after a possible refinement of the charts we can assume that We now want to study the Fourier transform of ρ A : Proposition 6. Let χ ∈ C ∞ c (V i ) be an arbitrary cutoff function, then there is a constant C > 0 such that for any Borel set A ⊂ V wu i , uniformly in ξ ∈ R n . Here vol nwu (A) is the n wu -dimensional euclidean volume of A ⊂ V wu i ⊂ R nwu . Furthermore, fix a closed cone Ω ⊂ R n such that Ω ∩ Γ 2 = {0}. Then for all N ∈ N there is a C N > 0 such that for all Borel sets A ⊂ V wu Proof. In order to prove this theorem we use the definition of the Fourier transform for compactly supported distributions and get In order to obtain the estimates for this integral let us recall the following fact which is a consequence of the absolute continuity of the strong stable foliation.
Theorem 7. For any V i ⊂ R n defined as above and any x ∈ V wu i there is a smooth function δ x ∈ C ∞ (S x ) such that for any ψ ∈ C ∞ c (V i ) Here dx, dy are the euclidean measures on R nwu , R ns , ψ |Sx ∈ C ∞ (S x ) is the restriction to the smooth submanifold S x and dm Sx the induced measure on the submanifold S x . The functions δ x are called conditional densities of the strong stable foliation. Furthermore, their dependence on x ∈ V wu i is continuous in the following sense: If T 1 s (S x ) is the set of normalized tangent vectors in s ∈ S x , then for any k ∈ N the following k-norm is finite and for all k ∈ N the map x → δ x k is continuous.
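The displayed disintegration formula in Theorem 7 (referred to later as Eq. (12)) did not survive extraction. A hedged reconstruction from the quantities named in the statement (the conditional densities δ_x, the leaf measures dm_{S_x} and the Euclidean measures dx, dy), not a quotation of the original, is:

\[
\int_{V_i} \psi(x,y)\, dx\, dy
\;=\; \int_{V_i^{wu}} \int_{S_x} \psi|_{S_x}(s)\, \delta_x(s)\, dm_{S_x}(s)\, dx .
\]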
Proof. The existence of the conditional measures can be derived by a standard argument from the absolute continuity of the strong stable foliation (see Appendix A or the standard dynamical systems literature, e.g. [BS02, Section 6.2]). In order to obtain the smoothness of the conditional measures as well as the continuous dependence on the base point x, one additionally needs the smoothness of the Jacobians of holonomy maps along the strong stable leaves. This statement can for example be found in [GLP13, Appendix E].
First we use the Fubini formula for the strong stable foliation (12) in order to estimate the integral in (11) we obtain Now the uniform bounds on δ x 0 directly imply (9). In order to see (10) we use the boundedness of δ x k together with the following partial integration argument: For ξ 0 ∈ Ω with |ξ 0 | = 1 and x ∈ V wu i we define the tangent vector field X ξ0,x : S x → T S x ⊂ R n s → Pr TsSx (ξ 0 ). Here Pr TsSx is the orthogonal projection onto T s S x which makes sense after having identified T V i = V i × R n . From the definition of S x and Γ 2 we conclude that for any x ∈ V wu i and s ∈ S x we have have (s, ξ 0 ) / ∈ N * S x . Thus there is C ξ0 > 0 such that X ξ0,x (s), ξ 0 ≥ C ξ0 uniformly in s ∈ S x and x ∈ V wu i . By continuity we find an open neighborhood W ξ0 ⊂ R n of ξ 0 such that uniformly in ξ ∈ W ξ0 , s ∈ S x and x ∈ V wu i . Now we use the closedness of Ω and a compactness argument to conclude that there are ξ 1 , . . . , ξ K as well as a constant C 0 > 0 such that for any ξ ∈ Ω there is 1 ≤ l ≤ K, such that Furthermore Theorem 2 implies that s → X ξ l ,x (s), ξ ∈ C ∞ (S x ). Even if this function is not compactly supported, the fact, that the S x are precompact, and that these functions can always be smoothly extended to a slightly larger stable leaf implies that for all k ∈ N the norm X ξ l ,x (s), ξ k is finite and depends continuously on x ∈ V wu i .
Performing N times partial integration w.r.t. the differential operator Note that for this independence we crucially use that x → δ x k as well as x → X ξ l ,x (s), ξ k are continuous.
Recall that we can choose Ω to be an arbitrary cone with Ω∩Γ 2 = {0}. Thus (10) implies that for any Borel set A ⊂ V wu i we have ρ A ∈ D Γ2 (V i ), i.e. the wavefront set of ρ A is contained in Γ 2 . Multiplication of ρ A with an arbitrary compactly supported distribution v ∈ E Γ1 (V i ) is therefore well defined and yields a compactly
Proof.
As v has compact support in V i let us choose a cutoff function χ ∈ C ∞ c (V i ) with χ = 1 in a neighborhood of supp v. Then we calculate and note that the last integral converges as Γ 1 ∩ Γ 2 = {0}. Now let us fix a closed cone Ω ⊂ R n fulfilling Γ 1 \ {0} ⊂ IntΩ and Ω ∩ Γ 2 = {0}. As WF(v) ⊂ Γ 1 we get for any N ∈ N a constant C N such that |v(ξ)| ≤ C N,v ξ −N uniformly for ξ ∈ R n \ Ω. As any distribution is of finite order we in addition get the existence of an integer K v and a constant C v,2 such that |v(ξ)| ≤ C v,2 ξ Kv uniformly for all ξ ∈ R n . Splitting the integral over ξ we thus obtain for any given N ∈ N. Then we can use (9) in the first integral and (10) with N = K v + n + 1 in the second integral which leads to the desired estimate (14).
Next we study the local structure of supp u ⊂ M for a resonant state u ∈ ker H sG (P − λ 0 ). Therefore note that supp u ⊂ M is a ϕ t -invariant closed subset. This is the only property that we use in the following lemma.
where vol Xi is the Lebsegue measure on X i . Now take a cover of product neighborhoods M = N i=1 R i with r s i < ε s /4. We now want to consider the backwards iterates ϕ −t (U 0 ) for t > 0. Firstly, the flow invariance of Σ assures, that ϕ −t (U 0 ) ∩ Σ = ∅. Secondly the dynamical properties of the flow assures that the sets ϕ −t (U 0 ) are stretched into the strong stable direction and thirdly, if µ denotes the SRB measure, then ergodicity implies that µ( t>0 ϕ −t (U 0 )) = µ(M ). We will subsequently use these three facts in order to construct the sets N i . Therefore we first consider for an arbitrary index i and t > 0 the connected components of ϕ −t (U 0 ) ∩ R i . If a connected component can be written in the form h i (A × Y i ) with A ⊂ X i open, we call it a complete empty tube. We call them "empty" as these sets have no intersection with Σ and we intend to build our set N i from the sets A. Note that for large t the stretching and folding will imply that ϕ −t (U 0 ) ∩ R i has many connected components. But as we started with a product neighborhood U 0 and as the strong stable foliation is preserved by the flow, there are maximally two connected components which are not complete empty tubes. They correspond to the two ends of the product neighborhood (c.f. Figure 2). We can however make them complete by considering the connected components ϕ −t (U 1 ) ∩ R i . As U 1 ∩ Σ = ∅, these connected components are also empty. Furthermore the fact that r s i < ε/4 together with the fact, that the product neighborhoods are stretched along the strong stable direction for negative times implies the following statement: Any connected component of ϕ −t (U 1 ) ∩ R i that contains a connected component of ϕ −t (U 1 ) ∩ R i is a complete tube. Thus we can define an open set A i,t ⊂ X i such that the union of connected components of ϕ −t (U 1 ) ∩ R i that contain a connected component of ϕ −t (U 0 ) ∩ R i is given by h i (A i,t × Y i ) with this definition we obviousely have
Proof. As
We can now define the open set N i := ∪ t>0 A i,t ⊂ X i and obtain As h wu i : X i → V wu i is a diffeomorphism, it only remains to prove that vol nwu (h wu i (N i )) = vol nwu (V wu i ). This, however, is an immediate consequence of (which uses the ergodicity w.r.t. the SRB measure) as well as the following Lemma.
As the differential operator P is local we have supp u ⊂ suppũ M . Furthermore because ϕ * t u = e −iλ0 u, the support supp u is invariant by the Ansosov flow so we can apply Lemma 9 to Σ = supp(u). Thus we obtain a cover M = N i=1 R i of product neighborhoods as well as open subsets N i ⊂ X i fullfilling After a possible further refinement we can also guarantee the existence of C ∞ -charts κ i : R i → V i ⊂ R n as well as closed cones Γ 1 , Γ 2 ⊂ R n fulfilling (8). Now let us choose a smooth partition of unity and define u i := (κ −1 i ) * (χ i · u) ∈ E (V i ). Recall from (3) that u ∈ ker H sG (P − λ 0 ) implies WF(u) ⊂ E * u and by (8) this implies that WF(u i ) ∈ E Γ1 (V i ). Now, using Lemma 8, we obtain for an arbitrary Borel set A ⊂ V wu i the estimate |(u i ·ρ A )[1]| ≤ C ui vol nwu (A). We now want to use this estimate in order to conclude, that u i = 0 for all i = 1, . . . , N . Now for any ε > 0 let us introduce the set are open, also the family N i,ε is a family of open sets, that increases monotonously and converges to N i for ε → 0. Considering the complement N c i,ε := V wu i \ N i,ε we obtain a monotonously decreasing family of closed sets converging to the nullset V wu i \ N i . Furthermore, from (16) we see that the distribution ρ N c i,ε defined in (5) is equal to 1 on an open neighborhood of supp(u i ). Taking an arbitrary ψ ∈ C ∞ c (V i ) and any ε > 0 we can use this fact to calculate . But as the multiplication by a smooth function doesn't increase the wavefront set we can apply Lemma 8 to (u i · ψ) ∈ E Γ1 (V wu i ) and obtain for any ε . Finally, as the constants C ui,ψ are independent of ε, the right hand side converges to zero as ε → 0, so we have shown u i [ψ] = 0. As ψ ∈ C ∞ c (V i ) was an arbitrary test function this implies u i = 0 for any i and we conclude that u = 0 which is a contradiction. This finishes the proof of Theorem 1.
Appendix A. Smoothness of the conditional measures
In the proof of the support property of the Pollicott-Ruelle resonant states, the smoothness of the conditional measures (Theorem 7) plays a central role. While the result is well known to experts in dynamical system theory it is difficult to find a precise statement or a proof in the existing literature. This is why I wanted to provide some more details in the arxiv-version of this article, on how to connect the smoothness of the conditional densities to the smoothness of holonomy maps, by standard dynamical systems arguments: Let V i ⊂ R n be the range of a coordinate chart κ i as in Theorem 7. As we only work in one fixed chart, we drop the index i in order to simplify the notation and write V := V i . Recall that we have already defined V wu = {x ∈ R nwu : (x, 0) ∈ V } and analogousely we define V s := {y ∈ R ns : (0, y) ∈ V }. Recall that the strong stable manifolds define a foliation of V by the leaves S x which are parametrized by x ∈ V wu . Note that S 0 = V ∩ ({0} × R ns ) so it can be identified with V s . Now for any y ∈ V s we define T y := V ∩ (R nwu × {y}) which defines a smooth foliation transversal to the foliation S x . Recall that by the construction of the coordinate charts (4), T 0 coincides with a weak unstable leaf, but in general the other leaves can not coincide with the weak unstable leaves unless the particular case, that the weak unstable foliation is smooth. Let us now reduce the set V such that it has a product structure w.r.t. the foliations S x and T y , i.e. such that for all x ∈ V wu and y ∈ V s one has S x ∩ T y =: H(x, y) defines a unique point contained in V . Note that this poses not problem for proving the theorem which claims the existences of the conditional measures in some product neighborhood of the strong stable and weak unstable foliation as such a set is always contained in the reduced set V . Now with this product structure of S x and T y we can define two holonomy maps H S y : and H T x : Let us first discuss the second one, belonging to the smooth transversal foliation. According to Theorem 2, H T x is a C ∞ diffeomorphism and depends continuously on x in the C ∞ -topology. Note that the euclidean metric on V induces metrics and thus measures on the submanifolds S x which we call dm Sx (s) and as S 0 is simply the y-axis dm S0 is simply the euclidean measure dy on V s after identifying S 0 and V s . The fact that H T x are diffeomorphisms gives us a Jacobian function J T x ∈ C(S 0 ) such that for a any function ψ ∈ C c (S x ) Sx ψ(s)dm Sx (s) = S0 ψ • H T x (y)J T x (y)dm S0 (y).
The theorem of change of variables gives us, the precise form of J T x (y): For any y ∈ S 0 the differential of the holonomy maps is a linear isomorphism DH T x : T y S 0 ⊂ R n → T y S 0 ⊂ R n . Choosing two orthonormal basis of the n s -dimensional subspaces T y S 0 ⊂ R n and T y S 0 ⊂ R n (w.r.t. to the euclidean metric on R n ) the differential can be expressed as an n s × n s matrix and the Jacobian is simply the absolute value of the determinant. Now the fact, that H T x is C ∞ and depends continuously on x translates to the fact that J T x ∈ C ∞ (S 0 ) and depends continuously on x w.r.t. the C ∞ -topology. Let us now turn to the holonomy maps of the strong stable foliation. Here the situation is infinitely more complicated, as for a general Anosov flow the maps H S y are only Hölder continuous homeomorphisms. However it is known that they are absolutely continuous and that they possess a Jacobian J S y such that for any function ψ ∈ C c (T y ) Ty ψ(x)dLeb Ty (x) = T0 ψ • H S y (x)J S y (x)dLeb T0 (x).
The existence of such Jacobians for holonomies in dynamical systems can be found in various textbooks (see e.g. [BS02, Section 6.2]. It is however more difficult to find sharp statements on their regularity in x and y. The following sufficiently strong statement for our purpose can for example be found in [GLP13, Appendix E] Proposition 11. For any x, the map y ∈ S 0 → J S y (x) ∈ R >0 is in C ∞ (S 0 ) and the map x ∈ V wu → J S • (x) ∈ C ∞ (S 0 ) is continuous 1 Let us now combine all the ingredients in order to calculate the conditional densities. Given a function ψ ∈ C c (V ) we calculate V ψ(x, y)dx dy = dm Sx (s) dLeb T0 (x).
1 Note that in [GLP13, Appendix E] they state the result for the strong unstable foliation.
Furthermore they give even more precise information on the Hölder regularity of the map x → J S • (x) which is not relevant for our purpose.
We have thus shown that the conditional densities take the form .
Collecting the statements on the regularity of the Jacobians and holonomy maps, we conclude that for any x they are smooth and that the map x → δ x (H T x (•)) ∈ C ∞ (S 0 ) depends continuously on x w.r.t. the C ∞ topology. This statement ensures the uniform bounds on δ x k .
Remark 5. In the case of C ∞ -foliations we could take coordinate charts κ i such that S x = ({x}×R ns )∩V i . Then (12) would reduce to the ordinary Fubini theorem. For a general foliation such generalizations of Fubini theorems are a nontrivial task and there are counterexamples of foliations that are not absolutely continuous (see e.g. [BP07, Section 8.6]). | 2016-09-15T20:17:47.000Z | 2015-11-26T00:00:00.000 | {
"year": 2017,
"sha1": "aa0a903a49de0d16a84dcc7add15e523dcd26df0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1511.08338",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aa0a903a49de0d16a84dcc7add15e523dcd26df0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
264973347 | pes2o/s2orc | v3-fos-license | Study protocol for a longitudinal observational study of disparities in sleep and cognition in older adults: the DISCO study
Introduction Cognitive dysfunction, a leading cause of mortality and morbidity in the USA and globally, has been shown to disproportionately affect the socioeconomically disadvantaged and those who identify as black or Hispanic/Latinx. Poor sleep is strongly associated with the development of vascular and metabolic diseases, which correlate with cognitive dysfunction. Therefore, sleep may contribute to observed disparities in cognitive disorders. The Epidemiologic Study of Disparities in Sleep and Cognition in Older Adults (DISCO) is a longitudinal, observational cohort study that focuses on gathering data to better understand racial/ethnic sleep disparities and illuminate the relationship among sleep, race and ethnicity and changes in cognitive function. This investigation may help inform targeted interventions to minimise disparities in cognitive health among ageing adults. Methods and analysis The DISCO study will examine up to 495 individuals aged 55 and older at two time points over 24 months. An equal number of black, white and Hispanic/Latinx individuals will be recruited using methods aimed for adults traditionally under-represented in research. Study procedures at each time point will include cognitive tests, gait speed measurement, wrist actigraphy, a type 2 home polysomnography and a clinical examination. Participants will also complete self-identified assessments and questionnaires on cognitive ability, sleep, medication use, quality of life, sociodemographic characteristics, diet, substance use, and psychological and social health. Ethics and dissemination This study was approved by the Northwestern University Feinberg School of Medicine Institutional Review Board. Deidentified datasets will be shared via the BioLINCC repository following the completion of the project. Biospecimen samples from the study that are not being analysed can be made available to qualified investigators on review and approval by study investigators. Requests that do not lead to participant burden or that conflict with the primary aims of the study will be reviewed by the study investigators.
INTRODUCTION AND RATIONALE
Dementia, including Alzheimer's disease, is a severe manifestation of cognitive dysfunction and among the top 10 causes of death in the USA and globally. 1 Morbidity from less severe cognitive dysfunction is equally notable, since cognitive dysfunction alone, although not terminal in most cases, still requires progressive social and medical support. The prevalence of cognitive dysfunction is higher among persons with fewer socioeconomic resources, less education, or those who identify as black or Hispanic/Latinx. 2 Disparities in the cardiovascular and metabolic correlates of cognitive dysfunction are hypothesised to account for differences in cognitive function by race and ethnicity. 3 However, while these disparities are observed by race and ethnicity, they are likely attributable to differences in socioeconomic resources that provide access to preventive medical care and offer differential access to environmental resources (eg, healthy foods, safe spaces for physical activity) that promote ideal health. Given the irreversibility of cognitive dysfunction, identifying and addressing these modifiable factors that are associated with cognitive decline could reduce disparities in cognitive dysfunction. 4 Sleep disturbances are one such potentially modifiable correlate of cognitive dysfunction that vary by race/ethnicity and socioeconomic
STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ Multiple race and ethnic groups are examined in a single study with identical instrumentation and methodology applied across groups.
⇒ This is a longitudinal follow-up study to determine temporality.
⇒ Gold-standard instruments provide objective determination of habitual sleep and sleep architecture.
⇒ The sample is not generalisable to the population since random sampling is not used.
Alternatively, short sleep duration or sleep disorders may influence cognitive dysfunction and dementia via inflammation and endothelial dysfunction. 13 14 These patterns are observed across the lifecourse, including in epidemiological studies of middle-aged and older-aged adults. 19 Based on prior evidence that sleep characteristics are worse among non-white older adults and the biologically plausible pathways by which poor sleep could influence cognitive functioning, we formed a longitudinal observational epidemiological study to investigate how disparities in sleep among older adults influence cognitive outcomes.
Specific aims
The primary objectives of the Disparities in Sleep and Cognition (DISCO) study are twofold. The first is to examine numerous social, behavioural and health-related characteristics as possible explanations for racial and ethnic sleep health disparities. The second is to determine whether disparities in sleep health by race and ethnicity are significant contributors to disparities in the rate of cognitive decline over the course of approximately 2 years.
The study has two primary aims:
1. Define the contribution of psychological well-being, social well-being and clinical characteristics to sleep disparities among older adults. We will test the hypotheses that sleep disparities by race and ethnicity are partially explained by lower psychological well-being, greater stress, lower self-efficacy and lower social support among non-white adults, and by a higher burden of prevalent comorbidities, such as heart failure or diabetes, among non-white adults.
2. Determine whether sleep disturbances mediate racial or ethnic differences in the change in cognitive function over 24 months. We hypothesise that inadequate sleep will be associated with greater cognitive decline and will partially mediate the racial and ethnic differences in change in cognitive function over 2 years. We will additionally explore mechanistic pathways to account for these associations, including the presence of cerebral small vessel dysfunction and insulin resistance.
Study design
The DISCO study is a longitudinal observational cohort study that will examine 450 older adults aged 55 years and older, with equal proportions of non-Hispanic black, non-Hispanic white and Hispanic/Latinx adults of any race. Participants will be examined at baseline and again approximately 24 months later. To account for 10% lost to follow-up, we are enrolling approximately 495 participants at baseline. Women and men will be enrolled at proportions that reflect the older age distribution according to the 2019 US Census (54% and 46%, respectively). Study participant enrolment began in July 2019 and will continue through December 2023. Participant follow-up will take place through March 2024.
Study setting
This is a single-site study in the metropolitan Chicago area and surrounding suburbs, and nearby northwest Indiana and southern Wisconsin.
Recruitment methods
Participants are recruited through a combination of strategies including random sampling of commercially available telephone listings, institutional electronic health records, advertisements on social media and in publicly available locations, community engagement strategies and word of mouth (ie, snowball sampling).
1. Our recruitment strategies using commercially available sampling and electronic health record sampling are similar. Our staff mail a letter or send an email to potential participants to explain the study and invite them to contact study staff to assess eligibility. If potential participants do not attempt to reach us, then our staff call them to explain the study, invite participation, screen for eligibility and obtain consent.
2. Advertisements for our study are placed in newspapers, at community sites (eg, public libraries, cafes, senior living facilities), on public transportation and on Facebook. The advertisements include a link to our study website and staff phone numbers so that individuals can request more information about the study and complete preliminary screening questions to determine their eligibility.
3. Community engagement activities call for the active involvement of community leaders, organisations and other sectors or stakeholders (such as departments of health) working with academic institutions in the development, implementation and evaluation of research priorities and activities. 20 The DISCO team engaged with a number of Chicago-area community-based grassroots and health and human services organisations, local and state government representatives, professional organisations and other sectors that are not traditionally involved in research. DISCO convenes a Community Advisory Board (CAB) approximately every 6 months to provide recommendations on project activities, suggest community engagement opportunities, discuss strategies for recruitment and retention, build relationships with community groups, and inform our team about community events. Most CAB members were identified from organisations that serve black and Hispanic/Latinx communities in the city of Chicago. While a core group of 3-4 CAB members
has remained engaged in the study since its inception, the CAB also includes some older adults who enrolled in the DISCO study and joined the CAB after their study participation. When study results are available, the CAB will advise on strategies to disseminate the findings back to the community that contributed to the research.
4. The DISCO staff are also using 'snowball sampling' to recruit participants. Staff ask participants who complete the study to refer us to people outside of their household who are sociodemographically similar (based on race, ethnicity and age) and who might also be interested in participating. Study staff ask the participants to share our study flyer and website with these friends and family.
Impact of the COVID-19 pandemic on recruitment
The COVID-19 pandemic greatly impacted study recruitment due to the vulnerability of older adults to severe outcomes from SARS-CoV-2. 21 The research clinic was closed from 16 March 2020 to 1 June 2020. DISCO pivoted to virtual-only engagement strategies and recruitment activities from March 2020 to June 2022. Following advice from our CAB, we provided virtual educational sessions on relevant topics to increase the community's familiarity with our research team. Additionally, we expanded our digital and print recruitment campaigns and tailored them to our target audience. During this period, we maintained a slower but steady rate of enrolment.
Eligibility criteria
Table 1 summarises the inclusion and exclusion criteria. Eligible participants are given the Montreal Cognitive Assessment (MOCA) test either in person or via the telephone. 22 To administer over the phone (a process that was approved following the pandemic to avoid unnecessary in-person contact), the BLIND MOCA is used, which is a version designed for people with visual impairment. Both versions contain cognitive domains such as attention, concentration, memory, language, conceptual thinking, calculation and orientation. A pass score on the original MOCA is 26/30, and a pass score on the BLIND MOCA is 18/22. Participants who indicate that their preferred language is Spanish are administered the validated Spanish-language version of the MOCA. If they pass, study staff solicit verbal consent over the telephone and obtain a signed consent using REDCap.
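Purely as an illustration of the screening logic described above, a minimal sketch follows. The function name and structure are hypothetical, and the cut-offs are taken from the text (26/30 for the standard MOCA, 18/22 for the BLIND MOCA), interpreting the stated pass scores as minimum passing values.

```python
def passes_moca_screen(score: int, blind_version: bool = False) -> bool:
    """Return True if the score meets the study's cognitive screening cut-off.

    Cut-offs follow the protocol text: 26/30 on the standard MOCA,
    18/22 on the BLIND MOCA (telephone / visually impaired version).
    Assumes "pass score" means the minimum passing value.
    """
    cutoff, maximum = (18, 22) if blind_version else (26, 30)
    if not 0 <= score <= maximum:
        raise ValueError(f"score must be between 0 and {maximum}")
    return score >= cutoff

# Example: 27 on the standard MOCA passes; 17 on the BLIND MOCA does not.
assert passes_moca_screen(27) is True
assert passes_moca_screen(17, blind_version=True) is False
```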
DATA COLLECTION
Examination structure
Between July 2019 and March 2020, most participants came to the research clinic twice during the baseline examination: first to provide consent, complete the questionnaires described in table 2 and undergo the clinical examination, which included a fasting blood draw, anthropometric measures, blood pressure assessment, a 6 min walk test, the National Institutes of Health (NIH) Toolbox to assess cognitive function, and receipt of the sleep equipment that they would wear in their homes. On or around the eighth day, participants returned to the research clinic to complete additional cognitive function testing, gait analysis and measurement of cerebral blood flow.
From October 2020 onward, participants are consented into the study remotely via REDCap and invited to complete the questionnaires prior to the in-person clinical examination. Following receipt of the questionnaires, they attend the clinical examination and receive the sleep devices (Actiwatch and Sleep Profiler). Participants are given the option of returning on the eighth morning or completing the remaining set of cognitive testing elements on that same day. We document the order in which the examination components were captured and devices administered, and we can account for any variability in our analyses, though we do not anticipate that it will influence our associations.
Lifestyle Behaviour Domain
Global Physical Activity Questionnaire (GPAQ) 63: 13-item scale examining several components of physical activity, including intensity, duration and frequency.
Tobacco History: Item questionnaire that determines when an individual began smoking and their current smoking habits.
Automated Self-Administered (ASA) 24-Hour Dietary Recall 64: Automated, self-administered, web-based 24-hour dietary assessment tool.
Substance Use: 12-item questionnaire that determines the use of alcohol and marijuana.
Psychosocial Domain
STRAIN 65: The Stress and Adversity Inventory (STRAIN) is a secure, online stress assessment system that measures individuals' lifetime exposure to different types of acute and chronic stress that can affect mental and physical health. The system is intended to combine the reliability and sophistication of an interview-based measure of stress with the simplicity of a self-report instrument. To accomplish this goal, the STRAIN enquires about 75 different types of stressors (Adolescent STRAIN) or 55 different types of stressors (Adult STRAIN) that cover all major life domains (eg, health, intimate relationships, friendships, children, education, work, finances, housing, living conditions, crime) and several social-psychological characteristics (eg, interpersonal loss, physical danger, role change, entrapment).
PROMIS Social Isolation-Short Form 4a 66: 6-item scale examining perceived social isolation. Higher scores indicate higher perceived isolation.
Perceived Stress Scale 68: 10-item scale assessing the degree to which situations in the participant's life are appraised as stressful. Higher scores indicate higher levels of stress.
University of California, Los Angeles Loneliness Questionnaire 69: 8-item scale assessing perceived isolation from others. Higher scores indicate higher perceived loneliness.
Generalized Anxiety Disorder 70: 7-item questionnaire assessing self-reported anxiety symptoms in the past 2 weeks. Higher scores indicate higher anxiety symptoms.
Brief Resilience Scale 71: 6-item questionnaire used to assess the ability to bounce back. Higher scores indicate higher resilience.
Bereavement: 1-item questionnaire examining whether participants have lost a spouse/partner or loved one in the past 6 months.
United States Department of Agriculture Food Insecurity 72: Item questionnaire used to measure household food security and food insecurity. A high score indicates very low food security.
Participants repeat these examination procedures 6-24 months after their baseline assessment.
All data are stored electronically via REDCap and on a secure, encrypted, password-protected server.
Assessment of cognitive function
A primary outcome in the study is cognitive function as determined using the NIH Toolbox cognition battery. 23 24 Participants whose preferred language is Spanish are administered the NIH Toolbox Spanish translation. DISCO participants complete the tests that assess attention, 25 executive and memory domains, 26 and episodic memory 27 28 because of the overlap between Alzheimer's disease and vascular cognitive impairment (cerebral small vessel disease) in this age group. 29 A second measure of cognitive abilities is determined using the PROMIS Applied Cognition Abilities Instrument, which is a 16-item measure evaluating self-impressions of cognitive function in the previous 7 days in areas such as mental acuity, concentration and memory. 30
Sleep assessment
We are using several methods to assess sleep health in the DISCO study that are consistent with gold-standard measures for in-home unattended assessments in observational population studies.
Wrist actigraphy
Sleep-wake activity is measured over 7-8 days using wrist activity monitors (Actiwatch Spectrum Plus or Pro, Philips Respironics). Wrist actigraphy has been validated against polysomnography (the gold standard of sleep measurement), demonstrating a high correlation for sleep duration among both people with insomnia (r=0.82) and healthy people (r=0.97), with a discrepancy ranging from 12 to 25 min. 31 This method is thought to be a more accurate representation of habitual sleep patterns than polysomnography because it is less disruptive to sleep, can be carried out in the home, and is usually averaged over multiple days. Participants also complete a simple sleep diary and are asked to push the event marker button each time they try to go to sleep and when they wake up.
The actigraphy recordings are scored by a trained technician following specific study guidelines regarding the use of event markers, the sleep diary, and the activity and light data from the device. We document which method was used to determine the start and end of all rest intervals. All scored recordings are reviewed by the PI and Sleep Specialist Dr. Knutson. We use the validated algorithms included in the Actiwatch software analysis system (Actiware) to calculate several measures for each rest interval.
The primary actigraphic estimates of habitual sleep include: (1) sleep duration, (2) sleep percentage (the percentage of the sleep period actually spent sleeping) and (3) sleep fragmentation (an index of restlessness). We also are calculating rest-activity rhythms and sleep regularity indices using previously published methods. 32 33
Type 2 polysomnography
Attended, in-laboratory, full-polysomnography recordings are the gold-standard method to assess sleep architecture (ie, sleep stages) and respiratory events; however, this method is burdensome on the subject. Thus, we have selected an easy-to-use electroencephalography (EEG) monitor that can be self-applied and worn at home. We are using the Sleep Profiler system (Advanced Brain Monitoring, Carlsbad, California, USA) to assess sleep stages, spectral power and respiratory events. The system has configurable acquisition of up to six channels of electrophysiological signals to acquire EEG, electromyography, electrooculography and electrocardiography signals. The device also includes respiratory measures via airflow adapter, cannula, wireless WristOx, and Thorax and Abdomen Piezo belts. Several validation studies have been performed comparing the Sleep Profiler to polysomnography and demonstrated strong agreement between these methods. 34 35 In addition, analysis of two nights of recording demonstrated stability in the measures of the sleep stages, indicating the Sleep Profiler provides valid measures of sleep stages with a single night. 36 Participants are asked to wear the device for one night and are given detailed instructions and demonstrations, along with both videos and written instructions. The system firmware monitors signal quality to ensure that the sensors are properly applied and that high-quality signals are being acquired. Impedance checking is automatically initiated when acquisition begins. Voice messages are delivered to the patient if the impedances are too high or the sensors have become detached. We disable any voice messaging during sleep, however, because we do not want to impair the sleep of the participants. The acquired signals are saved in a universal data format (European Data Format) that can be analysed with third party software, if needed. For our primary analyses, we are using the Sleep Profiler cloud that includes automatic scoring and staging. The pulse rate is analysed to detect autonomic activation. Head movement is used to assist in detecting sleep/wake and to identify periods with gross movement which result in artefact or indicate behavioural arousals. The software provides visual presentation of the recordings and the ability to rescore by a technician.
Social Support 73 Questionnaire indicating the number of people available for support and satisfaction with that support.
All recordings are reviewed by a trained technician who modifies the analysis if needed. The Sleep Profiler System provides the following measures: minutes and percentage of stages of rapid eye movement (REM) sleep, non-REM sleep (N1, N2, N3), sleep latency, wake after sleep onset (WASO), sleep efficiency, arousals, pulse rate and average power spectra, including delta (or slow-wave activity). The software also detects apnoeas and hypopnoeas based on either 3% or 4% desaturation. We are using the American Academy of Sleep Medicine definition of hypopnoea, in which there is a ≥3% oxygen desaturation from pre-event baseline and/or the event is associated with an arousal. 37 We then calculate the Apnoea-Hypopnoea Index (AHI; events/hour).
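As a simple illustration of this last step, the AHI is just the count of scored respiratory events divided by hours of sleep; the sketch below is illustrative only, and its variable names are not the Sleep Profiler export fields.

```python
def apnoea_hypopnoea_index(n_apnoeas, n_hypopnoeas, total_sleep_time_min):
    """AHI = (apnoeas + hypopnoeas) per hour of sleep."""
    return (n_apnoeas + n_hypopnoeas) / (total_sleep_time_min / 60.0)

# example: 42 scored events over 7 hours of sleep gives an AHI of 6.0 events/hour
print(apnoea_hypopnoea_index(12, 30, 420))
```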
Sleep questionnaires
Participants complete several questionnaires to assess subjective sleep quality and chronotype preferences. 38 The following validated instruments are collected at both time points: the Pittsburgh Sleep Quality Index, 39 Insomnia Severity Index, 40 Morningness-Eveningness Questionnaire, 41 Epworth Sleepiness Scale 42 and the Brief Index of Sleep Control. 43 See table 2 for a description of each instrument.
Other measurements
Cerebral blood flow velocity is assessed via transcranial Doppler (TCD) ultrasound measurements. 44 TCD provides a powerful tool for non-invasive assessment of cerebral vascular responses to various physiological challenges, such as motor or cognitive activation or changes in blood pressure and end-tidal carbon dioxide, which we know are regulated at the level of arterioles or resistance vessels of the brain (cerebral small vessels). After 10 min of resting data are recorded, participants perform cognitive tasks and then perform the cerebral vasoreactivity test, which is assessed using the CO2 breathing and hyperventilation method. The resting data are used to calculate cerebral autoregulation and pulsatility index, the cognitive trial will be used to calculate neurovascular coupling and the breathing trial will be used to measure vasoreactivity. These cognitive tests are completed only at baseline for all but a small proportion of participants (<5%).
Table 2 lists the remaining domains that are assessed in the study.We selected these domains with the goal of capturing the multidimensional factors that could influence sleep or cognitive function.We assessed a broad set of social determinants of health that we know underlie disparities in health behaviours and disease outcomes by race and ethnicity.All questionnaires that did not have Spanish language translations available were translated by our study team or professional translation company and reviewed by two additional Spanish-speaking coinvestigators.
Sociodemographic characteristics including race, ethnicity, age and educational level were determined based on self-report from the participants. Wherever possible, validated questionnaires were used to assess covariates that we hypothesised were associated with sleep, cognitive function or both. 47 48
DATA MANAGEMENT
Sample size and power calculation
Analyses in both aims will include the entire proposed sample of 495 recruited participants. Power to detect clinically meaningful effect sizes was calculated conservatively assuming 9% (n=450) and 18% (n=405) of participants would have missing data for primary analyses or would be lost to attrition. A study that examined sleep quality, including percentage of WASO, 49 found that sleep after days with lower subjective stress had a lower percentage of WASO than sleep after days with higher stress (mean WASO%=12.2% vs 16.4%, SD=12.1% vs 14.9%). We have 91% power to detect these differences with the proposed sample of 450 participants and 87% power to detect this difference assuming 10% attrition (n=405) in aim 1 analyses. For power analyses using the structural equation modelling approach in aim 2, we calculate the power to detect both unacceptable model fit using the root mean square error of approximation (RMSEA) and effect sizes. With a sample size of n=450, we achieve 92% power to detect an RMSEA of 0.1 (unacceptable model fit) against the null hypothesis of RMSEA=0.05 (good model fit). With a sample size of n=450 participants and using RMSEA=0.05 under the null hypothesis of a good model fit and RMSEA=0.1 under the alternative hypothesis of unacceptable model fit, we can achieve a power of 92% by observing 5 variables of interest. Assuming 10% are lost to follow-up after baseline, with the sample size of n=405 we can achieve a power of 89%. 50 With the proposed sample size, we can detect an effect size of 0.2 (small-moderate effect size) with five observed variables and five latent variables. 51 52 Furthermore, with n=405, we achieve 87% power to detect an OR of 3:1 in cognitive decline for the group with adequate sleep compared with the group with inadequate sleep.
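For illustration, the stated two-group power figures can be approximated with an off-the-shelf power calculation; the sketch below assumes a two-sided two-sample t-test with equal group sizes and the WASO% means and SDs quoted above, which is a simplification of the study's actual analysis plan.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

m1, m2, sd1, sd2 = 12.2, 16.4, 12.1, 14.9
d = abs(m2 - m1) / np.sqrt((sd1**2 + sd2**2) / 2)   # Cohen's d with pooled SD (~0.31)

analysis = TTestIndPower()
for n_total in (450, 405):
    pw = analysis.power(effect_size=d, nobs1=n_total // 2, alpha=0.05)
    print(n_total, round(pw, 2))   # roughly 0.91 and 0.87 under these assumptions
```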
The definition of 'inadequate sleep' will vary based on the hypotheses under study. We are capturing a comprehensive set of measures of sleep health, including habitual sleep as well as sleep architecture. We plan to examine multiple dimensions of sleep health, including sleep duration and wake after sleep onset (WASO), but in secondary analyses we can explore the wealth of other measurements that we collected and develop a composite 'sleep health' score as has been proposed in the literature. We have the flexibility to generate composite sleep health scores so that we can produce research that is aligned with contemporary research objectives.
Quality control and quality assurance
Study data are collected using electronic methods that constrain responses to plausible ranges. Participants either enter responses directly into the REDCap system or data are entered by study staff members who interviewed study participants. The analytical team meets monthly to review data completeness, data quality and additional data topics as needed. This process will ensure proper data management, scoring of questionnaires and completion of study procedures in a timely manner.
General considerations
In all analyses, distributional characteristics of each measure and residual diagnostics will be used to assess modelling assumptions. As needed, transformations, non-parametric methods and/or inclusion of higher-order (quadratic, cubic, interaction, etc) terms may be considered in analytic models. Statistical estimates (eg, regression coefficients) will be reported with accompanying 95% CIs and p values as applicable. We will use a type I error rate of 0.05 to assess statistical significance, while also qualitatively determining if the effect magnitude is clinically meaningful.
Missing data
The multilevel models planned for analysis of longitudinal data are generally robust to unbalanced data across study time points. Nonetheless, multiple imputation with chained equations will be used to examine the sensitivity of findings to missing data. 53 For non-ignorable missing data, we will conduct sensitivity analyses using non-ignorable pattern-mixture and selection models to investigate the robustness of our conclusions across the different models for missing data.
Analyses for aim 1
To evaluate the contribution of psychosocial characteristics to sleep disparities, we will compare the coefficient estimate for race in models including vs excluding each characteristic of interest. The base model for the primary analyses will be a linear regression model with WASO duration as the outcome and race, sex and age as explanatory variables. Each subsequent model will add a psychosocial characteristic (eg, depressive symptoms) to the base model, and we will calculate the percentage reduction in the estimated coefficient for race in the expanded model compared with the base model. This same modelling strategy will be used to evaluate the contribution of characteristics to disparities in self-reported sleep quality (eg, PSQI) or sleep disorders. Secondary analyses will examine the cross-sectional associations between measured psychosocial variables and objective and subjective measures of sleep. We will fit regularised regression models (eg, LASSO, elastic net) for each sleep outcome (eg, WASO, PSQI) that include interaction terms between race and psychosocial characteristics to assess effect modification by race. 54 These models will be adjusted for sociodemographic characteristics and comorbidities measured at the baseline visit. The final models will identify the psychosocial variables that best explain each sleep outcome. Additional analyses will stratify by obstructive sleep apnoea (OSA, ie, AHI <15 or ≥15) and by sex. OSA is not an exclusion criterion.
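A minimal sketch of the base-versus-expanded comparison described above, using statsmodels formulas, is given here; the data file, column names and the race contrast label are placeholders rather than the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("disco_baseline.csv")            # placeholder analytic dataset

base = smf.ols("waso ~ C(race) + C(sex) + age", data=df).fit()
expanded = smf.ols("waso ~ C(race) + C(sex) + age + depressive_symptoms", data=df).fit()

contrast = "C(race)[T.Black]"                     # illustrative dummy-coded contrast
b_base, b_exp = base.params[contrast], expanded.params[contrast]
print("percentage reduction in race coefficient:", 100 * (b_base - b_exp) / b_base)
```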
Analyses for aim 2
We will assess whether WASO is associated with greater cognitive decline over 24 months via mixed-effects models for change in continuously determined cognitive function score, with a random intercept for participant to account for the correlation that arises from measuring multiple time points from an individual. 55 The mixed-effects models will be adjusted for time-invariant (eg, sex) and time-varying (eg, body mass index, comorbid disease) covariates. Results from this analysis will inform whether baseline sleep measurements or change in sleep measurements will be used in subsequent analyses. 57 58 Furthermore, we will test for interactions between the sleep variables and race/ethnicity to determine if different aspects of sleep vary by race. We will use the same analytic approach to test to what extent cerebral vascular blood flow or insulin resistance mediates the association between WASO and other sleep metrics and cognitive function.
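A minimal sketch of one possible specification of this model in statsmodels (MixedLM with a random intercept per participant) follows; the file name, column names and covariate set are placeholders, not the study's analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("disco_longitudinal.csv")   # placeholder long-format dataset

model = smf.mixedlm("cognition ~ months * waso + age + C(sex) + bmi",
                    data=long_df, groups=long_df["participant_id"])
result = model.fit()
print(result.summary())     # the months:waso interaction carries the decline-by-sleep effect
```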
ETHICS AND DISSEMINATION
The DISCO study is approved by the Northwestern University Feinberg School of Medicine Institutional Review Board.As stated above, informed consent is obtained in writing from participants and stored in Study Tracker.All data will be deidentified before sharing the results, posing no risk to participant confidentiality.Deidentified datasets will be shared via the BioLINCC repository following the completion of the study.In addition to the ongoing processing of biospecimens during the study, study investigators are banking serum, plasma and buffy coat.Samples that are not being analysed can be made available to qualified investigators with a materials transfer agreement and/or data use agreement as applicable.The study investigators will review and consider all requests that do not lead to participant burden nor conflict with the primary aims of the study.
Twitter Mandy L Pershing @mandypershing and Mandy Wong @MandyWongVo Contributors All authors have contributed to the design of this protocol.AG, DC, FS, KH, KK, MC, PCZ, SA and T-HV initiated and conceptually designed the project.SC is acquiring data.This protocol was drafted by KH, KK, MLP, MC, MW, SC, SJA and SSR, and was refined for critically important content by DC, SSR and T-HV.Statistical advice was provided by MW and SJA.KK and MC obtained funding for the study.All authors approved the final manuscript.
Table 1
Inclusion and exclusion criteria
Table 2
List of survey instruments used in this study
Assessment tool for the early detection of mild cognitive impairment. Assesses short-term memory, visuospatial abilities, executive function, attention, concentration and working memory, language and orientation to time and place.
Questionnaire that records information about individuals' medical history. High scores indicate better health.
Short Form-36 Questionnaire 60 36-item measure assessing health-related quality of life. Scores are summarised in a Physical Component Summary and a Mental Component Summary. Higher scores indicate better health status.
62-item measure examining heart failure symptoms, physical limitations and quality of life. Lower scores represent more severe symptoms/limitations.
PROMIS Global Health 62 10-item scale that assesses physical, mental and social aspects of health. Higher scores indicate better health.
Time Use Questionnaire 7-item scale that assesses the frequency and duration of activities.
COVID-19 Questionnaire 15-item questionnaire related to exposure to COVID-19. | 2023-11-04T06:18:20.817Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "138fa692f7cd67263c0ce5d75b82c1c47dce1ffd",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "BMJ",
"pdf_hash": "de35053ab556ecfedca9f705178c0e48fbbd1a30",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119700054 | pes2o/s2orc | v3-fos-license | Hasse--Schmidt Derivations and Cayley--Hamilton Theorem for Exterior Algebras
Using the natural notion of {\em Hasse--Schmidt derivations on an exterior algebra}, we relate two classical and seemingly unrelated subjects. The first is the celebrated Cayley--Hamilton theorem of linear algebra, "{\em each endomorphism of a finite-dimensional vector space is a root of its own characteristic polynomial}", and the second concerns the expression of the bosonic vertex operators occurring in the representation theory of the (infinite-dimensional) Heisenberg algebra.
Introduction
In 1937, Hasse and Schmidt introduced the notion of higher derivations [8], nowadays called Hasse-Schmidt (HS) derivations. Let (A, * ) be an algebra over a ring B, not necessarily commutative or associative. A HS-derivation on A is a B-algebra homomorphism, D(t) : A → A[[t]], that is, a B-linear mapping satisfying D(t)(a 1 * a 2 ) = D(t)a 1 * D(t)a 2 , ∀a 1 , a 2 ∈ A.
A fundamental example of a HS-derivation is given by the map sending any function f = f(z), holomorphic in some domain of the complex plane, to its formal Taylor series, f(z) ↦ f(z + t) = Σ_{n≥0} f^{(n)}(z) t^n/n!.
The aim of Hasse and Schmidt was to find a counterpart to the Taylor series that would work in positive characteristic. Their definition does not require division by integers and is therefore particularly suitable for this purpose. Schmidt later applied the theory to investigate Weierstrass points and Wronskians on curves in positive characteristic [15].
In a number of papers motivated by Schubert Calculus [3,6] (see also the book [5]), one of us proposed to study HS-derivations for exterior algebras.
If A is a commutative ring with unit, M a module over A, and ∧M its exterior algebra, then a HS-derivation on ∧M is a ∧-homomorphism D(t) : ∧M → (∧M)[[t]], that is, a linear mapping satisfying D(t)(u ∧ v) = D(t)u ∧ D(t)v for all u, v ∈ ∧M. In this paper we consider HS-derivations D(t) = Σ_{i≥0} D_i t^i with D_0 = 1, the identity on ∧M. Comparing the coefficients of t in these equations gives, in degree one, u ∧ D_1 v = D_1(u ∧ v) − D_1 u ∧ v. That is why in [3] we call (1.3) the integration by parts formulas.
In the present article we show how these simple formulas link two classical and seemingly unrelated subjects (one finite-dimensional and the other infinitedimensional), apparently leading to a unified interpretation.
One topic is the classical Cayley-Hamilton Theorem of Linear Algebra saying that each endomorphism f of an r-dimensional vector space M is a root of its own characteristic polynomial det(t·1 − f). Let us reformulate this theorem as a linear recurrence relation on the sequence of endomorphisms (f^j)_{j≥0},
(1.4) f^{r+k} − e_1 f^{r+k−1} + · · · + (−1)^r e_r f^k = 0, for all k ≥ 0,
where det(t·1 − f) = t^r − e_1 t^{r−1} + · · · + (−1)^r e_r, and 0 denotes the zero endomorphism. Consider D(t) = Σ_{i≥0} D_i t^i, the unique HS-derivation on the exterior algebra of M such that D_i|_M = f^i, i ≥ 0. It turns out that the sequence (D_i)_{i≥0} of endomorphisms of ∧M satisfies relations similar to (1.4); see Theorem 2.3 in Section 2.1 for the exact formulation, and Section 3 for the proof.
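The recurrence (1.4) is easy to check numerically; the following sketch is an illustration (not part of the original paper) and uses the fact that np.poly returns the characteristic-polynomial coefficients [1, −e_1, e_2, …, (−1)^r e_r] of a square matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 4
f = rng.standard_normal((r, r))
c = np.poly(f)                                   # [1, -e1, e2, ..., (-1)^r e_r]

for k in range(3):                               # check a few shifts k = 0, 1, 2
    acc = sum(c[i] * np.linalg.matrix_power(f, r + k - i) for i in range(r + 1))
    print(k, np.allclose(acc, np.zeros((r, r)))) # True for every k
```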
The other topic concerns bosonic vertex operators arising in the representation theory of the (infinite-dimensional) Heisenberg algebra (see, for example, [9]). As we observe in Section 2.2, any countably generated vector space over the rationals can be equipped with the structure of a free module of finite rank r over a ring of polynomials in r variables with rational coefficients, for any integer r > 0. We present the construction in Section 4, and in Section 5 we apply it to obtain the "finite-dimensional approximation" to the well-known expressions of the vertex operators Γ(t) and Γ * (t) generating the bosonic Heisenberg vertex algebra (see [9, p. 56]). We interpret Γ(t) and Γ * (t) as the limit, when r → ∞, of the ratio of two characteristic polynomials associated to the shift endomorphisms of steps +1 and −1, respectively. The precise formulation can be found in Section 2.3.
Our work is based on interpreting (1.3) as a sort of abstract Cayley-Hamilton theorem, holding for general invertible HS-derivations on exterior algebras of arbitrary modules (not necessarily free). If the module is countably generated, then (1.3) produces a sequence of Cayley-Hamilton relations (3.5) which specialize to the classical Cayley-Hamilton formulas (2.7) when considering Hasse-Schmidt derivations associated to endomorphisms of finitely generated free modules.
Plan of the paper. In Section 2 we formulate the main statements. Section 2.1 is devoted to our extension of the Cayley-Hamilton theorem on an exterior algebra of a finite rank free module, which is Theorem 2.3. The proof can be found in Section 3, which also includes the necessary information on Hasse-Schmidt derivations, and the discussion concerning the case when the ring A contains the rationals.
In Section 2.2, we equip a countably infinite-dimensional Q-vector space with a natural structure of a free module of rank r over the ring of polynomials in r variables with rational coefficients, for any integer r > 0. The construction is based on a Giambelli-type formula. The details are explained in Section 4.
In Section 2.3 we apply our construction to the bosonic Heisenberg vertex algebra and interpret the truncation of bosonic vertex operators as the ratio of two characteristic polynomials, respectively associated to the shift endomorphisms of step ±1; see Section 5 for detailed explanation.
Acknowledgments We thank the referees for their efforts reading the first version of the paper. Their sharp criticism really contributed to the improvement of the article. In particular, we appreciate the help of the referee who found a mistake in our text. Her/his suggestions allowed us to simplify our proof. We are also grateful to the other referee who demanded more motivations.
As a consequence, any formal power series P(t) = Σ_{i≥0} p_i t^i, where p_i ∈ End_A M, clearly defines a HS-derivation P(t) on ∧M. If, in addition, p_0 = 1, the identity on M, then the formal inverse P^{−1}(t) ∈ (End_A M)[[t]] defines the inverse HS-derivation.
Proposition 2.1. Let M be an A-module of at most countable rank, and f ∈ End_A M. Set f^0 = 1. Then the HS-derivation D(t) determined by D(t)|_M = Σ_{i≥0} f^i t^i is invertible, and its inverse D̄(t) is determined by D̄(t)|_M = 1 − f t.
Our extension of the Cayley-Hamilton theorem concerns modules of finite rank. If M has rank r over A, then ∧^r M has rank 1, and so D(t) and D̄(t) act on ∧^r M by multiplication by some formal power series. Indeed, by (2.4), D̄(t) acts on ∧^r M as multiplication by a series E_r(t), and D(t) as multiplication by a series H_r(t).
Remark 2.2. (1) Clearly "the eigenvalue" H_r(t) of D(t) on ∧^r M is the formal inverse of E_r(t), i.e., (2.5) H_r(t)E_r(t) = 1.
If we write H r (t) = j≥0 h j t j , then (2.4) and (2.5) determine each h j as a polynomial of e 1 , . . . , e r . For example, h 0 = 1, h 1 = e 1 , h 2 = e 2 1 − e 2 etc. (2) It is worth emphasizing that {e i } and {h j } are related in exactly the same way as the elementary and the complete symmetric functions of r variables, since E r (t) and H r (t) are their generating functions, respectively (see, for example, [13, I 2]).
For f ∈ End_A M, one can write the HS-derivations D(t) and D̄(t), defined by (2.2) and (2.3) respectively, in the form of power series whose coefficients are endomorphisms of ∧M. According to (2.4), the characteristic polynomial of f is t^r E_r(1/t) = t^r − e_1 t^{r−1} + · · · + (−1)^r e_r. Hence, for i = r + k, the restriction of (2.7) to M gives the classical Cayley-Hamilton theorem (1.4).
2.2. A look at the infinite-dimensional case. Let M_0 be a Q-vector space with a countable basis, and ∧M_0 = ⊕_{j≥0} ∧^j M_0 its exterior algebra. We equip ∧M_0 with a structure of a free module of rank r over the ring of polynomials in r variables with rational coefficients, for any integer r > 0. See Section 4 for details.
We fix a basis (b_j)_{j≥1} of M_0, and define the shift operators σ_{+1}, σ_{−1} on M_0 by their action on this basis. One can attach to each of the endomorphisms σ_{+1}, σ_{−1} a unique HS-derivation and its inverse, as in (2.2), (2.3). In this subsection we need only the HS-derivations generated by σ_{+1}. We shall denote by σ_{+i}, σ̄_{+i} : ∧M_0 → ∧M_0 the coefficients of t^i in σ_+(t) and σ̄_+(t), respectively. In the next subsection, the HS-derivations corresponding to σ_{−1} will also appear. Let us fix r > 0. It is convenient to enumerate the basis of ∧^r M_0, which corresponds to (b_j)_{j≥1}, by partitions λ = (λ_1 ≥ · · · ≥ λ_r ≥ 0) of length at most r. We write P_r for the set of all such partitions, and denote the basis vectors accordingly. Let us equip ∧^r M_0 with a B_r-module structure via (2.11), where E_r(t) is given by (2.4). In terms of the inverses, see (2.5), the same structure is given by (2.12). The interpretation of the e_j's and h_j's as the elementary and the complete symmetric functions of r variables, see Remark 2.2 (2), suggests considering B_r as a Q-vector space generated by the Schur polynomials (see, for example, [13, I 3]). Here the h_j's are defined as in Remark 2.2 (1) for j ≥ 0, and h_j = 0 for j < 0. According to Giambelli's formula (as in [3, p. 321]), this allows us to equip M_0 with a multiplicative structure over B_r, see Proposition 4.3. Denote M_0, endowed with this multiplicative structure, by M_r. In Section 4, we check that • M_r is a B_r-module of rank r freely generated by b_1, . . . , b_r.
• r M r is r M 0 with the B r -module structure defined by (2.11) or (2.12).
• e_i is the eigenvalue of σ_{+i} restricted to ∧^r M_r.
Remark 2.4. The notion of HS-derivation on an exterior algebra enables one to extend some finite-dimensional linear algebra concepts (like eigenvalues and characteristic polynomials) to an infinite-dimensional situation. Indeed, an endomorphism of an infinite-dimensional vector space does not have a characteristic polynomial, whereas the corresponding HS-derivation is still defined.
Finite-dimensional approximations of bosonic vertex operators.
We apply the construction of the previous subsection in order to get a "finitedimensional approximation" of the well-known expression of the vertex operators occurring in the boson-fermion correspondence. We interpret this approximation as the ratio of certain characteristic polynomials.
Details are in Section 5, see also [5]. Take the polynomial ring of countably many indeterminates, B = Q[x 1 , x 2 , . . .], and define the bosonic vertex operators, following [9, p. 56], We find finite-dimensional counterparts of these operators using the symmetric functions interpretation. Namely, similarly to the finite-dimensional case, define E ∞ (t) and H ∞ (t), as the generating functions of the elementary and the complete symmetric functions of a countable set of variables, say, (ξ k ) k≥1 . Consider also x j = j k≥1 ξ j k , the power sum symmetric functions, see [13, I 3]. We have In order to define Γ r (t) and Γ * r (t) for r > 0, use the notation of Section 2.2. In particular, the ring (2.10) is freely generated by the Schur polynomials (2.13), and r M r is spanned over B r by [b] r 0 , according to (2.14). Juxtaposing (2.11) or (2.12) and (2.14) we get, respectively, , which we denote in the same way.
For the HS-derivations generated by the shift operator σ_{−1} of Section 2.2, we use the indeterminate t^{−1} instead of t, and denote them by σ_−(t^{−1}) and σ̄_−(t^{−1}); the operators Γ_r(t) and Γ*_r(t) are then defined by their values on the ∆_λ(H_r)'s. For r_1 < r_2, the natural projection B_{r_2} → B_{r_1}, sending each of e_{r_1+1}, . . . , e_{r_2} to zero, sends E_{r_2}(t) to E_{r_1}(t), H_{r_2}(t) to H_{r_1}(t), and X_{r_2}(t) to X_{r_1}(t). In this sense the definitions are compatible with the projections, and Γ_r(t) and Γ*_r(t) tend to Γ(t) and Γ*(t) when r → ∞.
Cayley-Hamilton Theorem revisited
3.1. Hasse-Schmidt derivations on exterior algebras [3,6]. Let A be a commutative ring with unit, M a free A-module of rank r, and b_1, . . . , b_r some A-basis of M. For two formal power series with coefficients in End_A(∧M), their product is defined coefficient-wise by the Cauchy product. Given a series D(t), we use the same notation for the induced A-homomorphism ∧M → (∧M)[[t]]. We call D̄(t) the inverse series of D(t) and write it in the form D̄(t) = Σ_{i≥0} D̄_i t^i. One can check that D(t) is invertible if and only if D_0 is an automorphism of ∧M.
Proposition 3.1. The following two statements are equivalent:
The product of two HS-derivations is a HS-derivation. The inverse of a HS-derivation is a HS-derivation.
Proof. For the product of HS-derivations D(t) andD(t), the statement i) of Proposition 3.1 holds. Indeed, ∀u, v ∈ M ,
[6] If D̄(t) is the inverse of a HS-derivation D(t), then
for all u, v ∈ ∧M. Equivalently, for any k ≥ 1,
3.2. Proof of Theorem 2.3. As we have seen in Proposition 2.1, any endomorphism f ∈ End_A(M) defines two graded, mutually inverse HS-derivations D(t) and D̄(t), where 1 denotes the identity endomorphism. Writing D(t) and D̄(t) in the form of power series Σ_i D_i t^i and Σ_i D̄_i t^i, these HS-derivations satisfy the following properties.
Indeed, D(t) | k M is a polynomial of t of degree k, 1 ≤ k ≤ r.
As before, we assume that our A-module M is freely generated by (b j ) 1≤j≤r . Thus r M has rank 1 and is spanned by [b] r 0 = b 1 ∧ · · · ∧ b r . The restriction of each D i to r M is a multiplication by some scalar e i ∈ A,
Applying (3.5) to our situation, we can write
Equivalently, we have (3.8). Of course, one can set e_k = 0 for k > r, so as not to distinguish between the two cases. However, we prefer a division into cases. Assume now i > r − k > 0. Then, according to Remark 3.5 (iii), the right hand side of (3.7) vanishes for all v ∈ ∧^{r−i} M, as r − i < k. This means that D_k u − e_1 D_{k−1} u + · · · + (−1)^k e_k u = 0 for any u ∈ ∧^i M with i > r − k > 0. This proves the first part of Theorem 2.3. If k > r, then the left hand side of (3.8) vanishes for each i ≥ 0, and this proves the second part.
Remark 3.6. Thus we understand (1.3) as an abstract Cayley-Hamilton theorem valid for general invertible HS-derivations on exterior algebras of arbitrary (not necessarily free) modules. If the module is free and at most countably generated, then (1.3) produces a sequence of Cayley-Hamilton relations (3.5). This sequence turns into the classical Cayley-Hamilton formulas (2.7) when the HS-derivation corresponds to an endomorphism of a finitely generated free module.
In the notation of Theorem 2.3, we have r = 3.
1) Let us take k = 2 and check that D_2 − e_1 D_1 + e_2·1 vanishes on R^3 ∧ R^3, as it should, according to (2.6). We show the calculation of (D_2 − e_1 D_1 + e_2·1)(u ∧ v); for the two other basis vectors u ∧ w and v ∧ w it is completely similar.
First, we find the action of D 1 , D 2 on u ∧ v. We have .
2) Take k = 4 and check that D 4 − e 1 D 3 + e 2 D 2 − e 3 D 1 vanishes on R 3 ∧ R 3 . According to (2.7), this endomorphism vanishes on the whole of R 3 , and in fact the verification for the rest of the direct summands is simpler. Again we calculate the image of u ∧ v.
First, we obtain D_3(u ∧ v) and D_4(u ∧ v) in the standard way, writing out D(t)(u ∧ v) = D(t)u ∧ D(t)v and collecting the coefficients of t^3 and t^4, respectively. We get the required vanishing. More generally, if f is diagonalizable, a wedge product of l eigenvectors of f is an eigenvector of D_k with eigenvalue the complete symmetric polynomial of degree k in the corresponding eigenvalues x_1, . . . , x_l. Therefore, for a diagonalizable endomorphism our Theorem 2.3 reduces to the following identity. Denote by h_i(x_j) the complete symmetric polynomial of degree i in x_1, . . . , x_j, and by e_k(x_n) the elementary symmetric polynomial of degree k in x_1, . . . , x_n. Then, for n ≥ 1 and all 1 ≤ j ≤ n, we have the corresponding vanishing of alternating sums of products e_k(x_n) h_{m−k}(x_j). One can deduce the identity, for example, from the formula (*) of [13, I 3 28].
This remark can be turned into a rigorous general proof, using standard (though rather long) reasoning. Another possible way, which was suggested by our referee, is based on the Frobenius proof of the classical Cayley-Hamilton theorem for complex matrices [2]. We were not aware of that 1896 paper by Frobenius. Probably one could translate our arguments into the language of matrix minors. However, our approach, through the relationship to symmetric functions, is short, easy, and, in addition, allows us to deal with the infinite-dimensional case.
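The k = 2, r = 3 instance of the worked example above can also be checked numerically; the sketch below is illustrative only, and assumes the convention D(t)|_M = Σ_{i≥0} f^i t^i together with the representation of u ∧ v by the antisymmetric matrix u v^T − v u^T.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal((3, 3))
u, v = rng.standard_normal(3), rng.standard_normal(3)

wedge = lambda a, b: np.outer(a, b) - np.outer(b, a)        # u ∧ v as an antisymmetric matrix
fpow = lambda i, a: np.linalg.matrix_power(f, i) @ a

def D(k, a, b):
    """D_k(a ∧ b) = sum over i + j = k of f^i a ∧ f^j b."""
    return sum(wedge(fpow(i, a), fpow(k - i, b)) for i in range(k + 1))

c = np.poly(f)                                              # [1, -e1, e2, -e3]
e1, e2 = -c[1], c[2]
print(np.allclose(D(2, u, v) - e1 * D(1, u, v) + e2 * wedge(u, v), 0))   # True
```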
3.4. The case of a Q-algebra. If A is a Q-algebra, then for f ∈ End_A M the exponential exp(f t) is well-defined. In the corresponding ring of formal power series there is the formal derivative with respect to t. Recall the notation E_r(t) given by (2.4), and write the characteristic polynomial of f as in (2.8). In [4], for any commutative ring R containing the rational numbers, the formal Laplace transform L : R[[t]] → R[[t]] and its inverse L^{−1} are defined as follows: L(Σ_{n≥0} a_n t^n) = Σ_{n≥0} n! a_n t^n, L^{−1}(Σ_{n≥0} c_n t^n) = Σ_{n≥0} c_n t^n/n!, with a_n, c_n ∈ R.
Take the inverse formal Laplace transform of the HS-derivation D(t) = Σ_{i≥0} D_i t^i corresponding to f ∈ End_A M, obtaining the series D*(t) of (3.9). Define p_k(D) as the coefficient of t^k in E_r(t)D(t). We have
Corollary 3.9. Let Q ⊆ A and let the characteristic polynomial of f ∈ End_A M be given by (2.8). Then the series D*(t) defined in (3.9) solves the ordinary differential equation
(3.11) y^{(r)}(t) − e_1 y^{(r−1)}(t) + · · · + (−1)^r e_r y(t) = 0.
Proof. Take the inverse formal Laplace transform of (3.10). We obtain an expression involving the series u_0, u_{−1}, . . . , u_{−r+1}; let us re-write these series in terms of H_r(t) = 1/E_r(t), see Remark 2.2(1). In [7], we proved that these series form an A-basis of solutions to the ODE (3.11) in R[[t]]. For R = End_A(∧M) we get the claim.
Elementary remarks.
We finish this section with a few remarks relevant to the case when A is a Q-algebra.
(1) The characteristic polynomial of f ∈ End A M is given by (2.8) if and only if y(t) = exp(f t) satisfies the linear ordinary differential equation (3.11). This is our Corollary 3.9 restricted to M .
In particular, exp(f t) can be written in terms of (v_j(t))_{0≤j≤r−1}, the standard A-basis of solutions to (3.11), where 1, f, . . . , f^{r−1} are the initial conditions of the solution exp(f t).
In the context of endomorphisms of complex vector spaces, the formula for exp(f t) was obtained in 1966 by Putzer [14], and then re-obtained in 1998 by Leonard and Liz [11, 12] in a different way.
(2) The relation between the standard fundamental system (v_j(t))_{0≤j≤r−1} and the fundamental system (u_{−j}(t))_{0≤j≤r−1} that appeared in the proof of Corollary 3.9 is as follows. Consider the linear system of first order differential equations equivalent to our ODE (3.11): y'_1 = y_2, y'_2 = y_3, . . . , y'_{r−2} = y_{r−1}, y'_{r−1} = e_1 y_{r−1} − . . . + (−1)^{r−1} e_r y_1. Denote the matrix of this system by P_r. Then Q = exp(P_r t) is the Wronski matrix of v_1(t), . . . , v_{r−1}(t), and v_{r−1}(t) = u_0(t). (3) As another elementary corollary of our considerations, we get formulas for the coefficients e_k of the characteristic polynomial of f ∈ End_A M in terms of its matrix elements. If C = (c_{ij}) is the r × r matrix of f in some A-basis of M, denote by D(i_1, . . . , i_k) the determinant of the (r − k) × (r − k)-matrix obtained from the matrix C by deleting the i_1-th, . . . , i_k-th rows and columns. Then the coefficients e_k can be written as sums of such minors. This formula was obtained differently by Brooks in [1].
Countably generated Q-vector spaces
Let M 0 be a Q-vector space generated by (b j ) j≥1 and M 0 = r≥0 r M 0 be its exterior algebra.
In this section, we treat e 1 , . . . , e r as indeterminates. As we already pointed out in Section 2.2, the ring B r , given by (2.10), has a basis formed by Schur polynomials ∆ λ (H r ), see (2.13). The structure of a principal B r -module on r M 0 is defined via any of the two equivalent equalities (2.11) and (2.12).
Let (β i ) i≥1 be linear forms on M 0 defined by β i (b j ) = δ ij . Their linear span is, by definition, the restricted dual M * 0 . Each β j induces a Q-linear contraction map β j : As each ζ ∈ M 0 is a sum of homogeneous elements of the form m ∧ η, equation (4.2) defines the contraction operator over the entire exterior algebra M 0 . Proof. Under the hypothesis (4.3), suppose first that m ′ = am for some a = 1. If m = 0, then there is µ ∈ M * 0 such that µ(m) = 0. Because of the isomorphism If m and m ′ are not proportional, take their duals µ, µ ′ ∈ M * 0 and choose Proof. Let us check first that b 1 , b 2 , . . . , b r are B r -linearly independent. Denote by (β j ) j≥0 the generators of 0 , that is, a i = 0 for all 1 ≤ i ≤ r. Now, let us show that b i+r − e 1 b i+r−1 + · · · + e r b i = 0 for all i ≥ 0. This will prove, by induction, that M r is generated over B r by b 1 , b 2 , . . . , b r . It is enough to observe that is a polynomial of degree r (here we set b j = 0 for j < 1). By definition of the module structure, for each η ∈ r−1 M 0 we have We use now the agreement (4.1). As in Remark 3.5, we see that σ r vanishes on r−1 M 0 , hence the expression obtained above is a polynomial in t of degree r − 1. We have so proven that M r is a B r -module of rank r. Moreover σ 1 is B r -linear. In fact, Corollary 4.4. The elements e i , 1 ≤ i ≤ r, and h j , j ≥ 0, of B r are the eigenvalues of σ i and σ j , respectively, thought of as endomorphisms of r M r .
may be identified with ratios of characteristic series operators associated to the shift endomorphisms of step ±1 of M 0 .
Remark 5.1. Notice that E r (t) is indeed the characteristic polynomial of σ 1 , thought of as endomorphism of M r , and σ − (t −1 ) is the characteristic series operator associated to σ −1 .
Proof. We sketch the arguments of [5]. First of all, notice that for all r ≥ 1 Let σ − (t −1 )H r = (σ − (t −1 )h n ) n∈Z and σ − (t −1 )H r = (σ − (t −1 )h n ) n∈Z . Then Clearly formulas (5.1) and (5.2) do not depend on r when r is big enough (that is, at least the length of the partition λ). Thus these formulas hold for r = ∞ as well.
We set E_∞(t) = 1 − e_1 t + e_2 t^2 + · · · and H_∞(t) = 1/E_∞(t). Notice that the exponential of a first order differential operator is a ring homomorphism, whose value at h_n coincides with σ_−(t^{−1})h_n. This means that the first claimed formula holds; similarly one shows the second, as claimed. | 2019-01-09T11:45:10.000Z | 2019-01-09T00:00:00.000 | {
"year": 2019,
"sha1": "34f0bc16725f3ec85420be6ec9a393df0073b446",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1901.02686",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "34f0bc16725f3ec85420be6ec9a393df0073b446",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
15904113 | pes2o/s2orc | v3-fos-license | FogMQ: A Message Broker System for Enabling Distributed, Internet-Scale IoT Applications over Heterogeneous Cloud Platforms
Excessive tail end-to-end latency occurs with conventional message brokers as a result of having massive numbers of geographically distributed devices communicate through a message broker. On the other hand, broker-less messaging systems, though ensure low latency, are highly dependent on the limitation of direct device-to-device (D2D) communication technologies, and cannot scale well as large numbers of resource-limited devices exchange messages. In this paper, we propose FogMQ, a cloud-based message broker system that overcomes the limitations of conventional systems by enabling autonomous discovery, self-deployment, and online migration of message brokers across heterogeneous cloud platforms. For each device, FogMQ provides a high capacity device cloning service that subscribes to device messages. The clones facilitate near-the-edge data analytics in resourceful cloud compute nodes. Clones in FogMQ apply Flock, an algorithm mimicking flocking-like behavior to allow clones to dynamically select and autonomously migrate to different heterogeneous cloud platforms in a distributed manner.
I. INTRODUCTION
Many large-scale applications are sensitive to latency as they rely on messaging sub-systems between geographically distributed devices and cloud services. Even if 10% of messages were delayed for longer than 150-300 ms, applications like remote-assisted surgery and real-time situation-awareness may not be feasible [1], [2]. A bounded tail end-to-end latency is a cornerstone for the realization of large-scale Internet of Things (IoT) applications near the network edge [3], [4].
When devices communicate through an intermediate message broker, successive packet queuing along multi-hop paths becomes a major source of latency. For example, the average end-to-end latency of messages exchanged using a Redis broker [5] in a nearby Amazon data-center is three times longer than when the same broker is deployed one hop away from the devices. Broker-less messaging using device-to-device communication does not necessarily solve the successive packet queuing problem. In IoT applications, a device exchanges a large number of messages with many devices. Devices' limited processing and memory capacities become another major source of latency for large-scale distributed applications. Experiments show that direct device-to-device (D2D) messages can experience double the end-to-end latency compared to brokering the messages through a one-hop-away broker (see Section II).
When devices are cloned in a one-hop-away cloudlet [6], a device's clone can provide a message brokering service so that interacting devices can communicate with low latency while offloading their computation to resourceful nodes. Of course, communicating through a one-hop-away clone may still cause long tail end-to-end latency (considering the 99th percentile of computation plus communication latencies between clones/devices) when the broker service relays messages to distant devices. If a clone can measure 1) its messaging demand with other devices/clones, 2) the tail latency experienced by messages, and 3) the potential latency of other cloudlets/cloud platforms, clones can self-migrate between cloud platforms to always ensure a bounded weighted tail end-to-end latency. We show how autonomous clone migration can mimic bird flocking and prove that it is stable and achieves a tight latency that is (1 + ε)-far from optimal.
The use of cloudlets and dynamic service migration to solve latency problems is not new: cloudlets [7] reduce the single-hop latency from 0.5-1 seconds to tens of milliseconds, and technologies like MobiScud and FollowMe [8], [9] migrate clones to sustain an average single-hop Round Trip Time (RTT) of nearly 10 ms. Such schemes struggle to make optimal migration decisions despite using central control units, as they adopt a too-constraining migration metric (average single-hop latency) and trigger migration only if device locations change [10]. However, applications in fog computing [4] necessitate the deployment of inter-networking clones in heterogeneous platforms (cloudlets/clouds) without centralized administration. In this fog environment, clones communicate with several geo-distributed devices and other clones, and the weighted tail end-to-end latency becomes the primary latency measure instead of the average RTT of a single hop.
We propose FogMQ, a clone brokering system design that allows clones to self-discover and autonomously migrate to potential cloud hosting platforms according to self-measured weighted tail end-to-end latency, thereby stabilizing clone deployments and achieving low latency. Among FogMQ's four key features are that it operates over Internet-scale cloud/edge platforms without needing centralized monitoring and control, and that it is simple, requiring no change to existing cloud platform controllers.
II. MOTIVATION AND CHALLENGES
We first show that multi-hop queuing along Internet paths is a major source of end-to-end latency for IoT applications. We then show that devices' limited compute and memory resources stand in the way of latency reduction through direct device-to-device communication.
A. Sources of latency
When a message broker is hosted in a multi-tenant cloud platform, the end-to-end latency degrades due to network interference. Network interference occurs when the broker's messages share the ingress/egress network I/O of its host and one or more queues in the data-center network switches [11]. Hosts and network resources become spontaneously congested by traffic-demanding applications. For a single-authority cloud, an operator can control network interference of latency-sensitive applications with switching, routing, and queuing management policies, besides controlling contention for hosts' compute, memory, and I/O resources [12], [13].
Network interference is harder to control for IoT applications. As devices communicate using our cloud-hosted broker, messages share the network resources of multi-hop paths with diverse and unmonitored traffic. Multiple, unfederated authorities manage network resources along these paths, which makes it hard to enforce unified traffic shaping or queuing management policies. Adding variations in devices' traffic demand, communication patterns with other devices, and mobility, it becomes particularly hard to trace devices' traffic, delays, and infrastructure conditions to find optimal policies with centralized solutions. Multi-hop queuing along Internet paths can account for a 3x degradation in end-to-end latency on average. Fig. 1(a) illustrates an experiment setup to quantify this latency degradation. We install a Redis server in a Virtual Machine (VM) instance in the nearest Amazon EC2 data-center (EC2-Redis), and install another Redis server in a same-capacity VM on a host co-located with our WiFi access point (Edge-Redis). Our host runs other workloads. We emulate devices as simple processes running on another host that uses the same access point. We ensure that all VMs and hosts are time-synchronized with zero delay and jitter during the experiment execution time. A device emulator A publishes 10K messages to either Redis server, and another device emulator B subscribes to A's messages. Fig. 1(b) shows the Cumulative Distribution Function (CDF) of the end-to-end latency, measured as the time between receiving a message at B and publishing it from A. The tail end-to-end latency for Edge-Redis is 15.6 ms, while it is 24.2 ms for EC2-Redis, accounting for a 1.5x tail end-to-end latency improvement by avoiding the multi-hop path to the closest EC2 instance, and a 3x improvement on average. Direct device-to-device communication using broker-less message queues might be thought to be better than using message brokers. The obvious reasons for the superiority of broker-less queues such as ZeroMQ [14] are their lightweight implementation and their use of a minimal number of shared queues, switches, routers, and access points between communicating devices. TABLE I shows the median and tail end-to-end latency of ZeroMQ and Redis under different loads, where ZeroMQ can deliver 10,000 messages three times faster than Redis.
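The end-to-end latency measurement described above can be reproduced in a few lines; the sketch below is not the authors' harness but a minimal illustration that times publish-to-receive delays through a Redis broker and reports the median and 99th-percentile (tail) latency, assuming a locally reachable Redis server and, for simplicity, a single process playing both device roles.

```python
import time
import numpy as np
import redis

r = redis.Redis(host="localhost", port=6379)
sub = r.pubsub(ignore_subscribe_messages=True)
sub.subscribe("bench")

latencies_ms = []
for _ in range(10_000):
    t0 = time.perf_counter()
    r.publish("bench", b"x" * 100)          # device A publishes
    msg = None
    while msg is None:                      # device B receives
        msg = sub.get_message()
    latencies_ms.append((time.perf_counter() - t0) * 1e3)

print("median %.2f ms, p99 %.2f ms" % (np.median(latencies_ms),
                                        np.percentile(latencies_ms, 99)))
```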
B. Why broker-less is not always the answer?
Unfortunately, if the devices are resource-limited, the latency superiority of direct device-to-device communication is not always maintained. Let us return to our motivating experiment in Fig. 1(a). We limit the resources used by the device emulators using Linux cgroups such that a device emulator can use no more than 10% of the CPU time available to EC2-Redis or Edge-Redis. Fig. 1(b) shows that the average end-to-end latency of D2D-ZeroMQ is 7 times longer than that of Edge-Redis, and the tail end-to-end latency is 4 times longer. Several factors can contribute to this deteriorated performance, including wireless environment load and implementation details of either Redis or ZeroMQ. However, the main factor that limits direct device-to-device latency is the limited compute resources of the device emulators.
To emphasize this observation, we increase the number of publishing device peers (i.e. the number of subscribing device emulators) until the Edge-Redis server becomes loaded. Fig. 1(c) shows the tail end-to-end latency for different numbers of peers. As we increase the number of peers, the latency superiority of Edge-Redis starts to diminish, until we reach 200 peers, at which point our host becomes loaded at 90% utilization and the tail end-to-end latency of broker-less D2D-ZeroMQ becomes better by 14%. Broker-less device-to-device messaging is only better if a device's computational resources are sufficiently large, which is an unrealistic assumption for most IoT devices.
III. FogMQ SYSTEM
Our motivating experiments show that multi-hop queuing along Internet paths is a major source of tail end-to-end latency for cloud-based messaging systems, and that the latency improvement promised by device-to-device communication cannot always be attained due to limited device resources. FogMQ tackles multi-hop queuing by reducing the queuing of messages: primarily, if message brokers can self-deploy and migrate in cloudlets according to the communication pattern of the devices, then the impact of multi-hop queuing delay can be diminished. In the extreme case, if two resource-limited devices communicate through brokers in the same cloudlet using the same access point, we can achieve a finite minimal bound on the latency. Autonomous broker migration is the foundational idea of FogMQ.
In this section, we derive an intuitive design of FogMQ by which we bound the weighted latency given an arbitrary network of heterogeneous cloud platforms. Although the stability and bounded performance of our design are intuitive, we solidify this intuition by relating the design to the theory of singleton weighted congestion games [15], [16], where we show that self-deploying clones reach a Nash Equilibrium (NE) and tighten the Price of Anarchy (PoA) of the weighted end-to-end delay.
A. Network of clones
To begin, we assume that devices communicate with each others according to IoT applications' requirements and form a social network of devices. Typically, the convergence of man-machine interactions in IoT will derive devices to form a social network [17]. This network can form according to existing social network structure of users or according to the required communication among devices that is inherited from application design.
The idea of application design for IoT is simple. An application is modeled as a graph (e.g. [18], [19], [20]). Each device participating in the execution of the IoT application publishes its data to its brokering clone. On the other hand, clones subscribe to each other according to the application-modeled graph, which forms an overlay network of clones to enable the IoT application. Upon completion of their executions, clones may push the results back to devices. Fig. 2 illustrates a simple tree aggregation application for data retrieved from three devices. Devices A and C publish their data x and y to their clones. As device B is interested in the result x + y, clone B subscribes to data from clones A and C to retrieve x and y, evaluates x + y, and pushes the result to its device. The advantages of using a pub/sub system for interaction between clones and devices are: 1) providing an efficient messaging middleware to manage large-scale graph structures and multiple applications, 2) relying on the already in-place subscription and matching languages to effectively route information between devices and clones and among clones, and 3) simplifying the design of large-scale applications as overlay networks of and among the clones.
Generally speaking, the overlay network design of the clones is either structured or unstructured, and focuses mainly on minimizing a broker's fan-out to minimize the communication between the clones. For example, topic-connected overlay networks are designed such that devices interested in the same topic are organized in a directly connected dissemination overlay [21]. The overlay network forms the foundation for distributed pub/sub, and directly impacts the system scalability and application performance [22], [23], [24]. We assume that an overlay topology of clones is given and we model it as a social network. We model the social network of clones as a graph G = (V, P), where V denotes the set of n clones and P denotes the set of all clone pairs such that p = (i, j) ∈ P if the i-th and j-th clones communicate with each other.
B. FogMQ architecture
We now describe how FogMQ initially creates device clones, as well as FogMQ's architectural design tradeoffs. Fig. 3 illustrates the architectural elements of FogMQ.
FogMQ initially clones a device at the closest cloud/cloudlet from a set A of m clouds/cloudlets that are available to all devices and that can communicate over the Internet. An RPC client in the device is responsible for the clone initiation and peer relationship definition with other devices, by which a device participates in the execution of a distributed application described as an overlay network of clones. Each cloud/cloudlet runs a FogMQ RPC server as a middleware around the cloud/cloudlet controller, which implements different solutions that enable FogMQ to interact with heterogeneous cloud platforms (e.g. EC2, AWS, OpenStack cloudlet, or even a standalone host). A typical approach that clients can use to initiate clones is to query a global geo-aware domain name service load balancer to retrieve the IP address of the nearest FogMQ RPC server. With the integration of cloud computing in cellular systems [25], devices can also use native cellular procedures to initiate clones in the device's nearest cellular site. The FogMQ RPC server realizes the device clone in a cloud platform as a virtual machine, container, or native process, where processes are always a favorable design choice to avoid latency overhead. A recent Linpack benchmark [26] shows that containers and native processes can achieve a comparable number of floating-point operations per second, at least 2.5x greater than virtual machines. Although containers have a privacy and security advantage, as they provide better administration, network, storage, and compute isolation, container networking configuration can account for a 30 µs latency overhead compared to that incurred by native processes [27]. As we will discuss later, implementing clones as processes has an advantage over both virtual machines and containers as they incur less migration overhead.
Once FogMQ creates a clone for a device, the clone subscribes to the device's published messages, which contain pre-processed sensor readings. Subscribing to the device's messages eases the clone migration process, as we will detail later. If a clone migrates from one cloud to another, any changes to the clone's IP address or network configuration become transparent to the device. Upon migration, the clone re-subscribes to the device's messages, allowing the device to continue publishing its messages without needing the clone to notify the device of its migration.
A clone can process its device's messages in its computation offloading module (see Fig. 3) with high processing, memory, and storage capacities. The computation offloading module of the clone also executes distributed applications defined as overlay networks that interconnect several clones. To exchange messages between peer clones of an overlay network, a clone creates a separate process for each of its peers. Each process subscribes to published messages from its corresponding peer clone to make messages from peers available to the computation offloading module. If needed, a clone pushes messages and/or computation results back to its device using a push/pull messaging pattern. Fig. 3 illustrates the message flow between different modules for a simple example in which Device A is a peer of Device B.
The overlay optimization module and the peer-to-peer routing module are responsible for optimizing the fan-out of overlay networks and the routing decisions, as we described earlier. Although the design of these mechanisms is integral to the performance of the overall system, the details of the overlay design and routing optimization algorithms are orthogonal to the scope of this paper, in which we focus on autonomous migration decisions that minimize the tail end-to-end latency.
C. Latency and peer-demand monitoring
Each device clone self-monitors and characterizes the demands with its peers, and evaluates latencies with the assistance of the hosting cloud. Let d_ij ∈ R+ denote the traffic demand between i and j and assume that d_ij = d_ji. Let x_i ∈ A denote the cloud that hosts i, and let l(x_i, x_j) > 0 be the average latency between i and j when hosted at x_i and x_j, respectively (note: if i and j are hosted at the same cloud, x_i = x_j). We assume that l is reciprocal and monotonic. Therefore, l(x_i, x_j) = l(x_j, x_i), and there is an entirely non-decreasing ordering A → A′ such that latencies do not decrease between consecutive elements. The reciprocity condition ensures that measured latencies are aligned with peer VMs and imitates the alignment rule in bird flocking. We model l(x_i, x_j) in terms of τ(x_i, x_j), the average packet latency between x_i and x_j, with τ(x_i, x_j) = τ(x_j, x_i), together with the processing delays of the hosting clouds. The quantity ρ(x) is the average processing delay of x, modeled as ρ(x) = δ Σ_{i∈V: x_i=x} Σ_{j∈V} d_ij / (γ(x) − Σ_{i∈V: x_i=x} Σ_{j∈V} d_ij), where δ is an arbitrary delay constant and γ(x) denotes the capacity of x to handle all the demanded traffic of its hosted VMs.
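A small sketch of these self-measured quantities is given below; it is illustrative only, the combination of the propagation term τ and the processing delays ρ into a single end-to-end estimate is an assumption (only ρ(x) is given explicitly above), and the data structures are placeholders.

```python
def rho(x, placement, demand, gamma, delta=1.0):
    """Average processing delay of cloud x: delta * load / (capacity - load)."""
    load = sum(demand[i][j] for i in placement if placement[i] == x for j in demand[i])
    return delta * load / (gamma[x] - load)          # assumes load < gamma[x]

def end_to_end_latency(i, j, placement, demand, gamma, tau):
    """Assumed form: inter-cloud packet latency plus processing delay at each host."""
    xi, xj = placement[i], placement[j]
    return (tau[xi][xj]
            + rho(xi, placement, demand, gamma)
            + rho(xj, placement, demand, gamma))
```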
D. Learning new targets
Each cloud runs a FogMQ RPC server as middleware. The FogMQ servers in different clouds form a peer-to-peer network that evolves autonomously. Bootstrap nodes assist newly joined FogMQ servers in discovering other servers, and a gossip protocol is used to spread information about new FogMQ servers.
IV. Flock-AN ADAPTIVE CLONE MIGRATION ALGORITHM
Clones should adapt themselves to changes in the infrastructure network interconnecting the heterogeneous cloud platforms, and should change according to the network state, structure, and applications' requirements. We propose an adaptive, fully distributed algorithm for dynamic cloud selection. The algorithm allows each VM i to learn a set A_i ⊆ A (referred to as i's strategy set) from its hosting cloud x_i, and to autonomously select its hosting cloud based on local measurements only. Every cloud x updates its weight w_x = Σ_{i: x_i=x} u_i(x) and broadcasts a monotonic, non-negative regularization function f(w_x) : R+ → R+ with α < f(w_x) < 1 for some α > 0 to each VM hosted at x. As VMs can each only be hosted by one cloud and all have access to the same set of strategies, we model the clone migration problem as a singleton symmetric weighted congestion game that minimizes the social cost C(σ) = Σ_{x∈A} w_x f(w_x). If f(w_x) ≈ 1, this game model approximates minimizing Σ_{i∈V} u_i(x_i). Let x →_i y denote that a VM i migrates from cloud x to cloud y and let η ≤ 1 denote a design threshold. We now describe our proposed clone migration algorithm (a sketch of one round is given after the listing below): Flock: Autonomous VM migration protocol.
Initialization: Each clone i ∈ V runs at a cloud x ∈ A. Ensure: A Nash equilibrium outcome σ.
1: During round t, do in parallel for all i ∈ V:
2: i solicits its current set A_i from x.
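The listing above is truncated in the source. The sketch below gives one plausible reading of the remaining per-round step as a threshold-gated best response; the exact way the threshold η enters the migration test is an assumption rather than the authors' listing.

```python
# One plausible Flock round for VM i (assumption; the original listing is
# truncated). cost(i, y) stands for u_i(y), the latency cost of hosting i at y.
def flock_round(i, x_i, A_i, cost, eta):
    best = min(A_i, key=lambda y: cost(i, y))
    # Migrate x ->_i best only if the improvement passes the threshold eta.
    if cost(i, best) < eta * cost(i, x_i):
        return best          # i migrates
    return x_i               # i stays put
```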
We now provide some simulation results, borrowed from [28] for completeness, to get an initial sense of how well Flock performs in terms of convergence and achievable PoA; more results on the performance of Flock can be found in [28]. Clouds are modelled as a complete graph with inter-cloud latency τ ∼ Uniform(10, 100) and cloud capacity γ ∼ Uniform(50, 100). We model peer-to-peer clone relations as a binomial graph with d ∼ Uniform(1, 10). Fig. 4 shows the average number of rounds, k, needed for the algorithm to converge to a Nash equilibrium, at a 95% confidence interval with 0.1 error. Observe that although the worst case of k is O(n log(n f_max)), where f_max is the maximum value of the regularization function f [29], the figure shows that Flock scales better than O(n) on average.
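For reference, inputs of this kind can be generated as follows; the number of clouds and clones and the edge probability of the binomial peer graph are illustrative assumptions, since [28] specifies the distributions but not these sizes.

```python
# Illustrative generation of the simulation inputs described above.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_clouds, n_clones, p_edge = 20, 200, 0.05               # assumed sizes

clouds = nx.complete_graph(n_clouds)
tau = {e: rng.uniform(10, 100) for e in clouds.edges}    # inter-cloud latency
gamma = {x: rng.uniform(50, 100) for x in clouds.nodes}  # cloud capacity

peers = nx.gnp_random_graph(n_clones, p_edge, seed=0)    # binomial peer graph
demand = {e: rng.uniform(1, 10) for e in peers.edges}    # traffic demand d_ij
```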
Distributed pub/sub systems organize brokers, devices, or routing functions as overlays and sub-overlays at the application layer. Upon constructing an efficient overlay network, routing protocols above layer 3 build minimum-cost message dissemination paths to deliver messages to subscribers according to specific topic interests. Caching policies replicate clones' contents closer to devices interested in a content for faster repetitive publishing. For given routing, overlay topology, and caching mechanisms, FogMQ ensures that these mechanisms achieve their full potential by self-reorganizing the deployment of brokers through migrations in heterogeneous, unmanaged, and dynamic cloud environments. Unlike widely adopted centralized systems (e.g., Redis), FogMQ suits the large-scale applications and use cases of IoT and avoids the limitations of broker-less systems (e.g., ZeroMQ).
Existing migration solutions are limited in their applicability to minimizing the weighted end-to-end latency. Several existing solutions rely on a system-wide central controller to manage the states of clones, devices, and physical resources of cloud platforms [10], [42]. For the considered fog environment, these solutions lack scalability for an Internet-sized network without relaxations that potentially compromise solution quality.
Consider Markov Decision Process (MDP) based solutions. An MDP requires a central server to collect statistics of device mobility, clone demands, and cloud connectivity and utilization. This server also executes the value iteration algorithm to evaluate an optimal migration policy [10], [42], [43]. It is intractable to model all possible states of clones and their hosting platforms; hence it is common to discretize state measurements to relax the complexity of the policy optimization algorithms [43], [10]. This compromises solution quality.
Game-theoretic approaches potentially decentralize the migration algorithms and improve their scalability. However, existing game-theoretic solutions provide an unbounded PoA [44]; we cannot use them as they are and still guarantee optimal or close-to-optimal weighted end-to-end latency. Finally, existing migration solutions serve specialized cloud providers' objectives (e.g., energy, load, and cost) to profitably manage providers' infrastructures [45], [44]. The existing models do not capture network latency between clones that are executing distributed IoT applications. Unlike existing solutions, FogMQ adopts a simple autonomous migration protocol that is stable and bounds the tail end-to-end latency to within (1 − ε) of optimal.
VI. CONCLUSION AND FUTURE WORK
We proposed FogMQ, a cloud-based, message broker system, composed of an architecture and an online migration algorithm, that enables autonomous discovery, self-deployment, and online migration of message brokers across heterogeneous cloud platforms. The migration algorithm, called Flock, enables autonomous discovery of and migration to heterogeneous cloud/edge platforms in a (i) decentralized manner and (ii) without requiring changes to existing cloud platform controllers. The proposed architecture enables the deployment of message brokers (i) at the edge clouds (i.e., cloudlets) near the end-user devices, and (ii) while accounting for the devices' communication and relationship traffic patterns.
An implementation of FogMQ on real cloud platforms is currently underway. In this implementation, clones are realized as processes, a Redis key-value store serves as the device registry, and edge clouds are Linux VMs. An implementation-based performance evaluation of the proposed FogMQ system, comprising the clone-based architecture design and the online clone migration algorithm, will be published when available.
ACKNOWLEDGEMENT
This work was made possible by NPRP grant # NPRP 5-319-2-121 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors. | 2016-10-03T16:23:05.000Z | 2016-10-03T00:00:00.000 | {
"year": 2016,
"sha1": "2449fdf7b3d889c5d49697c38ab047a9b559b3ba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2449fdf7b3d889c5d49697c38ab047a9b559b3ba",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
259725963 | pes2o/s2orc | v3-fos-license | Effects of zinc oxide nanoparticles on neuronal response of pyramidal neuron of the CA1 hippocampus in rat model of parkinson's disease
Due to their nano size, zinc oxide nanoparticles (ZoN) easily pass through the blood-brain barrier, can enter brain cells, and have different effects on different parts of the nervous system. In the present study, the effect of zinc oxide nanoparticles on the electrical activity of pyramidal cells in the CA1 region of the hippocampus in Parkinson's disease-like rats was investigated. In this experimental study, adult male Wistar rats were randomly divided into five groups: a substantia nigra pars compacta (SNc) lesion group, in which lesions were induced by intraperitoneal (IP) injection of rotenone (2 mg/kg every 48 h over 19 days), and four ZoN groups (lesion plus 0.5, 1, 1.5, or 3 mg/kg IP of ZoN). Spontaneous neural activity was recorded for all groups in the CA1 region of the hippocampus. The results revealed that IP injection of ZoN (1, 1.5, and 3 mg/kg) decreased neuronal spontaneous activity in the rat model of PD. The current results suggest that acute IP injection of ZoN decreases the neuronal response in the CA1 region of the hippocampus in a rat model of PD.
Protein aggregation is a common histopathological feature in Parkinson's disease (PD) and Alzheimer's disease (AD). PD is characterized by accumulation of misfolded protein α-synuclein (α-syn) in components called Lewy bodies (LB) in dopaminergic neurons, leading to severe motor dysfunction. Alzheimer's disease is characterized by the abnormal accumulation of amyloid-β (Aβ) plaques and tau neurofibrillary tangles, resulting in brain damage that affects critical cognitive processes. Emerging clinical and experimental results support the hypothesis that pathological α-syn, Aβ, and tau are prion-like peptides/proteins that can induce the proliferation of endogenous monomers and cause the cell-to-cell spread of proteinopathy [10].
In pathological conditions, native α-syn undergoes a misfolding process from a soluble, random conformation to an insoluble, fibrillar form. When α-syn aggregates are misfolded, they localize to mitochondria, causing mitochondrial fragmentation and reduced membrane potential. Aβ peptide accumulation is thought to be the result of inefficient production of mitochondrial reactive oxygen species (ROS) and metal dyshomeostasis due to oxidative stress. In AD, the microtubule-associated protein tau is known to undergo abnormal hyperphosphorylation, leading to the binding of tau with prion-like activity. Although the mechanism and effects of tau crosslinking are still poorly understood, there is a promise that tau is an effective therapeutic target [6]. Similar pathological mechanisms in both diseases increase the production of ROS, leading to a cascade of oxidative stress. By increasing cellular stress, microglial reactions, and increased expression of inflammatory cytokines, both diseases significantly increase neuroinflammation, which is thought to lead to cell death and further protein/peptide accumulation. Biological mechanisms affecting PD and AD as described are misfolded protein aggregation, oxidative stress, inflammation, and cell death [9]. Despite decades of clinical trials using traditional therapies, the highly successful treatment of oxidative stress and protein misfolding in neurodegenerative diseases has been elusive. Fighting amyloidosis in both AD and PD with small molecules, peptides, and monoclonal antibodies in particular has had little success. This opens the door to nanomaterials with attractive physicochemical properties, stability, and multifunctionality to improve the understanding and treatment of diseases [8]. Nanoparticles (NPs) can enhance the transport of therapeutics across the blood brain barrier (BBB) during pathological conditions in PD and AD. Characteristics of diseased BBB include increased vascular permeability, reduced expression of tight junctions and BBB transporters, and accumulation of blood-derived cells and debris in perivascular spaces.
Such pathological conditions disrupt concentration gradient-based diffusion and reduce the function of carrier-mediated transport (CMT) and receptor-mediated transport (RMT) [9]. Zinc oxide nanoparticles are among the most widely used nanoparticles in industry, medicine, and health. Due to their small size, these nanoparticles have a greater ability to cross biological barriers and easily enter the brain.
The aim of the present study is to investigate the effect of zinc oxide nanoparticles on the electrical activity and neural response of pyramidal neurons in the CA1 region of the hippocampus in Parkinson's disease-like rats.
2.1.Study animals
In the present study, adult male Wistar albino rats (220±20 g) were used. All experimental protocols were approved by the Ethics Committee of the Shahid Chamran University of Ahvaz (Ahvaz, Iran) and conducted according to the NIH guidelines for the care and use of laboratory animals (NIH Publication No. 80-23, revised 1978). The animals were kept under controlled humidity (50 ± 6%) and light conditions (12-hour light/dark cycle). The room temperature was set at 23±2 °C, and food and water were freely available.
2.2.Study drugs
Rotenone (Sigma-Aldrich) was dissolved in dimethyl sulfoxide (DMSO) and diluted with polyethylene glycol (PEG). Zinc oxide nanoparticles (manufactured by Tecnan, Spain, with a purity of 99.9% and a size of 20-30 nm) were prepared at the required doses daily, 30 minutes before the start of the experiment: they were dissolved in 0.9% physiological saline, dispersed with the help of an ultrasonic bath, and mixed again for 1 minute on a shaker before each injection. The animals in the control group received saline.
2.3.Experimental procedure
In this research, the single-unit recording method was used in anesthetized animals to record from pyramidal neurons of the hippocampal CA1 region. The experiments were performed in a completely quiet room at a normal temperature of 25 ± 1 °C. A total of 40 male Wistar rats were used in this experiment, divided into five groups (n = 8) as follows: 1. substantia nigra pars compacta (SNc) lesion, made by IP rotenone injection (2 mg/kg every 48 h over 19 days); 2. lesion + 0.5 mg/kg of ZoN (IP); 3. lesion + 1 mg/kg of ZoN (IP); 4. lesion + 1.5 mg/kg of ZoN (IP); 5. lesion + 3 mg/kg of ZoN (IP). The recovery period for the lesion group was seven days. After recovery, the animals were prepared for 120 min of single-unit recording: after baseline recording (15 minutes), ZoN or saline was injected IP and recording continued for 105 minutes thereafter. The change in firing activity of the recorded neurons after drug injection was calculated and interpreted as an indicator of the effect of the drug on the electrical properties of the neurons. The experimental design and animal groups are shown in Fig. 1.
2.4.Induced Parkinson's disease model
To create the PD model, the animals were first anesthetized with ketamine (78 mg/kg, IP, Alfasan, Netherlands) and xylazine (3 mg/kg, IP, Alfasan, Netherlands), and then the SNc was destroyed by rotenone injection. One week after surgery, the animals were prepared for electrophysiological testing and single-unit recording. A histological specimen confirming the SNc degeneration characteristic of PD is presented (Fig. 2).
2.5.Animal preparation and stereotactic surgery
Because the use of ketamine for anesthesia blocks NMDA receptors and thus alters electrophysiological recordings, a substance that does not block brain receptors should be used to anesthetize the animals; urethane is a suitable agent for this purpose. Animals were anesthetized with urethane (1.5 g/kg, IP; Sigma-Aldrich, Germany), with supplemental doses (0.1 g/kg) given every hour as needed to maintain a deep and stable level of anesthesia, as indicated by immobility and the absence of a response to strong tail pinching. Rats underwent tracheostomy to reduce respiratory impulses and maintain a stable airway while awaiting recording. For this purpose, the hair on the front of the neck was shaved and an incision was made. The muscles and soft tissue of the neck were then retracted toward the trachea. A slit was made in the trachea and a polyethylene tube was placed in the lower part of the trachea and secured with sutures. The animal was then gently placed in a stereotactic device (Stoelting, USA). The animal's scalp was cleaned to reveal the surface of the skull, and bregma was designated as the stereotactic reference. A hole of 2 mm in diameter was created above the CA1 region (AP -3.8 mm, ML ± 2.2 mm, DV -2.4 mm) of the hippocampus. Body temperature was maintained at 36-37 °C for the entire experiment with a heating pad.
2.6.Extracellular single-unit recording and data acquisition
Extracellular recording of individual neurons was performed using tungsten microelectrodes (Parylene-coated, shaft diameter 127 μm, tip impedance 5 MΩ; Harvard Apparatus). The microelectrode was stereotactically lowered into the CA1 region of the hippocampus. The electrode was then moved slowly within the pyramidal-neuron layer using a microelectrode driver until spike activity was recorded with a signal-to-noise ratio of >2 relative to the background activity. Spike signals were amplified (×10000 gain; 300 Hz and 10 kHz for the low and high filters, respectively) and displayed continuously on a storage oscilloscope. The spike frequency was calculated and transmitted online in time bins of 1000 ms for the entire recording time by online sorter software (Spike; Science Beam, Tehran, Iran). The action potentials of the baseline activity were separated using a window discriminator, which produced output pulses for single units based on spike height and counted the number of spikes per unit of time. In this experiment, the recording time for data gathering was 7200 s with a bin size of 1000 ms, constantly stored on the hard disk, and the average frequency was computed by computer [3]. According to the results, pyramidal neurons in the CA1 region were identified based on a spontaneous firing frequency of 8 Hz or less [3]. Recording continued for about 15 minutes after the identification of a pyramidal neuron with a constant firing frequency and constant spike amplitude and waveform, and this served as the baseline. After 15 minutes, the drug was injected and the recording continued for about 105 minutes. In the present study, the discharge of each neuron over 60-second time intervals was calculated using a data acquisition program to generate peri-stimulus time histograms (PSTHs) spanning from 15 minutes before injection to 105 minutes after drug injection. Data were analyzed offline using the Windows Home Analysis software. In order to identify patterns of neural response to saline and to ZoN at 0.5, 1, 1.5, and 3 mg/kg, the entire recording period was divided into 60-second bins. An increase or decrease in neuronal activity exceeding twice the standard deviation of the baseline activity for three consecutive points was considered a stimulatory or inhibitory response, respectively.
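The response-classification rule just described can be written compactly as follows; the bin layout and array conventions are assumptions made for illustration.

```python
# Classify a neuron's response from its binned firing-rate trace:
# excitatory/inhibitory if the rate stays more than 2 baseline SDs
# above/below the baseline mean for at least three consecutive bins.
import numpy as np

def classify_response(rate, n_baseline_bins):
    base = rate[:n_baseline_bins]
    mu, sd = base.mean(), base.std()
    post = rate[n_baseline_bins:]

    def three_in_a_row(mask):
        return any(mask[k:k + 3].all() for k in range(len(mask) - 2))

    if three_in_a_row(post > mu + 2 * sd):
        return "excitatory"
    if three_in_a_row(post < mu - 2 * sd):
        return "inhibitory"
    return "no effect"
```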
2.7.Histological confirmation
At the end of the electrophysiological recordings, the brains of the animals were removed and fixed in a 10% formalin solution. Then, 20 μm sections were taken from near the electrode track and stained using hematoxylin and eosin (H&E). Finally, a microscope (Olympus EX51, Japan) was used to verify the recording location in the CA1 region of the hippocampus, and the results were compared with reference atlases (Fig. 3).
2.8.Statistical analysis
Data were recorded before and for 105 minutes after the IP administration of drugs. The obtained data were analyzed using SPSS software version 20. A paired t-test was used to evaluate the effect of the drug on the neural firing rate before and after drug injection. In addition, GraphPad Prism version 6.07 was used to plot the effect of the drug on the number of stimulated, inhibited, and unaffected neurons. The data are presented as mean ± standard error of the mean (SEM). P < 0.05 was considered statistically significant.
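For completeness, a minimal sketch of the paired comparison, assuming one mean firing rate per neuron before and after injection:

```python
# Paired t-test on per-neuron mean firing rates before vs. after injection
# (illustrative; `before_rates` and `after_rates` are equal-length arrays).
from scipy import stats

def paired_firing_test(before_rates, after_rates, alpha=0.05):
    t_stat, p_value = stats.ttest_rel(after_rates, before_rates)
    return t_stat, p_value, p_value < alpha
```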
3.Results
To assess the effect of saline on the electrical firing of pyramidal neurons in the CA1 hippocampus, 0.2 ml of saline was injected IP after basal recording and neural firing was recorded for 105 minutes. A paired t-test showed that the response of neurons in the lesion group to saline injection did not include a significant increase in firing frequency after injection compared to baseline activity (Figs. 4 and 5). In this group, 12 neurons were recorded from 8 rats, of which saline stimulated 2 neurons, inhibited 3, and had no effect on 7. Specifically, the stimulatory effect of saline appeared 38 to 54 minutes after IP injection. The mean change in the activity of pyramidal neurons in the CA1 region of the hippocampus showed that intraperitoneal saline injection was associated with a 20 to 50% increase in activity in 2 neurons and a 50 to 65% decrease in activity in 3 neurons. The results of injection of ZoN 0.5 mg/kg after the lesion showed no significant decrease in the firing frequency of neurons after injection compared to basal activity in a paired t-test (t = -0.854, df = 18; P > 0.05) (Figs. 6 and 7). In this group, 13 neurons from 8 rats were recorded; ZoN 0.5 mg/kg had no effect on 7 neurons, inhibited 3 neurons, and stimulated 3 neurons. After injection of 0.5 mg/kg ZoN, stimulation of neuronal firing began within 34 to 53 minutes. The group receiving ZoN 1 mg/kg after the lesion showed a significant decrease in the firing frequency of CA1 neurons after injection compared to the basal neuronal activity in a paired t-test (t = -1.632, df = 16; P < 0.05) (Figs. 8 and 9). Specifically, in this group of 8 rats, 13 neurons were recorded; 1 mg/kg of ZoN had a stimulatory effect on 6 neurons, an inhibitory effect on 4 neurons, and no effect on 3 neurons. After the injection of 1 mg/kg ZoN, stimulation was observed within 45 to 56 minutes. The results of injection of 1.5 mg/kg ZoN after the lesion showed a significant decrease in neuronal firing frequency compared to baseline activity in a paired t-test (t = -2.423, df = 18; P < 0.005) (Figs. 10 and 11). In this group of 8 rats, 12 neurons were recorded; 1.5 mg/kg of ZoN had a positive effect on 5 neurons, a negative effect on 5 neurons, and no effect on 2 neurons (Fig. 12). The onset of ZoN stimulation was observed within 41 to 58 minutes. The results of injection of 3 mg/kg ZoN after the lesion showed a significant decrease in neuronal firing frequency compared to baseline activity in a paired t-test (t = -2.7142, df = 17; P < 0.005) (Figs. 12 and 13). In this group of 8 rats, 13 neurons were recorded; 3 mg/kg of ZoN had a positive effect on 6 neurons, a negative effect on 5 neurons, and no effect on 2 neurons (Fig. 14). The onset of ZoN stimulation was observed within 43 to 55 minutes.
4.Discussion
In the present study, saline and ZoN at 0.5, 1, 1.5, and 3 mg/kg were injected in rotenone-induced Parkinson's disease rats, and ZoN at 1, 1.5, and 3 mg/kg was found to decrease the spontaneous activity of CA1 pyramidal neurons, while the other dose had no significant effect. Sarbigi et al. (2019) reported that rotenone alters the electrical activity of the hippocampus and its associated behaviors; rotenone-treated animals showed a significant reduction in standing movements during the 3 weeks compared to control animals [8]. Zinc oxide nanoparticles cause a significant decrease in long-term memory retrieval in the passive avoidance learning model in rats [5]. Zinc oxide nanoparticles are also able to disrupt spatial memory in male Wistar rats in the Morris water maze [11].
Zinc ion is an antagonist of glutamate NMDA receptors. Zinc oxide nanoparticles reduce the activity of the NMDA receptor and, as a result, disrupt memory function [7]. Research has shown that this ion is able to enter the neuron through different routes, including glutamate receptors and L-type voltage-dependent calcium channels.
In small amounts, zinc ions support learning and memory, but they become harmful when their amount exceeds a certain limit [2]. Also, research has shown that zinc oxide nanoparticles attack the beta cells of the pancreas, causing the destruction of these cells and a reduction of the insulin hormone in the body. With the decrease of insulin, the phosphorylation state of the protein kinases GSK-3β and JNK in the cell is altered, which causes an increase in neurofibrillary tangles and amyloid plaques, followed by atrophy and neuronal death in different areas of the brain, including the hippocampus [1]. These nanoparticles also increase the production of reactive oxygen species and cell death by affecting the mitochondria of the cell. Exposure to different amounts of zinc oxide nanoparticles causes oxidative stress, lipid peroxidation, cell membrane damage, and DNA damage [4].
5.Conclusion
Based on the results of the present study and the studies discussed above, it seems that ZoN can at least partially reduce the activity of pyramidal neurons in the CA1 region of the hippocampus in rats with Parkinson's disease, and it could be a suitable option to be used in the treatment of nervous system diseases, including Parkinson's and Alzheimer's.
Conflicts of Interest
The author declares no conflict of interest. | 2023-07-12T08:30:49.817Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "99d8898b2fda12692f93ad725abddec39d37f345",
"oa_license": "CCBYNC",
"oa_url": "https://cnj.araku.ac.ir/article_705211_0a6c18632ace7c467c444d803f2776d3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "acdd42e0fdb67ace3bc8677fd8e430994f89be3d",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": []
} |
16428518 | pes2o/s2orc | v3-fos-license | Multiple dark-bright solitons in atomic Bose-Einstein condensates
We present experimental results and a systematic theoretical analysis of dark-bright soliton interactions and multiple-dark-bright soliton complexes in atomic two-component Bose-Einstein condensates. We study analytically the interactions between two dark-bright solitons in a homogeneous condensate and, then, extend our considerations to the presence of the trap. An effective equation of motion is derived for the dark-bright soliton center, and the existence and stability of stationary two-dark-bright soliton states is illustrated (with the bright components being either in- or out-of-phase). The equation of motion provides the characteristic oscillation frequencies of the solitons, in good agreement with the eigenfrequencies of the anomalous modes of the system.
I. INTRODUCTION
Over the past few years, the macroscopic nonlinear structures that can be supported in atomic Bose-Einstein condensates (BECs) have been a topic of intense investigation (see, e.g., Refs. [1][2][3][4] for reviews in this topic). The first experimental efforts to identify the predominant nonlinear structure in BECs with repulsive interatomic interactions, namely the dark soliton, were initiated over a decade ago [5][6][7][8][9]. However, these efforts suffered from a number of instabilities arising due to dimensionality and/or temperature effects. More recently, a new generation of relevant experiments has emerged, that has enabled the overcoming (or quantification) of some of the above limitations. The latter works have finally enabled the realization of oscillating, and even interacting, robust dark solitons in atomic BECs. This has been achieved by means of various techniques, including phase-imprinting/density engineering [10][11][12], matter-wave interference [13,14], or dragging localized defects through the BECs [15].
Atomic dark solitons may also exist in multi-component condensates, where they are coupled with other nonlinear macroscopic structures [1,2,4]. Of particular interest are dark-bright (DB) solitons that are supported in two-component [16] and spinor [17] condensates. Such structures are frequently called "symbiotic" solitons, as the bright-soliton component (which is generically supported in BECs with attractive interactions [3]) may only exist due to the interspecies interaction with the dark-soliton component. Dark-bright solitons have also attracted much attention in other contexts, such as nonlinear optics [18] and mathematical physics [19]. In fact, DB-soliton states were first observed in optics experiments, where they were created in photorefractive crystals [20], while their interactions were partially monitored in Ref. [21]. In the physics of BECs, robust DB-solitons were first observed in the experiment of Ref. [10] by means of a phase-imprinting method, and more recently in Refs. [22][23][24] by means of the counterflow of the two BEC components. The above efforts led to a renewed interest in theoretical aspects of this theme: this way, DB-soliton interactions were studied from the viewpoint of the integrable systems theory in Ref. [25], DB-soliton dynamics were investigated numerically in Ref. [26], while DB-solitons in discrete settings were recently analyzed in Ref. [27]. Furthermore, higher-dimensional generalizations (namely, vortex-bright-soliton structures) were recently studied as well [28].
Our aim in the present work is to study multiple-DB solitons in two-component BECs confined in harmonic traps. First, we present our experimental results, based on the counterflow of two rubidium condensate species [22][23][24], which demonstrate the existence (and indicate the robustness) of such multiple-DB-soliton states. Motivated by the experimental observations, we then proceed to analyze the interactions of DB solitons, first in the case of a homogeneous system and, next, in the presence of the trap. Our analytical approximation relies on a Hamiltonian perturbation theory, which leads to an equation of motion of the centers of DB-soliton interacting pairs. Employing this equation of motion, we demonstrate the existence of stationary two-and three-DB-soliton states, find semi-analytically the equilibrium distance of the constituent solitons, as well as the oscillation frequencies around these equilibria. The oscillation frequencies correspond to the characteristic anomalous modes' eigenfrequencies that we compute via a Bogoliubov-de Gennes (BdG) analysis. This way, we are able to quantify the properties of stationary multiple-DB-solitons in harmonically confined two-component BECs, and provide analytical results for their in-and out-of-phase motions.
The paper is organized as follows. In Section II, we provide our experimental results. In Section III we describe our theoretical setup and present the DB-soliton states. Section IV is devoted to the study of the interactions of two DB-solitons, while Section V contains the results for multiple DB-solitons in the trap. Finally, in Section VI we summarize our findings and discuss future challenges.
II. EXPERIMENTAL RESULTS -MOTIVATION
Since our scope is the study of multiple-DB-solitons in atomic BECs, we start by presenting some experimental results, which showcase the existence of such structures. These results, apart from being interesting in their own right -as they demonstrate the formation of DB-soliton clusters in twocomponent BECs-provide the motivation for a systematic analysis of multiple-DB-solitons, which will be presented in the following sections.
Our soliton generation scheme is based on a counterflow-induced modulational instability, details of which have been described in Refs. [22,23]. Briefly, we start with a BEC of about 800,000 87Rb atoms in the |F, m_F⟩ = |1, −1⟩ hyperfine state. The atoms are confined in an elongated optical dipole trap with measured trap frequencies of 2π×{1.5, 140, 178} Hz. About half the atoms are then transferred to the |2, −2⟩ state with a brief microwave sweep, thus producing a weakly miscible two-component mixture. Subsequently, a magnetic gradient of 10.4 mG/cm is applied along the elongated axis of the BEC, inducing counterflow of the two components. As a result, a dense modulational instability pattern arises. Once the pattern has fully developed, the gradient is turned off. During the subsequent in-trap evolution, the initially very regular pattern becomes irregular, and over a time scale of several seconds displays the dynamics of interacting solitons originating from the modulational instability. Images taken in several experimental runs 5 sec after the switch-off of the gradient are presented in Fig. 1, in which the upper cloud (and the red curves in the insets) in each image shows atoms in the |2, −2⟩ state after 7 ms of free expansion, while the lower cloud (and black curves in the insets) shows the atoms in the |1, −1⟩ state after 8 ms of expansion. This difference in expansion time simply serves to separate the two states vertically for imaging.
An intriguing observation is the frequent formation of large gaps in one component (which constitutes the component supporting the dark-solitons) that are filled by bright-solitons in the other component. Interestingly, these gaps are structured by small, periodic density bumps, indicating that these regions are composed of merged solitons. Some of these features are marked by the boxed regions in Fig. 1, with corresponding cross sections shown as insets. We clearly observe clusters of two-and three-merged solitons [see Fig. 1(a-c)], and also have some indications of clusters composed of four-to five-solitons -see Fig. 1(d, e).
While our destructive imaging technique does not allow us to analyze the dynamics and lifetime of the clusters in detail, the occurrence of large DB-soliton clusters strongly supports the theoretical part of our work that we will present below: in fact, we will study analytically the interaction between two-DB solitons, and we will demonstrate the existence of twoand multiple-DB stationary states, resembling the ones observed in the experiment. Furthermore, we will study the stability of these states and discuss their dynamics in the presence of the harmonic trap.
A. Coupled GPEs and dark-bright solitons
Following the experimental observations of the previous section, we consider a two-component elongated (along the x-direction) BEC, composed of two different hyperfine states of rubidium. As in the experiment, we consider a highly anisotropic trap, with longitudinal and transverse trapping frequencies such that ω_x ≪ ω_⊥. In the framework of mean-field theory, the dynamics of this two-component BEC can be described by the following system of two coupled GPEs [1,2,4]: Here, ψ_j(x, t) (j = 1, 2) denote the mean-field wave functions of the two components (normalized to the numbers of atoms N_j = ∫_{−∞}^{+∞} |ψ_j|² dx), m is the atomic mass, µ_j are the chemical potentials, and V(x) represents the external harmonic trapping potential, V(x) = (1/2)mΩ²x², where Ω = ω_x/ω_⊥. In addition, g_jk = 2ħω_⊥ a_jk are the effective 1D coupling constants, and a_jk denote the three s-wave scattering lengths (note that a_12 = a_21) accounting for collisions between atoms belonging to the same (a_jj) or different (a_jk, j ≠ k) species. In the case of the hyperfine states |1, −1⟩ and |2, −2⟩ of 87Rb considered in the previous section, the scattering lengths take the values a_11 = 100.4a_0, a_12 = 98.98a_0 and a_22 = 98.98a_0 (where a_0 is the Bohr radius) [22,23]. Thus, we may safely use the approximation that all scattering lengths take the same value, say a_ij ≈ a [38]. To this end, measuring the densities |ψ_j|², length, time and energy in units of 2a, a_⊥ = √(ħ/mω_⊥), ω_⊥^{-1} and ħω_⊥, respectively, we may reduce the system of Eqs. (1) to the following dimensionless form, Below, we will consider a situation where the component characterized by the wavefunction ψ_1 (ψ_2) supports a single- or a multiple-dark (bright) soliton state, and the respective chemical potentials will be such that µ_1 > µ_2. Note that the external potential in Eqs. (2) is the dimensionless V(x) = (1/2)Ω²x². We assume that a single- or a multiple-dark-soliton state is on top of a Thomas-Fermi (TF) cloud with density |ψ_TF|² = µ_1 − V(x); this way, the density |ψ_1|² in Eqs. (2) is substituted as |ψ_1|² → |ψ_TF|²|ψ_1|². Furthermore, we introduce the transformations t → µ_1 t, x → √µ_1 x, |ψ_2|² → µ_1^{-1}|ψ_2|², and cast Eqs. (2) into the following form: where µ̃ = µ_2/µ_1, while the resulting Eqs. (3)-(4) can be viewed as a system of two coupled perturbed nonlinear Schrödinger (NLS) equations, with perturbations given by Eqs. (5). In the absence of the trap (i.e., for Ω = 0), the perturbations vanish and Eqs. (3)-(4) actually constitute the completely integrable Manakov system [29]. This system conserves, among other quantities, the Hamiltonian (total energy), as well as the total number of atoms, ∫_{−∞}^{+∞} (|ψ_1|² + |ψ_2|²) dx; additionally, the number of atoms of each component, N_1 and N_2, is separately conserved.
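Several display equations in this subsection appear to have been lost in extraction. As a hedged reconstruction from the definitions given in the text (and the standard quasi-1D reduction they imply), Eqs. (1) and (2) presumably read:

```latex
% Hedged reconstruction; not copied verbatim from the source.
% Eq. (1): dimensional coupled GPEs, with the symbols defined in the text
i\hbar\,\partial_t \psi_j =
  \Big[-\frac{\hbar^{2}}{2m}\,\partial_x^{2} + V(x)
        + \sum_{k=1}^{2} g_{jk}\,|\psi_k|^{2} - \mu_j\Big]\psi_j ,
  \qquad j = 1,2.

% Eq. (2): dimensionless form after the rescalings quoted in the text
% (equal scattering lengths a_{jk} \approx a)
i\,\partial_t \psi_j =
  \Big[-\tfrac{1}{2}\,\partial_x^{2} + V(x)
        + |\psi_1|^{2} + |\psi_2|^{2} - \mu_j\Big]\psi_j ,
  \qquad V(x) = \tfrac{1}{2}\,\Omega^{2}x^{2}.
```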
Considering the boundary conditions |ψ 1 | 2 → 1 and |ψ 2 | 2 → 0 as |x| → ∞, the NLS Eqs. (3)-(4) possess an exact analytical single-DB soliton solution of the following form (see, e.g., Ref. [16]): where φ is the dark soliton's phase angle, cos φ and η represent the amplitudes of the dark and bright solitons, D and x 0 (t) denote the width and the center of the DB soliton, while k = D tan φ = const. and θ(t) are the wavenumber and phase of the bright soliton, respectively. The above parameters of the single DB-soliton are connected through the following equations: whereẋ 0 = dx 0 /dt is the DB soliton velocity. Below, we will mainly focus on stationary solutions, characterized by a dark soliton's phase angle φ = 0 [in this case, the bright soliton component is stationary as well -see Eq. (10)]; nevertheless, we will also consider the near-equilibrium motion of DB solitons, characterized by φ ≈ 0.
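Two of the display equations referenced in the preceding paragraph were also lost in extraction. A hedged reconstruction: the single-DB-soliton waveform of Eqs. (7)-(8) presumably takes the standard form of Ref. [16] (the exact phase convention of the bright component is an assumption), and the "approximately twice as large" norm quoted just above, Eq. (14), follows directly from the single-soliton value:

```latex
% Hedged reconstruction of Eqs. (7)-(8), consistent with the parameter
% definitions quoted in the text:
\psi_1(x,t) = \cos\varphi\,\tanh\!\big[D\,(x-x_0(t))\big] + i\,\sin\varphi ,
\qquad
\psi_2(x,t) = \eta\,\mathrm{sech}\!\big[D\,(x-x_0(t))\big]\,
              e^{\,i k x + i\theta(t)} .

% Eq. (14): bright-soliton norm of a well-separated two-DB-soliton state,
% roughly twice the single-soliton value N_2 = 2\eta^2\sqrt{\mu_1}/D:
N_2 \approx \frac{4\,\eta^{2}\sqrt{\mu_1}}{D}.
```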
To describe a two-DB-soliton state (for Ω = 0) composed by a pair of two equal-amplitude single DB solitons traveling in opposite directions, we will use the following ansatz: where X ± = D (x ± x 0 (t)), 2x 0 is the relative distance between the two solitons, and ∆θ is the relative phase between the two bright solitons (assumed to be constant); below we will consider both the out-of-phase case, with ∆θ = π, as well as the in-phase case, corresponding to ∆θ = 0. Notice that in either cases of single-or multiple-DBsolitons, the number of atoms of the bright soliton, N 2 , may be used to connect the amplitude η of the bright soliton(s), the chemical potential µ 1 of the dark soliton(s) component, as well as the width D of the DB soliton. In particular, in the case of a single-DB-soliton, one finds that N 2 = 2η 2 √ µ 1 /D [for the variables appearing in Eqs. (2)], while for the case of a two-DB-soliton state (with well-separated solitons) the relevant result is approximately twice as large, namely: In our numerical computations, we will initially obtain (by means of a fixed-point algorithm) stationary solutions of Eqs. (2), in the form ψ 1 (x, t) = u(x) and ψ 2 (x, t) = v(x), and then consider their linear stability. This is numerically studied via the well-known BdG analysis (see, e.g., Refs. [1,2,4]), upon introducing the following ansatz into Eqs. (2), The resulting equations are linearized (keeping only terms of order of the small parameter ε), and the ensuing eigenvalue problem for eigenmodes {a(x), b(x), c(x), d(x)} and eigenvalues λ = λ r + iλ i is solved [note that the asterisk in Eqs. (15)-(16) denotes complex conjugation]. In the case of a single DB soliton, the excitation spectrum can be wellunderstood in both cases, corresponding to the absence and the presence of the harmonic trap, using the following arguments.
First, in the absence of the trap, the system of Eqs. (2) features not only a U(1) (phase) invariance in each of the components but also a translational invariance; thus, the system has three pairs of eigenvalues (each associated with one of the above symmetries) at the origin of the spectral plane (λ_r, λ_i). In this case, the phonon band (associated with the continuous spectrum of the problem) covers the entire imaginary axis of the spectral plane.
Second, in the presence of the trap, the single DB soliton "lives" on the background of the confined ground state, i.e., {ψ 1 , ψ 2 } = {ψ TF , 0} (as discussed above). It is wellknown [1,2] that the harmonic potential introduces a discrete (point) BdG spectrum for this spatially confined ground state. In addition to that, the translational invariance of the unconfined system is broken and, due to the presence of the DB soliton, a single eigenvalue λ (AM) emerges. The respective (negative energy) eigenmode is the so-called anomalous mode (AM), while the associated eigenvalue λ (AM) is directly connected with the oscillation frequency of the DB soliton in the harmonic trap, similarly to the case of a dark soliton in one-component BECs [30]. In fact, the imaginary part of the eigenvalue λ (AM) reads λ where ω osc is the oscillation frequency of the single DB soliton, given by [16]: The above results are illustrated in Fig. 2, where a typical example of a stationary single DB-soliton state is depicted (top panel); additionally, the eigenvalues λ i characterizing the excitation (BdG) spectra of such stationary states, are shown as functions of the chemical potentials µ 1 and µ 2 in the middle and bottom panels of the figure, respectively. As observed in these two bottom panels, there exist two types of spectral lines, namely "slowly-varying" ones (analogous to ones that are present in the spectrum of a dark soliton in one-component BECs [13]) and "fast-varying" ones due to the presence of the second (bright-soliton) component. The latter, as was pointed out also in the recent work of Ref. [24] may, in fact, collide with the internal anomalous mode of the DB soliton and give rise to instability quartets which are barely discernible in Fig. 2 (see, e.g., the bottom panel for µ 2 > 1.4 where a merger of eigenvalues occurs). Generally, however, it is found that the analytical prediction (red dashed line) is excellent in capturing the relevant anomalous mode pertaining to the DB-soliton oscillation.
The above discussion sets the stage for the presentation of our results for multiple DB-soliton states.
IV. INTERACTION BETWEEN TWO DARK-BRIGHT SOLITONS
We start by considering the case where the external trap is absent, i.e., for Ω = 0. To study the interaction of two identical DB solitons, as described in the ansatz of Eqs. (12)-(13), we will employ the Hamiltonian approach in the framework of the adiabatic approximation of the perturbation theory for matter-wave solitons (see, e.g., Refs. [2,4]). In particular, we assume that the approximate two-DB-soliton state features an adiabatic evolution due to a weak mutual interaction between the constituent solitons and, thus, the DB soliton parameters become slowly-varying unknown functions of time t. Thus, φ → φ(t), D → D(t) and, as a result, Eqs. (9)-(10) become: x where we have used Eq. (14). The evolution of the parameters φ(t), D(t) and x 0 (t) can then be found by means of the evolution of the DB soliton energy as follows. First, we substitute the ansatz (12)-(13) into Eq. (6) and perform the integrations under the assumption that the soliton velocity is sufficiently small, such that cos(kx) ≈ 1 (and sin(kx) ≈ 0). Then, we further simplify the result assuming that the solitons are wellseparated, i.e., their relative distance is x 0 ≫ 1. This way, we find that the total energy of the system assumes the following form: where E 1 is the energy of a single DB soliton, namely, while the remaining terms account for the interaction between the two DB solitons. In particular, E DD , E BB , and E DB denote, respectively, the interaction energy between the two dark solitons, the two bright ones, and the interaction energy between the dark soliton of one component and the bright one in the other component. The above interaction energies are given by the following (approximate) expressions: where terms of order O(e −6Dx0 ) and higher have been neglected (nevertheless, it has been checked that their contribution does not alter the main results that will be presented below).
Having determined the two-DB-soliton energy [up to order O(e −6Dx0 )], we can find the evolution of the soliton parameters from the energy conservation, dE/dt = 0. In fact, we focus on the case of low-velocity, almost black solitons (witḣ D(t) ≈ 0 and cos φ(t) ≈ 1), for which energy conservation leads to the following nonlinear evolution equation for the DB soliton center:ẍ In the above equations, F int is the interaction force between the two DB solitons (depending on the soliton coordinate x 0 ), which contains the following three distinct contributions: the interaction forces F DD and F BB between the two dark and two bright solitons, respectively, as well as the interaction force F DB of the dark soliton of the one soliton pair with the bright soliton of the other pair. These forces have the following form: where D(t) ≈ D 0 since we are assuming thatḊ(t) ≈ 0. The equation of motion for the two-DB-soliton state [cf. Eq. (26)] provides a clear physical picture for the interaction between the two DB solitons. In order to better understand this result, first we note that the leading order interaction force between the bright soliton components is ∝ exp(−2D 0 x 0 ) and, as a result, it is decaying more slowly for large x 0 than the one between the two dark solitons which is ∝ exp(−4D 0 x 0 ); the interaction between dark and bright is also to leading order ∝ exp(−2D 0 x 0 ). This result is in accordance with earlier predictions, where the same dependence of the force over the soliton separation was found (see, e.g., Refs. [31] and [14,32,33] for bright and dark solitons, respectively).
Let us now consider the role of the bright-soliton component. In its absence, i.e., for χ = 0 [cf. Eq. (19)], it is clear that F BB = F DB = 0 and Eq. (26) describes the interaction between two dark (almost black) solitons; in this case, taking into regard that D 0 = 1, it can readily be found that the pertinent (repulsive) interaction potential is ∝ 2 exp(−4x 0 ), which coincides with the result of Ref. [33] obtained by means of a variational approach (see also a relevant discussion in Refs. [4,14]). On the other hand, when bright solitons are present (i.e., for χ = 0), the principal nature of the brightbright-soliton interaction -and also of part of the dark-brightsoliton interaction one-depends on the factor cos ∆θ: when the relative phase between the bright-soliton components is ∆θ = 0 (∆θ = π), i.e., in the in-phase (out-of-phase) case, the interaction is repulsive (attractive). This conclusion stems from the fact that the coefficients of the terms ∝ cos ∆θ are positive definite since the parameter D 0 ≥ 1 for every χ > 0.
According to the above, it is clear that the competition between repulsive (for dark solitons) and attractive (for out-ofphase bright solitons) forces leads to the emergence of fixed points in the equation of motion (26) or, in other words, to the existence of a stationary two-DB-soliton state [39]. Below we will demonstrate this effect in detail: we will determine the fixed (equilibrium) points, x eq , as solutions of the transcendental equation resulting from Eq. (26) forẍ 0 = 0 in the out-of-phase case, and subsequently study their stability. Nevertheless, before proceeding further, we should mention that stationary two-DB-solitons were also found numerically and experimentally in Ref. [21] in the context of nonlinear optics, but their existence details and stability properties were not considered. Additionally, although exact two-DB-soliton solutions (as well as N -DB-soliton solutions) do exist in the Manakov system [25,34], their complicated form does not allow for a transparent physical description of the relevant dynamics, as provided above.
Let us now study the stability of the equilibrium points in the framework of Eq. (26). Introducing the ansatz x 0 (t) = x eq + δ(t), and linearizing with respect to the small-amplitude perturbation δ(t), we derive the following equation: where the oscillation frequency ω 0 is given by: where the phase-difference between the bright-soliton components is taken to be ∆θ = π. Physically speaking, the oscillation frequency ω 0 represents the internal (out-of-phase) motion of the two DB-solitons; in fact, as here we deal with the homogeneous case (i.e., in the absence of the trap), the in-phase motion of the solitons is associated with the neutral translation mode due to the translational invariance of the system (the respective in-phase Goldstone mode has a vanishing frequency). The above analytical predictions have been compared with numerical simulations. First, we have confirmed the existence of the stationary two-DB-soliton state (in the out-of-phase case); a prototypical example of such a state is shown in the top panel of Fig. 3 (for µ 1 = 3µ 2 /2 = 3/2). We have also determined the dependence of the equilibrium soliton positions (denoted by x 0 in the middle panel of Fig. 3) Fig. 3. To obtain the numerical results, we have used a (least squares) fitting algorithm to accurately identify the amplitude η, inverse width D, and equilibrium center of mass x 0 of the bright component. The numerical findings for x 0 and ω 0 (the latter is obtained via a BdG analysis, as the imaginary eigenvalue λ i of the stationary two-DB-soliton state) are directly compared with the semi-analytical results of Eqs. (26) and (32), respectively. Taking into account the approximate nature of the fitting scheme, we find that there is a very good quantitative agreement between the analytical and numerical results (see middle and bottom panels of Fig. 3). Notice that despite the motion of this eigenvalue through the continuous spectrum, no instability is observed in the parametric window shown in Fig. 3.
V. MULTIPLE DARK-BRIGHT SOLITONS IN THE TRAP
Next, let us consider the case of multiple DB-solitons in the presence of the harmonic trap. In the presence of the trap, each of the multiple-DB-soliton structures is subject to two forces: (a) the restoring force of the trap, F tr [in the case of a single DB-soliton, this force induces an in-trap oscillation with a frequency ω osc -see Eq. (17)], and (b) the pairwise interaction force F int [cf. Eq. (27)] with other dark-bright solitons. Thus, taking into regard that F tr = −ω 2 osc x 0 [16], one may write the effective equation of motion for the center x 0 of a two-DB-soliton state as follows: One can thus straightforwardly generalize the above equation for N -interacting DB-soliton states, similarly to the case of multiple dark solitons in one-component BECs [13,14,35].
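The display equation announced just above ("as follows:") appears to have been lost in extraction. Combining the trap restoring force F_tr = −ω²_osc x_0 with the pairwise interaction force F_int of Eq. (27), a hedged reconstruction of Eq. (33) is (any effective-mass prefactor carried by Eq. (26) is assumed to be absorbed into F_int):

```latex
% Hedged reconstruction of the effective equation of motion, Eq. (33):
\ddot{x}_0 = -\,\omega_{\mathrm{osc}}^{2}\,x_0 + F_{\mathrm{int}}(x_0).
```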
It is interesting to observe that, in the presence of the trap, the restoring force F tr can generate equilibrium positions, not only for out-of-phase bright solitons (whereby such a state could be stationary even without the trap as found above), but also for in-phase bright solitons. In the latter case, the repulsion between both the dark-and the bright-soliton component(s), is balanced by the trap-induced restoring force. In the case of two-DB solitons placed at x = ±x 0 , the equilibrium points, x eq , can readily be found (as before) as solutions of the transcendental equation resulting from Eq. (33) forẍ 0 = 0, in both the in-and out-of-phase cases. To study the stability of these equilibrium points in the framework of Eq. (33), we may again use the ansatz x 0 (t) = x eq + δ(t), and obtain a linear equation for the small-amplitude perturbation δ(t), similar to that of Eq. (31), namely:δ + ω 2 1 δ = 0, where the frequency ω 1 is given by, where ω 0 is given by Eq. (32). Similarly to the case of dark solitons in one-component BECs [14] (see also Ref. [4]), by construction, this mode captures the out-of-phase motion of the DB-soliton pair. Furthermore, by symmetry, the in-phase oscillation of the DB-soliton pair in the trap will be performed with the frequency ω osc . These two characteristic frequencies coincide with the eigenfrequencies of two anomalous modes of the BdG spectrum of the trapped DB-soliton pair (see, e.g., Refs. [4,14,36] for a relevant discussion of such modes). The rest of the spectrum will still be discrete (due to the presence of the harmonic trap -see also Sec. III), but in this case also collisions of these anomalous modes of the DB-solitons with the modes of the background may induce instabilities (see also Ref. [14]) -see below.
We now turn to a systematic numerical investigation of the above features and of the multiple-DB-soliton states. At first, considering the in-phase case, it is observed that a larger chemical potential (number of atoms) in the second component leads to stronger repulsion and, hence, a larger distance from the trap center. In the out-of-phase case (bottom right panel), we observe a similar effect but in the reverse direction (due to the attraction of the out-of-phase bright-soliton components) for smaller values of the chemical potential. Notice that in both cases a good agreement is observed between the numerically observed equilibrium separations and the theoretically predicted ones from Eq. (33).
To study the validity of Eq. (34), pertinent to small-amplitude oscillations around the fixed points, we compare the predicted oscillation frequencies with the numerically obtained BdG eigenfrequencies. Once again, good agreement is observed between the two; the differences may be partially attributed to the "interaction" (i.e., collisions) of these modes with other modes of the BdG spectrum. It is clear from the comparison of the corresponding columns that there exist narrow instability windows, arising due to the crossing of the anomalous mode(s) of the DB-soliton pair with eigenmodes of the background of the two-component system. These instabilities arise in the form of Hamiltonian-Hopf bifurcations [37] through the emergence of quartets of complex eigenvalues resulting from the collision of two pairs. The growth rates of the pertinent oscillatory instabilities are fairly small (i.e., the instabilities are weak) in both the in- and out-of-phase cases; it should be noted, however, that in the latter case, the formation of the quartets appears to be occurring in very narrow intervals. Naturally, the above considerations can also be generalized to three or more DB-solitons, although the analytical calculations become increasingly more tedious; again, as we will show below, in-phase or out-of-phase stationary three-DB-soliton configurations can be constructed as well. [Figure caption: The left and right columns of panels correspond, respectively, to an in-phase and an out-of-phase three-DB-soliton configuration. The top row of panels depicts the respective stationary states, for µ1 = 3/2, µ2 = 1 and Ω = 0.1; solid (blue) lines depict the dark-soliton components, dashed (green) lines the bright ones, while the dashed-dotted (red) line shows the harmonic trap. The second row of panels depicts the spectral planes for the above stationary states, while the third and fourth rows of panels are equivalent to those of Fig. 5, but for the three-DB-soliton configurations.] The first column of this figure corresponds to the in-phase three-DB-soliton state, while the second column corresponds to the out-of-phase variant thereof. In the case under consideration, there exist narrow parametric intervals of dynamical instability, which are narrower for the out-of-phase case (as in the case of the two-DB-soliton states). We should mention, in passing, that the dynamics of two- and three-DB soliton configurations was recently studied in Ref. [26]; our study complements the latter by yielding analytical approximations and a numerical continuation/bifurcation approach towards such states.
VI. CONCLUSIONS AND DISCUSSION
In the present work, we have studied multiple quasione-dimensional dark-bright (DB) solitons in atomic Bose-Einstein condensates. Our theoretical results were motivated and supported by the experimental evidence of the formation of DB-soliton clusters in a two-component, elongated rubidium condensate, confined in a harmonic trap. The theoretical analysis was based on the study of two coupled, onedimensional Gross-Pitaevskii equations.
Starting from the case of a homogeneous condensate (i.e., in the absence of a trapping potential), we have employed a Hamiltonian perturbation theory to analyze the interaction between two DB-solitons. Assuming that the DB-solitons are of low velocity and sufficiently far from each other, we have found approximate expressions for the interaction forces between the same or different soliton components. This way, we derived a classical equation of motion for the center of mass of the DB-soliton pair, and revealed the role of the phasedifference between the bright-soliton components: we have shown, in particular, that the repulsion between the dark soliton components may be counter-balanced by the attraction between out-of-phase bright components, thus inducing the existence of stationary DB-soliton pairs even in the case when the external trapping potential is absent. We have found the equilibrium distance between the two DB solitons that compose the stationary DB-soliton pair, with the semi-analytical result being in excellent agreement with the relevant numerical one. Additionally, we have demonstrated the linear stability of these stationary DB-soliton pairs by means of analytical and numerical techniques [the latter were based on a Bogoliubov-de Gennes analysis]. It was shown that the analytical result for the oscillation frequency of small-amplitude perturbations around the equilibrium distance is in excellent agreement with the pertinent eigenvalue characterizing the excitation spectrum of the DB-soliton pair.
We have then studied multiple-DB-solitons in the trap. In this case, we have employed a simple physical picture, where the total force acting on the DB-solitons was decomposed to an interaction force (derived in the homogeneous case) and a restoring force induced by the trapping potential; the relevant characteristic frequency associated with the latter was the oscillation frequency of a single-DB-soliton in the trap (which was found to coincide with the pertinent anomalous-mode eigenvalue of the single DB soliton system). Following this approach, we were able to find stationary in-trap DB-soliton pairs even in the case where the bright-soliton components were repelling each other: in this case, the trap-induced restor-ing force was able to counter-balance the repulsive forces between the dark-and the bright-soliton components. The semianalytical results for the equilibrium distance and the oscillation frequencies (for the in-and out-of-phase cases) were again found to be in very good agreement with respective numerical results, including the anomalous modes' eigenfrequencies pertaining to the in-and out-of-phase motion of solitons. The stability analysis of the DB-solitons in the trap indicated the possibility of the existence of unstable modes through Hamiltonian-Hopf instability quartets, although the latter would typically only arise over narrow parametric intervals -and with rather weak instability growth rates. Results pertaining to three-DB-solitons in the trap were presented as well; the main features of these states were found to be qualitatively similar to the ones of the DB-soliton pairs.
Coming back to our experimental findings, we should note the following. Given the nature of the available initial conditions, our experimental observations were not able to identify, in a straightforward way, genuinely stationary DB-soliton complexes and/or to identify precisely their internal modes. Nevertheless, the frequent and persistent occurrence of DB-soliton clusters in the experiment is highly indicative of the robustness of such "DB-soliton molecules". This is in tune with our existence and stability results.
It would be particularly interesting to further explore the dynamics of multiple-DB-soliton complexes, and potentially the formation of "DB-soliton gases" comprising such interacting atomic constituents. Deriving Toda-lattice-type equations describing such gases, and identifying their stationary states, excitations and (mesoscopic) solitons (as in the case of single-component dark solitons [35]), would be challenges for future work. Another possibility is to extend the present considerations to the vortex-bright solitons found in Ref. [28]. There, it would be relevant to identify whether molecular states consisting of two-or of three-vortex-bright solitons can be constructed, and whether the relative phases of 0 and π between the bright components can still yield different stationary states. Relevant studies are presently in progress. | 2011-04-21T22:18:37.000Z | 2011-04-21T00:00:00.000 | {
"year": 2011,
"sha1": "987d992de66e151bf7222d66817f6596c1cece28",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.84.053630",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "987d992de66e151bf7222d66817f6596c1cece28",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
246947195 | pes2o/s2orc | v3-fos-license | Pre-Crime Prediction: does It Have Value? Is It Inherently Racist?
This paper considered the emerging use of predictive analytics in the justice domain with respect to potential bias. It discussed predictive algorithms and methods from the perspectives of reported crime and community safety in the United States. Although predictive algorithms, techniques, and implementation contexts are emerging, imperfection exists with respect to their use. Despite any effectiveness or efficiency of using predictive algorithms, such use should neither deny human rights nor transgress societal laws. Regardless, the emergence of predictive policing fuels and enhances the classic debate of balancing liberty versus security within a civil society.
From a law enforcement and community safety perspective, the foundation for crime prediction is the concept that people behave predictably (to some degree) and future behavior may be both anticipated and predicted (Hayes, 2015). If Hayes's assertion was true, can human behavior data be analyzed to examine whether behavior patterns may be anticipated? As a result, could more efficient intervention to deter crime and maintain societal order be crafted while not crossing the line with civil or human rights violations?
PREDICTIVE ALGORITHMS
Cambridge Dictionary (n.d.) defined algorithms as "a set of mathematical instructions or rules that, especially if given to a computer, will help to calculate an answer to a problem." Predictive algorithms relied on artificial intelligence (AI) being applied to machine learning (Marr, 2016a), and all three (algorithms, machine learning, and artificial intelligence) were based on mathematical principles, such as probability theory and inferential statistics. Rigano (2019, para. 4) indicated that, "Conceptually, AI is the ability of a machine to perceive and respond to its environment independently and perform tasks that would typically require human intelligence and decision-making processes, but without direct human intervention." While people tend to think of mathematics as dealing in absolute truths and as an objective science, O'Neil (2016, 2017), a mathematician and data scientist, insisted that algorithms were nothing more than opinions embedded in code. An algorithm was a computer-coded instruction, written by human programmers, that allowed the discerning of patterns within massive amounts of historical data. Afterward, by assuming found patterns were fixed facts, outcomes of future predictions were provided for single locations and/or individual people (Ferguson, 2017a). In the case of crime prediction, 'hot spots' were flagged. Regarding recidivism risk, a single score was assigned to individuals identifying them within the justice system as having a high risk to recidivate. These predictive algorithms were used, over the past half dozen years, as supportive tools for decision-making within all components of the criminal justice system (courts, corrections, and law enforcement). Within policing, predictive algorithms have become a "multi-million dollar business" (Ferguson, 2017a, p. 1132).
LAW ENFORCEMENT DATA ANALYSIS
It should be unsurprising that law enforcement in the United States uses such emerging technologies toward more efficiently providing a wide range of services. Business entrepreneurs identified the law enforcement community as a consumer of technology. They continuously adapted technologies to appeal to the law enforcement arena.
The nature of law enforcement was historically reactive by responding to calls for assistance. Law enforcement agencies across the nation used 'hot spot analysis' in an attempt to anticipate future service needs. For many agencies, the analysis consisted of compiling lists of reported crimes and calls for assistance, overlaying those locations onto a map, and assigning departmental resources within those areas. The public safety sector identified and employed strategies from the ones that were capable of employing and utilizing resources (McElreath et al., 2014). Regarding a primary goal of predictive policing, the focus shifted toward forestalling future crime rather than emphasizing traditional crime response (Zedner, 2007). This was a focus on pre-crime versus post-crime. Pre-crime, as a term, was first used by science fiction author, Philip K. Dick, in a novella titled The Minority Report, which later became a 2002 Steven Spielberg blockbuster movie (Wray, 2018).
Various RAND researchers identified several categories of predictive methods in use to forecast places vulnerable to crime, persons at risk of offending in the future, likely offenders of past crimes, and potential victims of crime (Hayes, 2015). The RAND researchers cautioned, however: "[C]ompared with predictions related to spatiotemporal crime techniques, methods for making predictions involving people are much less mature" (Perry et al., 2013, p. 81) and recognized the potential civil rights violations inherent in predictive methods targeting individuals. The Level of Service Inventory -Revised (LSI-R) was a frequently used tool to assess and classify offender risk by using 54 items to calculate a single score for any person; the higher the score, the greater the presumed risk (Perry et al., 2013, p. 81). O'Neil (2016) criticized the LSI-R for using information that was inadmissible in a court of law because of its prejudicial nature, such as "circumstances of a criminal's birth and upbringing, including his or her family, neighborhood, and friends" (p. 26). While some states used the LSI-R merely as a tool in order to determine who should receive anti-recidivism treatment programs while imprisoned, other states depended on the scoring to make decisions on length of sentencing (O'Neil, 2016). Thus, there is a qualitative difference in whether the predictive method was used to help or harm an individual inmate. Ultimately, the decision on whether applying such a predictive technique was constitutional depended entirely on the palliative-punitive intention.
While the potential application of the technology was significant, in one way or another predictive policing was implemented for years and predated the development of the computer (McElreath et al., 2013, p. 229). Doss et al. (2013, p. 501) indicated that early predictive approaches attempted identification of "individual tendencies" or "key indicators" through which predicting future behaviors could occur. Additionally, Holmes and Holmes (2009) indicated that profiling was used for predicting risks of criminal incidents and criminal behaviors. Law enforcement agencies have long identified high crime areas and criminal hot spots into which increased resources are typically directed. Drawing from earlier theories of resource utilization in law enforcement, the concept of preventive patrol, which embraced the idea that significant law enforcement presence in an area would deter criminal activity, was accepted as a productive enforcement strategy. The use of this technology for predicting crime opened a new dimension in the strategic response to crime as the technology evolved.
PRE-CRIME ANALYSIS AND PREDICTIVE ENFORCEMENT
The goal of predictive policing was to forecast where and when crimes would occur (Moses & Chan, 2016). However, was data analysis for the purpose of pre-crime prediction another form of profiling, which in itself was controversial? While solid data from which to determine true value was largely unavailable, various firms worldwide claimed and anticipated its value.
The tech firm Predictive Policing (PredPol), a California company formed in 2012, claimed data analytics algorithms could improve crime detection by somewhere between 10% to 50% in some cities (Smith, 2018) through identification of areas in a neighborhood where serious crimes were more likely to occur during a particular period (Rieland, 2018). PredPol used an analysis initially developed for earthquake prediction, which was modified to predict crime (Moses & Chan, 2016). Police department officials used this data to direct assignment of resources toward deterring criminal activity or responding more efficiently to incidents (Lapowsky, 2018). PredPol maintained a focus on using past data to predict future property crimes. Three data points were used to develop its analytical model: crime type, crime location, and crime date/time.
Within the PredPol (2020) Internet site, an overview of the algorithm was displayed, and the site explained how the model included the three aspects of offender behavior: repeat victimization, nearrepeat victimization, and local search. Repeat victimization assumed, for instance, that once a house was burglarized, the likelihood of that same house being burglarized again was increased. Near-repeat victimization assumed that every house in the surrounding neighborhood would then be at increased risk of being burglarized, and local search assumed that burglars would continue to operate within their established comfort zone area. The premise was that past behavior or events would reliably predict future behavior or future events. At least 60 police departments nationwide used Predpol as a predictive policing tool (Puente, 2019). The Los Angeles Police Department (LAPD) developed PredPol in collaboration with a UCLA professor. After a 2019 internal audit, the LAPD reported that there was not enough evidence to definitively state that PredPol was instrumental in reducing crime (Puente, 2019). Such lack of evidence was unsurprising. For instance, despite the usefulness of artificial intelligence approaches, human knowledge regarding initiatives involving data management and AI was constrained with respect to strategic initiatives (Lichtenthaler, 2021). Lum and Isaac (2016) and the Human Rights Data Analysis Group (HRDAG) compared police records in Oakland, California, with a "demographically representative synthetic population" (para. 16) database of all crimes in Oakland, whether reported to police. This was done by estimating the number of drug users in that area through utilization of the 2011 National Survey on Drug Use and Health (O'Donnell, 2019). Lum and Isaac (2016) found PredPol dispatched police more often to minority neighborhoods and concluded that the PredPol algorithm tended to target Blacks at two times the rate of Whites. This disparate outcome was a consequence of a "pernicious feedback loop" (O'Neil, 2016, p. 87) in which predictive policing deployment created new data which justified and reinforced previous patterns observed in the original data.
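The repeat and near-repeat logic described above is commonly modeled with a self-exciting point process, in which each past event temporarily raises the predicted rate of new events nearby. The snippet below is a hypothetical illustration of that general idea only, not PredPol's proprietary algorithm; all event times, locations, and parameters are made up.

import numpy as np

# Hypothetical past burglary events: (time in days, x, y grid coordinates)
events = np.array([[1.0, 2.0, 3.0],
                   [4.0, 2.5, 3.5],
                   [9.0, 8.0, 1.0]])

mu = 0.05     # assumed background rate per cell per day
alpha = 0.8   # assumed boost contributed by each past event
beta = 0.3    # assumed temporal decay rate of the boost
sigma = 1.0   # assumed spatial spread of the "near-repeat" effect

def intensity(t, x, y):
    # Predicted event rate at time t and location (x, y): background rate plus
    # exponentially decaying, spatially localized boosts from past events.
    past = events[events[:, 0] < t]
    dt = t - past[:, 0]
    dist2 = (x - past[:, 1]) ** 2 + (y - past[:, 2]) ** 2
    boosts = alpha * beta * np.exp(-beta * dt) * np.exp(-dist2 / (2 * sigma ** 2))
    return mu + boosts.sum()

# Rank a hypothetical patrol grid by predicted intensity for "tomorrow" (t = 10)
grid = [(x, y) for x in range(10) for y in range(10)]
hot = sorted(grid, key=lambda cell: intensity(10.0, *cell), reverse=True)[:5]
print("highest-intensity cells:", hot)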
A Chinese company, Cloud Walk, developed technology allowing it to track the travel patterns of people and rate them on how likely they were to commit a crime using location data. IHS Markit, an industry research company, reported that China had over 170 million surveillance cameras in place (Ng, 2017). Researchers at Shanghai Jiao Tong University performed a study linking criminality and facial images. Training algorithms with headshots of over 1,000 faces from government IDs and 700 criminal images were used as a basis for recognition functions. Such supervised learning systems could segregate criminal and non-criminal groups and determine reoccurring traits in both sets (Reaney, n.d.). China's widespread use of machine-learning algorithms for predicting crime and for preemptively detaining citizens brought criticism from Human Rights Watch (HRW, 2018), which reported that Chinese ethnic minority groups received disproportionately higher risk scores, and these scores appeared to be -at least to some degree -related to religious practices (i.e., information was collected on how many times per day a person prayed). The HRW estimated that, in only a two-year period, tens of thousands of ethnic minorities, because of elevated scores, were sent to "political education centers" (HRW, para. 4), and some detainees were subsequently sent to prisons.
In searching for information pertaining to the use of computer-based resources by law enforcement in the prediction of crime trends, the technology was applied to a wide range of perceived threats. McCulloch and Pickering (2009) discussed the use of the information drawn from computer-based processes in the struggle against radicalism in the United Kingdom. In 2001, 9/11 occurred within the United States, and then in 2005 the 7/7 bombings occurred within the United Kingdom; both were considered causative events for the heightened need for preemptive security measures globally (Dencik, Hintz, & Carey, 2018; Gillham, 2011). National security threats catalyzed the integrating of big data and probability theory to determine future risk (Arodou & Blanke, 2015; Dencik, Hintz, & Carey, 2018; Hildebrandt, 2013). The British Broadcasting Corporation (BBC) reported that, out of the 14 police forces identified by Liberty as utilizing predictive policing programs, two of them -the Cheshire and Kent police forces -had terminated their use (Kelion, 2019).
The Los Angeles Police Department expanded the use of computer-based data during Operation LASER in 2011, which rated individual potential involvement in crime within a point system. The LASER system considered individual criminal history, including gang membership or affiliation, prior arrests and convictions, probation, and parole status. Those with high scores were identified and listed within the Chronic Offender Bulletin (Lapowsky, 2018). Uchida and Swatt (2013), through a panel design study, found that Operation LASER was effective in decreasing gun crime by 5.2% in the Newton Division of Los Angeles (the area covered by Newton patrol officers). When individual reporting districts (RDs) within the division were analyzed, the results showed that RDs utilizing interventions for both chronic offender and for chronic location were successful in decreasing gun crime.
The city of Memphis, Tennessee, applied data analysis within the city's Blue CRUSH (Crime Reduction Utilizing Statistical History) initiative beginning in 2006. Initially started as a pilot program, Operation Blue CRUSH was first launched in 2005 (Ashby, 2006). While the Blue CRUSH attracted nationwide attention with its use of data-driven enforcement, the benefit of the program was undetermined (Tulumello, 2016). The Memphis Police Department (MPD) indicated that both overall crime and violent crime had decreased as a result of Blue CRUSH, and a Nucleus Research case study reported an 863% return on MPD's investment in the predictive software (Kanaracus, 2010; Perry et al., 2013). The software Beware analyzed behavior patterns and criminal trends to predict future activity by providing law enforcement officers with a color-coded threat score for an individual. It was derived from predictive algorithms that processed data from three sources: criminal records, purchased commercial information, and information publicly accessed from social media (Hoggard, 2015; Yang, 2019). One of the concerns was the potential of error and identifying people with nothing to do with crime. For instance, Ferguson (2017b) described that during a public hearing on the Beware program, one of the local council members requested that his address be run through the system, and the results were that his home was flagged as "an elevated yellow threat level" (p. 85), and no one present could explain why that city official's residence was misidentified as high risk.
Instead of relying solely on predictive algorithms, the use of the data analysis might best be viewed as a solitary step in the process of crime response. As the application of automated data analysis is relatively new within the justice domain, continued refinements of software and its application will occur through time.
The information placed in many of the predictive policing algorithms was drawn from a wide variety of sources, not just the records of the justice agencies (i.e., see Beware above). Criminologists addressed crime and behavioral theories to anticipate future activity. For almost a century, social science researchers have recognized the significance of family and peer influence, economic and social conditions, and labeling as factors influencing behaviors (including criminal and deviant behaviors). The significance of relationships in data analysis played a key role in predicting not only the future of someone, but also of others who are similar. As an example, certain stressors had greater influence than others regarding those committing criminal offenses (Agnew & Scheuerman, 2015).
Additionally, Elijah Anderson (1994), in The Code of the Street, classified family social units as the decent families and the street families. He believed the decent families were more inclined to accept the values of mainstream society while street families demonstrated a lack of consideration of those values and of others. From the models he proposed, the children from the street families had a greater tendency to become involved in patterns of criminal behavior (Anderson, 1994).
If humans are creatures of habit, then it is reasonable to assume that the future may also be predictable. A consistent conclusion in the criminological literature is that past behavior can be a predictor for future behavior (Akers, 2017). Coding has no conscience, but the results of any analysis depend on the accuracy of the information entered into the system, the method of processing, and the ability of analysts to place outcomes within a larger context for proper interpretation.
IS PRE-CRIME ANALYSIS ANOTHER FORM OF PROFILING? CRITICISM OF DATA ANALYSIS IN CRIME PREDICTION
There are concerns about racial and other biases hidden within datasets. Pre-crime software was intended to predict where and when specific crimes would occur. The information from which the analysis was drawn must remain updated in order to provide usable forecasts. Most predictive analysis depends on the input and analysis of three types of information: 1) information as to the type of crime committed or the nature of the call for service, 2) the location of the offense or call for service, and 3) the time of the call for service or crime. This information, when analyzed, can be used to predict crime and service calls. Human reasoning was needed to analyze the data and to understand its limitations. The human aspect of decision algorithms must not be discounted. After all, sound outcomes from decisions result from good decision processes (Galli, 2020). As Pearl and Mackenzie, award-winning computer scientist and statistician, respectively, and well-known proponents of the probabilistic approach to A.I., cautioned: "[D]ata are profoundly dumb" (p. 6). Machine learning presumes reliable input to produce outcomes through the process of recognizing patterns within the data. Even Deep Learning systems will not know if those patterns are predicated on past prejudicial practices, and the old adage applies: garbage in, garbage out (Perry et al., 2013).
While members of society reject the use of age and race as an element of profiling, it is difficult to believe those elements are not factors in crime prediction because much individual information is available among any number and type of interconnected databases. From a moral perspective, there was always a concern about how data was used and the judgments that resulted. One major concern was the body of research suggesting that minority neighborhoods may have been historically overpoliced, thereby resulting in disproportionately higher arrest rates for African Americans (Beckett, Nyrop, & Pfingst, 2006; Black & Reiss, 1970; Gaston, 2019; Hetey & Eberhardt, 2018; Kochel, Wilson, & Mastrofski, 2011; Lynch et al., 2013; Smith & Holmes, 2014; Smith, Visher, & Davidson, 1984). Therefore, even when race is not explicitly included in formulating an algorithm, frequency of arrests will be included, which means that past policing actions -whether legitimate or not -will be perpetuated by any machine learning system. Expecting a good outcome from bad input is, according to Bakke (2018), "like expecting Google Maps to find your destination after you typed in the wrong address" (p. 135). Claiming machine learning output is objective when dirty data were used is what critics call "tech-washing," the idea that "the veneer of machine empiricism" covered injustice (Rainie & Anderson, 2017). Was there a reason for concern? History demonstrated that injustices, and even atrocities, occurred in the name of science (Shapiro, 2020; Weigmann, 2001). Thus, it became prudent to proceed with caution when using predictive algorithms as a decision-making tool.
On the other hand, there is value in being able to foresee and forestall tragedy. The individual responsible for the killings at the Pensacola Naval Air Station posted on his Twitter account statements about his hatred of the United States (a possible red-flag of future violence). Unfortunately, the postings were unidentified and not reported to the military or law enforcement rapidly enough to have prevented the tragedy (Schmitt, Robles, & Bogel-Burroughs, 2019).
PROBLEMS WITH PREDICTING FOR AN INDIVIDUAL
Hildebrandt (2013) referred to "Big Data" as a proper noun and succumbed to perceiving big data algorithmic systems as having minds of their own. Big data involves the amalgamating of a vast accumulation of semi-structured, unstructured, and structured data (Sangwan & Bhatnagar, 2020). Big data processing necessitated a much greater quantity of computing, processing, and storage capacities than ordinary database systems (Sangwan & Bhatnagar, 2020). Within the context of the justice system, big data applications were appropriate for fraud detection (Sangwan & Bhatnagar, 2020).
Deep learning systems were deemed "high speed real time autonomic computing systems that increasingly determine our external environment" (Hildebrandt, 2013, p. 2). They were touted as being extremely accurate, more so than smaller data driven systems, mainly because of the massive amounts of data they analyzed. It was claimed that their "n" equaled "all" (n = all), meaning that they did not have to worry about generalizing to a larger population as they claimed to be the population. The bottom line, however, was that they were data modeling systems (Hildebrandt, 2013, p. 2), and very similar to what O'Neil (2016, 2017) termed opinions embedded in code. All predictions, even when machine-generated, are estimates (Perry et al., 2013), and estimates are nothing more than educated guesses (Siegel, 2017).
Statistics were useful for aggregates and sample descriptors, and not for individuals. Although individual data point estimators were made in any model for a sample in order to generate 'predictions' for the population and generalize findings from the sample, it did not mean that individuals within any population were individually pinpointed with any degree of accuracy for a certain future outcome. For instance, it is accurate to state that someone fits into a category in which 70% of the people will reoffend and 30% will not, but one would have to be either a psychic or genuine fortune-teller to know whether one specific 'someone' will end up being in the 70% or 30% subgrouping.
Much can be learned from the literature within bioethics and reputable medical journals. When doctors predict a chance of survival for a single terminal patient, the prediction seldom lines up with actual survival (Christakis & Lamont, 2000; Glare et al., 2003). Unless all possible variables for the individual person can be considered -even ones for which there is currently no known measurement -a 100% accurate and reliable individual prediction is impossible. Henderson and Keiding (2005, p. 703) summarized the reasoning as, "[I]n all realistic scenarios we can imagine, the intrinsic statistical variations in lifetimes are so large that predictions based on statistical models and indices are of little use for individual patients. This applies even when the prognostic model is known to be true and there is no statistical uncertainty in parameter estimation." Henderson and Keiding (2005) concluded, "Prognostic indices or palliative scores can be useful in assigning patients to risk groups and from some viewpoints-insurers perhaps-all that is necessary is to know the proportion of each group who will survive any given time. A difference between groups of 10% in one year's survival probability can be important. For the individual patient however, our view is that such a between-group difference is small compared with the variability in residual lifetimes, even between patients with identical characteristics" (p. 705). This conclusion can be equally applied to pre-crime predictions for individuals. The inherent dangers associated with artificial intelligence and deep learning systems were what prompted the Future of Life Institute, in 2017, to publish a list of 23 Principles for Beneficial Artificial Intelligence, which was signed by over 1,600 AI researchers, including Stephen Hawking (Rainie & Anderson, 2017). Boddington (2017, p. 105) indicated that a primary AI goal was, ". . . to create not undirected intelligence, but beneficial intelligence." The following principles were particularly relevant to predictive algorithms implemented within the criminal justice system:
• Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
• The application of AI to personal data must not unreasonably curtail a person's real or perceived liberty.
• The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
An inevitable error rate existed and will always be a part of forecasting and prediction. When future dangerousness predictions were evaluated, the findings showed that false-positive rates were significantly higher than false-negative rates; this meant that, on average, people were wrongly classified as high risk 65% of the time (Slobogin, 1984, 2006). Thus, false positives must be considered from a constitutional rights perspective.
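As a rough illustration of how such high false-positive proportions can arise, the short calculation below uses entirely hypothetical numbers (a 20% base rate of reoffending and an imperfect screening tool); it is not drawn from the studies cited above.

# Hypothetical illustration: why most "high risk" flags can be wrong even when
# the tool looks reasonably accurate. All numbers below are made up.
population = 10_000
base_rate = 0.20        # assumed share who will actually reoffend
sensitivity = 0.70      # assumed share of future reoffenders flagged high risk
specificity = 0.75      # assumed share of non-reoffenders correctly not flagged

reoffenders = population * base_rate                     # 2,000
non_reoffenders = population - reoffenders               # 8,000
true_positives = reoffenders * sensitivity               # 1,400
false_positives = non_reoffenders * (1 - specificity)    # 2,000
flagged = true_positives + false_positives               # 3,400

wrongly_flagged_share = false_positives / flagged
print(f"Share of 'high risk' flags that are wrong: {wrongly_flagged_share:.0%}")  # ~59%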
COURT RULINGS ON PREDICTIVE ALGORITHMS
Although the United States Supreme Court has not yet heard a case directly regarding the use of predictive algorithms within the criminal justice system, two state appellate courts rendered judgments (Zavrsnik, 2020). In 2016, Loomis v. Wisconsin (2016 WI 680), Wisconsin's Supreme Court Judge Bradley affirmed the circuit court's decision in allowing COMPAS, a risk assessment tool, to be one factor in sentencing a defendant. Justice Bradley ruled, "We determine that because the circuit court explained that its consideration of the COMPAS risk scores was supported by other independent factors, its use was not determinative in deciding whether Loomis could be supervised safely and effectively in the community" (p. 244). Interestingly, Judge Bradley, in her written opinion, referenced the American Bar Association's recommendation that the states adopt assessment tools. Judge Roggensack, in a concurring opinion in order to clarify the Court's position, wrote: "[C]onsideration of COMPAS is permissible; reliance on COMPAS for the sentence imposed is not permissible" (p. 286).
The Kansas Supreme Court, as of 2014, in Rule 110B(c) mandated that judicial branch court services officers complete the LSI-R assessment (2017 Kan. S.Ct. R 182). In Kansas v. Walls (396 P.3d 1261, 2017), a sentence was appealed on the grounds that the defendant was disallowed access to the LSI-R, which was relied on by the judge when probation conditions were imposed. The Kansas Court of Appeals vacated the defendant's sentence and remanded the case back to the lower court "with directions to allow Walls access to the complete diagnostic LSI-R assessment and report" (para. 16).
CURRENT LIMITATIONS
The desire to accurately predict crime is not new. While the concept of predictive policing is clear, what is unclear (at this early stage in its development and evolution) are several issues. First, how effective will this tool prove to be when applied? Second, what type of data will be required to support the data analysis process (Hayes, 2015)?
Three major issues have been identified with predictive algorithms: accuracy, accountability, and transparency. No predictive algorithm, regardless of accuracy, will ever be 100% error-free. Amazon's recommendation algorithm was highly successful and considered "optimized," but was frequently erroneous (Fry, 2019). When Amazon's predictive algorithm exhibited erroneous behavior, there were no dire consequences. Accountability involved the idea that machine decision-making allowed humans to be absolved from responsibility as well as the "temptation to outsource aspects of the decision-making process, and thus responsibility and accountability for the decision itself, to technological tools" (Moses & Chan, 2016, p. 817). Transparency was problematic because deep learning algorithms continually updated their pattern-finding function autonomously to reach the point where human programmers and/or analysts were unable to understand exactly how the final outcome was reached (i.e., the "black-box conundrum") (O'Donnell, 2019, p. 546).
Another limitation of predictive algorithms was the lack of theoretical frameworks for interpreting the outputs and for formulating fair and effective law enforcement responses (Innes, Fielding, & Cope, 2005;Kitchin, 2014;Meijer & Wessels, 2019;Vlahos, 2012). Much of this may be attributed to the different domains involved with this policing tool, such as the private software companies seeking to generate income, as well as the computer programmers and data analysts far removed from the human elements involved. Accountability and transparency were important considerations to ensure external stakeholder buy-in to the benefits, but a better understanding of evidence-based theory -such as Tyler's procedural justice theory (1990) in which personal respect and concern for human dignity were prioritized -provided a basis for ensuring that internal stakeholders remained acutely aware that machine learning need not (and, in an ethical sense, should not) automatically result in a machine-generated decision. A computer cannot substitute for humanity when it comes to interpreting context or understanding the complexities involved in the 'why' of any generated score; that will always require human reasoning.
Again, a lesson from medicine may be appropriate: do no harm. In the case of predictive policing, it would mean doing no harm to either society or to its citizens. The cost of public safety is too high if it comes at the expense of private citizens' rights. The public may be more accepting of machine-generated predictive policing if it resulted in social services directed toward those at risk (rather than purely punitive objectives) in an attempt to break past patterns in people or places. A carrot is easier to accept than a stick, and may be more effective in the long run.
ORGANIZATIONAL USE OF PREDICTIVE TOOLS
Entities within the federal justice system may have sufficient budgets to afford sophisticated predictive tools and resources whereas smaller organizations, such as a small-town sheriff's office or a state police entity, may lack financial resources to expend toward purchasing such large-scale systems. Their predictive analyses may accommodate crime that occurs within their respective jurisdictions. For instance, they may predict the quantity of traffic tickets that occur within a given period via the use of the moving average technique. Hyndman and Athanasopoulos (2018) indicated that the concept of moving average originated in the 1920s, and was extensively used through the 1950s. Mathematically, Doss, Guo, and Lee (2012) indicated that its fundamental premise was the summation of demand within the preceding n periods divided by the number of periods used within the moving average.
Although they may have a need to implement mathematical formulae, they may lack sophisticated software (e.g., SPSS, SAS, etc.) and personnel knowledgeable of its operations. In such cases, they may acquire predictive tools that are commensurate with their municipal budgets, such as Microsoft Excel or Calc (Doss, Sumrall, & Jones, 2012). Calculating a moving average may be accomplished through the use of MS Excel or Calc. With respect to predictive methods, the Excel data analysis function facilitates the use of moving averages and regression. For example, if an agency desired to implement a moving average for predictive purposes, use of MS Excel would be both affordable and straightforward. Using public, open access secondary data obtained from the Federal Bureau of Investigation Uniform Crime Reports database, implementing a moving average within MS Excel could easily be accomplished by specifying the 'Moving Average' function from the Data Analysis interface. Next, one would enter the desired data input range of cells, the interval period, and the desired cell for generating data output. The period for generating the moving average values was five years. Table 1 shows the results of these steps when applied against motor vehicle theft data for the state of Mississippi spanning twenty years.
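For readers without Excel at hand, the same five-period moving average can be reproduced in a few lines of Python; the snippet below is only a sketch, and the yearly theft counts used are made-up placeholder values rather than the FBI Uniform Crime Reports figures referenced in Table 1.

# Minimal sketch of the five-year moving average described above.
# The yearly counts are hypothetical placeholders, not actual UCR data.
thefts = [8200, 7900, 8100, 7600, 7300, 7100, 6800, 6900, 6500, 6200]
period = 5

moving_avg = []
for i in range(period - 1, len(thefts)):
    window = thefts[i - period + 1 : i + 1]      # the preceding n periods
    moving_avg.append(sum(window) / period)      # sum of demand divided by n

for year_index, value in enumerate(moving_avg, start=period):
    print(f"Year {year_index}: 5-year moving average = {value:.1f}")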
Justice system entities may also use MS Excel to perform regression analysis. With respect to the years between 2001 and 2020, a question may be explored: What was the relationship between automobile theft incidents between the states of Mississippi and Alabama? Both were adjacent states that shared numerous transportation routes. Initially, in order to ensure that mathematical analysis could occur, data values across the years were expressed in the terms of annual crime incidents per 100,000 population. Doing so ensured a ratio basis for comparison of crime rates through time and accommodated population fluctuations. Table 2 shows the data sets expressed in the terms of motor vehicle thefts per 100,000 population.
Afterward, using the Excel Data Analysis menu, the regression option was chosen to examine the relationship between the Mississippi and Alabama values. Using an alpha value of 0.05 and the p-value approach regarding the hypothesis that no relationship existed between the Mississippi and Alabama data sets representing motor vehicle theft, the Excel regression analysis showed a statistically significant outcome (p = 0.00; α = 0.05; Multiple R = 0.83; R 2 = 0.69). Thus, the null hypothesis was rejected in favor of the alternative hypothesis that some relationship existed between Alabama and Mississippi motor vehicle theft.
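The same test can be run outside Excel; the sketch below uses SciPy's linregress on hypothetical per-100,000 theft rates (placeholder numbers, not the actual Mississippi and Alabama values from Table 2) to obtain the correlation, R-squared, and p-value analogous to those reported above.

import numpy as np
from scipy.stats import linregress

# Hypothetical motor vehicle theft rates per 100,000 population (placeholders,
# not the actual Table 2 values), one entry per year for two adjacent states.
mississippi = np.array([310, 295, 280, 270, 260, 250, 245, 240, 230, 225], dtype=float)
alabama = np.array([340, 330, 310, 300, 285, 280, 270, 265, 255, 250], dtype=float)

result = linregress(mississippi, alabama)

print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.1f}")
print(f"Multiple R = {result.rvalue:.2f}, R^2 = {result.rvalue ** 2:.2f}")
print(f"p-value = {result.pvalue:.4f}")  # compare against alpha = 0.05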
Both the values of Table 1 and Table 2 were analyzed using MS Excel. Excel is an acceptable resource for data sets of such size and type (Anderson, Sweeney, & Williams, 2015). Usually, justice system entities may have available a copy of Excel (or Calc) as a component of their office suites (Doss, Sumrall, & Jones, 2012; Doss, Sumrall, McElreath, & Jones, 2013). In so doing, the chances are greater that justice system personnel may be easily trained to use Excel software to perform a variety of predictive functions.
Conclusions
Does this article make the claim that predictive algorithms have no value? Not at all. The desired outcome or value of precrime prediction is to facilitate safer communities. Within this outcome is embedded the hope of more efficient assignment and usage of justice system resources. However, when attempting to achieve some level of security, some may question the pervasive use of technology as a means of enhancing enforcement. In other words, issues may arise regarding the basic premise of the immutability of security versus liberty with respect to the emerging of an integration of predictive technologies, policing, and enforcement. Where would demarcation occur between perceptions of an authoritarian state versus one of civil freedoms, liberties, and privacy? After all, technology is a neutral tool useful for fulfilling human purposes. However, the human intention toward achieving any given purpose may be utopian, dystopian, or neutral. Despite moral, legal, or ethical debates or questions, the developing and refining of predictive tools are continuous processes. While it is impossible to identify and input information on every single crime that is committed, or to ensure data will be coded accurately, predictive policing appears promising. As time passes and the collection, analysis, and use of the data and corresponding predictive methods and models become further refined, their practical and societal values will increase. After all, the justice system is an agent for public good within society. In other words, public sector organizations are intended to ensure public safety and service. From the lens of public service, their resources exist to generate outcomes that are societally beneficial (Baporikar, 2020).
Predictive resources may be viewed in terms of resource availability and allocation within the context of justice system organizations. For instance, large organizations, such as federal agencies (e.g., U.S. Marshal Service, Federal Bureau of Investigation) may possess large-scale tools for performing predictive analysis, but smaller or local law enforcement agencies may lack such resources. Regardless, they still possess the ability to perform some types of predictive analysis, such as the moving average example demonstrated herein.
Implications
Some uses of predictive technologies involve recidivism. When one has paid a debt to society for a crime committed and completed a period of imprisonment, then would former offenders be prone to heightened scrutiny via the use of predictive technologies? After all, if the debt to society was paid and one regained freedom, then would not the members of a civil society disfavor intrusions of privacy or surveillance of specific individuals (unless some form of post-condition was mandated)? Basically, how would society view the various uses of predictive tools as a measure to diminish recidivism? If such technologies showed substantial promise toward abating the recidivism rate, thereby reducing the impact of crime and decreasing the societal costs of the justice system, how would society react to their use(s)?
However, even if the predictive algorithms were to become 100% accurate (which currently could never be), they should not come at the expense of human rights. Given the constraints of technology, decision domains, and data, there will always be an error rate involved in forecasting or predicting the behavior of one single individual or any single location, and there will also always be unforeseen and unintended consequences when certain neighborhoods or people are targeted for more intensive surveillance. There is an old truism: If you look for dirt, you will find it.
Recommendations
This article highlighted the primary concepts that were fundamental to the emerging domain of predictive policing. Thus, any consideration of the integrating of man and machine was beyond the scope of discussions herein. However, Guermah, et al., (2021, p. 110) indicated that systems considerations were secondary to emphasizing the "user and his actions." Future studies may examine predictive policing from the perspective of human-computer interaction.
Future research may consider viewing prediction from the perspective of resource allocation. With respect to the precrime prediction goal of safer communities, one may consider the efficient assigning and using of agency resources toward generating public good. Such a context introduces the basic theme of economics: how does one allocate scarce, limited resources to satisfy the unlimited human needs and wants of society? Future studies may consider predictive algorithms and their uses from such a perspective.
The emergence and use of predictive technologies will influence policy decisions among a variety of organizations, both public and private. At the time of this writing, an insufficient amount of time existed for data to accumulate regarding the impacts of predictive policing from a long-term, strategic view. Frei and Ruloff (1989) and Doss et al. (2020, 2021) indicated that an average of 20 years of data should be collected and analyzed before effective policy assessment or evaluation may occur. Given that upwards of two decades of data may be necessary for such analysis, future researchers may examine predictive tools as strategic organizational resources when sufficient time has passed to accumulate data.
FUNDING AGENCY
The publisher has waived the Open Access Processing fee for this article. | 2022-02-19T16:30:59.977Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "326433548ac9f9af4b25387e31795bf4667c83da",
"oa_license": null,
"oa_url": "https://www.igi-global.com/ViewTitle.aspx?TitleId=298672&isxn=9781799884378",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2e88a6810609bb85e1803b84b3a4ed0315fc3f9c",
"s2fieldsofstudy": [
"Political Science",
"Law",
"Sociology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258049797 | pes2o/s2orc | v3-fos-license | Efficacy and safety of abrocitinib monotherapy in adolescents and adults: a post hoc analysis of the phase 3 JAK1 atopic dermatitis efficacy and safety (JADE) REGIMEN clinical trial
Abstract Background Differences in atopic dermatitis (AD) disease course and manifestation with age may extend to treatment response. Objective To evaluate response maintenance with continuous-/reduced-dose abrocitinib or withdrawal and response to treatment reintroduction after flare in adolescent and adult participants in JADE REGIMEN (NCT03627767). Methods Adolescents (12–17 years) and adults with moderate-to-severe AD responding to abrocitinib 200-mg induction were randomly assigned to 40-week maintenance with abrocitinib (200 mg/100 mg) or placebo. Patients who experienced flare during maintenance received rescue treatment. Results Of 246 adolescents and 981 adults, 145/246 (58.9%) and 655/981 (66.8%), respectively, responded to induction. Similar proportions of adolescents and adults experienced flare during maintenance with abrocitinib 200 mg (14.9%/16.9%), 100 mg (42.9%/38.9%), and placebo (75.5%/78.0%). From the abrocitinib 200-mg, 100-mg, and placebo arms, respectively, Eczema Area and Severity Index response was recaptured by 28.6%, 25.0%, and 52.9% of adolescents and 34.3%, 33.7%, and 58.0% of adults; Investigator’s Global Assessment response, by 42.9%, 50.0%, and 73.5% of adolescents and 34.3%, 50.6%, and 74.1% of adults. Abrocitinib had a similar safety profile regardless of age; nausea incidence was higher in adolescents. Limitations Adolescents represented 20% of the trial population. Conclusion Abrocitinib was effective in preventing flare in adolescents and adults.Clinicaltrials.gov listing: NCT03627767.
Introduction
Atopic dermatitis (AD) is a relapsing-remitting, inflammatory skin disease characterized by eczematous lesions, intense itch, and impaired quality of life (QOL) (1-5). AD develops in approximately 12% of children aged 6 months to <12 years and in 15% aged 12 to <18 years (1,6). Childhood-onset AD commonly resolves before adulthood; however, approximately 5% of patients diagnosed before the age of 18 continue to experience AD for at least another 20 years (7). The 12-month prevalence of AD in US adults is 5% (8), and 53% of US adults with AD report onset during adulthood (9), indicating a complex and variable disease course.
The predilection sites of lesions, trigger factors, skin microbiome, and underlying pathophysiology of AD differ in pediatric and adult patients (10,11). In infants, lesions are widely distributed on the face and the trunk; in adolescents and adults, flexural involvement is more common (1). The microbiome of nonlesional skin is significantly more diverse in pediatric than in adolescent or adult patients with AD (11), and the cytokine expression profile in the skin of individuals with AD fluctuates with the age of the patient (12). Physiology also varies in an age-dependent manner; altered skin structure and function lead to a vulnerable epidermal barrier and a higher risk of absorption of topical drugs in young children than in adults (10). This age-related diversity of AD pathophysiology suggests that treatment response may also vary with the age of the patient.
JADE REGIMEN was conducted to evaluate response to abrocitinib, an oral, once-daily, Janus kinase 1 (JAK1)-selective inhibitor, over a period of 40 weeks, with continuous treatment (200 mg), dose reduction (100 mg), or withdrawal in patients who initially had a short-term response to treatment with abrocitinib 200 mg (13). In this post hoc analysis, we evaluated results for adolescent and adult participants in JADE REGIMEN to identify potential differences by age.
Patients
Eligible patients were aged ≥12 years (body weight ≥40 kg) and had moderate-to-severe AD (Investigator's Global Assessment [IGA] score ≥3, Eczema Area and Severity Index [EASI] score ≥16, percentage of body surface area (%BSA) affected ≥10, and Peak Pruritus Numerical Rating Scale [PP-NRS; used with permission from Regeneron Pharmaceuticals, Inc., and Sanofi] score ≥4) for ≥1 year. Patients had a recent history (≤6 months) of inadequate response to topical medicated therapy or had previously required systemic therapy. Full details of the inclusion and exclusion criteria were published previously (13).
Study design
JADE REGIMEN was conducted in agreement with the Declaration of Helsinki and the International Council for Harmonization Good Clinical Practice Guidelines. This research was approved by institutional review boards or ethics committees at each site. Internal and external review committees monitored the safety of patients throughout the study. All patients provided written informed consent.
Endpoints
The primary endpoint of JADE REGIMEN was the proportion of patients who had a loss of response (i.e., flare) during the maintenance period and required rescue treatment (7). This post hoc analysis assessed the proportion of patients achieving EASI-75, IGA 0/
Statistical analysis
For this post hoc analysis, data were analyzed for adolescents (12-17 years) and adults (≥18 years). Confidence intervals (CIs) for the probability of loss of response were derived using the log-log transformation with back transformation to an untransformed scale. CIs for Kaplan-Meier estimate of time to flare were based on the Brookmeyer and Crowley method (16). Hazard ratios (HRs) and CIs were estimated from Cox proportional hazards regression models, including fixed effects of treatment, categorical variables of randomization strata (age category), and disease severity at study baseline in the main model, and a continuous variable of weight at study baseline added as a covariate in the sensitivity analysis. The study baseline was defined as the last measurement collected on or before day 1 of trial treatment.
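As a purely illustrative example of the survival-analysis machinery described here, the sketch below fits a Kaplan-Meier curve and a Cox proportional hazards model with the lifelines package on a small fabricated dataset; it is not the trial's actual analysis code, and the column names and values are hypothetical.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Fabricated example data: weeks until flare (or censoring at week 40) during a
# maintenance period, with an event flag and simple covariates mirroring the model above.
df = pd.DataFrame({
    "weeks_to_flare": [12, 40, 8, 30, 25, 40, 5, 33, 40, 18],
    "flare":          [1,  0,  1, 1,  1,  0,  1, 1,  0,  1],   # 0 = censored
    "treatment":      [1,  1,  0, 1,  0,  1,  0, 1,  1,  0],   # 1 = active, 0 = placebo
    "adolescent":     [1,  0,  1, 0,  0,  1,  1, 0,  0,  1],   # randomization stratum
})

# Kaplan-Meier estimate of time to flare
km = KaplanMeierFitter()
km.fit(df["weeks_to_flare"], event_observed=df["flare"])
print("median time to flare:", km.median_survival_time_)

# Cox proportional hazards model with treatment and age stratum as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_flare", event_col="flare")
cph.print_summary()  # hazard ratios are exp(coef) with their confidence intervals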
Patient demographics and baseline disease characteristics
A total of 246 adolescents and 987 adults were treated in the open-label induction period (Supplemental Figure S1). The mean age (SD) was 15.
Rescue period
A larger proportion of adolescents (49.0%) than adults (39.4%) from the abrocitinib 100-mg maintenance arm entered the rescue period. Similar proportions of adolescents and adults entered the rescue period from the abrocitinib 200-mg (14.9% and 16.9%) and placebo (77.6% and 78.0%) maintenance arms.
Quality of life
At the study baseline, median CDLQI/DLQI scores were 12.0 (moderate effect) in adolescents and 16.0 in adults (very large effect). At the randomization baseline, approximately 95% of adolescents and adults achieved ≥4-point improvement in response per the CDLQI/DLQI from the study baseline (CDLQI4/DLQI4; data not shown). By week 52, the respective proportions of adolescent and adult patients who achieved improvement per the CDLQI4/DLQI4 were 65.1% and 64.5% in the abrocitinib 200 mg arm, 40.0% and 47.8% in the abrocitinib 100 mg arm, and 14.0% and 14.5% in the placebo arm (Figure 2).
Safety
Similar proportions of adults and adolescents reported treatment-emergent adverse events (TEAEs) during the open-label induction period (Supplemental Table S1); the severity of most TEAEs was mild to moderate. Serious adverse events (SAEs) occurred in 1.2% of adolescents and 1.7% of adults during the open-label induction period.
The incidence rates (95% CI) for nausea were 39.49 ( Platelet counts decreased from baseline during the induction period, reaching a nadir at week 4 in both adolescent and adult patients. During the maintenance phase, platelet counts returned to near baseline levels in adolescents who received abrocitinib 100 mg but not in adults who received abrocitinib 100 mg; platelet counts remained low in adolescents and adults who received continued-dose abrocitinib 200 mg (Supplemental Figure S4). Despite fluctuations, median platelet counts remained within the clinically normal range (150,000-450,000) throughout the trial. Platelet count abnormalities led to treatment discontinuation in one adult patient during the induction period and in zero patients during the maintenance period. No clinically significant changes were observed in the other laboratory values measured in either adolescents or adults (data not shown).
Discussion
Abrocitinib was effective and well tolerated by adolescents and adults throughout the JADE REGIMEN study. More than half the adolescents (59%) and two-thirds of adults (67%) achieved an IGA 0/1 and EASI-75 response with abrocitinib 200 mg by week 12 of the induction period and were randomly assigned to the maintenance period. During the randomized maintenance period, abrocitinib prevented flare more effectively than placebo in both adolescents and adults. The risk of flare was similar for adolescents and adults in all treatment arms. IGA 0/1, EASI-75, and PP-NRS4 response rates in adolescents and adults in the induction period were consistent with the overall population (13).
Our findings show that abrocitinib efficacy was comparable between adolescents and adults. This may be due to its mechanism of action; selective blocking of JAK1 inhibits the signaling of a broad panel of pro-inflammatory cytokines, including interleukin (IL)-4, IL-13, and thymic stromal lymphopoietin (17). Upregulation of these cytokines has been shown to occur to a similar extent in the lesional skin of adolescent and adult patients with AD compared with the skin of age-matched controls (12).
Treatment with abrocitinib also resulted in improvements in patient QOL. At week 40 of the maintenance period, the majority of both adolescent (65%) and adult (64%) patients receiving abrocitinib 200 mg reported a ≥4-point improvement from baseline in CDLQI/DLQI. These results are consistent with other reports of approved therapies for AD in similar patient populations; clinically meaningful improvements were achieved in DLQI scores (defined as minimal clinically important difference [MCID] of at least 4 points from baseline) with upadacitinib, baricitinib, and dupilumab, and CDLQI scores (MCID of at least 6-8 points from baseline) with dupilumab (18-23). Health-related QOL data from randomized controlled trials are lacking for adolescent patients with AD and represent a significant unmet need (24). The findings from this study support the hypothesis that the use of effective therapies can improve QOL outcomes in adolescents with AD, a patient population in whom significant levels of anxiety, depression, and poor sleep quality along with disease symptoms contribute to a profound negative impact on their daily lives (3, 25-27).
The safety profile of abrocitinib was generally consistent in adolescents and adults, with some differences. The overall frequency of TEAEs was higher in adolescents than adults during the induction period. The incidence rates of nausea during the randomized maintenance period were also numerically higher in adolescents than adults, albeit with overlapping CIs. Despite these differences, rates of discontinuation due to TEAEs were low during the induction and maintenance periods and comparable in both age groups.
The key strength of this analysis is the evaluation of efficacy, safety, and QOL outcomes across different abrocitinib dosing regimens from a phase 3 study in a well-defined population of both adults and adolescents with AD.This study also had limitations.Differences observed between age groups in this analysis should be interpreted with caution due to the small sample size of the adolescent group (only 20% of the total JADE REGIMEN trial population).Slight imbalances were noted in the previous treatment characteristics between adolescents and adults: adolescent patients were more likely to have received only topical agents before day 1 of the trial, whereas adult patients were treated using a broader armamentarium, including systemic immunomodulatory agents.The differences in efficacy, safety, and QOL outcomes between adolescent and adult patients were assessed post hoc and not powered for statistical significance.The extent of the statistical comparison was limited and risk factors for flare could not be determined.
Conclusions
This post hoc analysis of the JADE REGIMEN study confirms the efficacy and safety of abrocitinib in adolescents and adults over a period of 40 weeks of treatment, although disease response rates for some outcomes seemed numerically lower in adolescents. These results reinforce that treatment with abrocitinib using either induction with abrocitinib 200 mg followed by maintenance with reduced-dose abrocitinib 100 mg, or continuous dosing with abrocitinib 200 mg is an effective therapeutic approach in adults and adolescents with moderate-to-severe AD.
Figure 1.
Figure 1. Probability of protocol-defined flare for adolescent and adult patients during the randomized maintenance period of the JADE REGIMEN study. Flare was defined in the protocol as ≥50% loss of initial EASI response at week 12 with a new IGA score ≥2. Patients at risk were defined as patients who did not experience flare and were continuing treatment. Missing event times were considered as right censored (censored at random) on the last date of randomized treatment. Patients included in the adolescent group were aged 12-17 years. Patients included in the adult group were aged ≥18 years. CI: confidence interval; EASI: Eczema Area and Severity Index; HR: hazard ratio; IGA: Investigator's Global Assessment.
Figure 2.
Figure 2. Proportion of patients who achieved ≥4-point improvement from study baseline in health-related quality-of-life index score at week 40 of the randomized maintenance period of the JADE REGIMEN study. Quality of life of adolescents (12-17 years old) was assessed using the CDLQI. Quality of life of adult patients (≥18 years old) was assessed using the DLQI. Study baseline was defined as the last measurement collected on or before day 1 of treatment. Patients who withdrew from the trial or experienced flare and received rescue treatment after randomization were counted as not having achieved the outcome after withdrawal. CDLQI: Children's Dermatology Life Quality Index; CI: confidence interval; DLQI: Dermatology Life Quality Index.
Table 1 .
Patient demographics and baseline characteristics.
%BSA: percentage of body surface area; AD: atopic dermatitis; CDLQI: Children's Dermatology Life Quality Index; DLQI: Dermatology Life Quality Index; EASI: Eczema Area and Severity Index; IGA: Investigator's Global Assessment; NA: not applicable; PP-NRS: Peak Pruritus Numerical Rating Scale; Q1: quartile 1; Q3: quartile 3; QOL: quality of life. a Other includes American Indian or Alaskan Native, Native Hawaiian or other Pacific Islander, multiracial, and not reported. b CDLQI scores were collected for adolescent patients (12-17 years) only. Median scores at baseline correspond to a 'moderate' effect on QOL in adolescents. c DLQI scores were collected for adult patients (≥18 years) only. Median DLQI scores at baseline correspond to a 'very large' effect on QOL in adults. d Topical agents include corticosteroids, calcineurin inhibitors, and crisaborole. e Systemic agents include corticosteroids, cyclosporine, nonbiologic agents, and biologic agents. f Other biologic agents include lebrikizumab, tralokinumab, nemolizumab, and etokimab.
Table 2 .
Summary of patient-year and incidence rates for TEAEs (any causality) for adolescents and adults treated in the randomized maintenance period of JADE REGIMEN. Patients included in the adolescent group were aged 12-17 years. Patients included in the adult group were aged ≥18 years. Incidence rate expressed as number of events per 100 PY. a Frequencies reflect exposure during the randomized maintenance period only. b SAEs include any untoward medical occurrence at any dose that results in death; is life threatening; requires inpatient hospitalization/prolongation of existing hospitalization; results in persistent or significant disability/incapacity; results in congenital anomaly/birth defect; or is considered an important medical event.
CI: confidence interval; IR: incidence rate; PY: patient-year; SAE: serious adverse event; TEAE: treatment-emergent adverse event. c Severe AEs are judged by the investigator to interfere significantly with the patient's usual function but do not meet the criteria for SAEs. d Incidence rates reflect exposure during the open-label induction period and randomized maintenance period. | 2023-04-11T06:17:41.381Z | 2023-04-10T00:00:00.000 | {
"year": 2023,
"sha1": "4eddef66cc4310bf93631d0209a9e116bb67b149",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1080/09546634.2023.2200866",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3e721ae60a1d768cef753e75628df112de68abba",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259950967 | pes2o/s2orc | v3-fos-license | Inscribed and circumscribed radius of $\kappa$-convex hypersurfaces in Hadamard manifolds
Let $P$ be a convex polygon in a Hadamard surface $M$ with curvature $K$ satisfying $-k_2^2 \ge K \ge -k_1^2$. We give an upper bound of the circumradius of $P$ in terms of a lower bound of the curvature of $P$ at its vertices.
Introduction and main result
Let M be a complete m-dimensional Riemannian manifold. Given a domain Ω ⊂ M , an inscribed ball or inball is a ball in M contained in Ω with maximum radius, which is called the inradius of Ω, and we shall denote it by r. A circumscribed ball of Ω is a ball in M containing Ω with minimum radius, which is called the circumradius and we shall denote it by R.
In the book of Blaschke [2] it is proved that if Γ is a closed convex regular curve in the Euclidean plane that bounds a compact convex region Ω and the curvature κ of Γ is bounded from below by some constant $\kappa_0 > 0$, then the circumradius R of Ω is bounded from above by $1/\kappa_0$.
This result was extended by H. Karcher ([8]) to the other space forms. Further developments of related Blaschke theorems have been obtained by finding conditions under which a convex set in $\mathbb{R}^n$ can be included in another one ([10,7,6]).
In all these theorems, the hypothesis of strong convexity ($\kappa \ge \kappa_0 > 0$) is necessary; the theorem is not true for $\kappa \ge 0$. Hence it cannot be applied to closed convex polygons. In [4] we used two definitions of curvature at the vertex of a polygon which allowed us to obtain a version of the Blaschke Theorem for polygons in space forms. Here we use the same definitions to obtain a less precise upper bound of the circumradius R for polygons in Hadamard surfaces with curvature $-k_2^2 \ge K \ge -k_1^2$. In order to do that, first we use comparison theorems to bound the inradius r of a domain. Then we apply Th. 3.1 in [5] to get an upper bound of R, and we use this to obtain the upper bound of R for polygons. This is done by combining the comparison theorems mentioned above with the comparison of the angles given by the Toponogov Theorem, which has to be done carefully because these inequalities go in opposite senses.
The research is partially supported by grant PID2019-105019GB-C21 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", and by the grant AICO 2021 21/378.01/1 funded by the Generalitat Valenciana.
Let us recall some known definitions and properties before giving precise statements. We shall work on an n-dimensional Hadamard manifold $M^n$, that is, a simply connected complete n-dimensional Riemannian manifold with sectional curvature bounded from above by a constant $-k_2^2 \le 0$. In such a manifold, we define
Definition 1. Let $\lambda > 0$. An orientable smooth ($C^2$ or more) hypersurface L of a Hadamard manifold $M^n$ is called λ-convex if there is a suitable selection of the unit normal vector of L such that the normal curvatures $k_N$ of L satisfy $k_N \ge \lambda$.
If Ω is convex in M , then ∂Ω is a topological embedded hypersurface which is smooth except for a set of zero measure.
Definition 3.
A convex domain Ω of M is λ-convex if for every point p ∈ ∂Ω there is a smooth λ-convex hypersurface L through p such that there is a neighborhood of p in Ω contained in the convex side of L (that is, the side in M where the unit normal vector to L points).
It is known that if ∂Ω is smooth, then Ω is λ-convex if and only if ∂Ω is λ-convex.
In [1,3], it was proved that
Theorem ( [1,3]). If P is a compact orientable locally convex and immersed hypersurface of dimension n ≥ 2 in a Hadamard manifold M , then P is embedded, homeomorphic to the sphere and it is the boundary of a convex set Ω.
For n = 1, the above theorem is not true even when the immersion is $C^2$. In this paper, even for dimension 1 we shall consider only compact embedded convex curves which are the boundary of a convex domain Ω ⊂ M; that is, for any dimension, we shall adopt the following definition. By the inradius and the circumradius of P we understand the inradius and circumradius of Ω.
We remark that, with this general definition, P can be λ-convex and, at the same time, contain conical or ridge points, or points where P is only $C^1$, where it is allowed to say that the normal curvature is infinite in some directions.
In the next sections we shall prove the following.
Definition 6. In a surface M, let P be a polygon. If A is a vertex of the polygon, α the interior angle at A and $\ell_1, \ell_2$ the lengths of the sides of P that meet at vertex A, then the curvature of P at A is defined by the expression (3).
Theorem 2. Let P be a polygon with sides of lengths $\ell_i$ and vertices $A_i$. If $\kappa_{A_i} \ge \frac{\pi}{2} k_1 \coth(k_1\rho)$ and $\coth(k_2\rho) \ge \frac{k_1}{k_2}$, then the inradius r and the circumradius R of P satisfy the bounds stated in (4).
2 Proof of Theorem 1
We shall use the following result (Lemma 3): if Ω is a compact $k_1$-convex domain, then the circumradius R admits an upper bound in terms of the inradius r and $\tau = \tanh(k_1 r/2)$; moreover, this bound is sharp.
Let S be the geodesic sphere of M which is the boundary of an inball of P. From standard comparison theory (see [9]) we have that the normal curvature of S at any point satisfies the bound (7). Let $Q_0 \in S \cap P$. Since $P = \partial\Omega$, which is $k_2\coth(k_2\rho)$-convex, there is a smooth $k_2\coth(k_2\rho)$-convex hypersurface L through $Q_0$ leaving a neighbourhood of $Q_0$ in Ω (and then in S) in the convex side of L; then S and L are tangent at $Q_0$ and (8) holds, where $k^L_N(Q_0)$ is the normal curvature of L at $Q_0$. From (7) and (8) we obtain the first inequality of (2). Now, using the hypothesis (1), we have that P is $k_1$-convex, and then we can apply Lemma 3 to obtain the second inequality of (2).
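To make the role of monotonicity explicit, one can note (this reformulation is ours, not a displayed equation of the paper) that $t \mapsto k_1\coth(k_1 t)$ is strictly decreasing on $(0,\infty)$ with values in $(k_1,\infty)$, so the first inequality of (2) can be inverted whenever $\coth(k_2\rho) > k_1/k_2$:
\[ k_1\coth(k_1 r) \;\ge\; k_2\coth(k_2\rho) \quad\Longrightarrow\quad r \;\le\; \frac{1}{k_1}\,\operatorname{arccoth}\!\left(\frac{k_2}{k_1}\,\coth(k_2\rho)\right), \]
which is also one way to read the restriction $\coth(k_2\rho) \ge k_1/k_2$ (equivalently $\rho \le \frac{1}{k_2}\operatorname{arccoth}(k_1/k_2)$) imposed on ρ in the statements above.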
Proof of Theorem 2
Let $\ell_i$, $\ell_{i+1}$ be the lengths of the sides having $A_i$ as a common vertex. As in the case of constant curvature (see [4]), we consider the segments of circles $C_i$ from $A_{i-1}$ to $A_i$ of radius $\rho_i$ and center $O_i$ and $C_{i+1}$ from $A_i$ to $A_{i+1}$ of radius $\rho_{i+1}$ and center $O_{i+1}$. Now, on the hyperbolic space $M^2_\lambda$ we consider triangles with sides of the same lengths as the corresponding triangles in M. From the Toponogov comparison theorem on the angles of a triangle we have that the interior angles of the triangles in M are bigger than the corresponding ones in $M^2_\lambda$. We want the curve obtained by the union of the segments of circle $C_i$ to be convex, and this will happen if and only if the
but this occurs if and only if
, we can use the definition (3) to write the inequality (10) in the form (11). On the other hand, the convexity condition holds provided that (12) is satisfied. Now we are going to check that the hypothesis on the lower bound of $\kappa_{A_i}$ implies (12). From hyperbolic trigonometry applied to the triangles, one has that $\tanh(k_1\ell_i/2) = \tanh(k_1\rho_i)\sin(\delta_i)$ for i and for i + 1.
Since $\delta_i \in [0, \pi/2]$, $\delta_i \le \frac{\pi}{2}\sin(\delta_i)$. Moreover, we shall take $\rho_i = \rho$. Then the desired condition follows. Now, let us take a parallel $C_\varepsilon$ to C at distance ε. $C_\varepsilon$ is the union of segments of circles $C_i$ of radius $\rho + \varepsilon$ and center $O_i$ and circles of radius ε centered at $A_i$; then it is $C^{1,1}$ and its normal curvature $k_N$ satisfies $k_2\coth(k_2\varepsilon) \le k_N \le k_1\coth(k_1\varepsilon)$ at points of $C_\varepsilon$ at distance ε from the vertices $A_i$, and $k_2\coth(k_2(\rho+\varepsilon)) \le k_N \le k_1\coth(k_1(\rho+\varepsilon))$ at the other points. Then, for every ε, $k_2\coth(k_2(\rho_i+\varepsilon)) \le k_N$, and we can apply Theorem 1 to conclude $k_1\coth(k_1 r) \ge k_2\coth(k_2(\rho+\varepsilon))$ for $C_\varepsilon$, and then for P, because the domain bounded by P is included in the domain bounded by $C_\varepsilon$. Taking $\varepsilon \to 0$, we obtain $k_1\coth(k_1 r) \ge k_2\coth(k_2\rho)$, which is (4).
Bounds for Theorem 1 in terms of only $k_1$
We have stated Theorem 1 in a form that has a direct application to the proof of Theorem 2. This form implies some restrictions on the number ρ ($\rho \le \frac{1}{k_2}\operatorname{arccoth}(\frac{k_1}{k_2})$), which appears in the lower bound of the normal curvatures of the hypersurface P. The same arguments as in the proof of Theorem 1 allow us to obtain another upper bound for r with hypotheses that impose no restriction on ρ. The statement of this other result is:
Theorem 1'. Let M be a Hadamard manifold with sectional curvature K satisfying $0 \ge K \ge -k_1^2$, $k_1 > 0$. Let P be a compact $k_1\coth(k_1\rho)$-convex hypersurface of M. Then $r \le \rho$ and $R \le \rho + k_1 \ln 2$.
4.2 The theorems when $k_2 = 0$
In this case, in the hypotheses, the lower bounds for $k_N$ or $\kappa_{A_i}$ will be $1/\rho$ instead of $k_2\coth(k_2\rho)$, and ρ has to satisfy $1/\rho \ge k_1$. Then, the statement of the theorems, with proofs similar to the ones given before, is:
Theorem 1''. Let M be a Hadamard manifold with sectional curvature K satisfying $0 \ge K \ge -k_1^2$, $k_1 > 0$. Let P be a compact hypersurface of M such that
Theorem 2'. Let M be a Hadamard surface with Gauss curvature K satisfying $0 > K \ge -k_1^2$, $k_1 > 0$. Let P be a polygon with n sides of lengths $\ell_i$ and vertices $A_i$. If $\kappa_{A_i} \ge \frac{\pi}{2} k_1\coth(k_1\rho)$ and $\frac{1}{\rho} \ge k_1$, then the inradius r and the circumradius R of P satisfy
Theorem 2 with another definition of $\kappa_A$.
In [4] another definition of the curvature of a polygon in a surface of constant sectional curvature is given. It coincides with (3), but in spaces of constant sectional curvature $-k_1^2$ it takes a different form. Taking this definition for surfaces with $0 \ge K \ge -k_1^2$ could seem as natural as taking definition (3). With the computations that we have done, the result will again be Theorem 2. The reason is that in the last inequality of (13) we have bounded $\frac{\tanh(k_1\ell_i/2) + \tanh(k_1\ell_{i+1}/2)}{k_1(\ell_i + \ell_{i+1})/2}$ by $1/k_1$, but, with the new definition of $\kappa_{A_i}$, inequality (12) changes to $\delta_i + \delta_{i+1} \le \kappa_{A_i}\,\frac{1}{k_1}\left(\tanh(\ell_i/2) + \tanh(\ell_{i+1}/2)\right)$, and the above quotient is $1/k_1$, the same value as the bound we have taken.
4.4 The theorems when $k_2 = k_1$
For Theorem 1, the hypothesis will only be that P is $k_1\coth(k_1\rho)$-convex, and the thesis will be $r \le \rho$ and $R \le \rho + k_1 \ln 2$. If we compare this with the corresponding theorem by Karcher when P is $C^2$, where, with the same hypothesis, one obtains $R \le \rho$, we see that our bound is not the best one: we bounded r by what should be a bound of R. Hence our bound is far from being the best one. Similar remarks apply to Theorem 2: the hypothesis will be the same, and the conclusion will be $r \le \rho$. In [4] we obtained, with the same hypothesis, $R \le \rho$, again the same difference as with Theorem 1. | 2023-07-19T04:36:14.028Z | 2023-07-18T00:00:00.000 | {
"year": 2023,
"sha1": "937a8870e59fc9eea93c8c32558c22f87046b9f5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "937a8870e59fc9eea93c8c32558c22f87046b9f5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
6133766 | pes2o/s2orc | v3-fos-license | Long-Lasting Improvements in Liver Fat and Metabolism Despite Body Weight Regain After Dietary Weight Loss
OBJECTIVE Weight loss reduces abdominal and intrahepatic fat, thereby improving metabolic and cardiovascular risk. Yet, many patients regain weight after successful diet-induced weight loss. Long-term changes in abdominal and liver fat, along with liver test results and insulin resistance, are not known. RESEARCH DESIGN AND METHODS We analyzed 50 overweight to obese subjects (46 ± 9 years of age; BMI, 32.5 ± 3.3 kg/m²; women, 77%) who had participated in a 6-month hypocaloric diet and were randomized to either reduced carbohydrates or reduced fat content. Before, directly after diet, and at an average of 24 (range, 17–36) months follow-up, we assessed body fat distribution by magnetic resonance imaging and markers of liver function and insulin resistance. RESULTS Body weight decreased with diet but had increased again at follow-up. Subjects also partially regained abdominal subcutaneous and visceral adipose tissue. In contrast, intrahepatic fat decreased with diet and remained reduced at follow-up (7.8 ± 9.8% [baseline], 4.5 ± 5.9% [6 months], and 4.7 ± 5.9% [follow-up]). Similar patterns were observed for markers of liver function, whole-body insulin sensitivity, and hepatic insulin resistance. Changes in intrahepatic fat and intrahepatic function were independent of macronutrient composition during intervention and were most effective in subjects with nonalcoholic fatty liver disease at baseline. CONCLUSIONS A 6-month hypocaloric diet induced improvements in hepatic fat, liver test results, and insulin resistance despite regaining of weight up to 2 years after the active intervention. Body weight and adiposity measurements may underestimate beneficial long-term effects of dietary interventions.
OBJECTIVE: Weight loss reduces abdominal and intrahepatic fat, thereby improving metabolic and cardiovascular risk. Yet, many patients regain weight after successful diet-induced weight loss. Long-term changes in abdominal and liver fat, along with liver test results and insulin resistance, are not known.
RESEARCH DESIGN AND METHODS: We analyzed 50 overweight to obese subjects (46 ± 9 years of age; BMI, 32.5 ± 3.3 kg/m²; women, 77%) who had participated in a 6-month hypocaloric diet and were randomized to either reduced carbohydrates or reduced fat content. Before, directly after diet, and at an average of 24 (range, 17-36) months follow-up, we assessed body fat distribution by magnetic resonance imaging and markers of liver function and insulin resistance.
RESULTS: Body weight decreased with diet but had increased again at follow-up. Subjects also partially regained abdominal subcutaneous and visceral adipose tissue. In contrast, intrahepatic fat decreased with diet and remained reduced at follow-up (7.8 ± 9.8% [baseline], 4.5 ± 5.9% [6 months], and 4.7 ± 5.9% [follow-up]). Similar patterns were observed for markers of liver function, whole-body insulin sensitivity, and hepatic insulin resistance. Changes in intrahepatic fat and intrahepatic function were independent of macronutrient composition during intervention and were most effective in subjects with nonalcoholic fatty liver disease at baseline.
CONCLUSIONS: A 6-month hypocaloric diet induced improvements in hepatic fat, liver test results, and insulin resistance despite regaining of weight up to 2 years after the active intervention. Body weight and adiposity measurements may underestimate beneficial long-term effects of dietary interventions.
Increases in visceral and subcutaneous abdominal fat as well as ectopic fat deposition contribute to the development of metabolic abnormalities in obesity (1). In particular, intrahepatic fat accumulation is associated with increased insulin resistance and promotes the development of type 2 diabetes (2,3), independently of total or visceral fat mass (4,5). Excessive hepatic fat also predisposes to nonalcoholic steatohepatitis, which may progress to cirrhosis and hepatic cancer (6). Thus, interventions reducing hepatic fat address the root cause for both obesity-associated metabolic disease and liver disease. Lifestyle interventions including hypocaloric diets are a cornerstone of obesity management because diet-induced weight loss improves insulin sensitivity (7) while preventing type 2 diabetes (8). Weight reduction through caloric restriction decreased hepatic fat in studies lasting up to 12 months (9,10). The improvement in hepatic fat during dieting was primarily related to caloric restriction rather than macronutrient composition (11). Two important issues are involved in weight reduction studies. First, there may be dissociation between body weight changes and cardiovascular and metabolic risk factors over time. For example, whereas bariatric surgery decreases the risk for new-onset diabetes for many years, the risk for arterial hypertension may not be reduced despite sustained weight loss (12). Second, many subjects regain weight after diet-induced weight loss (13). Whether weight regain negates previous improvements in hepatic fat and liver function has not been investigated. Given the importance of hepatic fat in the pathogenesis of obesity-associated metabolic disease, we assessed long-term changes in visceral fat, subcutaneous fat, liver fat, liver test results, and insulin resistance after dietary weight loss in overweight or obese subjects.
Participants
Between March 2007 and June 2010, 108 overweight and obese subjects without signs and symptoms of cardiac, vascular, renal, metabolic, and gastrointestinal diseases completed a 6-month hypocaloric diet (−30% energy intake) with either reduced carbohydrate or reduced fat content. Subject characteristics had been described previously (11). Subjects received no medications and required no other medical care. We excluded subjects reporting more than 2 h of physical activity per week, subjects consuming >20 g/day of alcohol, and pregnant or nursing women. All subjects who completed the study were invited to participate in a long-term follow-up. The study was performed in accordance with the Declaration of Helsinki (1996). Our Institutional Review Board approved the study and written informed consent was obtained before entry.
Study design
The follow-up investigation presented here is part of the B-SMART study (Clinical trial reg. no. NCT00956566, clinicaltrials.gov), which compared weight loss and changes of associated metabolic and cardiovascular markers in hypocaloric diets with reduced carbohydrates and reduced fat. The randomized 6-month dietary intervention consisted of individual and group dietary counseling with the goals of reducing energy intake by 30% with a minimum of 1,200 kcal/day and adherence to one diet or the other. The reduced carbohydrate diet contained ≤90 g carbohydrates, 0.8 g protein/kg body weight, and ≥30% fat. The reduced-fat diet contained ≤20% fat, 0.8 g protein/kg body weight, and the remaining energy content comprised carbohydrates. All subjects who completed the 6-month diet were invited to participate in a follow-up visit. The only contacts during follow-up were occasional telephone calls to monitor weight changes and to invite subjects to the final examination. No intervention of any kind was offered during follow-up. The follow-up visit occurred between 17 and 36 months after completion of the supervised dietary intervention (Supplementary Fig. 1). All anthropometric, metabolic, and magnetic resonance imaging studies were conducted in an academic clinical research center with the same equipment, scientific staff, and protocols as applied during the baseline and postdiet measurements (11).
Anthropometric and metabolic evaluation
We measured body weight, waist circumference, and height in a standardized manner after an overnight fast. We obtained blood samples at baseline and at 15, 30, 45, 60, 90, and 120 min after glucose ingestion (75 g glucose/300 mL; oral glucose tolerance test [OGTT]) to measure glucose and insulin. Imaging studies were performed after another overnight fast. Participants provided a 7-day food protocol that was analyzed for energy intake and macronutrient content using Optidiet (V3.1.0.004; GOE, Linden, Germany), an analysis software based on nutritional content of food as provided by the German National Food Key.
Abdominal and liver fat imaging
A clinical 1.5-T magnetic resonance scanner (Sonata and Avanto; Siemens Medical Solution AG, Erlangen, Germany) was used to measure abdominal subcutaneous and visceral fat mass as well as liver fat content as previously described (14). Briefly, we applied a T1-weighted, water-suppressed, gradient echo technique (repetition time, 80 ms; echo time, 6.11 ms; 512 × 512 matrix; field of view, 500 × 500 mm; slice thickness, 10 mm; interslice gap, 10 mm) to image abdominal fat during repetitive breath-holds. Axial slices were acquired from the diaphragm to the symphysis. We quantified visceral and subcutaneous adipose tissue by semiautomated image segmentation software using a contour-following algorithm (Vitom). In addition, we measured intrahepatic lipids by respiratory-gated 1H spectrometry (spin-echo: repetition time according to respiratory cycle (>5 s); echo time, 30 ms). Unsuppressed spectra were acquired in end-expiration from a single 30 × 30 × 20 mm³ voxel located at liver segment 7 (24 averages). Intrahepatic lipid content was quantified using peak areas and expressed (as percent) as fat / (fat + water).
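As a minimal illustration of how the spectroscopic readout translates into the liver fat percentages reported below, the following Python sketch applies the fat / (fat + water) formula to example peak areas. The function name, the example numbers, and the check against the 5.6% NAFLD cut-off used later in the Results are our additions, not part of the study's analysis pipeline.

```python
def hepatic_fat_percent(fat_peak_area: float, water_peak_area: float) -> float:
    """Intrahepatic lipid content (%) from 1H-MRS peak areas: fat / (fat + water) * 100."""
    return 100.0 * fat_peak_area / (fat_peak_area + water_peak_area)

# Purely illustrative peak areas (arbitrary units, not study data).
fat_pct = hepatic_fat_percent(fat_peak_area=7.8, water_peak_area=92.2)
print(f"intrahepatic lipid content: {fat_pct:.1f}%")          # -> 7.8%
print("NAFLD range" if fat_pct > 5.6 else "below NAFLD threshold")
```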
Biochemical measurements and calculations
Glucose (mmol/L), insulin (µU/mL), lipoproteins, alanine aminotransferase (U/L), aspartate aminotransferase (U/L), and γ-glutamyltransferase (U/L) were determined by standard methods in a certified clinical chemistry laboratory. Insulin resistance was estimated by the homeostasis model assessment of insulin resistance (HOMA-IR) index. HOMA-IR was calculated from fasting insulin and glucose by the following equation: (insulin [µU/mL] × glucose [mmol/L]) / 22.5. Whole-body insulin sensitivity was calculated by the composite insulin sensitivity index (15). Composite insulin sensitivity index = 10,000 / √[(fasting plasma glucose × fasting plasma insulin) × (glucose × insulin)]; fasting plasma glucose was expressed as mg/dL and fasting plasma insulin was expressed as µU/mL, and glucose (mg/dL) and insulin (µU/mL) were the mean glucose and mean insulin concentrations during the glucose load. The hepatic insulin resistance index was estimated from the results of the OGTT. This approach has been validated in nondiabetic subjects by using euglycemic insulin clamp testing including labeled glucose administration (16).
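Both indices are simple arithmetic on fasting and OGTT values. A hedged Python sketch of the two calculations is shown below; the function names and the input numbers are ours and purely illustrative, not values from the study.

```python
import math

def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """HOMA-IR = (fasting insulin [uU/mL] * fasting glucose [mmol/L]) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

def composite_isi(fpg_mg_dl: float, fpi_uU_ml: float,
                  mean_glucose_mg_dl: float, mean_insulin_uU_ml: float) -> float:
    """Composite (Matsuda) insulin sensitivity index from an OGTT:
    10,000 / sqrt((FPG * FPI) * (mean OGTT glucose * mean OGTT insulin))."""
    return 10_000 / math.sqrt((fpg_mg_dl * fpi_uU_ml)
                              * (mean_glucose_mg_dl * mean_insulin_uU_ml))

# Illustrative values only.
print(round(homa_ir(5.5, 12.0), 2))               # ~2.93
print(round(composite_isi(99, 12, 140, 60), 2))   # ~3.17
```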
Statistical analysis
Data were first tested for normal distribution and variance homogeneity with the Kolmogorov-Smirnov test and the Levene test, respectively. Differences between time points (baseline, after diet, follow-up) were analyzed using ANOVA for repeated measures with Bonferroni post hoc test. Univariate associations between parameters were described by Pearson correlation coefficient. To test for interactions between diet groups or weight loss groups over time, we used two-way ANOVA for repeated measures and Bonferroni post hoc test. To identify independent predictors of hepatic fat content at baseline and of long-term reduction in hepatic fat after the dietary intervention, we conducted multivariate linear regression analyses. All statistical analyses were performed with SPSS 18 (SPSS, Chicago, IL). Significance was accepted at P < 0.05. Unless otherwise stated, values are given as mean ± SD. Post hoc power analysis was calculated with G*Power 3.1.7 (http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3).
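The repeated-measures ANOVAs were run in SPSS. As a rough, self-contained approximation of the post hoc step only, the sketch below runs Bonferroni-corrected paired t-tests between the three time points on synthetic data that merely mimics the reported liver fat means and SDs; the data, the variable names, and the simplification to pairwise t-tests are our assumptions, not the study's actual procedure.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Synthetic data generated only to match the reported means/SDs of liver fat (%);
# the real analysis used each subject's paired measurements.
rng = np.random.default_rng(0)
liver_fat = {
    "baseline":  rng.normal(7.8, 9.8, 50).clip(min=0),
    "6 months":  rng.normal(4.5, 5.9, 50).clip(min=0),
    "follow-up": rng.normal(4.7, 5.9, 50).clip(min=0),
}

pairs = list(combinations(liver_fat, 2))
bonferroni_alpha = 0.05 / len(pairs)   # 3 pairwise comparisons
for a, b in pairs:
    t, p = stats.ttest_rel(liver_fat[a], liver_fat[b])
    verdict = "significant" if p < bonferroni_alpha else "n.s."
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} ({verdict} at Bonferroni level)")
```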
RESULTS: Fifty subjects had complete data sets at follow-up and were included in the analysis (Table 1). Participant flow is shown in Supplementary Fig. 1. The time between completion of diet and follow-up visit was 24 ± 6 months (range, 17-36 months). The long noninterventional follow-up period was associated with a substantial discontinuation rate (54 of 108 eligible subjects participated in the follow-up visit). Approximately half of the subjects changed housing and contact details without notifying the investigators (lost to contact). The other subjects withdrew consent to further participation because of time constraints or disinterest in further scientific evaluations. To test the possibility that the high discontinuation rate introduced a substantial bias, we compared participants and nonparticipants with respect to baseline data and changes at 1, 3, and 6 months after diet initiation (Supplementary Table 1). Subjects who discontinued the follow-up period had similar age, baseline BMI, and improvements in anthropometric and metabolic parameters during the active intervention period compared with those who participated in the follow-up evaluation. Total energy intake was reduced during the dietary intervention period (P < 0.01) and remained reduced at follow-up (baseline: 2,245 ± 619 kcal/d; end of diet: 1,720 ± 450 kcal/d; follow-up: 1,836 ± 468 kcal/d). To analyze predictors of baseline hepatic fat content, we conducted a multivariate regression analysis with age, sex, BMI, insulin sensitivity, total cholesterol, LDLs, HDLs, triglycerides, free fatty acids, visceral and subcutaneous abdominal fat mass, cardiorespiratory fitness (VO2max), total energy intake, and percentage of fat and carbohydrate intake relative to total energy intake. The analysis revealed only visceral fat mass (β = 0.64; P = 0.006) as an independent predictor of baseline hepatic fat content.
We calculated weight regain as the difference between body weight at follow-up and body weight after the active dietary intervention period at 6 months. After successful reduction of body weight and BMI with dietary intervention, we observed a significant weight regain at follow-up (Table 1 and Fig. 1). The concomitant increase in visceral and subcutaneous abdominal fat mass followed a parallel trend (Fig. 1). In contrast, intrahepatic lipids decreased during the dietary intervention and remained reduced at follow-up (Fig. 1). Body weight regain was correlated with a regain in visceral (r = 0.70; P < 0.001) or subcutaneous abdominal adipose tissue (r = 0.90; P < 0.001), but less so with increased hepatic fat content (r = 0.30; P < 0.05). Long-term reduction of hepatic lipids was associated with sustained improvements in serum alanine aminotransferase, aspartate aminotransferase, and γ-glutamyltransferase activities (Table 1). Finally, indices of insulin resistance were obtained from OGTT. All measured data and calculated variables improved with the active diet and remained so at follow-up (Table 1 and Fig. 2). When analyzed separately for subjects initially randomized to reduced-carbohydrate or reduced-fat diets, all analyzed variables changed similarly in both groups over time.
Hepatic fat reduction is particularly relevant in subjects with nonalcoholic fatty liver disease (NAFLD). We stratified subjects into a group with NAFLD and a group without NAFLD at baseline (hepatic fat >5.6% or <5.6%; Table 2). Groups were similar in age, body weight, and sex distribution. Both groups lost similar amounts of body weight and visceral and subcutaneous abdominal fat mass with the same total energy intake reduction during dietary intervention. Subjects with NAFLD showed sustained improvement in hepatic fat content and liver function despite modest weight regain during follow-up.
We conducted a multivariate regression analysis using age, sex, baseline BMI, type of diet, changes in body weight with diet, changes in visceral and subcutaneous fat mass with diet, changes in hepatic fat with diet, changes in total cholesterol with diet, and changes in insulin sensitivity with diet as independent variables; the change in liver fat from baseline to long-term follow-up was used as the dependent variable. Only changes in total body weight with diet (β = 0.31; P = 0.02) and changes in hepatic fat with diet (β = 0.86; P < 0.001) predicted long-term intrahepatic fat loss. The model explained 68% of the total variation in the observed long-term liver fat reduction.
CONCLUSIONS: The important and novel finding of our study is a sustained improvement in hepatic fat content, liver function tests, and insulin resistance over >2 years after a 6-month hypocaloric diet in overweight and obese subjects. All improvements occurred despite regain of body weight, abdominal visceral adipose tissue, and subcutaneous adipose tissue mass during follow-up. Our findings highlight the beneficial long-term effects of a well-controlled dietary lifestyle intervention on hepatic fat content and metabolism. Furthermore, our findings suggest that the benefit of dietary interventions should not be judged solely by anthropometric measurements. In fact, reductions in caloric intake may have weight-independent effects on liver fat and metabolism.
Various types of caloric restriction are effective in decreasing body weight (13) and fat mass (17). Whereas short-term reductions of body weight and fat mass can be impressive, most studies with longterm follow-up were discouraging and reported at least partial body weight regain within 1-2 years (13). Data regarding changes of visceral adipose tissue and ectopic fat storage during long-term follow-up are scarce. The issue is relevant because visceral adipose tissue is independently associated with cardiovascular and metabolic risk (1). Furthermore, ectopic fat storage in liver, skeletal muscle, pancreas, and the heart adversely affects organ function through a mechanism often referred to as "lipotoxicity." Intrahepatic fat content is an important predictor of whole-body insulin resistance, increased secretion rates of VLDLs, and progression to type 2 diabetes (2,4,18). Type 2 diabetes, in turn, increases the risk of progressive liver disease (e.g., nonalcoholic steatohepatitis, cirrhosis, and hepatocellular carcinoma) (19)(20)(21). Our study is the first to assess abdominal fat mass and liver fat several months after diet-induced weight loss and has scientific as well as clinical implications.
Abdominal visceral adipose tissue drains into the portal vein and the liver is exposed to large amounts of free fatty acids derived from this metabolically active fat depot. In our study, hepatic fat at baseline was strongly related to visceral fat mass. Yet, visceral adipose tissue mass may not be the sole determinant of how hepatic fat responds to dietary interventions (22). Short-term studies suggested that hepatic fat accumulation is, at least in part, regulated independently of body weight and abdominal visceral fat mass. In obese subjects, caloric restriction decreased hepatic fat by 10-30% within 48 h (23). Furthermore, <5% weight loss decreased hepatic fat by 28-40% (24)(25)(26). Finally, aerobic exercise training ameliorated hepatic fat content while body weight (27) and visceral fat mass (28) remained stable. Our study extends these observations and suggests that differential regulation of adipose tissue mass and hepatic fat stores may not be restricted to the acute weight loss period. Moreover, our observations further support the idea that intrahepatic fat is independently associated with metabolic risk (4,5,29).
There is an ongoing debate whether hepatic insulin resistance is primarily mediated through fatty acid and adipokine release from expanded and dysfunctional visceral adipose tissue or through generation of intrahepatic lipid mediators, such as diacylglycerols and ceramides that directly interfere with insulin signaling (30). Our finding that insulin sensitivity remained improved during follow-up despite the weight and fat mass regain strengthens the notion that intrahepatic lipids and their metabolites are mediators of hepatic insulin resistance. We observed a constantly reduced energy intake during follow-up but regain of body weight and fat mass. A possible explanation is that weight loss and caloric restriction led to compensatory reductions in metabolic rate (31). Approximately 80% of liver fatty acids originate from circulating fatty acids (32) that are mobilized predominantly from adipose tissue. Improved insulin resistance with weight loss attenuates adipose tissue lipolysis and enhances fatty acid storage within the large adipose tissue depots. Fatty acids are taken-up into liver and adipose tissue through fatty acid transport proteins and FAT/CD36 (33). In obese subjects, intrahepatic lipid content determined the expression of fatty acid transporters and FAT/CD36 in liver and adipose tissue. Fatty acid transporters were expressed to a lesser extent in the liver and to a higher extent in adipose tissue when subjects had normal liver fat content, and this was reversed in subjects with high liver fat content, suggesting that fatty acid flux from adipose tissue to the liver determines liver fat content (33). Overall, our study suggests that successful weight loss achieved through hypocaloric dieting improves fatty acid flux from the liver toward the primary storage site in the long-term. The same pattern, rerouting of free fatty acids from the liver toward adipose tissue together with improved insulin sensitivity, also has been observed with thiazolidinedione treatment (34).
Our clinically important observation is that a controlled 6-month intervention elicited sustained lifestyle changes in a surprisingly large proportion of our subjects. Caloric intake was still reduced during the follow-up assessment. Thus, important lifestyle improvements may be obscured if the focus is on weight changes only. We also observed that similar sustained improvements in hepatic fat, markers of liver function, and insulin resistance occurred in subjects initially assigned to carbohydrate-reduced or fat-reduced diets. This finding extends our results observed at the end of the 6-month diet period (11) and suggests that changes in energy balance may be more important than changes in dietary composition for sustained hepatic fat reduction. Also clinically important is the observation that subjects fulfilling diagnostic criteria of NAFLD had a particularly pronounced reduction in hepatic fat over time. NAFLD affects up to 30% of adults in developed countries (35) and is related to the ongoing obesity epidemic (3). The condition is an independent risk factor for cardiovascular disease and type 2 diabetes, predisposes to nonalcoholic steatohepatitis, and may progress to cirrhosis and hepatic cancer (6). The 45% reduction of hepatic fat with hypocaloric diets observed in this study is a particular benefit in subjects with NAFLD. Our findings suggest that this important improvement in hepatic fat can be maintained in the long term even when body weight is partly regained.
Table 2. Changes with 6-month diet and at long-term follow-up in subjects stratified for the presence of NAFLD at baseline.
The major limitation of our study is that 50% of the subjects who had finished the active intervention period discontinued participation during the follow-up period. Discontinuation rates ranged between 33 and 50% in two recent trials reporting 1-year treatment data for Food and Drug Administration-approved weight loss drugs (36,37). In these studies, discontinuation occurred during the active treatment period, whereas our participants discontinued during an observational period after the active intervention while contact was maintained through occasional telephone calls only. Clearly, more close contact may have increased the retention rate; however, the longterm value of our weight loss program for the influence on hepatic steatosis could not have been assessed as planned because close and regular contact represents an intervention by itself. Instead, we wanted to determine the impact of the intervention with the least interaction with subjects as possible. This study design, however, resulted in the problem of losing contact with approximately half of the subjects who discontinued participation. The other half discontinued by withdrawing consent; however, if we were informed correctly by these subjects, withdrawn consent was not related to unsuccessful weight loss or any other undesired developments during follow-up. Based on mathematical models estimating discontinuation rates in dietary intervention studies (38,39), the rate in our study is in the expected range. However, subjects more successful at losing weight and maintaining their diets could be more motivated to participate in follow-up investigations. As shown in Supplementary Table 1, subjects who discontinued participation did not differ from participants included in the follow-up analysis in terms of baseline anthropometric measurements, metabolism, or early and sustained weight loss success during the intervention period. Furthermore, we observed no differences in sex distribution between participants and those who discontinued study participation. The analysis suggests that discontinuation was not limited to a specific patient population, thus rendering a major bias less likely.
To test whether the sample size was sufficiently large to assess the reduction in intrahepatic fat from baseline to 24-month follow-up, we conducted a post hoc power calculation. With a statistical power of 80%, a sample size of 33 would have been sufficient to detect the observed difference in liver fat of 3.1 ± 6.1% with a one-sided α-error of 0.025. Given the inclusion of 50 subjects in our study, the probability of detecting the observed difference with a one-sided α-error of 0.025 was 94%. Nevertheless, losing 50% of participants could have introduced bias; therefore, results should be interpreted with caution.
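The reported numbers can be reproduced approximately with any standard power routine. A hedged sketch using statsmodels is shown below; the effect size is simply the reported mean change divided by its SD, and since the exact G*Power settings are not stated, this should be read as an approximation rather than the study's calculation.

```python
from statsmodels.stats.power import TTestPower

effect_size = 3.1 / 6.1   # reported liver fat change of 3.1 +/- 6.1 percentage points

power_calc = TTestPower()
n_required = power_calc.solve_power(effect_size=effect_size, alpha=0.025,
                                    power=0.80, alternative="larger")
achieved = power_calc.solve_power(effect_size=effect_size, nobs=50,
                                  alpha=0.025, alternative="larger")
print(f"required sample size: {n_required:.1f}")   # close to the reported 33
print(f"power with n = 50:    {achieved:.2f}")     # close to the reported 94%
```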
There are other methodological limitations of our study. First, we did not perform liver biopsies. Thus, we cannot prove that the sustained improvement in hepatic fat in our study was associated with beneficial changes in hepatic inflammation and fibrosis. In contrast to steatosis, advanced liver fibrosis is less amenable to therapeutic interventions (35). Second, we are aware that hyperinsulinemic euglycemic clamping would have provided more direct information regarding insulin sensitivity compared with OGTT. Third, we did not measure food intake directly. Because of the possibility of underreporting, the magnitude of the reduced caloric intake after diet and at follow-up needs to be interpreted with caution (40).
Despite these limitations, particularly the high discontinuation rate, we conclude that hypocaloric dietary interventions have a long-lasting effect on intrahepatic lipid accumulation despite weight regain. Because obesity management programs commonly focus on body weight and anthropometric measurements, beneficial long-term effects of dietary interventions on human metabolism may be underestimated. We suggest that the beneficial response to reduced caloric intake in our study may be driven in large part by changes in liver metabolism.
Acknowledgments: This study was part of a joint project between Metanomics GmbH (Berlin, Germany) and Charité University Medical School, which was supported by the Federal Ministry of Education and Research (BMBF-0313868). A.L.B. was supported by a grant from the German Research Foundation (BI1292/4-1). The German Competence Network of Obesity (projects 01Gl0830 and 01GI1122D; S.E.) supported the project.
No potential conflicts of interest relevant to this article were reported.
S.H. wrote the manuscript and researched data. V.H. and W.U. researched data and contributed to discussion. A.L.B. and J.S.-M. contributed to discussion and reviewed and edited the manuscript. S.J. researched data. J.B. conceived and designed the study and researched data. A.M. researched data. F.C.L. and J.J. conceived and designed the study, contributed to discussion, and reviewed and edited the manuscript. M.B. conceived and designed the study and contributed to discussion. S.E. conceived and designed the study, researched data, and wrote the manuscript. All authors had access to the study data and reviewed and approved the final manuscript. S.E. is the guarantor of this work and, as such, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. | 2016-05-15T02:51:00.310Z | 2013-10-15T00:00:00.000 | {
"year": 2013,
"sha1": "4549bb3d44c75a7ad850da7fe5ab76561c8a67d6",
"oa_license": "CCBYNCND",
"oa_url": "https://care.diabetesjournals.org/content/diacare/36/11/3786.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6161004bf4dabf7e0d792a3038f91931f0cd2030",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257803353 | pes2o/s2orc | v3-fos-license | Online mentoring for girls in secondary education to increase participation rates of women in STEM: A long‐term follow‐up study on later university major and career choices
An important first step in talent development in science, technology, engineering, and mathematics (STEM) is getting individuals excited about STEM. Females, in particular, are underrepresented in many STEM fields. Since girls’ interest in STEM declines in adolescence, interventions should begin in secondary education at the latest. One appropriate intervention is (online) mentoring. Although its short‐term effectiveness has been demonstrated for proximal outcomes during secondary education (e.g., positive changes in elective intentions in STEM), studies of the long‐term effectiveness of STEM mentoring provided during secondary education—especially for real‐life choices of university STEM majors and professions—are lacking. In our study, we examine females’ real‐life decisions about university majors and entering professions made years after they had participated in an online mentoring program (CyberMentor) during secondary education. The program's proximal positive influence on girls’ elective intentions in STEM and certainty about career plans during secondary education had previously been demonstrated in several studies with pre–post‐test waitlist control group designs. Specifically, we compared the choices that former mentees (n = 410) made about university majors and entering professions several years after program participation with (1) females of their age cohort and (2) females of a group of girls comparably interested in STEM who had signed up for the program but then not participated (n = 71). Further, we examined the explanatory contribution to these later career‐path‐relevant, real‐life choices based on (1) mentees’ baseline conditions prior to entering the program (e.g., elective intentions in STEM), (2) successful 1‐year program participation, and (3) multiyear program participation. Findings indicate positive long‐term effects of the program in all areas investigated.
INTRODUCTION
To meet the challenges of our time-the COVID-19 pandemic or climate change, for example-and realize positive visions for a common future, outstanding scientists are crucial. Talent development toward eminence and innovation has been the subject of research for decades.
Bloom's seminal interview study of 120 individuals who had achieved world-class levels of performance in various domains has made an important contribution to a better understanding of what eminence and innovation entail. 1 He identified three stages of talent development. Stage 1 is about interest development. Individuals fall in love with a subject, an idea, or a discipline. Stage 2 is about skill acquisition. Individuals develop technical mastery through deliberate practice. continue to play an important role in practice. 3 In science, technology, engineering, and mathematics (STEM), the first stage of talent development poses a particular problem. This can, for example, be seen in the chronic shortages of skilled professionals. 4 Attracting talented girls and women to STEM has proven particularly challenging. Although the situation at this initial stage of talent development has improved in recent years, women are still less likely to opt for STEM majors and professions in many countries, especially in disciplines such as computer science and engineering. [5][6][7] For this reason, extensive efforts have been made in recent decades to improve the situation. In this context, it has proven important to start interventions as early as possible, at the latest early on in secondary education, as girls' interest in STEM subjects declines substantially during adolescence, 8 and decisions related to future career choices are made during this period. 9,10 As crucial as it is that interventions to attract talented girls to STEM careers start early, it is difficult to verify the long-term effectiveness of such measures. Definitive real-life choices for or against long-term engagement with STEM domains-in the sense of decisions about university majors and about careers-often take place years after an intervention targeting girls. Research studies often rely on self-reports completed after early interventions, for example, about girls' elective intentions or certainty about future career plans, and assume that any such proximal improvements in STEM-related outlooks or preferences presage definitive later choices about university majors and about careers. In more rigorous studies, researchers specify developmental trajectories based on participants' self-reports made before, during, and after an intervention and compare these with responses provided by suitable control groups (i.e., groups of individuals with similar initial characteristics but who did not receive the intervention). 11 However, follow-up studies on later real-life choices are hard to find. 12,13 The aim of our study was, therefore, to investigate whether a measure that has proven effective for getting girls interested in STEM subjects early on-namely, online mentoring being offered to girls enrolled in secondary education-also increases the rates at which par-ticipants make STEM career choices after high school (i.e., majoring in a STEM subject at university and/or entering a STEM profession).
We examined this for CyberMentor, a Germany-wide online mentoring program for girls enrolled in university-track secondary education.
CyberMentor has been scientifically evaluated and extensively studied for nearly two decades in terms of its short-term effectiveness and determinants of success. [14][15][16] Attracting girls in secondary education to STEM via mentoring Mentoring is a relatively stable dyadic relationship between one or more experienced individuals (mentors) and one or more less experienced individuals (mentees) characterized by mutual trust, goodwill, and the shared goal of mentees' advancement and growth. 17 Mentoring is important for youth career guidance 13 and for Bloom's Stage 1 of talent development 1 in three ways. First, it provides mentees with a connection to adults who probe and cultivate mentees' interests and help them achieve their goals. Second, mentors serve as role models who share their own experiences in identifying (career) interests and their career paths with their mentees. For career guidance and the promotion of girls' STEM interests, female role models who are themselves studying a STEM subject or working professionally in a STEM field are particularly effective. 18 Third, mentors act as advocates for their mentees, giving them access to other individuals and institutions to explore and deepen their career interests. 19 Online mentoring is a specific form of mentoring in which inter- In the context of STEM promotion for girls, online mentoring has proven to be particularly successful for at least four reasons. First, online mentoring makes it easier to find suitable female role models as mentors. The rationale is that while women who are themselves studying a STEM subject or working in a STEM field act as particularly suitable role models (e.g., Ref. 18), it is often difficult to find them in the mentees' immediate vicinity due to the low participation rates of women in STEM. Online mentoring facilitates matching mentees with suitable mentors due to its spatial and temporal flexibility. Moreover, the online format makes frequent communication (ideally on a weekly basis) easier, which is essential for successful mentoring. 22 Second, by using an appropriate platform, online mentoring enables mentees to network with other females interested in STEM. Networking with other mentors prevents counterproductive subtyping processes, 23 that is, girls recognize that their individual mentors are not an exception (i.e., a subtype), but that their mentors are among many women successful in STEM. This can help to reduce the stereotype that STEM is a typically male domain. [24][25][26] In addition, by networking with other female mentors, mentees learn about different STEM career paths and discuss a wide variety of STEM-related topics, thereby deepening their interests. Networking with other mentees in an online mentoring program can make mentees aware that many other girls are also interested in STEM, which is usually not the case in their immediate social environments. 15 Moreover, the other mentees in a program provide same-age role models. Same-age role models have been shown to be particularly effective for changing girls' perceptions of STEM subjects as unfeminine. 27 Furthermore, studies indicate that peer support plays an important role in girls' willingness to stay in STEM fields. 28 Third, online mentoring enables the establishment of an optimal learning environment for girls in STEM. 17 For example, it is possible to host topic chats on STEM or provide low-threshold access to STEM lectures, discussions, and Q-and-A sessions. 
Mentees and mentors can use collaboration tools to work jointly on STEM projects or manuscripts, or, with proper facilitation, to organize their own symposia. 3 Finally, some aspects that predict successful mentoring 29 are particularly well realized in online mentoring. For example, it is more feasible for employed mentors to participate in training sessions essential to mentoring success before and during a mentoring program 22 if these are offered via instructional videos or online. 30 The continuous support of participants by trained program staff (e.g., regular check-ins with mentees and mentors) is an important criterion for successful mentoring 31 and easier to implement online.
Research on mentoring during secondary education and females' later STEM-related choices
There is evidence that mentors have an influence on mentees' career choices in STEM (i.e., majoring in a STEM subject at university or entering a STEM profession). For example, in a retrospective survey of 1425 female graduates of selective science, high schools in the United States 32 found that having a teacher as a mentor during high school correlated with university STEM major choices and degrees in STEM.
However, studies examining associations between participation in formal STEM mentoring programs during secondary education and later career choices are lacking. 13,33 To date, evaluation studies of such programs have mainly examined more proximal program effects on precursors of later choices of STEM majors or careers (e.g., elective intentions in STEM, certainty about career plans, or career interests; for an overview, see Ref. 13). In the following, we describe the results of two evaluation studies reporting proximal beneficial effects of STEM mentoring programs offered during secondary education.
In their evaluation study of the Spanish Inspira STEAM program, which aims to increase girls' participation in STEM, 34 The results of these studies are encouraging. However, they focus on the early prerequisites of later career choices. Whether these promising changes in elective intentions in STEM, in certainty about career plans, and for other relevant constructs-observed while girls were in secondary education and participating in online and in-person mentoring programs-actually lead to real-life decisions, often made years later, to major in a STEM subject at university and/or to choose a STEM career has not yet been investigated. 13,33 Although studies with adults suggest that online and in-person mentoring do indeed influence career-path-relevant, real-life choices, 36,37 in these studies, the career choices occurred immediately or very shortly after participation. In the case of online and in-person mentor-
Research questions
In our study, we, therefore, investigated whether girls who participated in the online mentoring program CyberMentor for at least 1 year during secondary education were more likely to make a STEM career choice (majoring in a STEM subject or entering a STEM profession) later on. To do this, we compared the STEM career choices of former mentees with different control groups. We formulated three research questions. In the following, we state each research question and provide additional rationale for Research Questions 2 and 3.
Research Question 1: A few years after participating in the program, are former CyberMentor participants significantly more likely to make a career choice in STEM than women of the same age cohort?
Girls who enroll in longer-term STEM programs, such as CyberMentor, differ from girls who do not enroll in such programs on several characteristics. 15,38 For example, they exhibit significantly greater interest in STEM and have better grades in STEM subjects. Therefore, to determine whether subsequent career choices can actually be attributed to program participation, it is not sufficient to compare the proportion of participants' career choices to those of the same age cohort. Rather, it is important to compare the proportion of participants' career choices with those of comparable nonparticipating females of the same age who were similarly interested in STEM at the time of program participation. Only in this way can it be determined whether participation in the mentoring program is (partly) responsible for subsequent career choices in STEM, or whether the above-average interest would have led such girls to make these choices even without having participated in such a mentoring program.
Research Question 2: A few years after participating in the program, are former CyberMentor participants significantly more likely to make a STEM career choice than females who had also originally enrolled in the program (i.e., girls of the same age who were similarly interested in STEM at the time) but then did not participate?
Not only does STEM interest differ among girls who enroll in long-term STEM programs. Often, these girls are characterized by greater elective intentions in STEM and less certainty about career plans before program participation. 15
CyberMentor as a research setting
CyberMentor is a Germany-wide online mentoring program founded in 2005. Its goal is to inspire girls for STEM and to contribute to an increased participation of women in STEM in the long term. The program takes place on a members-only online platform that was planned and programmed by the project team based on the mentoring goals and the underlying mentoring concept. Mentees are enrolled in grades 5−13 of university-track secondary education in Germany. a Each mentee is mentored for at least 1 year by a mentor, a woman who is majoring in a STEM subject or working in a STEM field. Up to 800 mentees and up to 800 mentors participate in the program annually.
The mentees and mentors communicate with one another on a weekly basis for at least half an hour via emails, instant messages, and forum posts. The program is free of charge for the students, and the mentors volunteer their time.
In CyberMentor, dyads are matched based on the mentees' STEM interests and the mentors' STEM fields, as well as shared personal interests. Mentors act as successful STEM role models, provide insight into their careers and everyday work, discuss STEM-related topics as well as personal issues, and provide support for mentees' STEM projects. To prevent subtyping processes and to provide both role models that are professionally successful in STEM (mentors) and same-age role models who are also interested in STEM (mentees), the platform offers extensive networking opportunities with other mentors and mentees.
The mentoring year is divided into four phases of equal length. In the first quarter, the focus is on getting to know one another and learning more about STEM majors and professions. In addition, mentees and mentors jointly investigate where STEM plays a role in everyday life.
The results of these discussions are made available to the entire online mentoring community in STEM wikis written to stimulate platformwide discussions about STEM in everyday contexts and illustrate that STEM plays a role in diverse everyday settings that are often not apparent at first glance. In the second quarter, two mentoring dyads with similar STEM interests (e.g., computer science) collaborate on a project in their STEM field (e.g., programming an app). In the third quarter, several mentoring dyads from different STEM fields work together on an interdisciplinary project (e.g., researching and writing a plan for surviving on Mars). The last quarter is dedicated to a review of and reflection on the first three quarters of the mentoring year. For example, participants write articles for the monthly program magazine, CyberNews, and present their most interesting projects from the mentoring year.
a In most German federal states, secondary education starts in fifth grade and runs through the twelfth and, in some federal states, the thirteenth grade.
For Research Question 2, we compared the percentage of the 410 former mentees who had later made a STEM career choice with the percentage of STEM career choices in the group of the 71 women who had applied for the program during secondary school but never participated. As the two groups were not comparable concerning age and STEM interest when entering the program, propensity score matching was used to make the groups as comparable as possible (described in detail in the section on data analysis). This resulted in a matched sample of 265 former mentees who were then compared to the 71 women who had not participated in the program but had originally applied for participation.
b Although our response rate is appropriate for our sample size, 53 selection bias could still be a problem. Therefore, we analyzed differences between respondents and nonrespondents for age, elective intentions in STEM, and certainty about career plans at the beginning of the mentoring period. While there were significant differences, the differences were small. Cohen's ds were 0.13 for both age (respondents were slightly older) and certainty about career plans (lower values for respondents) and 0.27 for elective intentions in STEM (higher values for respondents), which indicates low selection bias.
c While some of our former mentees reported only their recent job in the follow-up, it is often possible to deduce what they studied at university. When analyzing a reduced sample of former mentees who were still studying, the differences compared to the age cohort were even slightly larger.
STEM interests
Participants indicated with "yes" or "no" whether they were interested in each of the following six areas: mathematics, computer science, biology, chemistry, physics, and technology. Because each area was rated separately, participants could indicate interest in more than one area.
Length of program participation
We counted the number of years the mentees participated in the program.
STEM career choice
Depending on which applied to them, participants either indicated which major they were currently studying at university, or they indicated their current profession via an open-response item. Using the classification of the National Pact for Women in STEM Professions, 39 we coded whether their major or profession was in a STEM field (1) or not (0).
STEMM career choice
We additionally coded a broader STEM variable, called STEMM. It refers to science, technology, engineering, mathematics, and also medical sciences.
Computer science and engineering career choice
Because women's participation in STEM and STEMM in Germany is not uniformly low across all domains but is especially low in computer science and engineering, we also coded choices in these two fields separately.
Data analysis
For answering Research Question 1, simple descriptive statistics were used. For properly assessing the treatment effect for Research Question 2, we first had to check the balance of our two groups concerning relevant pretreatment covariates. We considered age, the CyberMentor cohort (cohort 1, 2, or 3), and the pretreatment STEM interests from the application questionnaire described in the methods section (i.e., in mathematics, computer science, biology, chemistry, physics, and technology) as relevant variables. Information about these variables was available for all participants as the corresponding questions were part of the program application.
There was a significant age difference as well as marginally significant differences concerning interest in chemistry and interest in technology. However, even nonsignificant differences should be reduced as much as possible. 40 Therefore, we used propensity score matching based on all noted pretreatment variables, using the program PS Matching 3.0.4, to achieve balance concerning these variables. 41 More specifically, we used nearest-neighbor matching without replacement and a ratio of 1:4, meaning that we found four matches for each person in our control group, drawing from the larger treatment group. There are many possible procedures for matching, but as the goal of each matching procedure is to achieve balance, a procedure that achieves balance can be deemed suitable. 40 Our analyses for Research Question 3 are based on the latent growth-curve approach, which is situated in the framework of structural equation modeling. 42
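To make the matching step concrete, the following is a minimal Python sketch of 1:4 nearest-neighbor propensity-score matching without replacement. It is only an illustration of the technique, not the PS Matching 3.0.4 implementation used in the study, and the inputs `covariates` (age, cohort, the six STEM interest indicators) and `is_mentee` are hypothetical placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nearest_neighbour_psm(covariates, is_mentee, ratio=4):
    """1:ratio nearest-neighbour propensity-score matching without replacement.

    covariates : (n, k) array of pretreatment variables.
    is_mentee  : boolean array, True for the (larger) treatment group.
    Returns (matches, ps): matches maps each control index to its `ratio`
    matched mentees, ps holds the estimated propensity scores.
    """
    # Propensity score: estimated probability of being a mentee given the covariates.
    ps = LogisticRegression(max_iter=1000).fit(covariates, is_mentee).predict_proba(covariates)[:, 1]

    available = list(np.where(is_mentee)[0])        # mentees still available for matching
    matches = {}
    for c in np.where(~is_mentee)[0]:               # each control receives `ratio` matches
        dist = np.abs(ps[available] - ps[c])
        picked = [available[j] for j in np.argsort(dist)[:ratio]]
        matches[c] = picked
        available = [m for m in available if m not in picked]   # without replacement
    return matches, ps
```

Balance would then be checked by comparing covariate means and standard deviations between the matched groups, as reported for Research Question 2 in Table 2.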
Estimation of the models
The analyses were conducted with Mplus 8. 43
Research Question 1
As can be seen in Table 1
Research Question 2
As can be seen in Table 2, in the treatment and the control groups, similar means and standard deviations concerning age and STEM interests were achieved using the nearest-neighbor propensity score matching procedure described in the prior section on data analyses. The matching procedure also resulted in a similar composition of the dif-
Research Question 3
Next, we tested within our sample of former mentees which variables predict later STEMM career choices. The structural equation models showed very good model fit according to every index we examined (see Table 3).
Note. Beta indicates the standardized probit regression weights. LL and UL indicate the lower and upper limits of the 95% confidence interval, respectively.
The predictors explained 16.4% of the variance in the outcome, and the baseline value of elective intentions in STEM was clearly the best predictor (see Table 4, step 1).
In step 2 (Model 2), we examined how much the prediction improved when adding the changes (slopes) in elective intentions in STEM and certainty about career plans during the first mentoring year as further predictors. The explained variance almost doubled to 31.0%, and both slope variables significantly contributed to this increase (see Table 4, step 2). In the next step, we went beyond predictors from the first mentoring year and added the overall length of program participation (the overall number of years the mentees participated in the program) as another predictor. While the overall length of program participation was a significant predictor, it improved the explained amount of variance only to 32.6% (Model 3, see Table 4, step 3).
Women's overall participation rates-at all stages of talent development and all levels of seniority-are significantly lower than men's. [5][6][7] This situation has led to extensive attempts in recent decades to inspire females to pursue STEM and to increase females' participation in STEM majors and careers. Because girls' interest in STEM declines markedly during adolescence 8
With our study, we endeavored to make a contribution to addressing this research gap. Specifically, we investigated the real-life choices of STEM majors and careers made by women who had-years before, while enrolled in secondary education-participated in the Germany-wide online mentoring program CyberMentor. We chose this program because online mentoring is a particularly promising intervention for engaging girls in STEM. [14][15][16] Furthermore, the short-term effectiveness of CyberMentor has been demonstrated in numerous evaluation studies that meet quality standards. 33 Girls who participated in the program had more positive developmental trajectories in terms of elective intentions in STEM and certainty about career plans than did girls in waitlist control groups who were comparably interested in STEM. 15 Thus, the program was particularly well suited to investigating the significance of successful STEM promotion provided to girls during secondary education for their later real-life STEM choices.
DISCUSSION
In a first step, we investigated whether, years after program participation, former CyberMentor participants were more likely to choose STEM majors and professions than females of the same age cohort. The results of these analyses were encouraging. Former female CyberMentor participants were more than twice as likely as females of the same age cohort to choose STEM majors (51.2% vs. 23.9%). When medical science-a STEM field in which women are actually overrepresented in many countries-was included in the analysis, former mentees were still almost twice as likely to choose STEM majors as females in their age cohort (61.7% vs. 31.1%). Interestingly, however, this means the ratio between the groups became somewhat smaller when medical science was included.
The real-life choices for computer science and engineering, that is, the two STEM fields that are by far the least likely to be chosen by females in Germany, 45 confirm this tendency of higher ratios in fields with fewer women. While only 10.6% of females in the age cohort chose these fields, more than twice as many of CyberMentor's former mentees did, namely, 24.9%. Thus, these findings suggest that there was an even stronger increase in percentages in STEM fields in which there are few females. One reason for this could be that a disproportionately high number of mentors in the CyberMentor program came from these fields. 51 As CyberMentor not only offers one-on-one mentoring, but also facilitates networking with numerous other mentors and mentees on the platform, it is conceivable that the particularly large number of role models from the computer science and engineering fields was partly responsible for the frequent choices of these STEM fields that former participants went on to make after high school. Our study shows that online mentoring can make an important contribution to Stage 1 talent development according to Bloom, 1,2 especially when it comes to inspiring females to enter a STEM field and motivating them to stay in the STEM talent development pipeline.
To our knowledge, this is the first study to systematically examine whether successful participation in a mentoring program offered to girls enrolled in secondary education can positively influence real-life STEM choices made years after participation. While studies have examined the impact of (online) mentoring on real-life choices of STEM majors and careers, 32 they were either retrospective surveys that did not address formal (online) mentoring programs or were studies with adults. 36,37 Studies with students in secondary education have so far primarily examined the short-term effectiveness of formal (online) mentoring programs (e.g., influences on STEM interests and elective intentions in STEM). 13 Moreover, many of these studies evince methodological shortcomings (e.g., inappropriate or wholly lacking control groups, reliance on postprogram satisfaction surveys), making even conclusions about the short-term effectiveness of the programs difficult in some cases. 13 Our evaluation study, which included appropriately designed randomized waitlist control groups and followed up with former participants years later, provides preliminary evidence that a formal online mentoring program that had been demonstrated to be effective in the short term did indeed also contribute to females' later real-life STEM choices and, therefore, also possesses long-term effectiveness. However, our study also has its own limitations. These should be considered when interpreting our findings and can inform future research.
Limitations and future research directions
A first limitation is the generalizability of our results. CyberMentor is a Germany-wide online mentoring program that has been continuously optimized and adapted to the needs of participants via an extensive program of accompanying research. 52 In practice, important quality standards of mentoring programs often go unaddressed or are insufficiently attended to. 11,55,56 In the case of mentoring programs in secondary education aiming to increase the labor-force participation rates of females and other underrepresented groups for the long term, a crucial weakness has been a lack of evaluation studies that considered the long-term effects of such programs to understand whether and, if so, to what extent such programs are effective for achieving such long-term goals. This
ACKNOWLEDGMENTS
We thank Dr. Daniel Balestrini for his feedback on our manuscript and assistance with language revisions and Manuel Hopp for his assistance with data collection.
Open access funding enabled and organized by Projekt DEAL. | 2023-03-30T06:16:35.128Z | 2023-03-29T00:00:00.000 | {
"year": 2023,
"sha1": "a852f7a0555c8a562beb810eded23e0b4cdaaff4",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nyas.14989",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "31d4186a3f580c3e81e19d386041f344c4e2544b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233029763 | pes2o/s2orc | v3-fos-license | Early development of artificially initiated turbulent spots
Abstract An experimental investigation was carried out in a low-turbulence wind tunnel to study the early development of artificially initiated turbulent spots in a laminar boundary layer over a flat plate. The reproducibility of the experiments allowed us to observe fine structural details that have not been observed previously. Initial velocity disturbances quickly developed into hairpin-like structures that multiplied downstream, which increased the width, length and height of the incipient turbulent spots. Only those disturbances that were greater than a threshold value developed into turbulent spots while the others decayed. The rate of development was also affected by the duration of the initial disturbances. We found that the behaviour of turbulence generation within a turbulent spot is similar to the burst events in the turbulent boundary layer, where ejection events are followed by sweep events.
Introduction
During boundary layer transition, turbulent spots can be observed (Emmons 1951) as a part of the process that leads to a fully developed turbulent boundary layer. Turbulent spots are considered as the building blocks of turbulence; therefore, much effort has been spent on their study. The original investigation of artificially initiated turbulent spots was performed by Schubauer & Klebanoff (1956), who used a spark from a short needle to create a velocity disturbance in the laminar boundary layer. Their hot-wire measurements indicated an abrupt increase in the turbulent velocity at the spot front. Their measurements also showed a slow exponential-like fall, which they called the 'calmed region', at the back of the turbulent spot. They observed a remarkable similarity between the growth in the envelope of a turbulent spot and a turbulence wedge created by a roughness element. Elder (1960) investigated the conditions that are required for a breakdown of turbulence to initiate a turbulent spot. He concluded that the breakdown of turbulence in the laminar boundary layer can occur independent of the Reynolds number if the intensity is greater than 0.2 times the freestream velocity. In a study of the swept leading-edge flow, Gaster (1967) induced turbulent spots using sparks to determine how the turbulent regions expand or contract as they propagate along the attachment line. The leading edge of the turbulent spots were shown to move faster than that of the trailing edge for large momentum thickness Reynolds numbers, which resulted in the expansion of the turbulent region of the spots as they propagated. Below the critical Reynolds number, however, the turbulent spots contracted and finally decayed. Therefore, the development or decay of turbulent spots was determined by the difference in convection velocities between the leading edge and the trailing edge. Wygnanski, Sokolov & Friedman (1976) used electrical discharges to trigger a laminar boundary layer to generate turbulent spots for a detailed study of the structure. They confirmed that the breakdown to turbulence occurred over the entire range of parameters being investigated when the velocity intensity of the disturbance was greater than 20 %. The shape of the spot was independent of the disturbance generated. Further investigation was carried out by Katz, Seifert & Wygnanski (1990), who evaluated the effect of a favourable pressure gradient on the development of turbulent spots. Their results indicated that the spot growth was significantly inhibited and reduced both the streamwise and spanwise spreading rates by 50 %. Another investigation, which was carried out by Seifert & Wygnanski (1995) in a laminar boundary layer with an adverse pressure gradient, demonstrated the opposite effect, where an enhancement in the growth of turbulent spots was observed. Some of these results were later confirmed by Chong & Zhong (2005). Cantwell, Coles & Dimotakis (1978) carried out laser Doppler anemometry measurements and flow visualisation in a water channel, where turbulent spots were generated by jets issued from an orifice. Ensemble-averaged velocity profiles were used to construct a contour of the turbulent spots as well as unsteady streamlines by assuming flow symmetry about the centre plane. Results of similarity analysis indicated that the turbulent spots contained two vortex structures, a large vortex in the middle of the spot and a trailing small vortex near the wall. 
A similar analysis was carried out by Wygnanski, Zilberman & Haritonidis (1982) to confirm some of these findings. Van Atta & Helland (1980) used the temperature-tagging technique to investigate the structure of turbulent spots over a heated flat plate in a wind tunnel. Their study demonstrated that the heated upper-forward portion of a turbulent spot is created by the upward transport of hotter fluid near the wall, whereas the near-wall tongue with relatively cooler fluid arose from the downward transport of colder wind tunnel air. These results were later validated by Schroder & Kompenhans (2004) using the multi-plane stereo particle image velocimetry technique. Van Atta & Helland (1980) also showed that the maxima and minima in the temperature disturbance coincided with the locations of the two vortex structures identified by Cantwell et al. (1978).
Based on a towing tank test, where turbulent spots were generated by an ejection of fluid through a small hole, Gad-el-Hak, Blackwelder & Riley (1981) proposed the 'growth by destabilisation' mechanism to explain the lateral growth of a turbulent spot. With this mechanism, turbulence is generated by the lateral induction of velocity perturbation by turbulent eddies in a spot. A similar mechanism of spot growth was proposed by Perry, Lim & Teh (1981) and Sankaran, Sokolov & Antonia (1988). Johansson, Her & Haritonidis (1987) used a microphone and a hot-wire sensor to carry out variable-interval time-averaging (VITA) conditional sampling of the pressure and velocity fluctuations within turbulent spots. They concluded that the high-pressure pulse coincides with the acceleration of the velocity fluctuation near the wall. Therefore, the pressure peak is observed when a sweep event is detected. Their VITA sampling results also showed that the magnitude of the pressure peak is linearly proportional to the amplitude of the velocity fluctuation, which suggests that the pressure peak is governed by the turbulence-mean shear interaction. Asai, Sawada & Nishioka (1996) studied a subcritical boundary layer, where a strong periodical disturbance of approximately 30 % of the freestream velocity was applied by a loudspeaker through a small hole. This formed a series of hairpin-shaped vortices immediately downstream. Here, the distance between the hairpin legs was exactly the same as the hole diameter. They observed high-frequency velocity spikes associated with the passage of hairpin eddies away from the wall in their hot-wire measurements. One of their major conclusions was that the lateral growth of the turbulence region is through the generation of secondary vortices on both sides of the convecting primary hairpins.
More recently, Singer (1996) carried out a direct numerical simulation (DNS) study of young turbulent spots in a constant-pressure boundary layer, where spots were created by perturbing the flow with a pulse of fluid through a slit. Hairpin vortices were observed near the trailing edge of the spots; however, no evidence of Tollmien-Schlichting (T-S) waves was observed in this study. Sabatino & Smith (2008) examined the surface heat transfer of turbulent spots which were artificially introduced into a laminar boundary layer by means of wall-normal fluid injection through a small hole. They found that a significant portion of the spot did not generate a measurable increase in surface heat transfer above the laminar level. Strand & Goldstein (2011) conducted a DNS study of a laminar boundary layer over riblets and discovered that the surface micro-grooves reduced the spreading angle of turbulent spots by 14 %. They also observed that turbulent spots were composed of a multitude of entwined hairpin vortices, whose size increased as the spots matured.
Artificially initiated turbulent spots have also been studied in high-speed boundary layers. For example, Krishnan & Sandham (2006) carried out a DNS study of compressible isothermal-wall boundary layers at Mach 2, 4 and 6. Turbulent spots were triggered by a local blown injection, where an array of hairpin vortices and quasi-streamwise vortices were observed inside the spots. Spanwise coherent structures were observed under the front overhang region of the turbulent spots, which suggested the presence of Mack-mode instabilities (Mack 1984). The lateral spreading angle of the turbulent spots was reduced with an increase in the Mach number. An experimental study was conducted by Casper, Beresh & Schneider (2014b) to investigate the pressure fluctuations beneath turbulent spots in a hypersonic boundary layer which were triggered by pulsed-glow perturbations. They found that controlled disturbances grew into wave packets through Mack's second-mode instability where the breakdown to turbulence began. Instability waves remained throughout the breakdown stage as well as in the turbulent spot. A cost-effective approach to numerical modelling of the nonlinear breakdown of wave packets to turbulent spots in hypersonic boundary layers was proposed by Chuvakhov, Fedorov & Obraz (2018), which derived the boundary conditions for wave packets arising from Mack's second mode. If the initial hump (the disturbance maximum) of wave packets was high, they broke down rapidly. If not, the packets gradually decayed.
When the freestream turbulence level is between 1 % and 4 %, the laminar boundary layers develop streamwise elongated regions of high and low streamwise velocity (Klebanoff, Tidstrom & Sargent 1961) which lead to secondary instability and breakdown to turbulence (Matsubara & Alfredsson 2001). This is called the bypass transition via Klebanoff modes, which results in the formation of turbulent spots. This evolutionary path of the boundary layer transition is fundamentally different from the non-linear boundary layer transition process discussed above by artificially initiating turbulent spots using strong disturbances (Morkovin 1993). Durbin & Wu (2007) reviewed this boundary-layer transition process, which showed that the continuous spectral mode of transition is caused by freestream vortical disturbances. Here, only low-frequency disturbances enter the boundary layer, which generate long contours of streamwise velocity (jets) where turbulent spots appear. The breakdown to turbulence is preceded by the jet lift-off, which brings the low-speed fluid upward across the boundary layer. The growth rate of turbulent spots resulting from the bypass transition was experimentally studied by Fransson (2010). Here, turbulent spots were artificially initiated by short-pulsed jets through small holes. He observed that unsteady streamwise streaks during the bypass transition have a damping effect on the developing turbulent spots. A clear Reynolds number dependency on the development of turbulent spots was also demonstrated. A DNS study of artificially initiated turbulent spots in a laminar boundary layer was carried out by Rehill et al. (2013), who discovered that the general shape of the turbulent spots was not changed by the presence of organised streaks. The effect of low-speed streaks was to elongate the turbulent spots, while that of high-speed streaks was to contract them. Wu et al. (2017) demonstrated in their DNS study that the inception mechanism of turbulent spots during the bypass transition was analogous to that of the secondary instability in the natural transition of the boundary layer, where spanwise vortex filaments deformed and stretched in the streamwise direction to become Λ-vortices and then hairpin packets. They concluded that the streak waviness and breakdown during the bypass transition were not involved in the inception mechanism of the turbulent spots. Marxen & Zaki (2019) analysed DNS data for the bypass transition to study turbulence in intermittent transitional boundary layers. Fully developed turbulence was found only in the centre of young turbulent spots, where the hairpin vortices reached far away from the wall. They also found that the frontal part of young turbulent spots and their lateral wings were dominated by streamwise vortices.
The aim of our study is to control the transitional boundary layer for drag reduction by modifying turbulent spots. To accomplish this, it is necessary to artificially initiate turbulent spots by applying disturbances to the boundary layer. Here we discover that strong disturbances from a small orifice generate a packet of hairpin-like structures by bypassing the linear instability stage of the boundary-layer transition. Nevertheless, we find that these hairpin-like structures behave very similarly to those resulting from the T-S wave transition or the bypass transition via Klebanoff modes. However, there are virtually no data available in the literature to show the early development of artificially initiated turbulent spots owing to difficulties in generating them repeatably in space and time. Here, we apply the 'deterministic turbulence' technique (Shaikh 1997; Borodulin, Kachanov & Roschektayev 2011) in our experiments in an extremely low-turbulence wind tunnel (Gaster 1990). This allows us to obtain highly repeatable velocity data immediately downstream of disturbances within ten times the local boundary layer thickness, while previous studies have all been made much further downstream. Using these velocity data, we can investigate the incipient structure and its breakdown to turbulence in a boundary layer, which leads to turbulent spots. The detailed account of the initial development process of these hairpin-like structures to turbulent spots is reported below.
Figure 1 (caption): Dimensions are in millimetres. There are, in total, 19 orifices and miniature speakers across the span of the test plate, but only the centre speaker is used in this study. Two unused circular instrumentation plates are also shown.
Experimental set-up
Experiments were carried out in a low turbulence wind tunnel at City, University of London, whose test section measured 0.91 m (height) by 0.91 m (width) by 1.8 m (length). The tunnel was housed in a temperature-controlled laboratory, where the air temperature change was kept within ±0.5°C. The wind tunnel speed was set at 18 m s −1 in all tests, which corresponded to the unit Reynolds number of 1.2 × 10 6 per metre. The turbulence level was less than 0.005 % at this speed for the frequency range between 2 Hz and 2 kHz (Gaster 1990). A 1.537-m long and 0.91-m wide flat test plate was made of a 12.7-mm thick aluminium cast tooling plate, which was vertically installed in the centre of the test section. It had a modified asymmetric super-elliptic shaped leading edge skewed towards the working surface, with a thickness ratio of 0.41 and a length of 127 mm (Bosworth 2016). The surface roughness (Ra) of the test plate was less than 8 μm, where the flatness was less than 50 μm per 50-mm length anywhere in both the streamwise and spanwise directions. The pressure gradient of the boundary layer was set to zero by adjusting the trailing edge flap and tab, see figure 1(a). Mean velocity profiles of the laminar boundary layer over a flat plate at various streamwise locations are shown in figure 2, which indicate that they were well represented by the Blasius profile up to at least x = 1000 mm.
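For reference, the Blasius profile against which the measured profiles in figure 2 are compared can be computed numerically. The sketch below is our own illustration (not part of the original data processing): it solves the Blasius equation f''' + 0.5 f f'' = 0 by a shooting method and evaluates u/U_e = f'(η), with η = y √(U_e/(ν x)); the kinematic viscosity ν = 1.5 × 10⁻⁵ m²/s used in the example is implied by the quoted unit Reynolds number of 1.2 × 10⁶ per metre at 18 m/s.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_profile(eta, eta_max=10.0):
    """u/Ue = f'(eta) for the Blasius equation f''' + 0.5*f*f'' = 0,
    with f(0) = f'(0) = 0 and f'(inf) = 1, solved by shooting on f''(0).
    eta must be a monotonically increasing array."""
    rhs = lambda t, y: [y[1], y[2], -0.5 * y[0] * y[2]]

    def residual(fpp0):
        sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-8, atol=1e-10)
        return sol.y[1, -1] - 1.0            # enforce f'(eta_max) -> 1

    fpp0 = brentq(residual, 0.1, 1.0)        # classical value ~ 0.332
    sol = solve_ivp(rhs, [0.0, eta_max], [0.0, 0.0, fpp0],
                    t_eval=np.clip(eta, 0.0, eta_max), rtol=1e-8, atol=1e-10)
    return sol.y[1]                          # f'(eta) = u/Ue

# Example: wall-normal positions up to 5 mm at x = 0.5 m, Ue = 18 m/s, nu = 1.5e-5 m^2/s
y = np.linspace(0.0, 5.0e-3, 50)             # metres
eta = y * np.sqrt(18.0 / (1.5e-5 * 0.5))
u_over_Ue = blasius_profile(eta)
```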
The laminar boundary layer was excited by a 12-mm diameter miniature speaker (IMO Precision Controls 41.T70L015H-LF), which was embedded on the centreline of the flat plate at 325 mm from the leading edge as shown in figure 1(b). The speaker was driven by a random broadband signal (0 to 1 kHz), as shown in figure 3(a), which issued jets through a 1-mm diameter orifice. Figure 3(b) shows the corresponding power spectrum. Here, a 400-ms long signal was repeated 20 times while the streamwise velocity was sampled at a rate of 10 kHz by a single hot-wire probe using a DISA 55M CTA unit. The sampled data were converted to a digital form by a 16-bit analogue-to-digital convertor before they were stored on a computer for subsequent analysis. The velocity measurements were taken across the whole boundary-layer thickness along the central plane, which covered a streamwise range of 325-800 mm. Flow velocities were also measured in spanwise planes at several streamwise positions in a fine spanwise step of z = 0.5 mm to reveal the detailed flow structures. The total experimental uncertainty in velocity measurements was ±1.2 %, which was composed of a ±1.0 % error in hot-wire measurements and calibration, a ±0.6 % error owing to freestream velocity change during the run and a ±0.2 % error owing to flow uniformity in the test section of the wind tunnel. The spatial uncertainties associated with hot-wire measurements were ±0.2, ±7.5 and ±1.6 μm in the streamwise (x), wall-normal (y) and spanwise (z) directions, respectively, which arose from step motor resolutions (±0.2 μm in x and y and ±1.6 μm in z) and the laser displacement sensor accuracy (±7.5 μm) of the wall positioning of the hot-wire probe.
Experimental results
The streamwise velocity fluctuation in the laminar boundary layer measured by a single hot-wire probe immediately above the disturbance source (x = 325 mm) at y = 0.5 mm is shown in figure 4(a). Here, the velocity fluctuations leading to turbulent spots, Spot I, Spot II and Spot III are labelled by I, II and III, respectively. Because the velocity is the time derivative of displacement, the jet velocity from the orifice should be linearly proportional to the frequency of the voltage signal that sets the diaphragm displacement, i.e. d[sin(2πft)]/dt = 2πf · cos(2πft). Indeed, the power spectrum shown in figure 4(b) demonstrates that the measured velocity fluctuation is dominated by high frequency components of around 1 kHz. Therefore, the flow disturbance applied to the boundary layer can be considered as amplitude modulated jet pulses operating at the high end of the excitation frequency (1 kHz). Because these frequencies are located outside the neutral stability curve of the boundary layer, as shown in figure 5 (Wang & Gaster 2005), they are linearly stable. The downstream development of the ensemble-averaged streamwise fluctuating velocity is shown in figure 6. Here, the measurements were taken along the centreline of the test plate between x = 350 and 700 mm. The time sequence is reversed through the use of Taylor's frozen-flow hypothesis, so that the flow structures correspond to physical space in the x-y plane. An early development of turbulent spots was seen at x = 350 mm (see figure 6a). These were incipient turbulent spots, which looked like heads of hairpin eddies. The pattern of these velocity fluctuations was very similar to the velocity signal at the disturbance source (see figure 4a). For example, five such low-speed structures were seen for Spot II at t = 110 ms, which grew in size and strength downstream. These structures will eventually 'merge' together, creating a large low-speed lump accompanied by a thin high velocity region near the wall, which is very similar to that observed for the turbulent spots studied by Zilberman, Wygnanski & Kaplan (1977) and Fransson (2010), see figure 6(g) at x = 700 mm. Only disturbances with an initial velocity fluctuation (x = 325 mm at y = 0.5 mm) greater than 10 % of the freestream velocity U e , as indicated by the red horizontal bar in figure 4(a), developed into turbulent spots downstream. For example, the velocity disturbances at times t = 85 ms and 110 ms developed into the turbulent spots Spot III and Spot II, respectively, while those with lower velocity fluctuations (Spot I) decayed downstream. This agreed with the finding of previous studies (Elder 1960; Wygnanski et al. 1976), which indicated that only disturbances above a 'critical intensity' will break down to turbulence to form turbulent spots. However, the threshold value in our study was less than that of the critical intensity obtained before. Figure 7(a) shows the colour velocity contours of three incipient turbulent spots in a plan view (t-z plane) at y = 0.2, 0.5 and 1.0 mm, 25 mm downstream from the disturbance source (x = 350 mm). Again, the time sequence is reversed in this figure so that the turbulent spots' orientation corresponds to physical space in the x-z plane. The low-speed streaks that are shown in each figure were induced by a pair of necklace vortices from the disturbance source (Jabbal & Zhong 2008; Qayoum et al. 2010).
Here, the low-speed streaks were seen on the upwash side of each pair of necklace vortices. The streaks seemed to be undisturbed everywhere except where velocity disturbance was applied. The incipient structure of Spot I was generated when the disturbance was applied to the boundary layer in two successive velocity pulses at t = 170 ms, see figure 4(a), where a pair of high velocity streaks lasting approximately 2 ms is shown at y = 0.2 mm. These are believed to be the legs of a hairpin vortex that extends from the orifice (Acarlar & Smith 1987; Haidari & Smith 1994). Because the ratio of velocity disturbance to the freestream velocity was low in our experiment, only the downstream part of the ring vortex from the orifice will develop into a hairpin (Sau & Mahesh 2008, 2010). The vorticity of the upstream part cancelled out that of the on-coming boundary layer. Low-speed fluid (shown in blue) was pumped up from the wall region within the hairpin head. At y = 1.0 mm, the incipient structure consisted of two nested hairpins as a result of two successive pulsed disturbances. As shown later, this flow structure decayed downstream as the initial disturbance level (about 0.07 U e ) was less than the threshold value of 0.1 U e to cause a breakdown to turbulence.
However, the initial structure of Spot III was created by a series of pulsed velocity disturbances at t = 80 ms. Here, 'wavy' pairs of high-speed streaks (as shown in red) were seen close to the wall (y = 0.2 and 0.5 mm). Three successive hairpin legs were developed from an orifice where the velocity disturbance had been applied. Further away from the wall at y = 1.0 mm, the head of the hairpin was seen slightly enlarged in the spanwise direction. A low-speed region (as shown in blue) was also observed in this figure, which was induced (upwashed) by the hairpin vortex. The level of applied disturbance (0.12 to 0.17 U e ) was large enough to break down the laminar boundary layer to develop a turbulent spot. A similar sequence of events was observed for Spot II after a sequence of five strong-pulsed disturbances was applied to the laminar boundary layer at t = 110 ms. This structure was nearly identical to that of Spot III, except that the number of wavy pairs was greater, which made this structure much longer in the streamwise direction.
What is interesting to observe in the initial structure of Spot III is that there were extra streaks outside of the original hairpin legs as compared with the incipient structure of Spot I, whose development was in its early stage owing to a weak disturbance. This suggested that lateral growth of the flow structure resulted from generation of secondary vortices by the original hairpin legs. Lateral growth of the flow structure through secondary vortices was also evident in the initial structure of Spot II, which indicated that this was one of the mechanisms for turbulent spot growth, as suggested by Asai et al. (1996). Similar mechanisms for lateral growth were proposed by Gad-el-Hak et al. (1981), Perry et al. (1981) and Sankaran et al. (1988).
These results indicate that the development of a turbulent spot depends on the initial level of disturbance as well as the number of velocity pulses. For example, Spot II and Spot III are already in an advanced stage of development only 25 mm downstream of the disturbance. They have strong velocity defect regions away from the wall (shown in blue) together with velocity excess regions near the wall (shown in red). However, Spot I is still in its infancy, showing no or little development in the velocity defect or excess regions. Downstream developments of incipient turbulent spots are shown in figures 7(b) and 7(c) at x = 380 and 400 mm, respectively. Spot I grew in size (height, width and length) until x = 380 mm, but started to decay after x = 400 mm. However, Spot II and Spot III grew, both in size and intensity, through an increase in the number of velocity pulses during the development. The increase in the spot height seemed to arise from the increase in the boundary layer thickness, which will be shown later.
Results shown in figure 7 were obtained by ensemble averaging the x-component velocities over 20 repetitions of a 400-ms long disturbance signal. Here we show the effect of the number of ensemble averages on the velocity contour of Spot II at x = 380 mm in figure 8, which suggests that the repeatability of the velocity measurements in the present experiment was excellent at least in the near field where the early development of artificially initiated turbulence is investigated. This demonstrates that the velocity pattern does not effectively change as the number of ensemble averages is reduced from 20 to 1. This arises from the 'deterministic turbulence' technique (Shaikh 1997; Borodulin et al. 2011) employed in our test, which was carried out in an extremely low-turbulence wind tunnel (Gaster 1990). All results reported here were ensemble averaged using 20 repeated realisations, except for those shown in figures 10-13 where the velocity signals and associated wavelet spectra are shown using a single realisation.
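A minimal sketch of the phase-locked ensemble averaging used for these contours is given below, assuming the raw record `u` contains the 20 back-to-back repetitions of the 400-ms disturbance signal sampled at 10 kHz; reducing `n_repeats` from 20 to 1 reproduces the comparison made in figure 8.

```python
import numpy as np

def ensemble_average(u, n_repeats=20, fs=10_000, period_s=0.4):
    """Phase-locked ensemble average of a periodically repeated record.

    u : 1-D streamwise velocity record containing n_repeats identical
        realisations of the same disturbance signal, back to back.
    Returns the mean over realisations (fs * period_s samples long)."""
    n_per = int(round(fs * period_s))             # samples per 400-ms realisation
    u = np.asarray(u, dtype=float)[: n_repeats * n_per]
    return u.reshape(n_repeats, n_per).mean(axis=0)
```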
The development of incipient turbulent spots is demonstrated further in figure 9, where the three-dimensional structure of Spot II is depicted by iso-surfaces of the x-component velocity fluctuation at 10 % (shown in coral) and −5 % (cyan) of the freestream velocity. Figures 9(a), 9(b) and 9(c) correspond to the spot structure at x = 350, 380 and 400 mm, respectively. They demonstrate that the incipient turbulent spot was composed of a number of low-speed pillars that were anchored at the wall, which stretched out to the edge of the boundary layer. A high-speed region of the turbulent spot was also visible near the wall in between the low-speed pillars. These pillars represent the low-speed regions that were pumped up within each hairpin-like structures of the incipient turbulent spot. The structural development of an incipient turbulent spot shown in figure 9 reveals that the number of hairpin-like structures increased in both the streamwise and spanwise directions during the development. Low-speed pillars of incipient turbulent spots seem to be amalgamated at x = 400 mm (see figure 9c), which suggested that hairpin-like structures merged together during the spot development. The early development of turbulent spots that is shown in figure 9 is similar to that of Wu et al. (2017) in their DNS simulation of the turbulent boundary layer during transition. Figures 10(a), 10(b) and 10(c) present the streamwise velocity fluctuations in the near-wall region (y = 0.5 mm) at x = 350, 380 and 400 mm, respectively, showing the downstream development of incipient turbulent spots. They show that Spot I decayed downstream, and its initial disturbance nearly disappeared by x = 400 mm. Meanwhile, Spot II developed downstream by generating a large number of high-frequency spikes. These spikes were positively skewed, which suggested that they were associated with a downwash of higher momentum fluid towards the wall. A similar development of Spot III is shown in figures 10(a), 10(b) and 10(c), although the development of this turbulent spot seemed to be slower than that of Spot II. The main difference between Spot II and Spot III was in the number of initial disturbances above the critical intensity. While Spot II had five velocity pulses in the initial disturbance, Spot III only had two. It seems, therefore, the speed of development of incipient turbulent spots depends on the number of velocity pulses or the duration of the initial disturbance.
The downstream change in the turbulence energy distribution within incipient turbulent spots can be studied by wavelet spectra using Matlab. Here, we used generalised Morse wavelets defined by Ψ_P,γ(ω) = U(ω) a_P,γ ω^(P²/γ) exp(−ω^γ) in the frequency domain ω, where U(ω) is the unit step function and a_P,γ is a normalising constant (Lilly & Olhede 2012). The symmetry parameter and the time-bandwidth product were set to γ = 3 and P² = 60, respectively. Figures 11(a), 11(b) and 11(c) show the wavelet spectra of the streamwise velocity signals in figures 10(a), 10(b) and 10(c), respectively. Figure 11(a) shows that the turbulence energy of Spot II and Spot III at x = 350 mm was contained between f = 0.5 and 2 kHz. With a downstream development of incipient turbulent spots, the wavelet spectrum spread to cover a wider frequency range up to f = 3 kHz. At x = 400 mm, much of the turbulence energy within Spot II and Spot III can be found between f = 1 and 3 kHz, see figure 11(c), as a result of high-frequency spikes being generated within incipient turbulent spots (see figure 10c). Figure 11 clearly shows that the turbulence energy in Spot I had nearly disappeared by x = 400 mm.
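A simplified numpy re-implementation of such a wavelet spectrum is sketched below. The original analysis used Matlab's generalised Morse wavelets; here the wavelet filters are only peak-normalised, so absolute levels will differ from the Matlab output, but the frequency-time distribution of energy is obtained in the same way (frequency-domain filtering with one analytic wavelet per target frequency).

```python
import numpy as np

def morse_scalogram(u, fs, freqs_hz, gamma=3.0, P2=60.0):
    """Wavelet power of signal u using generalised Morse wavelets,
    Psi(omega) ~ U(omega) * omega**beta * exp(-omega**gamma) with beta = P^2/gamma,
    evaluated by frequency-domain filtering (one filter per target frequency).
    Target frequencies should lie well inside (0, fs/2)."""
    beta = P2 / gamma
    u = np.asarray(u, dtype=float)
    n = u.size
    Uf = np.fft.fft(u)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)     # radian frequency axis
    omega_peak = (beta / gamma) ** (1.0 / gamma)            # peak frequency of the wavelet

    power = np.empty((len(freqs_hz), n))
    for i, f in enumerate(freqs_hz):
        s = omega_peak / (2.0 * np.pi * f)                  # scale placing the wavelet peak at f
        w = np.maximum(s * omega, 0.0)                      # analytic wavelet: zero for omega < 0
        psi = w ** beta * np.exp(-w ** gamma)
        psi /= psi.max()                                    # peak-normalise the filter
        power[i] = np.abs(np.fft.ifft(Uf * psi)) ** 2       # wavelet power at this frequency
    return power

# Example: spectrum of a 400-ms hot-wire trace between 0.1 and 3 kHz
# freqs = np.linspace(100.0, 3000.0, 60); P = morse_scalogram(u, 10_000, freqs)
```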
Figures 12(a), 12(b) and 12(c) are similar to figures 10(a), 10(b) and 10(c), which show the streamwise velocity fluctuations at x = 350, 380 and 400 mm, respectively, near the edge of the boundary layer at y = 1.5 mm. Again, an increase in the number of high-frequency spikes was evident for Spot II and Spot III, while the initial disturbance was almost gone by x = 400 mm for Spot I. In contrast to figure 10, the high-frequency spikes at y = 1.5 mm were negatively skewed, which suggested that they were associated with an upwash of lower momentum fluid from the near-wall region. The downstream development of this low momentum region is clearly seen in figure 6. A strong shear layer between the low-speed region and the high-speed region produces high frequency velocity spikes (Wygnanski et al. 1976) as we observe in figures 10 and 12. Figures 10 and 12 also show that velocity pulses within the structure started to shape into an envelope of turbulent spots at around x = 400 mm. This could be a result of a beating effect of high-frequency velocity pulses generated within the developing spot structure, which is similar to the generation process of very large-scale motions (VLSMs) from the hairpin packets in a turbulent boundary layer (Sharma & McKeon 2013).
The wavelet spectra of streamwise velocity signals at y = 1.5 mm are shown in figures 13(a), 13(b) and 13(c) at x = 350, 380 and 400 mm, respectively. The turbulence energy of Spot I was reduced downstream very quickly as shown in figures 13(a), 13(b) and 13(c). However, the turbulence energy of Spot II and Spot III near the edge of boundary layer, which was initially concentrated between f = 1 kHz and 3 kHz owing to high-frequency spikes as shown in figure 12(a), shifted towards lower frequency downstream between f = 0.5 and 2.5 kHz at x = 380 mm and then to between f = 0.5 and 2 kHz at x = 400 mm. This shift of turbulence energy coincided with the formation of a turbulent spot envelope, as observed in figures 12(b) and 12(c). Indeed, we observed a low frequency peak (175 Hz) in the wavelet spectrum of Spot III at x = 400 mm in figure 13(c), which corresponded to the duration (6 ms) of the turbulent spot envelope being formed as shown in figure 12(c). A similar downshift in energy peak frequency was observed by Casper et al. (2014a), who used the wavelet transform of wall-pressure traces to identify the location of turbulent spots on a 7°hypersonic cone.
Integral boundary-layer parameters, which include the displacement thickness δ*, the momentum thickness θ and the shape factor H, at x = 350, 380 and 400 mm are given in figures 14(a), 14(b) and 14(c), respectively. We observe that δ* and θ increased and the shape factor H = δ*/θ reduced as expected with the development of the boundary layer downstream. We can also observe that the integral parameters δ* and θ increased within incipient turbulent spots at each downstream location. Here, the relative increase in the momentum thickness within incipient turbulent spots was greater than that of the displacement thickness, therefore the shape factor H = δ*/θ was reduced. For example, the shape factor in Spot II was reduced to H = 1.9, 1.6 and 1.4 at x = 350, 380 and 400 mm, respectively, as shown in figures 14(a), 14(b) and 14(c). Therefore, the boundary layer flow within Spot II was already fully developed turbulence at x = 400 mm as the shape factor reached H = 1.4. The shape factor of the boundary layer outside incipient turbulent spots was reduced from H = 2.5 at x = 350 mm to H = 2.15 at x = 400 mm. This suggested that the 'non-turbulent' region of the boundary layer was already affected by the development of incipient turbulent spots even when the intermittency factor was only 22 % at x = 400 mm. It is interesting to observe that the shape factor of the boundary layer at this streamwise location was reduced from H = 2.15 to H = 1.4 very quickly when the front (downstream end) of the turbulent spot approached. However, the process of getting back to the 'laminar' region from the 'turbulent' region at the back (upstream end) of the turbulent spot, which is called the 'calmed region' by Schubauer & Klebanoff (1956), was much slower. This arose from the elongated high-speed region affecting the 'laminar' region.
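The integral parameters quoted here follow from the measured mean-velocity profiles by straightforward numerical integration. A minimal sketch, assuming `y` and `u` hold the wall-normal positions and ensemble-averaged velocities of one profile and `Ue` the local freestream velocity:

```python
import numpy as np
from scipy.integrate import trapezoid

def integral_parameters(y, u, Ue):
    """Displacement thickness, momentum thickness and shape factor of a
    boundary-layer profile u(y):
        delta* = int (1 - u/Ue) dy,  theta = int (u/Ue)(1 - u/Ue) dy,  H = delta*/theta,
    evaluated by trapezoidal integration over the measured points."""
    y = np.asarray(y, dtype=float)
    r = np.asarray(u, dtype=float) / Ue
    delta_star = trapezoid(1.0 - r, y)
    theta = trapezoid(r * (1.0 - r), y)
    return delta_star, theta, delta_star / theta
```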
Ensemble-averaged velocity profiles of the boundary layer between t = 113 and 130 ms are given in figure 15, which show the temporal structure of Spot II at x = 400 mm. Here, the boundary layer profiles are shown from left to right with an interval of 1 ms. This indicates that the boundary layer starts to develop a strong inflexional profile at t = 115 ms only a few milliseconds after the spot front had passed. Then, a velocity retardation region develops, moving towards the boundary-layer edge for the next 3 ms (t = 116-118 ms). This is followed by a flow acceleration near the wall. These temporal changes in the boundary-layer profile within Spot II can be further examined by the velocity fluctuations recorded between 110 and 130 ms at y = 0.2, 0.5, 1.0, 1.5, 2.0 and 2.6 mm, as shown on the right-hand side of the figure. Here, the velocity fluctuations were mostly negative away from the wall (y > 1.5 mm) and positive close to the wall (y < 0.5 mm), which suggested that a strong shear layer was created at around y = 1 mm by the inflexional velocity profile. This seems to be the location where the turbulence energy was produced in the boundary layer to maintain turbulent spots. This behaviour of turbulence energy generation in turbulent spots is very similar to the burst events in the turbulent boundary layer, where the ejection event (−u and +v velocity fluctuations) is followed by the sweep event (+u and -v velocity fluctuations), see figure 9 of Blackwelder & Kaplan (1976).
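The ejection/sweep terminology used above refers to the standard quadrant decomposition of the velocity fluctuations. A short sketch is given below; note that it assumes simultaneous u and v fluctuation signals (for example from an X-wire probe or a simulation), whereas the present measurements used a single hot wire, so the v behaviour is inferred rather than measured.

```python
import numpy as np

def quadrant_fractions(u_fluc, v_fluc):
    """Fraction of samples and of the -u'v' stress in each quadrant:
    Q1 outward interaction (+u', +v'), Q2 ejection (-u', +v'),
    Q3 inward interaction  (-u', -v'), Q4 sweep    (+u', -v')."""
    u = np.asarray(u_fluc, dtype=float)
    v = np.asarray(v_fluc, dtype=float)
    masks = {
        "Q1 (+u,+v)": (u > 0) & (v > 0),
        "Q2 ejection (-u,+v)": (u < 0) & (v > 0),
        "Q3 (-u,-v)": (u < 0) & (v < 0),
        "Q4 sweep (+u,-v)": (u > 0) & (v < 0),
    }
    total_stress = np.sum(-u * v)               # assumed non-zero for a meaningful split
    return {name: (m.mean(), np.sum(-u[m] * v[m]) / total_stress)
            for name, m in masks.items()}
```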
Conclusions
An experimental investigation was carried out in an extremely low-turbulence wind tunnel to study the early development of artificially initiated turbulent spots in a laminar boundary layer over a flat plate. Good data reproducibility allowed us to observe fine structural details that have not been seen before. Only portions of the velocity disturbances that are greater than a threshold value of approximately 10 % of the freestream velocity were able to develop downstream while the others decayed. This agrees with the finding of previous studies (Elder 1960; Wygnanski et al. 1976), which indicated that only disturbances above a 'critical intensity' will break down to turbulence to form turbulent spots. However, the threshold value in our study is much lower than that of the critical intensity obtained before. Initial velocity disturbances, whose frequencies were outside the neutral stability curve of the boundary layer, quickly developed into hairpin-like structures that multiplied downstream, which increased the width, length and height of the incipient turbulent spots. Eventually, they developed into fully developed turbulent spots very similar to those studied by Zilberman et al. (1977). As they develop downstream, there is a build-up of the low-speed region in the outer region of the boundary layer, accompanied by an elongation of the high-speed region near the wall. A strong shear layer is therefore created between the low-speed and the high-speed regions of developing turbulent spots, where high-frequency velocity spikes are generated. This seems to be the location where the turbulence energy is produced to maintain the turbulent spots. The behaviour of turbulence energy generation within a turbulent spot is very similar to the burst events in the turbulent boundary layer, where ejection events are followed by sweep events (Blackwelder & Kaplan 1976).
Iso-surfaces of the x-component of the velocity fluctuations show that developing turbulent spots consist of a number of low-speed pillars that are anchored at the wall, which stretch out to the edge of the boundary layer. The high-speed region of the turbulent spot is also visible near the wall in between the low-speed pillars. These pillars represent the low-speed regions that are pumped up within each hairpin vortex of the incipient turbulent spot. The sequence of structural changes demonstrates that the number of hairpin-like structures is increasing in both the streamwise and spanwise directions during the development to turbulent spots. The low-speed pillars of incipient turbulent spots seem to be amalgamated downstream, which suggests that hairpin-like structures merge together during the spot development. The early development of turbulent spots is similar to that of Wu et al. (2017) in their DNS simulation of the turbulent boundary layer during transition.
Hairpin-like structures were created by strong disturbances from an orifice in the form of pulsed jets (Sau & Mahesh 2008, 2010), not as a consequence of a secondary instability of the laminar boundary layer as observed by Borodulin et al. (2002). Once generated, however, these artificially initiated structures behaved very similarly to those resulting from the secondary flow instability, developing into turbulent spots by generating a large number of high-frequency spikes within. It has been shown that the turbulent spot inception mechanism during the bypass transition also involves packets of hairpin-like structures (Wu et al. 2017). These results suggest that the hairpin-like structures are essential elements in the boundary layer transition.
The downstream change in the turbulence energy distribution within incipient turbulent spots was studied using wavelet spectra. With a development of turbulent spots downstream, the spectrum in the near-wall region spreads to cover higher frequency components, which reflects the generation of high-frequency spikes. These spikes are positively skewed, which suggests that they are associated with a downwash of higher momentum fluid towards the wall. In the outer region of the boundary layer, however, the spectrum shifts from higher frequency towards lower frequency downstream as a result of envelope formation of the incipient turbulent spots.
The shape factor of the boundary layer within developing turbulent spots reduces from the 'laminar' value (H = 2.15) to the 'turbulent' value (H = 1.4) very quickly when the spot front (downstream end) approaches. However, the process of getting back to the 'laminar' value from the 'turbulent' value at the back of the turbulent spots (upstream end), which is called the 'calmed region' by Schubauer & Klebanoff (1956), is much slower. This arises from the elongation of the high-speed region near a wall during the downstream development of turbulent spots. | 2021-04-06T13:20:10.349Z | 2021-04-06T00:00:00.000 | {
"year": 2021,
"sha1": "e513fae0961cea322da38528f7bb3c1e2bd51478",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/ED2E899507663FAB59DE49D476BB8409/S002211202100152Xa.pdf/div-class-title-early-development-of-artificially-initiated-turbulent-spots-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "966d33d144e9b1818662c8661ffdc16a0d905a85",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": []
} |
125784469 | pes2o/s2orc | v3-fos-license | An Attempt to Use Non-Linear Regression Modelling Technique in Long-Term Seasonal Rainfall Forecasting for Australian Capital Territory
The objective of this research is the assessment of the efficiency of a non-linear regression technique in predicting long-term seasonal rainfall. The non-linear models were developed using the lagged (past) values of the climate drivers, which have a significant correlation with rainfall. More specifically, the capabilities of SEIO (South-eastern Indian Ocean) and ENSO (El Nino Southern Oscillation) were assessed in reproducing the rainfall characteristics using the non-linear regression approach. The non-linear models developed were tested using individual data sets, which were not used during the calibration of the models. The models were assessed using commonly used statistical parameters, such as the Pearson correlation (R), root mean square error (RMSE), mean absolute error (MAE) and index of agreement (d). Three rainfall stations located in the Australian Capital Territory (ACT) were selected as a case study. The analysis suggests that the predictors which have the highest correlation with the predictands do not necessarily produce the smallest errors in rainfall forecasting. The non-linear regression was able to predict seasonal rainfall with correlation coefficients varying from 0.71 to 0.91. The outcomes of the analysis will help watershed management authorities to adopt an efficient modelling technique for predicting long-term seasonal rainfall.
Introduction
Rainfall can be regarded as the most important climate element in the hydrological cycle, with considerable effects on the surrounding environment, including human lives. The spatial and temporal distribution of rainfall has a significant impact on the water availability of earth surfaces, and hence on agricultural activities. Since agricultural activities and the resulting crop production depend on the distribution of rainfall, prediction of monthly and seasonal rainfall is essentially important for agricultural planning and flood mitigation strategies. However, accurate prediction of seasonal rainfall remains elusive to scientists. Therefore, seasonal rainfall forecasting has become a prominent topic amongst hydrologic researchers around the globe [1,2].
Seasonal forecasting can be classified into two broad categories: the statistical approach and the dynamic approach. In the statistical approach, the statistical relationships between the predictors and the predictands are investigated [3]. In the dynamic approach, seasonal meteorological estimates are used to build a hydrological model. However, there are methodological implications in using meteorological inputs in the current hydrological models [4]. The climate models produce outputs on coarse grid scales, which can introduce forecasting uncertainties and hence lead to bias. Furthermore, the data requirements of the dynamic models hinder the application of this modelling type. As a result, the statistical approach has drawn considerable attention from the practical users of prediction models.
Long-term prediction of seasonal rainfall has the potential to help in the decision-making process for planning appropriate watershed management strategies [4].Moreover, advanced prediction of rainfall can provide information to adopt the consequences of climate change [5].As a result, the urge for the application of seasonal rainfall forecasting is increasing day by day.Therefore, seasonal forecasting is routinely performed by different research institutes, to have better understanding of climate change throughout the world.However, there exist limiting factors which act as the barriers for the wider application of the seasonal prediction models [6].For example, the seasonal predictions are affected by the predictors, predictands, region and season [7].Nevertheless, the chaotic dynamics of the atmosphere may lead to the erroneous prediction of seasonal rainfall [8].The uncertainties in the model parameterization further hinder the prediction of seasonal rainfall.
To date, precipitation remains the most challenging climatic phenomenon and can be predicted with the least accuracy [7]. On the other hand, most research studies on precipitation prediction have been conducted over a regional area of the world and for a particular season [9-12]. There exist only a few studies that concentrate on precipitation analysis for the whole world [7,8,13]. Most of the studies conducted used a number of different scores to evaluate predicted rainfall against observed rainfall, such as correlations, the ranked probability score and the Brier skill score. However, there is still doubt regarding the accuracy of the predictions of the seasonal models, which has immense implications for the decision-making process [14]. Moreover, there exists overconfidence and a lack of reliability in the prediction of seasonal rainfall using the currently available models [15]. Therefore, it is necessary to assess the comprehensive performance of different models and their uncertainties in predicting seasonal rainfall.
It is well established that large-scale atmospheric circulation patterns significantly affect the annual precipitation around the globe including Australia.The atmospheric circulation configuration is dominated by the patterns of the sea surface temperature.Many researchers accept the capability of the El Nino Southern Oscillation (ENSO) in predicting time-series events.After analysing the role of ENSO on seasonal precipitation, Manzanas et al. [13] found that September to October is the most skillful season to predict rainfall around eastern Australia.Hossain et al. [9]; Hossain et al. [16] also identified the effects of ENSO and Indian Ocean Dipole (IOD) on West Australian rainfall.Therefore, the evaluation of the ENSO capability in time-series prediction is the fundamental requirement.Other climatic variables, such as sea surface temperature over the Atlantic and Indian Ocean have considerable impacts on the climate variability near the surrounding regions [17].Recent studies also suggested that Indian Ocean Dipole (IOD) has the considerable effects on the climate variability in the continental regions including Australia [18,19].Rasel et al. [20] revealed the effects of Southern Annular Mode (SAM) as a potential contributor of South Australian rainfall variability.
A number of studies have examined appropriate modelling techniques for the prediction of seasonal rainfall. However, a single climate driver alone is not capable of replicating the accurate precipitation characteristics. Multi-predictor models have higher prediction skill than single-predictor models [21]. Nevertheless, there may exist dissimilar characteristics of seasonal rainfall patterns with the same rainfall totals [1]. On the other hand, the seasonal climate exhibits non-linear characteristics [8]. Therefore, a closer look at the appropriate mechanism of seasonal rainfall formation becomes essentially important.
This paper presents the efficiency of non-linear regression modelling technique in predicting long-term seasonal rainfall forecasting.The non-linear analysis was performed using the lagged (past) values of the climate indices as the potential predictors of long-term seasonal rainfall.Since there exist significant correlations between seasonal rainfall and two to three months average values of the climate indices, lagged values of the predictors were considered in this research.Furthermore, many researchers identified the ENSO and IOD as the most significant predictors of Australian seasonal rainfall.In this research, the efficiency of the climate indices in rainfall forecasting were assessed using the non-linear regression analysis technique.The non-linear analysis was performed considering South-eastern Indian Ocean (SEIO), Nino3.4 (sea surface temperature anomalies from 5 • S to 5 • N and 170 • W to 120 • W), southern oscillation index (SOI) and dipole model index (DMI) as the significant influential parameters of seasonal rainfall variation.The analysis was performed and applied to three rainfall stations in Australian Capital Territory (ACT).Seasonal rainfall forecasting can have practical implications to a wide range of users in diverse sectors, such as agriculture, energy, water supply and stormwater management [22].The outcomes of the analysis may be the benchmark for future generations in predicting seasonal rainfall.
Study Area and Data Collection
The Australian Capital Territory (ACT) located in the south-east of the country is enclaved within the state of New South Wales (NSW).Unlike other Australian cities whose climates are moderated by the sea, the ACT experiences four distinct seasons.As a result, the inter-annual variation of precipitation in the ACT is higher.Annually, the ACT receives approximately 623 mm rainfall.The highest rainfall could be observed in spring and summer and the lowest in winter.This study concentrates on the application of non-linear regression modelling technique in the ACT for the prediction of long-term seasonal rainfall.The non-linear models were developed using the large-scale climate drivers.Therefore, the research requires both long-term rainfall data and climate indices data.
In Australia, the Bureau of Meteorology (BoM) collects and stores rainfall data from more than 2000 stations.For the achievement of the objectives of this paper, three rainfall stations located in the ACT were selected as a case study.Specific location of the rainfall stations is shown in Figure 1.
Long-term monthly rainfall data from 1971 to 2017 were downloaded from the Australian Bureau of Meteorology (http://www.bom.gov.au/climate/data/?ref=ftr). The rainfall stations were selected based on the availability of long-term data which have fewer missing values. Monthly variation of the rainfall for the selected rainfall stations throughout the study period is shown in Figure 2. The long-term variation of the same rainfall could be seen in Figure 3 (blue curve). Seasonal rainfall was estimated from the collected monthly rainfall data. In this paper, the average of the spring (September-October-November) rainfall data was used to perform the non-linear regression analysis. The data collected were used not only to construct the non-linear regression models but also to validate the prediction capability of the developed models.
To replicate the appropriate characteristics of seasonal rainfall, it is required to identify which month's climate indices should be used in the analysis. Since the main focus of this paper is the efficiency of the non-linear regression modelling technique in predicting long-term seasonal rainfall, the climate indices which have a significant correlation with rainfall were analysed and used for the construction of the non-linear models. Monthly values of the long-term climate indices data from 1971 to 2017 were collected from the climate explorer website (https://climexp.knmi.nl/start.cgi). Monthly values of the SEIO, Nino3.4, SOI and DMI were downloaded to achieve the objective of this research. In this paper, 90% of the data were used to construct the non-linear models and 10% of the data were used to assess the performance of the constructed models.
Methods
Traditional exploration of the relationship between two or more parameters is obtained by statistical regression analysis. In this study, non-linear regression analysis was performed to obtain a steady relationship between long-term seasonal rainfall and large-scale climate indices. In the non-linear regression technique, an arbitrary relationship between the predictands and predictors is obtained. One or more independent variables dictate the determination of the non-linear relationship amongst the model parameters [23]. The general relationship of the non-linear regression can be explained according to Equation (1) [24]:

Y = f(X; b1, b2, . . . , bn) + e,    (1)

where Y is the dependent variable; b1, b2, . . . , bn are coefficients of the independent variables; n is the number of observations; X is the independent variables and e is the model error. The fitted model has the potential to predict the value of Y for additional observed values of X.
There may exist different non-linear functions suitable to replicate the appropriate rainfall pattern. To find the suitable predicting model for streamflow, researchers have performed a series of simple regression analyses [25]. To recommend a suitable seasonal rainfall predicting model, six different functions (linear, quadratic, cubic, exponential, power and logarithmic) were assessed in this study. The function which has the highest Pearson correlation with the rainfall was considered as the potential model for rainfall prediction. The long-term seasonal rainfall data were used for the estimation of the correlations. The predictions may be penalized due to a lack of understanding of the physical processes. The interactions amongst the sub-processes of the variables may further hinder the predictive capability [26]. Therefore, individual correlations amongst the climate indices were computed to identify significant inter-correlations. The climate indices which have significant correlation amongst themselves were not used together to develop the non-linear regression models.
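As an illustration of this screening step, the short sketch below fits the six candidate function types to a single lagged climate index and ranks them by the Pearson correlation of the fitted values with rainfall. The synthetic arrays, the SciPy/NumPy calls and the variable names are illustrative assumptions made for this note, not the code or data actually used in the study.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

# Hypothetical series (1971-2017): one lagged climate index and spring rainfall totals.
rng = np.random.default_rng(0)
index_jun = rng.normal(size=47)
rainfall = 150 + 30 * index_jun**3 + rng.normal(scale=20, size=47)

candidates = {
    "linear": lambda x, a, b: a * x + b,
    "quadratic": lambda x, a, b, c: a * x**2 + b * x + c,
    "cubic": lambda x, a, b, c, d: a * x**3 + b * x**2 + c * x + d,
    "exponential": lambda x, a, b: a * np.exp(b * x),
    "power": lambda x, a, b: a * np.abs(x) ** b,
    "logarithmic": lambda x, a, b: a * np.log(np.abs(x) + 1e-6) + b,
}

scores = {}
for name, f in candidates.items():
    try:
        params, _ = curve_fit(f, index_jun, rainfall, maxfev=10000)
        scores[name] = pearsonr(rainfall, f(index_jun, *params))[0]
    except RuntimeError:
        scores[name] = float("nan")  # fit did not converge

for name in sorted(scores, key=lambda k: -np.nan_to_num(scores[k])):
    print(f"{name:12s} R = {scores[name]:.3f}")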
Since forecast verification assesses the capability of reproducing the observed data, it is considered an essential step of any model development [27]. In this research, the forecast quality of the developed non-linear regression models was assessed using the commonly used statistical measures: Pearson correlation, root mean square error (RMSE), mean absolute error (MAE) and index of agreement (d). The regression model which has the highest correlation between the seasonal rainfall and the combined indices during the validation period (2013-2017) was considered as the recommended predictor model.
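The verification statistics named above can be computed with a few lines of code; the sketch below is a minimal version. The index of agreement is implemented in Willmott's standard form, which is assumed here because the paper does not reproduce the formula, and the observed/predicted values are placeholders for the hold-out years.

import numpy as np

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mae(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.mean(np.abs(pred - obs)))

def index_of_agreement(obs, pred):
    # Willmott's d, bounded by [0, 1]; d = 1 means perfect agreement.
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return float(1.0 - num / den)

observed = [112.0, 95.5, 140.2, 80.7, 131.9]   # placeholder spring rainfall values (mm)
predicted = [118.3, 99.0, 128.6, 92.1, 125.4]
print(rmse(observed, predicted), mae(observed, predicted), index_of_agreement(observed, predicted))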
Results and Discussion
From the available linear and non-linear functions, the Pearson correlations between the climate indices and the ACT spring rainfall were determined from the fitted data sets. The function which has the highest correlation was considered as the suitable model for rainfall prediction. The correlations of the regression analysis are shown in Tables 1-3 for the Ainslie Tyson St, Tharwa General Store and Huntly rainfall stations, respectively. The month shown in the subscript is the value of the corresponding climate index for the specified month. The star (*) in the tables indicates that the correlation is significant at the 0.05 level. For the Ainslie Tyson St rainfall station, the cubic function has the maximum correlations between spring rainfall and all the climate indices except SEIO Jun; this climate index has the maximum correlation with the power function, as shown in Table 1. Similarly, for the Huntly rainfall station, the cubic function has the maximum correlations between spring rainfall and the climate indices except SOI Jul; as evidenced in Table 2, the power function has the maximum correlation with rainfall for this predictor. However, for the Tharwa General Store rainfall station, the cubic function has the maximum correlations between spring rainfall and all the climate indices, as can be seen in Table 3. Generally, the cubic function is the best predictor and the logarithmic function is the poorest predictor for seasonal rainfall forecasting. A similar outcome was obtained by Esha and Imteaz [25] in predicting streamflow.
To develop a generalised non-linear model for the prediction of seasonal rainfall, the functions which have the maximum correlation were further analysed.Seventeen combined non-linear models were developed for each of the rainfall stations.The arrangement was selected in such a way that there is no significant correlation amongst the input combinations.The correlation coefficients for each of the developed models were also estimated.The combined indices that have been used to construct the non-linear regression models and their correlations are shown in Table 4 for all the selected three rainfall stations.
It is clear from Table 4 that a single model alone is not capable of predicting seasonal rainfall with sufficient accuracy for all the rainfall stations. The table reveals that DMI-SOI based models are appropriate for predicting seasonal rainfall, with a maximum correlation of 0.71. However, the appropriate combination is not in the same month for all the stations. For instance, the DMI Jun influence is dominant for all three stations, whereas the associated dominant indices are SOI Jul for the Tharwa General Store station and SOI Aug for the Ainslie Tyson St station. For the Huntly station, the combined effect of SEIO Jul and Nino3.4 Aug provided the highest correlation, with a Pearson correlation of 0.579. The outcomes support that the effects of climate indices vary spatially, and that a single variable/index alone is not capable of predicting rainfall with sufficient accuracy. However, the combinations having higher correlations during the calibration were not considered as recommended models for rainfall prediction. The combinations which produce the maximum correlation during the validation period were considered to be the recommended models. For this case, SEIO Jul-Nino3.4 Aug is the best model for the Ainslie Tyson St and Huntly stations, whereas SEIO Jul-SOI Aug produces the highest correlation for the Tharwa General Store station. Therefore, three models that have the highest correlation between the spring rainfall and the climate variables during the validation period have been proposed. The derived models are outlined in Equations (2)-(4). Since the cubic function has the potential to produce the higher correlation between seasonal rainfall and the considered climate indices, the equations have been developed for the cubic function. The combined capability of other functions will be assessed in future.
The plotted comparison of the analysis during the calibration period is shown in Figure 3. According to Figure 3, the non-linear regression models are not capable of replicating the actual seasonal rainfall with considerable accuracy. The statement is especially true for extreme rainfall. When the rainfall is extremely high or extremely low, the approach is unable to capture the rainfall characteristics, as evidenced in Figure 3. More sophisticated analysis needs to be performed to replicate the extreme seasonal rainfalls. However, before drawing a general conclusion, analysis of other areas should be performed.
The plotted results of the prediction comparison during the validation period are shown in Figure 4. According to Figure 4, non-linear regression models should be used carefully to predict the seasonal rainfall with reasonable accuracy. To some extent, the approach is capable of predicting the rainfall for some stations. For example, the approach over-predicts for the Huntly rainfall station, as evidenced in Figure 4c. Therefore, other sophisticated modelling approaches should be explored for more accurate predictions of seasonal rainfall.
To evaluate the performance of the non-linear models developed, various statistical parameters were calculated. The outputs of the comparison are shown in Table 5. According to the table, the model with a correlation of more than 0.91 has higher RMSE and MAE than the model with a correlation of 0.71. In addition, the model with a correlation of 0.86 has more errors than the other two models. Similar outcomes were also observed for the index of agreement. Therefore, models which have a higher correlation do not necessarily produce a lower error rate. An index of agreement close to one indicates the best predicting model. Therefore, the models could be used to predict seasonal rainfall with reasonable accuracy. However, the analysis should be performed with more rainfall stations in the same area and in other states.
Conclusions and Recommendations
Over the last two decades, the prediction of seasonal time-series events has been given considerable attention. As a result, many modelling approaches have been developed and applied to predict seasonal rainfall. However, due to the spatial and temporal variation of rainfall, none of the available models is capable of predicting seasonal rainfall with considerable accuracy.
In this research, the efficiency of non-linear regression models was assessed in predicting long-term seasonal rainfall. The non-linear models were constructed considering the lagged climate indices as the potential predictors of seasonal rainfall. Three rainfall stations located in the ACT were selected as a case study. The climate drivers SEIO, Nino3.4, SOI and DMI were used and analysed in this study. The individual correlations between spring rainfall and the climate indices were determined for six functions (one linear and five non-linear). The functions which have the highest correlation between spring rainfall and the climate indices were further analysed to develop non-linear regression models. Seventeen combined non-linear regression models were developed and assessed to explore appropriate model(s) capable of predicting seasonal rainfall. The correlations between the outputs of the fitted models and the observed data were determined. The models which produce the maximum correlation were considered as the potential models for seasonal rainfall forecasting. The accuracy of the predicted models' outputs was assessed by the widely used statistical parameters R, RMSE, MAE and d. From the analyses of the current study, the following general conclusions could be made:
• Cubic function is capable of producing maximum correlation between seasonal rainfall and the climate indices.
• Logarithmic function produces the minimum correlations between seasonal rainfall and the climate indices.
• DMI-SOI based non-linear models are more suitable to predict seasonal rainfall, as they produce higher correlations.
However, before drawing a general conclusion, more rainfall stations should be analysed in this area and in other areas, which would be a part of a future study. Moreover, other non-linear modelling and genetic algorithm techniques will be explored, which are likely to be able to predict seasonal rainfalls with higher accuracy.
Figure 2. Monthly variations of rainfalls for the selected stations throughout the study period. (a) Ainslie Tyson St; (b) Tharwa General Store; (c) Huntly.
Figure 3. Comparison of the modelling output during the calibration period. (a) Ainslie Tyson St; (b) Tharwa General Store; (c) Huntly.
Figure 4. Comparison of the modelling output during the validation period. (a) Ainslie Tyson St; (b) Tharwa General Store; (c) Huntly.
Table 1. Pearson correlations of the regression analysis for Ainslie Tyson St station.
Table 2. Pearson correlations of the regression analysis for Tharwa General Store station.
Table 3. Pearson correlations of the regression analysis for Huntly station.
Table 4. Pearson correlations of the developed models for the selected rainfall stations.
Table 5. Estimated statistical parameters during validation period. | 2019-04-22T13:12:11.364Z | 2018-07-28T00:00:00.000 | {
"year": 2018,
"sha1": "4f4d53e8548e67f0d8891e71203522a0713fbe5e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3263/8/8/282/pdf?version=1532777063",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3407d1d6030c9854d87c9561d142da27182a12c0",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
270727715 | pes2o/s2orc | v3-fos-license | The Intrinsic Characterization of a Fuzzy Consistently Connected Domain
The concepts of a fuzzy connected set (fc set) and a fuzzy consistently connected set (fcc set) are introduced on fuzzy posets, along with a discussion of their basic properties. Inspired by some equivalent conditions of crisp connected sets, characterizations of fc sets are given, and we also explore fuzzy completeness and fuzzy compactness in addition to defining a new fuzzy way-below relation based on fcc complete sets. Using this relation as a basis, the fcc domain is also introduced and studied, and its equivalent characterizations are obtained. In summary, we develop a method to establish fcc completeness from a continuous poset.
Introduction and Related Work
In the 1960s, the birth of fuzzy mathematics [1] and the establishment of continuous lattices [2] aroused the research interest of a large number of scholars. Following its development, the theory of continuous lattices was successfully extended to the theory of continuous domains by G. Gierz [3]. References [4-6] innovatively combined domain theory with fuzzy mathematics, with Zhang Qiye and Fan Lei introducing the concept of fuzzy partial order, which, in turn, led to the emergence of fuzzy domain theory. A. Chaudhuri and P. Das [7] introduced a new concept of fuzzy set connectivity called cs-connectivity. This concept is different from other connectivity concepts; they found that cs-connectivity is not equivalent to the existing definitions of connectivity and examined the validity of the standard results under this new concept of connectivity. In references [8,9], the concept of connected sets was introduced to broaden the scope of continuous partial order theory, and the concept of the connected continuous domain was introduced using connected sets, with fruitful results obtained by Shang Yun and Zhao Bin. They introduced and explored the concept of a consistently connected continuous domain, extended the application scope of continuous poset theory by exploiting the characteristics of connected sets, and thereby addressed the limitations of continuous poset theory in the treatment of the set of real numbers and the set of natural numbers; they characterized this structure through the properties of the principal ideals and connected closed sets, studied the directed completeness of consistently connected complete posets, and obtained good theoretical results. In [10-12], Tang Zhaoyong introduced and examined the connectivity of partially ordered sets from various perspectives using step sets, resulting in a series of significant findings. In [13], Tang Zhaoyong and Xu Luoshan deeply explore the connectivity and local connectivity of posets from the perspective of order and topology, especially the properties of several intrinsic topologies (such as the Alexandrov topology and the Scott topology); they attempt to prove the equivalence of the order connectivity of posets and that of its intrinsic topologies and to show the properties of local connectivity. Moreover, by constructing counterexamples, they also reveal that the connectivity of the lower topology does not necessarily guarantee the sequential connectivity of the poset itself. In [14,15], the concept of connectivity is introduced and studied, especially through the step set. The construction of connected components is also explored, and it is shown that posets can be uniquely decomposed into the disjoint union of these connected components. Furthermore, it is shown that the connectedness relation of a poset constitutes an equivalence relation. These results provide a new tool and theoretical framework for understanding and operating on posets. Reference [16] examines and characterizes the notions of the prime neutrosophic ideal and the prime neutrosophic filter. The structure of the neutrosophic open-set lattice on a topology generated by a neutrosophic relation is described in it. The concepts of neutrosophic ideals and neutrosophic filters on that lattice are defined in terms of their level sets and meet and join operations. In addition, the concepts of prime neutrosophic filters and ideals are examined and defined as fascinating subsets of neutrosophic ideals and filters. This work mostly discussed neutrosophic ideals and neutrosophic filters on the lattice structure of neutrosophic open sets. Reference [17] explores the fuzzy topology induced by fuzzy relations, extending classical concepts, and establishes necessary and sufficient conditions for its generation, along with characterizations involving fuzzy interval orders, preorders, and sequential fuzzy topologies. Furthermore, the fuzzy bi-topological space generated by the fuzzy relations is explored.
The research motivation for exploring the fuzzy connected set lies in the desire to expand the application of the continuous poset theory.By defining a novel fuzzy way-below relation on the fcc complete set, we aim to deepen our understanding of this structure and enhance its utility.Furthermore, the introduction and investigation of the concept of fcc continuous domain serve to enrich the theoretical framework and broaden its potential applications.This line of inquiry offers significant insights into the properties and characteristics of fc sets, thereby contributing to the advancement of the field of fuzzy set theory and its applications.
However, in fuzzy domain theory, there is no concept of connected sets.Thus, we introduce the concepts of fuzzy connected sets and fuzzy consistently connected sets on fuzzy posets, and their basic properties are discussed.In the third section of this paper, a new fuzzy way-below relationship is defined on fuzzy consistently connected complete posets, which allows for the exploration of the fuzzy consistently connected continuous domain.In addition, its equivalent characterizations are determined.
Preliminaries
Below, we present some important terms and definitions used in this paper. Here, we describe the definitions of fuzzy sets, domain theory, and consistently connected theory.
Definition 1 ([1]). A fuzzy set A in a set X is characterized by a membership function μ A : X → [0, 1]; μ A (x i ), the membership function, gives the grade of membership of each element x i ∈ X in A.
Definition 2 ([1]). Let X be a set, L be a complete lattice, and L X be all mappings from X to L. Each A ∈ L X is called a fuzzy subset of X. For A ⊆ X, χ A ∈ L X is the characteristic function of A, defined as χ A (x) = 1 if x ∈ A and χ A (x) = 0 otherwise, where 0 and 1 represent the least and greatest elements of L.
Definition 3 ([3]). A poset is said to be complete with respect to directed sets if every directed subset has a sup. A directed complete poset is abbreviated as a dcpo.
Definition 4 ([3]). Let L be a poset. We say that x is way-below y, in symbols x ≪ y, if for all directed subsets D ⊆ L for which sup D exists, the relation y ≤ sup D always implies the existence of d ∈ D with x ≤ d. An element satisfying x ≪ x is said to be compact or isolated from below.
Definition 5 ([3]). A poset L is deemed continuous if for all x ∈ L, the set ⇓ x = {u ∈ L : u ≪ x} is directed and x = sup{u ∈ L : u ≪ x}. A dcpo that is continuous as a poset is referred to as a domain.
Definition 6 ([4]). Let X be a set and e : X × X → L be a mapping. Then, (X, e) is deemed a fuzzy poset if e satisfies the following: (1) ∀x ∈ X, e(x, x) = 1; (2) ∀x, y, z ∈ X, e(x, y) ∧ e(y, z) ≤ e(x, z). Then, e is referred to as a fuzzy partial order in X.
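As a concrete reading of Definition 6, the sketch below checks the two listed axioms (reflexivity and ∧-transitivity) for a small fuzzy relation on a three-element set, taking L = [0, 1] with min as the meet. The particular matrix e is an invented example, and the code is only an illustration, not part of the paper.

from itertools import product

X = ["x", "y", "z"]
# Hypothetical fuzzy relation e : X x X -> [0, 1], encoded as a nested dict.
e = {
    "x": {"x": 1.0, "y": 0.7, "z": 0.7},
    "y": {"x": 0.2, "y": 1.0, "z": 0.9},
    "z": {"x": 0.2, "y": 0.3, "z": 1.0},
}

def is_fuzzy_partial_order(e, X):
    # (1) reflexivity: e(x, x) = 1 for every x
    if any(e[x][x] != 1.0 for x in X):
        return False
    # (2) transitivity: min(e(x, y), e(y, z)) <= e(x, z), with min as the meet on L = [0, 1]
    return all(min(e[x][y], e[y][z]) <= e[x][z] for x, y, z in product(X, repeat=3))

print(is_fuzzy_partial_order(e, X))  # True for the matrix above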
Definition 7 ([6]). Let f : X → Y be a mapping from a set X to a fuzzy poset (Y, e Y ). Define f → :
Definition 8 ([8]). Let X be a poset and ∅ ≠ B ⊆ X. B is considered connected if for all x, y ∈ B, there exist x = x 1 , x 2 , · · · , x n = y such that x i ∈ B and x i , x i+1 are comparable. If B is connected and x, y ∈ B, then x and y are considered connected in B.
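Definition 8 can be checked mechanically on a finite poset: two elements are connected in B exactly when they are joined by a chain of pairwise comparable elements of B, i.e. by a path in the comparability graph restricted to B. The sketch below performs this check by breadth-first search; the divisibility order used as data is a stand-in example, since the poset of Figure 1 is not reproduced in the text.

from collections import deque

def is_connected_subset(B, leq):
    # Definition-8-style check: every pair of elements of B must be linked by a
    # chain x1, ..., xn in B whose consecutive elements are comparable under leq.
    B = list(B)
    if not B:
        return False
    comparable = lambda a, b: leq(a, b) or leq(b, a)
    seen, queue = {B[0]}, deque([B[0]])
    while queue:
        a = queue.popleft()
        for b in B:
            if b not in seen and comparable(a, b):
                seen.add(b)
                queue.append(b)
    return len(seen) == len(B)

leq = lambda a, b: b % a == 0                   # divisibility order on positive integers
print(is_connected_subset({2, 3, 4, 6}, leq))   # True: 4-2-6-3 is a comparability chain
print(is_connected_subset({3, 4}, leq))         # False: 3 and 4 are incomparable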
Definition 9 ([4]). Let (X, e) be a fuzzy poset, x 0 ∈ X and A ∈ L X . x 0 is said to be the supremum (resp. infimum) of A, written as x 0 = ⊔A (resp. x 0 = ⊓A), if for all y ∈ X, e(x 0 , y) = ∧ x∈X (A(x) → e(x, y)) (resp. e(y, x 0 ) = ∧ x∈X (A(x) → e(y, x))).
Definition 11 ([5]). Let (X, e) be a fuzzy poset. For all
Definition 12 ([5]). Let (X, e) be a fuzzy poset. For all D ∈ L X , D is a fuzzy directed subset if ∨ x∈X D(x) = 1 and, for all x, y ∈ X, D(x) ∧ D(y) ≤ ∨ z∈X D(z) ∧ e(x, z) ∧ e(y, z).
A fuzzy directed subset I ∈ L X is considered a fuzzy ideal if it is a fuzzy lower set.
Definition 13 ([5]). Let (X, e) be a fuzzy poset. For all D ∈ L X , D is considered a fuzzy co-directed subset if ∨ x∈X D(x) = 1 and, for all x, y ∈ X, D(x) ∧ D(y) ≤ ∨ z∈X D(z) ∧ e(z, x) ∧ e(z, y).
A fuzzy co-directed subset I ∈ L X is considered a fuzzy filter if it is a fuzzy upper set.
(1) Both the directed set and the co-directed set are connected sets.
(2) The image of a connected set under a homomorphic mapping is a connected set.
(3) Both the totally ordered set and the one-point set are connected sets.
Definition 16 ([9]).Let X be a poset.X is a consistently connected complete poset if, for all consistently connected sets D ⊆ X, ⊔D exists.
Definition 17 ([9]).Let X be a consistently connected complete poset.The consistently connected way-below relation ≪ c of X is defined as follows: for x, y ∈ X, x is said to be compatible when less than or equal to y, in symbols x ≪ c y if for all consistently connected set D, y ≤ supD implies x ≤ d for some d ∈ D. We write ⇓ c x = {u ∈ X : u ≪ c x}.
Definition 18 ([9]).Let X be a consistently connected complete poset.X is considered a consistently connected domain if (1) ∀x ∈ X, ⇓ c x is the consistently connected set in X; (2) ∀x ∈ X, x = sup ⇓ c x.
Fuzzy Connected Sets and Fuzzy Consistently Connected Sets
In order to extend the connected sets on posets to fuzzy domain theory in this section, an equivalent definition in alternative form of connected sets is first provided.
Definition 20. Let (X, e) be a fuzzy poset. For every D ∈ L X , the fuzzy subset D is considered a fuzzy connected set if (1) ∨ x∈X D(x) = 1; (2) for all a, b ∈ X, there exists D(a
A fuzzy connected set is abbreviated as an fc set.
Lemma 1. Let (X, ≤) be a poset, and consider it as a fuzzy poset (X, e) with L = 2. Then, D ⊆ X is a connected set in (X, ≤) if and only if the characteristic function χ D is an fc set in (X, e).
Proof.⇒ Let D be a connected set.
(2) Because D is a connected set, then, for every a, b ∈ D, there exists and for every i, x i and x i+1 are comparable; that is, for every i, Then, Using Definition 2, we obtain In addition, for every i, From this, for every i, we obtain Hence, the character function χ D is an fc set in (X, e).⇐ Let χ D be an fc set in (X, e).
(2) Because χ D is an fc set in (X, e), then for all a, b ∈ D, there exists Then, we have which means that for every i, x i and x i+1 are comparable, and hence, D ∈ X is a connected set in (X, ≤).
Remark 2. A fuzzy connected set may not necessarily be a fuzzy directed (co-directed) set.
Example 1. Let (X, ≤) be the poset of Figure 1. Consider it as a fuzzy poset (X, e), L = 2, and let D = {a, b, c, d}. Then, D is a connected subset of X, and χ D is an fc subset, but χ D is not a fuzzy directed (co-directed) subset of (X, e).
Proposition 2. Let (X, ≤) be a poset, and consider it as a fuzzy poset (X, e), L = 2. For all D ⊆ X, χ D is an fc subset if χ D is a fuzzy directed (co-directed) subset.
Proof.It can be seen from the assumed conditions that there exists x 0 ∈ X such that χ D (x 0 ) = 1, and based on Proposition 1, the conclusion is true.Definition 21.Let (X, e) be a fuzzy poset, and a fuzzy subset D ⊆ L X .D is considered a fuzzy consistently connected set if (1) D is a fuzzy connected set; (2) there exists p ∈ X such that D ⊆↓ p (↓ p ∈ L X , ∀x ∈ X, ↓ p(x) = e(x, p)).A fuzzy consistently connected set is abbreviated as an fcc set.
All fcc sets in the fuzzy poset (X, e) are denoted by C F (X), and D is an fcc ideal if D is a fuzzy lower set.All fcc ideals in the fuzzy poset (X, e) are denoted by CI F (X). Definition 22.Let (X, e) be a fuzzy poset.(X, e) is an fcc complete poset if for all fcc subset D, ⊔D exists.Proposition 3. Let (X, ≤) be a poset, and consider it as a fuzzy poset (X, e), L = 2. (X, ≤) is a consistently connected complete set if and only if (X, e) is an fcc complete poset.
Proof.Suppose that (X, ≤) is a consistently connected complete set.For all D is an fcc set, let A = {x ∈ X : D(x) = 1}, then A ⊆ X.According to Lemma 1, A is consistently connected, then there exists x 0 ∈ X such that x 0 = supA, and we need to prove that x 0 = ⊔D.
Conversely, suppose (X, e) is an fcc complete set and A is a consistently connected set.Then, according to Lemma 1, χ A is fcc, and (X, e) is fcc complete; as such, there exists an x 1 ∈ X such that x 1 = ⊔χ A , and we need to demonstrate that x 1 = ⊔A. ( (2) Suppose y ∈ X such that for all x ∈ A, x ≤ y, that is, Therefore, e(x 1 , y) ≥ ∧ x∈X (χ A (x) → e(x, y)) = 1, and hence x 1 ≤ y.So, we have x 1 = supA.
Fcc Way-Below and Fcc Domain
In this section, we give definitions of fcc way-below and a fcc continuous set, as well as equivalent characterizations of fcc domain and a discussion of related properties.Definition 23.Let (X, e) be an fcc directed complete poset.For all y ∈ X, ⇓ FC y ∈ L X .This is deemed fcc way-below if ∀x ∈ X, ⇓ FC y(x) = ∧ I∈CI F (X) (e(y, ⊔I) → I(x)).
From Definition 21, we plot Figure 2 as an elaborated illustration of the Definition 23.X is a fuzzy poset, and the side length is unit length 1, where the projection coordinate value of any point is the matching degree of membership of x and y with respect to X. D is an fcc set, all the internal points have connectivity, and use * to represent supD.As p is a point in the unit cube, D is below the projection coordinates of p, which satisfies the consistently connectivity of D. Definition 24.Let (X, e) be an fcc complete set.(X, e) is considered an fcc continuous set if ∀x ∈ X, ⇓ FC x ∈ CI F (X) and x = ⊔ ⇓ FC x.Definition 25.An fcc directed complete poset that is an fcc continuous set is an f cc domain.
Definition 26.Let (X, e) be an fcc complete set.x is the fcc compact element in X if ⇓ FC x(x) = 1.All fcc compact elements in X are denoted K FC (X).
Definition 27.Let (X, e) be an fcc complete set, x ∈ X. Define a mapping k x : X → L: X is considered an fcc algebraic poset if k x is an fcc subset of X and ⊔k x = x.
From Example 1.9 of [9] and Proposition 3, we have the following examples: Example 2. For the poset R of real numbers, (R, e ≤ ) is an fcc domain.
Example 3.For the poset N of natural numbers, (N, e ≤ ) is a fuzzy algebraic domain.
Example 4.
Let A be an fcc domain.Then, the principal ideal of A is an fcc domain.
Proof.On the one hand, according to Proposition 4, we have ∨ z∈X ⇓ FC z(x)∧ ⇓ FC y(z) ≤⇓ FC y(x).On the other hand, we simply need to prove that ⇓ FC y(x) ≤ ∨ z∈X ⇓ FC z(x)∧ ⇓ FC y(z).Suppose D ∈ L X , for all a ∈ X, D(a) = ∨ z∈X ⇓ FC z(a)∧ ⇓ FC y(z), we shall prove that ⇓ FC y(x) ≤ D(x).
Firstly, D(x) is an fcc ideal.(1) For all x ∈ X, ⇓ FC x is an fcc ideal.
(2) For all a, b ∈ X, every fcc way-below lower set is an fcc ideal in fcc domain, so that it is fuzzy consistently connected, according to Definition 20, There exists c ∈ X such that Furthermore, there exists d ∈ X such that Therefore, D is a fuzzy lower set.(4) For all x ∈ X, D(x) ≤⇓ FC y(x) ≤↓ y(x), then we obtain D ≤↓ y, and thus D is fuzzy consistent.
In fact, for all a ∈ X, Theorem 2. Let (X, e) be an fcc complete poset.Then, (X, e) is an fcc domain if and only if (⇓ FC , ⊔) is a fuzzy Galois adjunction between (X, e) and (CI F (X), Sub X ).
Conclusions
We discuss fuzzy connectivity under a fuzzy partial order and provide the equivalence characterization of the connected set along with the definition of the fc set.A set is considered a connected poset if and only if its characterization functions are fuzzy-connected.The definition of the fcc set and its equivalent characterizations are obtained.In the last section, the definitions of fcc way-below and fcc domain are given, and the equivalence characterizations of fcc continuous poset and fcc complete set are then discussed.Finally, a method for deriving fcc completeness from an fcc continuous poset is established.
In this paper, we deeply explore the fuzzy connectivity under the fuzzy partial order and provide the equivalent characterization of the connected set by defining the fuzzy connected set (fc set).In the framework of fuzzy mathematics, connectivity is an important concept that helps us to understand and analyze the properties of complex mathematics structures.First, we define the connectivity of a set under a fuzzy partial order.Specifically, a set is considered to be a connected poset if and only if its characteristic function is fuzzy connected.This means that, under the fuzzy partial order, there is a continuous and uninterrupted relationship between the elements in the set, which makes the whole set present a holistic structure.In order to further understand the concept of a fuzzy connected set, we further explore the definition of the fuzzy consistently connected set (fcc set) and its equivalent characterization.The fcc set is a special set of fuzzy connectivity that satisfies more stringent conditions to enable a better description of certain specific types of fuzzy structure.Through the equivalent characterization of the fcc set, we can more clearly recognize its properties and characteristics, providing a basis for subsequent research and application.In the final section of this article, we introduce the concept of fcc way-below relation and fcc domain and discuss the equivalent characterization of fcc continuous posets and fcc complete sets.These concepts provide us with a new perspective to examine the application of fuzzy connectivity in complex systems.In particular, we propose a method to derive fcc completeness from the fcc continuous poset, which helps us to better understand and apply the theory of fuzzy consistent connectivity.Overall, this paper explores, in depth, the related concepts and properties of fuzzy consistent connectivity in the framework of fuzzy partial order.By introducing the concepts of the fc set, fcc set, fcc completed set, and fcc domain, we provide new ideas and methods for the study of fuzzy consistent connectivity.These results not only help us to have a deeper understanding of the intrinsic structure of fuzzy mathematics and complex mathematic structures but also provide strong support for subsequent research and applications.
It is worth mentioning that the fuzzy connectivity theory explored in this paper has broad prospects in practical applications.For example, in the fields of image processing, social network analysis, data mining, etc., fuzzy connectivity can be used to describe connected regions in an image, connected subgraphs in a social network, and connected clusters in a data set.Through the analysis and utilization of these connected structures, we can better understand and exploit the inherent laws and properties of these complex structures.Moreover, with the continuous development of fuzzy mathematics and complex structures, we believe that fuzzy connectivity theory will be more widely applied and developed.In the future, we can further explore the combination of fuzzy connectivity and other mathematical tools, such as fuzzy logic, fuzzy clustering, etc., to form a more perfect theoretical system and methodology.At the same time, we can also focus on the application cases of fuzzy connectivity in practical problems to promote its in-depth development and application in various fields.For example, we can take the fuzzy poset as the starting point and consider the connected proposition of its intrinsic topology.In conclusion, this paper explores fuzzy connectivity in the framework of a fuzzy partial order and proposes a series of new concepts and methods.These achievements not only enrich the theoretical system of fuzzy mathematics but also provide strong support for subsequent research and application.We believe that in future studies, fuzzy connectivity theory and fuzzy consistently connected theory will play an increasingly important role in providing us with new ideas and methods to solve complex problems.
Figure 1. Graph of Example 1.
Proposition 1. Let (X, e) be a fuzzy poset, and a fuzzy directed (co-directed) subset D ⊆ L X . Then, D is fuzzy connected if there exists a point x 0 ∈ X such that D(x 0 ) = 1. Proof. It follows from D being fuzzy directed that ∨D(x) = 1. Since D is a fuzzy directed set, by the condition that for all a, b ∈ X, D(a) ∧ D(b) ≤ ∨ z∈X D(z) ∧ e(a, z) ∧ e(b, z), there exists x 0 ∈ X. Let x a = a, x b = b, such that D(a) = D(x a ), D(x 0 ) = 1, D(x b ) = D(b). Then, we obtain
Figure 2. Graph of FC way-below relation. | 2024-06-26T15:10:35.645Z | 2024-06-23T00:00:00.000 | {
"year": 2024,
"sha1": "194d9491111dad579956977cc370da8c99d6daeb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/12/13/1945/pdf?version=1719124749",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0dc4b79170f4c248f199ba40408f531a431086fa",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": []
} |
102990460 | pes2o/s2orc | v3-fos-license | Evidence of new twinning modes in magnesium questioning the shear paradigm
Twinning is an important deformation mode of hexagonal close-packed metals. The crystallographic theory is based on the 150-year-old concept of simple shear: the habit plane of the twin is the shear plane, and it is invariant. Here we present Electron BackScatter Diffraction observations and a crystallographic analysis of a millimeter-size twin in a magnesium single crystal whose straight habit plane, unambiguously determined both in the parent crystal and in its twin, is not an invariant plane. This experimental evidence demonstrates that macroscopic deformation twinning can be obtained by a mechanism that is not a simple shear. Besides, this unconventional twin is often co-formed with a new conventional twin that exhibits the lowest shear magnitude ever reported in metals. The existence of unconventional twinning introduces a shift of paradigm and calls for the development of a new theory for displacive transformations
with the introduction of the "disconnections" 14,15 . All these cited models are based on the shear paradigm; they have dominated the theoretical developments of deformation twinning over the last seventy years.
The formation of {101 ̅ 2} extension twins without a straight shear plane was recently observed by in-situ Transmission Electron Microscopy (TEM) in magnesium nano-pillars 16 . The twins are characterized by a parent/twin misorientation of 90° around the a-axis, instead of 86° for the conventional extension twins in bulk magnesium. These observations, and earlier molecular-dynamics simulations of the nucleation stage of extension twinning 17,18 , led some researchers to propose a new twinning mechanism based on "pure shuffle", or equivalently "zero shear" 16,19 . This mechanism is the subject of an intense debate 20 . One way to reduce the controversy was to admit that the "zero shear" mechanism "distinctively differs from any other twinning modes" and that "this should not be deemed as the failure of the classical theory" 19 . Is that correct? Is the unconventional (90°, a) twin an exotic case limited only to extension twinning in hcp metals, and even more specifically to the nucleation step or to nano-sized samples? It was recently shown that the unconventional "zero-shear" (90°, a) and conventional shear (86°, a) twins actually result from the same distortion because they differ only by an obliquity correction 21 . The model assumes that the atoms move as hard spheres, and it calculates for a given orientation relationship the analytical forms of the atomic trajectories, lattice distortion, and volume change. A similar approach was used to model {101 ̅ 1} contraction twinning 22 . The volume change is a direct consequence of the hard-sphere assumption. Indeed, the Kepler conjecture (demonstrated by Hales 23 ) implies that all the intermediate states between an hcp structure and its twin have a density lower than that of hcp. The volume change is not negligible; for magnesium, it is 3% for extension twinning 21 and 5% for contraction twinning 22 . The same approach was used for martensitic transformations between fcc, bcc and hcp phases 24 . It should be noted that a volume change is not compatible with a simple shear. Besides the volume change, the calculations proved that the habit plane is not invariant; it is untilted but distorted, and restored only when the process is complete. Thus, one can ask whether for some twins the interface plane could be transformed into another crystallographic plane. This would then confirm that deformation twinning in hcp metals is not the result of a simple shear distortion. Here we present the experimental proof that such an unconventional twin exists; it is millimeter-sized and appears in a bulk magnesium single crystal. It will also be shown that this twin is often co-formed with a new conventional twin on the {213 ̅ 2} plane that exhibits the lowest shear value ever reported for hcp metals.
A piece of magnesium single crystal was cut with a diamond saw, mechanically polished and then electro-polished. Bands of twins are visible in optical microscopy at the side where the sample was cut. A second cut was performed perpendicularly to the first one, and here again, large bands of twins appeared at the cut side, which shows that the twins were induced by friction during the cutting step. The two cut sections are called A and B in the rest of the paper. Electron BackScatter Diffraction (EBSD) maps were acquired on the area containing the larger twins in the A and B sections. They are shown in Fig. 1 and Extended Data Fig.1, respectively. In order to facilitate the identification of the twins all along this manuscript, some colors were attributed to the different types of twins, independently of the absolute orientation of the sample. The parent single crystal is colored in grey and the conventional extension twins in blue. New twins, colored in green, are formed close to the conventional extension "blue" twins. They are often co-formed with twins colored in yellow, orange and red. The green and yellow/orange/red twins are twins that have never been reported in literature.
Before detailing the crystallographic characteristics of these new twins, it is worth recalling, with the example of the conventional extension blue twins of Fig. 1, how crystallographic information can be read from the experimental EBSD data and their associated pole figures. The extension twins are identified in the EBSD maps by their (86°, a) misorientations with the parent crystal, as shown by the rotation of 86° around the a-axis marked by the dashed circle in Fig. 2b and c. The atomic displacements during extension twinning are such that the basal {001} plane and the prismatic {100} plane are exchanged, as illustrated by the exchange of the positions of the planes marked by the triangles and squares in the pole figures shown in Fig. 2a and b. The habit plane of the extension twins is the "diagonal" {012} plane located between these two planes. Indeed, the traces of the habit planes agree perfectly with the expected {012} planes, as shown by the fact that the spots marked by circles in Fig. 2d are perpendicular to the traces of the extension twins noted HPE1 and HPE2 in Fig. 1. The same results are obtained with the EBSD map acquired on section B, shown in Extended Data Fig. 1, with pole figures in Extended Data Fig. 2. The {012} habit plane of the extension twins appears in the EBSD map as a plane that is both untilted and undistorted, i.e. fully invariant, in agreement with the simple shear theory; but actually, if the process of lattice distortion is considered in its continuity, the atomic displacements are such that the {012} plane cannot be maintained invariant during the distortion; it is only restored when the distortion is complete 21 . The conventional extension blue twins do not provide a direct footprint of this continuous process, and the {012} interface appears as if the {012} plane had been invariant throughout the process. The situation will be shown to be different with the "green" twins.
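The 86° value quoted here can be recovered numerically from the hexagonal cell geometry: for {101 ̅ 2} extension twinning, the parent/twin misorientation angle around the a-axis is commonly written as 2·arctan(γ/√3), with γ = c/a. The short check below uses γ ≈ 1.624 for magnesium; this relation is a standard geometric result assumed here for illustration rather than taken from the paper itself.

import math

gamma = 1.624  # c/a packing ratio of magnesium
misorientation = 2 * math.degrees(math.atan(gamma / math.sqrt(3)))
print(f"{misorientation:.1f} deg")  # ~86.3 deg, the (86°, a) misorientation of the extension twins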
The long millimeter-sized green twins shown in the EBSD map of Fig. 1 are misoriented from the parent crystal by a rotation angle of 58° with a spreading of 4° and a rotation axis close to a + 2b. This misorientation appears in the histogram of Fig. 1b and c. Twins with similar misorientations already appeared in the histograms of some previous studies 25,26 , but their crystallographic analysis is very recent 27,28 . Ostapovets et al. 27 interpret them as the result of a complete double {012}-{012} twinning in which there are no retained traces of the first {012} twins. A different mechanism was proposed in which the lattice distortion is modelled as a one-step process without the need for a hypothetical intermediate {012} twin 28 . Whatever the model, it is agreed 27,28 that there is a strong link between these (58°, a+2b) twins and the (64°, a+2b) {112 ̅ 2} twins frequently observed in titanium, and that these twins are not predicted by the general theory of twinning 8,9 or by the dedicated Westlake-Rosenbaum model 29,30 . Following our study 28 , it will be assumed that the (58°, a+2b) twins result from a unique distortion that is geometrically represented with the supercell X 2 YG shown in Fig. 3a. When the parent lattice is rotated by (58°, a+2b) the supercell becomes close to the initial one, as illustrated in the projection along the axis OY = a+2b of Fig. 3b; the cell edge lengths depend only on the c/a packing ratio. The principal strains associated with this distortion are -4.2%, 0, and +4.4% in the case of pure magnesium. The parent/twin misorientation matrix is a rotation of axis OY = [120] p and of an angle that is a function of the packing ratio; it takes the value 58.39° for magnesium, which is expected from the model and very close to the misorientation observed in the histogram of Fig. 1b and in Fig. 4a 31 . The usual way to build a conventional twin from the prototype model is to add a small obliquity correction (a few degrees) that compensates the tilt of an undistorted plane in order to make it fully invariant. After obliquity correction, the distortion becomes a simple shear. Calculations prove that there are only two possible planes whose tilt can be compensated by an obliquity correction; they are the planes (21 ̅ 2) and (1 ̅ 26), as detailed in separate papers 27,28 . In both cases, the rotation axis of the obliquity compensation is [120]. Geometrically, the (21 ̅ 2) and (1 ̅ 26) planes are the two diagonals of the OX 2 VG rhombus shown in Fig. 3c and d, respectively; they define two conjugate twinning modes. The shear vector is along the [101] direction in the (21 ̅ 2) twin mode, and along the [3 ̅ 0 1] direction in the (1 ̅ 26) twin mode, as illustrated by the green arrows in Fig. 3c and d. The shear magnitudes are the same for both modes and close to 0.11 for magnesium 27,28 . The new parent-twin misorientation of the (21 ̅ 2) twin is a (63°, a+2b) rotation (the corrected obliquity was 4°), and the new parent-twin misorientation of the (1 ̅ 26) twin is a (57°, a+2b) rotation (the corrected obliquity was 1°). These two twin modes are not reported in the list of twins predicted by the classical shear theory 9 ; they are, however, conventional because their lattice distortions are given by simple shear matrices. By considering these theoretical results, one could expect that the habit planes of the green twins, noted HP1 and HP2 in the EBSD maps of Fig. 1, and those of the twins in the EBSD map of Extended Data Fig. 1, are two equivalent planes of the {112 ̅ 2} family, or of the {112 ̅ 6} family.
Surprisingly, this is not the case. The two HP1 and HP2 green twins have very close orientations (their disorientation angle is lower than 4°) but very distinct habit planes, whereas the models 27,28 predict only one habit plane per family. In the pole figures of Fig. 4, the unique plane of type {112̄6} and the unique plane of type {112̄2} common to both the parent crystal and the green twin are encircled; and, as they are close to the x-axis, their trace should be vertical in the EBSD map of Fig. 1, which is clearly not the case (they are more than 50° away from the vertical direction). So, what are the habit planes of the green twins? After some attempts we discovered that they are {212} planes of the parent crystal and {012} planes of the green twin crystals. This is shown in Fig. 5a and b by the fact that (i) the circles around the {012} poles are positioned exactly at the same positions as those of the {212} poles, and (ii) these common poles are perpendicular to the traces of the habit planes HP1 and HP2 of the green twins shown in the EBSD map of Fig. 1. This experimental result is of prime importance because, contrary to all deformation twins ever reported in bulk materials, the habit planes of the green twins are not invariant planes; they cannot result from a simple shear. They are thus called here "unconventional", and noted (58°, a+2b). Additional crystallographic information is required to get a better understanding of these new twins. It was found that the green twins share with the parent crystal a common direction of type 〈201〉, i.e. 〈224̄3〉 in the four-index Miller-Bravais notation, and that this direction, marked by a blue circle in Fig. 5c, lies in the habit plane whose normal direction is encircled in red in Fig. 5.

Let us build a crystallographic model of the (58°, a+2b) green twins. The parent/twin misorientation should be close (within a few degrees) to that of the (58°, a+2b) stretch prototype previously discussed. Their habit plane is the g0 = (212)p plane that is transformed into the (012) plane of the twin.
The nine components of the obliquity-corrected distortion matrix, as functions of the packing ratio, are given in Supplementary Equation (23). The new parent/twin misorientation of the obliquity-corrected twin is a rotation given in Supplementary Equation (25); the rotation angle is 60.7° and the rotation axis is less than 2° away from the a+2b axis. The difference in orientation between the twins formed directly by the prototype model and those formed with the obliquity-corrected version is low (58.4° / 60.7°) and lies within the spread of the misorientation histogram shown in Fig. 1b. This spread exists for all observed green twins. It was noticed that the isolated green twins have a greater tendency to exhibit a 58° misorientation with the parent crystal, such as the one marked by the green arrow in Fig. 1a, whereas the green twins that are co-formed with yellow twins tend to exhibit misorientations more centered in the range 60-61°; this is the case in the area marked by the dashed green rectangle in Fig. 1a, as shown by the local misorientation histogram given in Fig. 1e. Gradients of orientation between 58° and 62° are rainbow-colored in Fig. 1d. The yellow twins seem to stabilize the unconventional green twins such that the condition g0 = (212)p // (012) is fulfilled. The following part is devoted to the crystallographic properties of the yellow twins and their role in the stabilization of the green twins.
The yellow twins have a misorientation close to (86°, a) with the green twins with which they are co-formed, which shows that the yellow twins are linked to the green twins by a sort of extension twinning. However, the yellow twins, like the green twins, also result from a deformation twinning mechanism of the parent crystal. The misorientation of the yellow twins with the parent crystal is found to be close to a rotation of 48° around an axis of type <241>, as illustrated in Fig. 1b and c for the EBSD map of section A, and in Extended Data Fig. 1b for section B. The habit planes of the yellow twins are {212} planes, as shown in the pole figures of Fig. 5b. Indeed, in this figure, the circle noted "HP1" is around the {212} plane that is common to both the parent and the yellow twin co-formed with the green twin lying along HP1 in Fig. 1; and the circle noted "HP2" is around the {212} plane that is common to both the parent and the red twin co-formed with the green twin lying along HP2 in Fig. 1. These features are also observed for the three habit planes identified in the EBSD map of section B, as shown in the {212} pole figure of Extended Data Fig. 3 of the yellow, orange and red twins co-formed with the green twins shown in Extended Data Fig. 2. Thus, the yellow twins are conventional because their habit plane is a crystallographic plane that is common to both the parent and twin crystals. However, to the best of our knowledge, no twin with a shear plane of type {213̄2} has ever been reported by the classical models of deformation twinning 9. It is thus important to get additional crystallographic information on this twin and its link with the green twin. It was noticed that the axis u0 = [02̄1]p that was left invariant during the (parent → green) twinning is also left invariant by the (green → yellow) twinning, i.e. this axis is common to the three crystals: the parent crystal, the green twin and the yellow twin. This is shown by the encircled directions in the pole figures of Fig. 5.

Let us build a crystallographic model of the yellow twins. One could have imagined building such a model by considering that the yellow twins are extension twins of the green grains. However, it is mathematically impossible to build a conventional twin by composing the distortion matrix of the (58°, a+2b) unconventional green twin with that of a conventional (86°, a) extension twin, because the terms of the former are irrational and those of the latter are rational, which means that their composition cannot give a rational matrix. This apparent issue is solved by considering that the yellow twins are not in a conventional extension twin relationship with the green grains, but in a relation derived from it by a small obliquity correction. The green twins induce a planar distortion (212)p → (012)gr, and the obliquity-corrected extension twin should induce the reverse planar distortion (012)gr → (212)y. The idea is therefore to keep the plane (012)gr untilted and the direction u0 invariant during the (green → yellow) twinning. The calculations are detailed in section 3 of the Supplementary Equations. They show that an obliquity of 1.1° around the common axis u0 is sufficient to make the conventional extension twins compatible with the experimental results. This obliquity is so small that it is not possible to distinguish whether the green/yellow relation is a conventional (86°, a) twinning or its derived version. The new distortion matrix and the misorientation matrix associated with the (green → yellow) relation are given in equation (40); the associated shear magnitude is equal to 0.078 for ideal hard-sphere packing and 0.084 for magnesium.
This twinning mode was not predicted by the classical theory of deformation twinning; its shear magnitude is, to the best of our knowledge, the lowest value ever reported for deformation twinning of metals.
In summary, the EBSD study of a saw-cut magnesium single crystal revealed unconventional millimeter-sized twins localized close to conventional extension twins. The parent/twin misorientation is (58°, a+2b). This twin is unconventional because its habit plane is not invariant; it is a {212} plane that is untilted but distorted and transformed into a {012} plane. This twin is often co-formed with another twin and linked to it by an unconventional type of extension twinning. This co-formed twin is a conventional (shear) twin of the parent crystal, but the twin mode is new; the parent/twin disorientation is (48°, 〈2̄21〉), and the calculations show that the associated distortion matrix is a {212}〈5̄4̄7〉 shear with a magnitude of only 0.084. Some researchers recently proposed a "pure shuffle" model to interpret their observations of (90°, a) extension twins in magnesium nano-pillars, but they assumed that their discovery was limited to this special twin and was not a "failure of the classical theory". The present evidence of macroscopic unconventional twins with a {212} // {012} interface calls for reconsidering the theory of deformation twinning, because the initial paradigm of simple shear is not consistent with the present observations. An approach based on hard spheres has been followed over the last few years and applied to martensitic transformations and to extension and compression twinning in magnesium [21][22][23][24]; it proposes to shift the shear paradigm while preserving the essential displacive features of these transformations 32. Once generalized and formalized, it could constitute one of the possible alternatives to the shear-based theories.
Methods
The magnesium single crystal was bought from Goodfellow Inc. It is the same sample as the one used in the theoretical study of extension twinning 21. Two perpendicular cross-sections were cut and are called A and B in the paper. The extension twins and the new twins studied are induced by the disk cutting with the abrasive disk saw. The two sections were mechanically polished with abrasive papers and cloths with diamond particles down to 1 µm, and then electropolished at 12 V with an electrolyte made of 85% ethanol, 5% HNO3 and 10% HCl just taken out of the fridge (10 °C). The EBSD maps were acquired on a field emission gun (FEG) XLF30 scanning electron microscope (FEI) equipped with an Aztec system (Oxford Instruments) and treated with the Channel5 software (Oxford Instruments). The blue, green and yellow/red colors were attributed to the different twins. Table 2 summarizes the main equations used in the paper. The vectors are noted by bold lowercase letters and the matrices by bold capital letters. The three-index Miller notation in the hexagonal system is used for the calculations and preferentially chosen to write the results. The four-index Miller-Bravais notation is sometimes written to help the reader identify the directions or planes that are equivalent by the hexagonal symmetries.

Conversion rules 33

It is often necessary for the calculations to switch from the crystallographic basis to an orthonormal basis linked to this basis. In the case of a hexagonal phase, we call B_hex = (a, b, c) the usual hexagonal basis, and B_ortho = (x, y, z) the orthonormal basis linked to B_hex by the coordinate transformation matrix H_hex, whose expression as a function of the c/a packing ratio γ is given in equation (1). The matrix H_hex is commonly called the structure tensor in crystallography. It can be used to express the directions in the orthonormal basis B_ortho; for planes, it is the reciprocal matrix H_hex* that should be used. We note O the "zero" position that will be left invariant by the distortion, and X, Y and Z the atomic positions defined by the vectors OX, OY and OZ (equation (2)), with H_hex given by equation (1). Inversely, if the distortion matrix is determined in the orthonormal basis, it can be written in the hexagonal basis by the inverse formula. The misorientation matrix is defined by the coordinate transformation matrix T^(p→t). This matrix allows the change of the coordinates of a fixed vector between the parent and twin bases. It is given by the vectors forming the basis of the twin, expressed in the parent hexagonal basis. Its inverse is simply T^(t→p).
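The structure tensor and the direction/plane conversions described above can be written compactly as in the sketch below; this is our own illustration (variable names and the Cartesian convention with a along x are assumptions), not the paper's code.

```python
import numpy as np

def hex_structure_tensor(gamma):
    """Structure tensor H of a hexagonal phase with c/a ratio gamma: its
    columns are the hexagonal basis vectors a, b, c expressed in an
    orthonormal frame (lengths in units of the lattice parameter a)."""
    return np.array([[1.0, -0.5, 0.0],
                     [0.0, np.sqrt(3) / 2.0, 0.0],
                     [0.0, 0.0, gamma]])

H = hex_structure_tensor(1.624)                     # assumed c/a of magnesium
u_cart = H @ np.array([1, 2, 0])                    # direction [uvw] -> Cartesian
g_cart = np.linalg.inv(H).T @ np.array([2, 1, 2])   # plane (hkl) uses H* = inv(H).T
```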
The orientation of the twinned crystal is defined by the matrix T^(p→t), but other equivalent matrices could be chosen. The equivalent matrices are obtained by multiplying T^(p→t) by the matrices of the internal symmetries of the hexagonal phase, i.e. the matrices forming the point group of the hcp phase.
The matrix T^(p→t) is a coordinate transformation matrix between two hexagonal bases; it is thus a rotation matrix. The rotation angle of T^(p→t) is given by its trace, and the rotation axis is the eigenvector associated with the unit eigenvalue. However, one must keep in mind that T^(p→t) is expressed in a non-orthonormal basis, which implies that some usual equations related to rotations do not hold; for example, the inverse of a rotation matrix equals its transpose only in an orthonormal basis. The correspondence matrix is used to calculate, in the twin basis, the coordinates of the image by the distortion of a vector written in the parent basis.
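The angle/axis extraction and the caveat about non-orthonormal bases can be made concrete with the following sketch (ours, for illustration): the misorientation matrix written in the hexagonal basis is first rewritten in the orthonormal frame with the structure tensor, after which the usual trace and eigenvector formulas apply.

```python
import numpy as np

def misorientation_angle_axis(T_hex, H):
    """Angle (deg) and Cartesian axis of a misorientation matrix T_hex that is
    expressed in the hexagonal basis.  R = H T H^-1 is the same operator in an
    orthonormal frame; the trace gives the angle and the eigenvector with
    eigenvalue +1 gives the rotation axis."""
    R = H @ T_hex @ np.linalg.inv(H)
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    eigval, eigvec = np.linalg.eig(R)
    axis = np.real(eigvec[:, np.argmin(np.abs(eigval - 1.0))])
    return angle, axis / np.linalg.norm(axis)
```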
Construction of the distortion, misorientation and correspondence matrices
The crystallographic features of a twin model are determined by the choice of a supercell. This supercell defines a sub-lattice of the hexagonal lattice, and it is actually this sub-lattice that is linearly distorted by the distortion D^(p→t); the atoms inside the supercell do not follow the same trajectories as those at the corners of the cells: they "shuffle". As the matrices formed by the crystallographic directions of the supercell (before and after distortion) are constituted by integer values, and as the inverse of an integer matrix is a rational matrix, the correspondence matrix is a rational matrix.
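A short sketch of this construction is given below, using exact rational arithmetic to make the point about integer supercell matrices explicit; the numerical matrices are placeholders for illustration only, not the actual supercell of the paper.

```python
from sympy import Matrix

# Columns: supercell vectors indexed in the parent hexagonal basis, and the
# same (distorted) vectors indexed in the twin hexagonal basis.  Both are
# integer matrices; the values below are placeholders.
M_parent = Matrix([[2, 0, 0], [0, 1, 0], [0, 0, 1]])
M_twin = Matrix([[2, 0, 1], [0, 1, 0], [0, 0, 1]])

# Correspondence matrix: an integer matrix times the inverse of an integer
# matrix, hence exactly rational.
C = M_twin * M_parent.inv()
print(C)
```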
Obliquity correction
It is usual in the crystallographic models of ferroelectrics to introduce an obliquity correction. This is a rotation with a small angle (a few degrees) that is composed with a stretch distortion matrix in order to transform it into a simple shear matrix. An obliquity correction can be introduced to correct a small tilt of a plane and/or a small rotation of a direction. Here we need to introduce a general obliquity correction function Obl(u, u', g, g'). This function gives the rotation matrix, noted Obl, that brings the direction u onto u' and the plane g onto g'. Let us consider a direction u and a plane g expressed in the hexagonal basis. Expressed in the orthonormal basis they are H u and H* g, respectively. In this basis the plane has the same coordinates as its normal direction n.
It is a rotation matrix, expressed in the orthonormal basis, that transforms (u, g) into (u', g'). This rotation should be compensated by its inverse in order to put the plane in coincidence with the plane g', and the direction in coincidence with the direction u'.
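A possible numerical implementation of such an obliquity function is sketched below (our own construction, not the paper's): the rotation is built from the orthonormal triads attached to each (direction, plane-normal) pair, and it exists when the angle between u and g equals the angle between u' and g'.

```python
import numpy as np

def obliquity_rotation(u, u_prime, n, n_prime):
    """Rotation of the orthonormal frame that maps the direction u onto
    u_prime and the plane normal n onto n_prime.  Valid when the angle
    between (u, n) equals the angle between (u_prime, n_prime)."""
    def triad(a, b):
        a = a / np.linalg.norm(a)
        c = np.cross(a, b)
        c = c / np.linalg.norm(c)
        return np.column_stack([a, c, np.cross(a, c)])
    return triad(u_prime, n_prime) @ triad(u, n).T
```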
Definition of unconventional twinning
We call a conventional twin a twin whose lattice distortion is expressed by a simple shear matrix. The habit plane of these twins is the shear plane, which is also the plane maintained fully invariant by the shear distortion. This means that, for two non-collinear directions u and v of the plane g, the distortion verifies D u = u and D v = v (equation 12). If the plane g is invariant, it is untilted. Therefore, a consequence of the existence of an invariant plane is that the plane g is an eigenvector of the reciprocal distortion matrix (D^(p→t))* (equation 13).
It should be noted that equation (12) implies equation (13), but the reciprocal is not always true.
By noting the plane by its Miller indices g = (h, k, l), and considering the interplanar distances before and after the distortion, the volume change is completely given by the ratio of these interplanar distances, since the plane itself is invariant. If this ratio equals 1, there is no volume change and the shear is called a "simple shear". In the more general case, the distortion is sometimes called an "invariant plane strain" (IPS) rather than a "shear", in order to distinguish it from a pure shear (stretch). To our knowledge, all the deformation twins reported in the literature till now are simple shears.
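For reference, a general invariant plane strain can be written in the standard continuum-crystallography form below; this expression is textbook notation and is not taken from the paper.

\mathbf{D} = \mathbf{I} + s\,\hat{\mathbf{d}}\otimes\hat{\mathbf{n}}, \qquad \det\mathbf{D} = 1 + s\,(\hat{\mathbf{d}}\cdot\hat{\mathbf{n}})

Here n̂ is the unit normal of the invariant plane, d̂ the unit displacement direction and s the magnitude; when d̂ is perpendicular to n̂ the determinant equals 1 and the distortion is a volume-conserving simple shear, otherwise the determinant directly gives the volume change.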
In the manuscript, we call an unconventional twin a twin defined by a distortion matrix for which a plane is untilted but not invariant. Mathematically, it means that the distortion matrix satisfies equation (13) but not equation (12). The untilted plane is transformed into a plane that is not equivalent to the initial one by any of the crystal symmetries; some of the directions contained in the plane are modified in length and/or angle. To our knowledge, unconventional twinning has never been reported till now.
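The distinction between an invariant plane (conventional twin) and a merely untilted plane (unconventional twin) can be checked numerically as in the sketch below (ours), which follows equations (12) and (13): plane normals transform with the inverse-transpose of the distortion, while in-plane directions transform with the distortion itself.

```python
import numpy as np

def classify_plane(D, n, u, v, tol=1e-8):
    """For a distortion D (orthonormal frame), a plane of normal n and two
    non-collinear in-plane directions u, v:
    - invariant (eq. 12): D u = u and D v = v  -> conventional (shear) twin
    - untilted  (eq. 13): the image of n under inverse(D).T stays parallel
      to n, even if in-plane lengths or angles change."""
    invariant = np.allclose(D @ u, u, atol=tol) and np.allclose(D @ v, v, atol=tol)
    n_image = np.linalg.inv(D).T @ n
    untilted = np.allclose(np.cross(n_image, n), 0.0, atol=tol)
    return invariant, untilted
```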
The correspondence matrix is calculated by considering the supercell vectors and their images, written in their respective hexagonal bases, i.e. by using the supercell defined above. The misorientation matrix is given by equation (5); it is a rotation whose angle depends only on the packing ratio γ and is equal to 58.5° for hard-sphere packing and 58.4° for magnesium.
Some correspondences between planes and directions of the parent and its twins, calculated from the correspondence matrices in equation (17), are useful for interpreting the EBSD map. They are given in Table 1.
Table 1. Correspondences between planes and directions of the parent crystal and its (58°, a+2b) twin (parent → twin).

From this table, we tried two different approaches to build a model that could explain the green twins observed in the experimental EBSD maps. The first approach was the most intuitive one; it is based on the fact that the direction OY = [120]p is invariant in the stretch twin model (see Supplementary Table 1). However, after many attempts, this route was given up because all the habit planes we could predict contain the OY direction, which is not in agreement with the observations. A dissymmetry had to be introduced in the system. The second approach was less intuitive, but it turned out to fit perfectly with the observations, even for small details that were not noticed at the beginning. It is based on the correspondence between the (212) and (012) planes, and between the [02̄1] and [2̄2̄1] directions (Supplementary Table 1). The model, described in the next section, introduces an obliquity correction such that the plane g0 = (212) becomes untilted and the direction u0 = [02̄1] invariant.
Unconventional twin derived from the (58°, a + 2b) stretch twin prototype
The calculations were performed with Mathematica (see Supplementary Data Part B).
The EBSD map shows that the habit plane of the green twin is not invariant; it is the plane g0 = (212)p, which is transformed into the (012) plane of the twin. The obliquity-corrected distortion was calculated for the hard-sphere packing ratio γ = √(8/3) and for magnesium. As the corrected distortion differs from the prototype distortion only by the obliquity correction, the correspondence matrix given by equation (17) is not affected. The distortion is unconventional, as the untilted plane (212)p, which is also the habit plane of the green twin, is not fully invariant but is transformed into the plane (012) of the twin. The modes of plasticity required to accommodate this deformation are not the subject of the paper, but it is hoped that deeper TEM investigations and molecular dynamics simulations can bring important elements of response.
Another way to distinguish the twin generated by the prototype distortion from the one generated by the obliquity-corrected distortion is to consider an equivalent rotation whose rotation axis is obtained by using a 6-fold rotation symmetry of the hexagonal lattice, written in the hexagonal basis with its Seitz symbol.

The experimental EBSD maps show that the extension "yellow" twins are often co-formed with the "green" twins and constitute green-yellow "stripes", such as the one in the green rectangle of Fig. 1a. In the EBSD map acquired on cross-section B, the yellow twins can also appear orange or red, as shown in Extended Data Fig. 1. The striking point is that these "yellow" twins are conventional twins of the parent "grey" crystal: their habit plane is the (212)p plane, and this plane is common to both the parent and the "yellow" crystal. The misorientation between the "yellow" twins and the parent "grey" crystal, experimentally measured from the EBSD maps, is a rotation of 48° around an axis close to a 〈241〉 direction, as shown in Fig. 1b and c. To the best of our knowledge this twin has never been reported or predicted, which means that, even if conventional, there is not yet a crystallographic model for it. In order to build such a model, additional information is required. We noticed that the misorientation between the yellow twins and the green twins is close to (86°, a), with an interface plane close to {102}, which means that the yellow and green twins are linked by a kind of extension twin relation, or by a twinning relation close to that one.
The crystallographic model of (86°, a) extension twinning in hcp metals was proposed by correcting the obliquity of a (90°, a) prototype, the (86°, a) misorientation being the one observed in the EBSD maps. Before detailing the obliquity correction that will be applied to the conventional extension twin, let us determine the appropriate reference frame that should be used to express the extension twinning distortion matrix in order to compose it with the green twin.
The conventional (86°, a) twin in an adequate basis
The calculations were performed with Mathematica (see Supplementary Data Part C).
In order to build the unconventional yellow twin derived from a conventional (86°, a) extension twin, we quickly recall some of the crystallographic details of this twin. The (86°, a) extension twin described in the paper 21 is an extension twin on the plane (01̄2)p. This twin was shown to derive from a stretch prototype; after the obliquity correction, the distortion matrix becomes a simple shear (equation (32)). For the ideal hard-sphere packing ratio, the obliquity ξ takes a value of a few degrees. The distortion matrix (32) generates the conventional (86°, a) twin for which the invariant plane is (01̄2)p. In order to continue working with coherent coordinates in the system formed by the "green", "yellow" and "grey" crystals, we need to use an extension twin such that, once combined with the green twin distortion (23), it yields a conventional twin on the (212) plane. A hexagonal symmetry is thus introduced; its choice will be justified a posteriori by the internal coherency of the calculations and by the perfect agreement with the experimental EBSD observations. This internal symmetry, noted by its Seitz symbol, allows establishing the distortion matrix of the (102) extension twin from that of the (01̄2) extension twin given in equation (32), as written in equation (34). To be clearer, we have used in equation (34) a notation that specifies that the parent crystal is the green grain and that the yellow grains are linked to it by an extension twin (even if not yet corrected by the obliquity): the parent index "p" is here "gr" and the twin index "t" is "y".
The correspondence matrices in the direct and reciprocal spaces are given in equation (35). Now that the appropriate basis has been found to express the conventional extension twin, the additional obliquity correction required to get the planar distortion (012)gr → (212)y without tilt can be determined.
The unconventional twin derived from the (86°, a) twin prototype
The calculations were performed with Mathematica (see Supplementary Data Part D).
The extension twin (34) leaves the plane (102)gr and the direction [2̄2̄1]gr invariant, and it transforms the plane (012)gr into the plane (212)y by the correspondence matrix (35), but this plane is tilted. Now, we will build, by obliquity correction of the conventional extension twin (34), an unconventional twin such that the plane (012)gr is transformed into the plane (212)y without tilt, and such that the direction [2̄2̄1]gr becomes invariant. This twin, when composed with the unconventional "green" twin, will give a conventional twin relative to the "grey" parent crystal. In order to determine the obliquity matrix, one could directly apply the general function (21), but we noticed that correcting the obliquity of the plane g = (012)gr is sufficient to also correct the obliquity of the direction [2̄2̄1]gr, as detailed in what follows.
Composition of the correspondence matrices
The first correspondence C^(p→gr) is followed by the second correspondence C^(gr→y). Their composition is simply the product of the two matrices.
Composition of the distortion matrices
The first distortion D^(p→gr) is followed by the second distortion D^(gr→y). It is necessary to work in the same basis to compose these active matrices. The resulting shear direction is [5̄4̄7] in the parent basis, which in four-index notation is of type 〈123̄7〉. The shear amplitude is given by its norm, which can be calculated directly from the composed distortion matrix and depends only on the packing ratio γ (equation (48)). In the case of the hard-sphere packing ratio, the analytical expression of the distortion matrix takes rational values, and the shear value along the direction [5̄4̄7] is s = (1/24)√(7/2) ≈ 0.078. | 2019-04-09T13:02:36.321Z | 2017-07-03T00:00:00.000 | {
"year": 2017,
"sha1": "050bb9f758a0102ae66777668ec4b4d3a0d1c698",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1707.00490",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "050bb9f758a0102ae66777668ec4b4d3a0d1c698",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
237327786 | pes2o/s2orc | v3-fos-license | Synthesis of Ag Nanoparticles-Decorated CNTs via Laser Ablation Method for the Enhancement of the Photocatalytic Removal of Naphthalene from Water
Silver nanoparticles (Ag NPs) were decorated in different amounts on the exterior walls of carbon nanotubes (CNTs) by a laser-ablation-assisted method carried out in liquid media, to be applied as a good adsorption material against naphthalene. The laser ablation time controlled the amount of Ag NPs decorating the CNTs. The prepared nanocomposite was analyzed via different analytical techniques. Ag NPs with a small size distribution around 29 nm are uniformly decorated, with a spherical shape, on the CNTs walls. The degree of disorder of the tubular structure and the shifting of the vibrational characteristic peaks increase with the increase in the decoration with Ag NPs. The prepared samples were then investigated for the removal of naphthalene. These studies of loading Ag NPs in different amounts on the surface of CNTs show that the composite acts as a promising material for water treatment.
Introduction
Water is a vital component of the world and essential for our life. Water pollution is mostly caused by industrial activity and climate change, and it occurs when toxins are left untreated or are not removed from the environment. Organic dyes, which are present in the industrial wastewater from the manufacturing of paper, cosmetics, and textiles, are major water contaminants. Organic dyes containing benzene rings are harmful to health, so treatment processes for contaminated wastewaters are needed. There are numerous techniques for extracting contaminants and organic dyes from all types of water. Among these methods, the catalytic degradation of organic pollutants has been proven to be one of the most successful and cost-efficient strategies for dealing with environmental pollution [1][2][3][4][5].
Carbon nanotubes (CNTs) have captivated attention since their discovery because of their unique structure and characteristics, which include a high active surface.

The CNTs were treated by an oxidation process (ratio of 3:1 by volume) to create functional groups on their tubular outer surface. They act as active sites on the CNTs, allowing them to catch the nanoparticles in the aqueous solution.
The nanocomposite structure can be synthesized via the decoration of nanoparticles on the CNTs' outer surface thanks to the presence of these functional groups.
Preparation of Ag/CNTs Nanocomposite by PLAL
The pulsed laser ablation method is based on a nanosecond pulsed laser (Continuum laser, PL9000, Santa Clara, CA, USA) with a laser energy of 120 mJ, a pulse duration of 7 ns, and a laser wavelength of 1064 nm. The laser beam was focused and directed onto the upper surface of the silver plate immersed in the liquid solution (functionalized carbon nanotubes) by a 7 cm plano-convex lens, forming a beam with an effective spot size of about 0.78 mm in diameter. The ablation time was used to change the generated amount of Ag NPs, which were directed to decorate the tubular surface of the CNTs, as shown in Figure 1. In the PLAL method, the laser beam directed onto a very small cross-section of the target surface delivers highly energetic pulses under a very special condition: the ablation takes place inside a liquid medium, which produces a laser-induced plasma confined by the liquid layer. The resulting plasma-induced pressure generates positively charged tiny particles approaching the nanoscale, which have a high capability to interact with functionalized materials (e.g., functionalized CNTs) suspended in the surrounding medium during the ablation process. Thus, for the ablation of an Ag plate immersed in a liquid medium containing functionalized CNTs, charged Ag nanoparticles were generated and dispersed in the functionalized CNTs solution. The cationic charges on the outer surface of the metallic Ag NPs are attracted by the anionic active sites on the tubular surface of the CNTs to form CNTs decorated with Ag NPs. Therefore, without the functionalization process of the CNTs, the decoration process could not be accomplished [23].
Determination of Ag Concentration on the Prepared Nanocomposite
The cellulose filter paper was used as a substrate in a filtration system connected to a vacuum pump to collect the CNTs with their decorated Ag NPs, while the solution containing the unloaded Ag NPs passed through the pores of the filter paper. The CNTs paste was then collected and washed with ultra-pure water to ensure that only the Ag NPs attached to the CNTs remained, followed by re-filtering and drying at 50 °C; the collected CNTs were then washed, re-filtered, and re-dried once more. To estimate the total amount of Ag nanoparticles decorating the CNTs, the prepared nanocomposites obtained with the different ablation times were immersed in an acidic solution of 15 mL of nitric acid at a concentration of 70%. Then, the Ag+ concentration was measured by AAS. Since the Ag nanoparticles are present only on the outer surface of the CNTs, the loading amount of Ag in the CNTs (W_Ag) was calculated by the following expression [34,35]: W_Ag = (C_Ag+ × V)/M, where C_Ag+, V, and M are the concentration of Ag+ in the solution (measured by AAS), the volume of the solution (250 mL), and the weight of the Ag NPs/CNTs after complete drying (5 mg), respectively. The concentration of Ag nanoparticles corresponding to the different ablation times was determined by AAS (Table 1).
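Assuming the loading expression given above, the calculation can be reproduced with the short sketch below; the example concentration is hypothetical and only illustrates the unit handling.

```python
def ag_loading_fraction(c_ag_mg_per_l, volume_l=0.250, composite_mass_mg=5.0):
    """Loading fraction of Ag in the dried composite, W_Ag = C_Ag+ * V / M,
    with C_Ag+ from AAS (mg/L), V the solution volume (L) and M the dry
    composite mass (mg)."""
    return c_ag_mg_per_l * volume_l / composite_mass_mg

print(f"{ag_loading_fraction(5.0):.0%}")   # hypothetical 5 mg/L -> 25 % loading
```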
Determination of the Total Amount of Generated Ag NPs Using the PLAL Technique
The generation of Ag NPs via the PLAL method was based on the ablation process of the Ag plate; the whole process was carried out inside a closed system (vial or beaker) without any by-products, and the amount of Ag NPs was related to the weight loss of the main precursor (the Ag plate). Therefore, the total amount of generated Ag NPs (loaded on the CNTs and free NPs) was determined by the following equation: W total Ag NPs = W Ag plate before PLAL − W Ag plate after PLAL, where W total Ag NPs, W Ag plate before PLAL, and W Ag plate after PLAL are the total amount of silver nanoparticles generated during the PLAL process, the weight of the Ag plate before the ablation process, and the weight of the Ag plate after the ablation process, respectively. Knowing the volume of the solution (10 mL), the concentration of generated Ag NPs can be simply calculated, as tabulated in Table 1. These results are compatible with the AAS analysis of the produced nanocomposite without a filtration process; the nitric acid helped to collect all loaded and unloaded Ag NPs so that they could be determined as the total generated Ag NPs.
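The mass-balance reading of this equation is illustrated below; the input plate masses are hypothetical placeholders.

```python
def total_generated_ag(mass_before_mg, mass_after_mg, volume_ml=10.0):
    """Total mass of Ag nanoparticles generated by the ablation (loaded on the
    CNTs plus free particles), taken as the weight loss of the Ag plate, and
    the corresponding concentration in the ablation liquid."""
    generated_mg = mass_before_mg - mass_after_mg
    concentration_mg_per_l = generated_mg / (volume_ml / 1000.0)
    return generated_mg, concentration_mg_per_l

print(total_generated_ag(250.00, 249.60))   # hypothetical weights in mg
```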
Investigation Techniques
The crystalline structure was obtained with an X-ray diffractometer (Shimadzu 7000, Tokyo, Japan). The qualitative and quantitative analyses were determined by energy dispersive X-ray spectrometry connected to a field emission scanning electron microscope (Quanta FEG 250, FEI, Brno-Černovice, Czech Republic). The catalytic reduction performance was monitored with a UV-Visible spectrophotometer (JASCO, 570, Tokyo, Japan). The defects of the carbon nanotubes were characterized with the WITec alpha 300 R confocal Raman spectrometer. The chemical composition was determined by Fourier-transform infrared spectroscopy (JASCO 6100 spectrometer, Tokyo, Japan). The thermal combustion analysis was carried out with a Perkin Elmer TGA thermogravimetric analyzer. The concentration of Ag NPs was detected by atomic absorption spectroscopy (Perkin-Elmer AAnalyst 100, Waltham, MA, USA).
Catalytic Degradation Application Study
The studied adsorbent materials were added at a concentration of 0.2 g/L to a 40 mg/L naphthalene aqueous solution at room temperature (37 °C) and stirred for about 10 min in the dark before light exposure, to make sure that the adsorbent surfaces and the naphthalene molecules had interacted. UV-vis absorption spectra were used to determine the residual naphthalene content in the water. Furthermore, the disappearance of the naphthalene smell from the solution indicated its removal. The photocatalytic activities were evaluated in a beaker with a magnetic stirrer under a 16 W UV light. Samples were taken at regular intervals to follow the photocatalytic elimination of naphthalene, and the degradation efficiency of naphthalene was measured using a spectrophotometer.

Study of Nanocomposite

The diffraction peaks of the nanocomposites, including the main (1 1 1) reflection of silver, are indexed according to JCPDS No. 04-0783 [36]. These results proved that the ablation of the Ag plate in the f-CNTs solution produces the cubic structure of Ag NPs on the graphite structure of the CNTs. Also, the intensity of the main crystalline peak (1 1 1) indicates an increase in the amount of Ag NPs decorating the tubular surface of the CNTs. This behavior can be detected by comparing the intensity of the main characteristic (1 1 1) reflection with the main characteristic peak of the CNTs, which corresponds to the (0 0 2) interplanar indices. So, as the ablation time increases, the amount of Ag NPs decorating the CNTs' surface increases. Furthermore, the particle sizes of the Ag NPs and their changes can be determined by the Scherrer equation [37,38]:
D = kλ/(β cos θ), where β, θ, k, and λ are the broadening of the selected diffraction line, the diffraction angle, the shape factor, and the wavelength of the X-ray source, respectively. The average crystallite size obtained from the (1 1 1) plane was about 29 nm for all prepared nanocomposites. This measurement confirmed that the ablation time does not appreciably change the particle size of the Ag NPs, but it has a significant effect on the amount of generated Ag NPs: longer ablation mainly generates extra nanoparticles without any remarkable change in their size.

FT-IR and Raman studies were used to investigate the interaction between the Ag NPs and the functional groups on the outer surface of the functionalized CNTs. Figure 3a shows the FT-IR spectra of the utilized CNTs and their nanocomposites with different amounts of Ag nanoparticles. In the case of the CNTs, the absorption band at 3433 cm−1 corresponds to the -OH stretching vibration. Also, the main characteristic peak of the functionalized carbon nanotube skeleton was observed around 1642 cm−1, related to the C=C bonding of the aromatic rings. The carbonyl group of the carboxylic functional group (-COOH) was detected around 1700 cm−1, confirming the effect of functionalization on the CNTs' tubular outer surface. Besides, the stretching vibrational band at 920 cm−1 is related to the C-O group of carboxylic acid. These FT-IR data indicate that the oxidation treatment successfully added carboxyl and hydroxyl functional groups to the outer surface of the CNTs [39][40][41][42]. In the case of the decoration with Ag NPs, the interactions between silver and the hydroxyl groups of the CNTs are responsible for the changes in the intensity and positions of the peaks in the range from 1400 cm−1 to 1000 cm−1. Besides, the stretching vibration of the C-Ag bond appears at 594 cm−1. These data confirm the interaction [43,44].
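For completeness, the Scherrer estimate can be reproduced with the sketch below; the Cu Kα wavelength and the shape factor k = 0.9 are assumed defaults, since the instrument wavelength is not restated here, and the peak parameters are placeholders only.

```python
import numpy as np

def scherrer_size_nm(beta_fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = k * lambda / (beta * cos(theta)), with beta the
    line broadening (FWHM) converted to radians and theta the Bragg angle."""
    beta = np.radians(beta_fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * np.cos(theta))

print(scherrer_size_nm(0.30, 38.1))   # placeholder FWHM and 2-theta values
```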
The crystallinity and structural changes of the carbon framework of the CNTs before and after decoration with Ag NPs were investigated using Raman spectroscopy, as shown in Figure 3b. In these spectra, the CNTs sample has strong peaks around 1338 cm−1 (D band) and 1583 cm−1 (G band), whereas the Ag/CNTs nanocomposite sample has the equivalent peaks at 1322 cm−1 and 1594 cm−1. The D band represents edges, various defects, and disorder in the carbonic structure, arising from the vibration of sp3-hybridized carbon in the CNTs skeleton and impurities, whereas the G band comes from the zone mode of the CNTs structure and is assigned to the ordered sp2 hybridization of the C atoms [45,46]. Compared to the CNTs, the Ag NPs/CNTs nanocomposites show a considerable frequency shift of the D band toward lower wavenumber (approximately 30 cm−1) due to the partial reduction of the CNTs by the Ag NPs decorating their external tubular surface; this result reveals a high level of disorder in the graphene layers of the CNTs structure, as well as an increase in the number of defects, confirming the interaction between the external surface of the CNTs and the Ag NPs. Furthermore, when Ag NPs decorate the CNTs structure, the intensity ratio of the D band to the G band (I_D/I_G) increases, which is related to the increasing degree of disorder caused by the decoration of the CNTs with Ag NPs [47][48][49][50]. The values of this ratio were approximately 1.19, 1.41, 1.46, and 2.22 for CNTs, Ag NPs/CNTs (1), Ag NPs/CNTs (2), and Ag NPs/CNTs (3), respectively. Figure 3c depicts the absorption properties of the produced samples. The CNTs have a distinct peak at 240 nm, which can be attributed to the π-π* transitions of the aromatic C-C bonds; this peak is linked to a small perturbation in the wavelength region from 200 nm to 250 nm. The Ag/CNTs nanocomposite sample has a spectrum comparable to that of the CNTs, with the primary absorption peak of Ag/CNTs at 240 nm. The absorption peak of the Ag NPs/CNTs nanocomposite became weaker and narrower than that of the CNTs sample, which can be attributed to enhanced scattering of shorter wavelengths, possibly by the crystallite structure of the Ag nanoparticles. In addition, a new peak appears around 450 nm, which can be attributed to the plasmonic structure of the noble metal (Ag NPs). The intensity of this characteristic peak increases as the ablation time increases, in relation to the increase in the amount of Ag NPs decorating the surface of the CNTs [51,52]. Furthermore, the energy bandgap of the prepared samples may be computed from these absorption spectra [53]. The morphology of the Ag NPs/CNTs (1) nanocomposite structure was studied using TEM, as shown in Figure 4.
The TEM image shows that the CNTs have a smooth surface with a diameter of around 34 nm, whereas the Ag NPs have a spherical form with a diameter of 23 nm. Furthermore, the decoration of Ag NPs on the external surface of the CNTs has no effect on the structure of the Ag NPs, and the CNTs are evenly dispersed throughout the composite. As shown in Figure 5, energy dispersive X-ray spectroscopic analysis was used to identify the chemical composition of the prepared nanocomposite structure. The elemental analysis was performed on a selected area (1 µm × 1 µm) at the 500 nm scale bar via scanning electron microscopy. In the case of the CNTs, the constituent elements are only C and O, while in the case of the decoration of Ag NPs on the functionalized CNTs, the spectrum shows the characteristic elements Ag, C, and O with different percentages depending on the ablation time. The percentage changes of these constituent elements are shown in that figure. From this study, it is clear that as the ablation time increases, the amount of Ag increases and the amounts of C and O decrease. Also, no elements other than those of the main CNTs structure and Ag appear. Figure 6 shows the TGA curves of the CNTs and their decoration with various amounts of Ag nanoparticles. The significant mass loss in the CNTs curve from 580 °C to 710 °C is related to the decomposition of the carbon structure of the CNTs. In the region up to 500 °C, there is a gradual and moderate mass loss of about 4% due to the elimination of the oxygen functionalities generated during the functionalization process. The CNTs almost burn out around 800 °C, with very little remaining weight. The mass remaining at temperatures higher than 800 °C is related to the presence of the decorated Ag NPs. The overall mass loss of Ag/CNTs up to 800 °C is reduced by up to 85%, 68%, and 50%, respectively, corresponding to the different ablation times in minutes (10, 20, and 30). The loading percentages of Ag NPs were related to the ablation time (25%, 37%, and 40%) [54][55][56][57].
The sample obtained from the ablation of the Ag plate in f-CNTs consists of Ag NPs and CNTs in the form of CNTs decorated by Ag NPs. The presence of CNTs and Ag NPs in the prepared nanocomposite was revealed via XRD analysis, the UV-VIS absorption study, TGA analysis, and EDX analysis, while the decoration of the CNTs by Ag NPs was confirmed by FT-IR, Raman, and TEM.
From the XRD analysis, the synthesized colloidal nanocomposites from the ablation of the Ag plate immersed in f-CNTs consist only of the graphite structure of the CNTs and metallic Ag NPs in the cubic phase. Also, the ablation time does not make any remarkable change in the particle size of the prepared Ag NPs, but it increases the amount of Ag NPs with respect to the CNTs structure. From the UV-VIS absorption study, a new additional peak appears in the visible region, which can be related to the plasmonic structure of the noble metal (Ag NPs); its intensity increases as the ablation time increases, related to the increase in the amount of Ag NPs with respect to the CNTs. From the EDX analysis, the chemical identification of the prepared samples shows the presence of the characteristic elements Ag, C, and O with different percentages for the decoration of Ag on f-CNTs, while the Ag element is absent before starting the ablation process. From the TGA analysis, the interaction between them is confirmed by the mass remaining above 800 °C, which is related to the presence of the ablated Ag NPs. From the FT-IR analysis, there is a slight shift of the functional-group bands in the range from 1400 cm−1 to 1000 cm−1, related to the interactions between silver and the hydroxyl groups present on the outer surface of the f-CNTs; besides, the appearance of the fingerprint peak of the C-Ag bond confirms the interaction between them. This analysis can therefore be considered the first proof of the interaction between the Ag NPs and the CNTs. From the Raman analysis, the increase in the number of defects in the structure of the CNTs confirms the interaction between the external surface of the CNTs and the Ag NPs; this analysis can be considered the second proof of the interaction. From the TEM image, the Ag NPs successfully decorate the external surface of the CNTs without any effect on the structure of the Ag NPs; this analysis can be considered the third proof of the interaction between the Ag NPs and the CNTs.
Catalytic Activity
The adsorption performance of the studied nanocomposites with different amounts of silver was examined, relative to the CNTs, for the removal of naphthalene under light irradiation at room temperature (37 °C). The degradation efficiency of the studied samples against naphthalene is depicted in Figure 7a. In this figure, the UV-visible investigation shows the main characteristic transition peak of naphthalene, which appears at 275 nm [29,58]. The mixed solutions prepared after adding the tested samples were then irradiated, and the changes in the naphthalene spectra were followed. The effect of the irradiation time on the mixed solutions with different amounts of decoration, up to complete adsorption removal, is shown in Figure 7b. The photocatalytic degradation efficiency of naphthalene was computed as η(%) = (A_i − A_f)/A_i × 100 [59], where A_f and A_i are the values of the naphthalene absorbance after and before irradiation for a time t. From this figure, it is clear that the CNTs alone did not produce any change in the degradation of naphthalene, whereas as soon as Ag NPs are present in the structure as a decorating material, the naphthalene concentration starts to be degraded. If the Ag NPs exceed a certain amount of decoration with respect to the number of CNTs, the adsorption performance starts to decrease, because the CNTs become brittle if they hold more than a certain amount [60]. Furthermore, this figure was used to create a kinetic model for the total adsorption and photocatalytic degradation of naphthalene in water solutions under light irradiation, which were investigated synchronously under the same experimental conditions by varying the concentrations. The impact of the initial naphthalene concentration on the adsorption/photodegradation rate using the CNTs and their decoration with various amounts of Ag NPs can be seen in the decrease of the naphthalene concentration during the irradiation time (Figure 8). Furthermore, the reaction was almost complete for each starting concentration of naphthalene using the CNTs and their samples decorated with Ag NPs. After 80 min of light irradiation, the remaining naphthalene concentrations for CNTs, Ag NPs/CNTs (1), Ag NPs/CNTs (2), and Ag NPs/CNTs (3) were found to be 36.6 mg/L, 31.6 mg/L, 10.7 mg/L, and 26.6 mg/L, respectively.
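Assuming the efficiency expression above, its evaluation is a one-liner; the absorbance values in the example are hypothetical.

```python
def degradation_efficiency(a_initial, a_final):
    """Photocatalytic degradation efficiency (%) of naphthalene from the
    absorbance at 275 nm before (a_initial) and after (a_final) irradiation."""
    return (a_initial - a_final) / a_initial * 100.0

print(degradation_efficiency(0.82, 0.22))   # hypothetical absorbances -> ~73 %
```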
The two processes (adsorption and photocatalysis) involved in the degradation of naphthalene may occur simultaneously and can be investigated using first-order, second-order, or Langmuir-Hinshelwood kinetic models [61]. The first process is related to the adsorption of naphthalene, which happens as a result of a process defined by the first-order kinetic constant k_1. The second process is related to the photocatalytic degradation of naphthalene; the rate of photodegradation is characterized by the first-order kinetic constant k_2, where C_i, C_t, and C_eq are the concentrations of naphthalene at the beginning, at any time during the reaction process, and at equilibrium, respectively [62]. Figure 9 presents the kinetic modeling of the prepared samples against naphthalene, and the calculated kinetic constants are tabulated in Table 2. From these data, the overall rate is limited by the sluggish photodegradation step, so the adsorption behavior has a significant impact on the whole process [61]. Figure 10 shows the photocatalytic degradation of naphthalene and the efficiency and durability of Ag NPs/CNTs (2) as a reusable catalyst. Once the nanocomposite catalyst was prepared and utilized for the first degradation procedure, it was collected, recovered, and reused up to 10 times for photocatalytic degradation tests. From this figure, the efficiency of 66% degradation of naphthalene in 90 min was lowered from 100% to 79%. This result shows that the nanocomposite material has a high level of stability. The decrease in efficiency by about 21% after the tenth reuse was due to the nanocomposite being collected and washed with ultra-pure water before being reused.
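As a simple illustration of how an apparent rate constant can be extracted from such data, the sketch below fits a plain first-order law ln(C_i/C_t) = k t to a hypothetical concentration series; this is only one reading of the kinetic treatment, since the paper separates an adsorption constant k_1 and a photodegradation constant k_2.

```python
import numpy as np

t = np.array([0.0, 20.0, 40.0, 60.0, 80.0])   # irradiation time (min)
c = np.array([40.0, 29.5, 21.0, 14.8, 10.7])  # hypothetical C_t values (mg/L)

# Apparent first-order constant from the slope of ln(C_i / C_t) versus t.
k_app = np.polyfit(t, np.log(c[0] / c), 1)[0]
print(f"apparent first-order constant: {k_app:.4f} min^-1")
```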
Mechanism of Catalytic Performance
The degradation of naphthalene was concluded to result from the combined influence of visible light irradiation and the addition of the Ag NPs/CNTs photocatalyst. The oxidation of the CNTs provides numerous functional groups (such as hydroxyl, carboxyl, and carbonyl) on the tubular surface of the CNTs, as discussed before. These functional groups serve as active locations assisting the CNTs in adsorbing Ag NPs to form the Ag NPs/CNTs nanocomposite. Then, analogously to semiconductor-metal junctions, the Ag NPs/CNTs nanocomposites are built, and the Ag NPs act as electron acceptors [63], as shown in the schematic diagram of Figure 11.
Once the prepared nanocomposite material is introduced into the naphthalene solution, a number of interactions take place. Due to the significant coupling between the CNTs and naphthalene, a substantial number of naphthalene molecules are adsorbed on the tubular surface of the CNTs. The excited state of the naphthalene molecules can be triggered by visible light irradiation. The electrons move from the excited naphthalene to the CNTs because of the high electron affinity of the CNTs. The CNTs allow the electrons to travel freely without being scattered by external defects. When the moving electrons reach the locations of the Ag NPs, they get trapped. The adsorbed oxygen atoms are transformed into superoxide anion radicals (O2−) and hydroxyl radicals (OH·) by the electrons collected on the Ag NPs. As a result, these active oxygen species degrade naphthalene [64,65]. Figure 11. Schematic diagram of the photocatalytic degradation of naphthalene by Ag NPs/CNTs.
Conclusions
In this work, we used laser ablation to demonstrate a straightforward approach for producing Ag-decorated CNT nanocomposites, with different amounts of decoration obtained by tuning the laser ablation time, to be applicable for the treatment of water containing hazardous organic compounds such as naphthalene. The nanocomposite structure was created via the laser ablation method using an Ag plate with f-CNTs inside liquid media. The characterization methods showed that the vibration characteristic peaks of CNTs shifted due to the interaction of silver with the graphite layers of CNTs. The qualitative analysis did not show any element other than C, O, and Ag. The diffractogram spectra showed the presence of interplanar indices of the graphite and silver structures without any change in their crystallinity. The degree of disorder rises on increasing the decoration of Ag NPs, and the mass remaining at temperatures higher than 800 °C is due to the presence of the decorated Ag NPs on the CNTs' surface. | 2021-08-28T06:17:20.854Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "6e14f6bd17496cfcde876e9e0a81c0766a124ae1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-4991/11/8/2142/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6cc4bc36dd7cea3492a6a0217b27cd7473709331",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233864552 | pes2o/s2orc | v3-fos-license | The imprint of primordial gravitational waves on the CMB intensity profile
We use the induced geometry on the two dimensional transverse cross section of a photon beam propagating on a perturbed Friedmann-Robertson-Walker (FRW) spacetime to find the Cosmic Microwave Background (CMB) photon distribution over a telescope's collecting area today. It turns out that at each line of sight the photons are diluted along a transverse direction due to gravitational shearing. The effect can be characterized by two spin-weight-two variables, which are reminiscent of the Stokes polarization parameters. Similar to that case, one can construct a scalar and a pseudo-scalar function where the latter only gets contributions from the tensor modes. We analytically determine the power spectrum of the pseudo-scalar at superhorizon scales in a simple inflationary model and briefly discuss possible observational consequences.
I. INTRODUCTION
After they decouple from the plasma at recombination, the CMB photons move freely along the null geodesics of the curved spacetime. Obviously, the exact geodesic lines are slightly modified by cosmological perturbations as compared to the unperturbed geodesics. In general, this small change can be decomposed into a component along the line of sight causing a redshift in the frequency and a displacement perpendicular to the line of sight yielding lensing effects. The gravitational lensing of CMB photons is well studied, for a review see [1], and the effect has also precise observational signatures [2,3].
Previous studies mostly focus on how the CMB temperature map on the sky is modified by the lensed angular positions of the CMB photons. The deflection angle caused by lensing becomes a pure gradient and the corresponding potential is introduced as the main statistical variable. There is also some work on the lensing shear effect that modifies the so called hot and cold spot CMB ellipticity distribution [4][5][6]. Since the background temperature map is uniform, these are nonlinear effects that appear at the second order. Moreover, they are also dominated by the density perturbations; the influence of the tensor modes is completely negligible. Yet, it is also possible to extract a rotational component of the shear which have contributions only from the gravitational waves [7,8] (this is similar to the B-mode of the CMB polarization) but unfortunately the presumed signal is very small and below the noise level [8][9][10].
The gravitational lensing effects can also be studied using the geodesic deviation equation. In that framework, one calculates the expansion, shear and rotation parameters as the basic geometrical variables of the congruence. In a recent work [11], we have instead determined the induced two dimensional metric on the transverse cross section of a null geodesic beam in a perturbed FRW background. We have shown that the transverse metric does not depend on the slicing and its derivative along the geodesic flow can be decomposed to yield the expansion, shear and rotation. Clearly, the induced metric offers a direct geometrical description of a photon congruence.
Consider the evolution of a CMB photon beam back in time from the moment of its capture by a telescope today to the time of decoupling. The transverse slice corresponding to the telescope collection area is mapped to another slice of the beam at decoupling. The distribution of the trajectories on the initial surface is expected to be uniform since the photons are in local thermal equilibrium. However, the distribution of the photons hitting the telescope surface today would in general be nonuniform because of the gravitational shearing. Each photon trajectory marks a point on a transverse slice of the beam and we call the distribution of these points the intensity profile of the congruence. In [11] we have shown that the CMB intensity profile is characterized by two variables that are reminiscent of the Stokes polarization parameters. In this work, we will elaborate more on these variables; specifically we will construct a pseudo scalar quantity which is only generated by the primordial gravitational waves, similar to the B-mode of polarization.
II. NULL GEODESICS ON PERTURBED FRW
Let us look at the null geodesics on the following perturbed FRW spacetime (1) By definition the tensor mode is traceless δ ij γ ij = 0 and at the moment we do not impose any gauge fixing conditions. In the present study, we will only work in the linear theory, therefore all equations written below must be assumed to be valid up to the first order in perturbations.
To determine the null geodesic trajectories, one can first solve for the tangent vector field on the spacetime obeying Defining the perturbations around the unperturbed field by one can fix δp 0 from p µ p µ = 0 as and solve for δp i so that where η 0 is the present conformal time and we introduce x i η1η2 to be the spatial position on an unperturbed geodesic path This is the unique solution obeying the condition hence p i (x, η 0 ) = l i /a(η 0 ) 2 = l i , where we also set a(η 0 ) = 1. As a result, l i defines the present direction of propagation and the actual line of sight including the lensing effects. Note that the time argument of the fields in the last line in (5) is the present time η 0 . These terms arise since we demand (7) and they vanish when the derivative operator along the unperturbed geodesic ∂ η + l i ∂ i is applied. These equations are worked out for a photon having "unit" energy and the general case can be obtained by scaling p µ → Ep µ . Our results below do not depend on the parameter E and therefore we are not going to introduce it.
It is possible to obtain the Sachs-Wolfe effect using the above equations. The 4-velocity vector of a comoving observer in (1) (obeying u µ u µ = −1) can be found as and the energy of a photon as measured by this observer is given by One can see that where As it was first observed in [12], (10) encodes the Sachs-Wolfe effect if one defines T ∝ ω. Indeed, by applying the derivative along the geodesic trajectory p µ ∂ µ one can see which exactly gives the evolution of the temperature fluctuations along the unperturbed geodesic lines. In [11] we have shown that (12) is valid for any (and not necessarily thermal) distribution function provided one reads the temperature from the average intensity by T 4 ∝ I.
One can obtain the geodesic path x µ (λ) by integrating where λ is an affine parameter. The zeroth component of the above equation can be used to relate λ and the conformal time as Defining the perturbed geodesic After using (4), one can integrate to obtain where δp j is found in (5) and the spatial argument of the functions x 0η η0 stands for (6). Note that (17) obeys δx i (η 0 ) = 0 and thus (15) yields the unique null geodesic path which passes from the spatial position x i 0 at time η 0 along the direction l i .
III. THE GEOMETRY OF THE PHOTON BEAM AND GRAVITATIONAL SHEARING
Eq. (15), where δx i is given in (17), actually describes a family of geodesics parametrized by the constants x i 0 and l i . A photon beam observed at η 0 along direction l i corresponds to a (small) subset in that family. Let ∆x i 0 denote the coordinate difference between two nearby geodesic lines at η 0 . The time evolution of this interval can be found from the solution (15) where ∆δx i (η) is obtained by varying (17) with respect x i 0 . The corresponding physical length is given by the metric At the time of observation, the transverse cross section of the beam can be specified by the vectors m i and n i , where (l i , m i , n i ) forms an orthonormal set with respect to δ ij (the impact of the metric perturbations at that instant is completely negligible). The evolution of the transverse beam cross section can be found by choosing (18), where L is the telescope size. We have checked that the two dimensional metric obtained from (19) (involving the displacements ∆x i 0 = Lm i and ∆x i 0 = Ln i ) exactly agrees with the slicing independent transverse metric obtained from the geodesic deviation equation in [11] (l i had been chosen as the geodesic cotangent at the time of decoupling in [11]).
We now compare the physical lengths of the two transverse directions Lm i and Ln i at the time of recombination η r . Their difference equals La(η r )Q, where we define (20) One can also determine the size difference between π/4 rotated directions (m i +n i )/ √ 2 and (m i −n i )/ √ 2, which can be found as La(η r )U , where (21) The two parameters Q and U , which depend on the directions (l i , m i , n i ), identify the shape of the initial transverse surface at the time of recombination, which has a uniform photon distribution over it. Obviously, while this initial surface evolves to become the (circular) cross section today, the photons are diluted in the direction that expands more compared to the other, see Fig 1. The phase space volume element along a geodesic flow does not change by Liouville's theorem and this leads to the standard rule that gravitational lensing does not modify specific intensity, see e.g. [13]. For the variables Q and U , this result is avoided since these do not directly measure the intensity; instead they are related to the distribution of photons over a transverse surface (which we call the intensity profile of the beam). Obviously, the validity of the particle description is crucial for the observability of this effect. Relying on the photon picture, one can quantify the surface distribution by measuring the energy flux over narrow slits instead of the whole area. For a given wavelength, the slit width must be small enough so that the usual concept of intensity fails (note that intensity is a coarse grained concept in the photon picture). In that case, Q becomes proportional to the energy flux difference between two slits extending along m i and n i directions, see Fig. 2. Likewise, the flux difference between π/4 rotated slits gives U .
One can simplify (20) and (21) to a very good approximation by computing the leading order contributions. From (17), the terms coming from δx i (η r ) can be seen to appear inside single or double time integrals, which give oscillating contributions. The tensor modes in these integrals are negligible compared to the first term in (20) and (21). Using also Ψ −Φ and ignoring the monopole term, one can obtain where The explicit (η r −η ) factor in K ij appears after changing the order of a double time integral.
In (22) the scalar mode contributions involve an oscillating time integral but they are still expected to dominate the power spectra over gravitational waves. Therefore, it is desirable to construct a variable which only depends on the tensor modes. The doublet (Q, U ) rotates by 2α when the tangent vectors (m, n) are rotated by α. Thus they constitute spin-weight-two objects on the sphere. The infinitesimal variations of l i (that respect the constraint l i l i = 1) can be parametrized like where the doublet (δa, δb) has spin-weight −1. The derivative operator (δ 2 /δa 2 − δ 2 /δb 2 , 2δ 2 /δaδb) has spinweight 2 and by applying it on (Q, U ) with Kronecker delta and epsilon tensor contractions, one can obtain spin-weight-zero scalar and pseudo-scalar on the sphere It is easy to see that Φ drops out in B which indeed becomes Just like in the polarization case, E and B represent curl and divergence free field lines, this time, formed by the "eigen-directions" of (Q, U ) on the sphere (the eigendirection at a given point can be defined from one of the vectors of the basis (m, n) in which (Q, U ) becomes proportional to (1, 0), i.e. one has U = 0).
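To make the analogy with CMB polarization drawn above concrete, the following Python sketch performs the standard flat-sky E/B separation of a spin-2 field (Q, U) by rotating its Fourier modes by twice the wave-vector angle. This is only a flat-sky analogue of the spherical, spin-weighted construction used in the text; the function name, grid, and test pattern are illustrative assumptions.

import numpy as np

def flat_sky_eb(Q, U, pixel_size):
    # Split a spin-2 field (Q, U) on a small flat patch into a scalar E map
    # and a pseudo-scalar B map by rotating Fourier modes by 2*phi_k.
    # Flat-sky analogy only; the paper works with spin-weighted operators
    # on the sphere.
    n = Q.shape[0]
    k = np.fft.fftfreq(n, d=pixel_size) * 2.0 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="xy")
    phi = np.arctan2(ky, kx)
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    Ek = Qk * np.cos(2 * phi) + Uk * np.sin(2 * phi)
    Bk = -Qk * np.sin(2 * phi) + Uk * np.cos(2 * phi)
    return np.fft.ifft2(Ek).real, np.fft.ifft2(Bk).real

# Toy example: a pure plane-wave "E-like" pattern should give a negligible B map.
n, pix = 64, 0.01
x = np.arange(n) * pix
X, Y = np.meshgrid(x, x, indexing="xy")
Q = np.cos(2 * np.pi * 5 * X)   # hypothetical input maps
U = np.zeros_like(Q)
E, B = flat_sky_eb(Q, U, pix)
print(np.abs(B).max() / np.abs(E).max())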
IV. THE POWER SPECTRA
We work out the power spectra of these variables at superhorizon scales in a simplified model having only two epochs, inflation and radiation. The scale factor in such a model is given by where η I = −1/ √ H I H 0 , and H I and H 0 are the Hubble parameters at inflation and today, respectively. The form of (27) is fixed by demanding the continuity of the scale factor and the Hubble parameter at η I . The present conformal time can be found from a(η 0 ) = 1 which gives η 0 = 1/H 0 + 2η I . The redshift at recombination is given by and one may take z r 10 3 . Note the following hierarchy η 0 η r |η I |. The mode function of a minimally coupled massless scalar field that is released in its Bunch-Davies vacuum at inflation is given by where µ I k = µ k (η I ), µ I k = µ k (η I ) and prime denotes η derivative.
The tensor perturbation can be expanded in terms of the mode functions where s = 1, 2 and the creation-annihilation operators satisfy the usual commutation relations, e.g. [a k , a † k ] = δ 3 (k − k ). The polarization tensor s ij has the following properties where P ij = δ ij − k i k j /k 2 . The tensor mode function γ k (η) can be determined in terms of µ k (η) in (29) as where M p is the reduced Planck mass M 2 p = 1/(8πG). The gravitational potential Φ is determined from the curvature perturbation ζ, which is conserved at superhorizon scales and can be expanded like (30). The corresponding mode function during inflation can be taken as where is the slow-roll parameter (for constant , ζ k is actually given by the first Hankel function but (32) is a very good approximation when 1). The standard gauge fixing breaks down in reheating after inflation but there are alternative smooth gauges which would imply the standard results [16]. The gravitational potential can be obtained by applying a coordinate change that sets the shift variable of the metric to zero, N i = 0. This yields where the dot denotes derivative with respect to the proper time dt = adη and H =ȧ/a is the Hubble parameter after inflation. In general, a two-point function involving the variables Q, U , E and B is specified by two distinct vector sets (l 1 , m 1 , n 1 ) and (l 2 , m 2 , n 2 ). One can conveniently choose (l, m, n) = (r,θ,φ) so that the angular integrals in the correlators become straightforward. The remaining (radial) momentum integrals contain the usual (distributional) UV infinities, which can be cured by i -terms (see [17] for the implementation of the i -prescription in cosmology). In the following we take where θ is the angle between l 1 and l 2 ; i.e. cos(θ) = l i 1 l i 2 . On the last scattering surface (34) corresponds to superhorizon scales.
Although the oscillating time integrals diminish their power, we estimate that the scalar modes still dominate the expectation values ⟨Q 1 Q 2 ⟩ and ⟨U 1 U 2 ⟩ (one has ⟨Q 1 U 2 ⟩ = ⟨Q 2 U 1 ⟩ = 0 identically) because of the slow-roll enhancement 1/ε coming from the curvature perturbation (32). The angular integrals in momentum space give an oscillating factor which effectively sets a (comoving) cutoff scale for the remaining radial momentum integral (the cutoff is equivalent to the UV improvement implied by the iε-prescription). This scale is roughly proportional to η 0 and from (23), which encodes the contribution of the scalar perturbation, one sees that on dimensional grounds while the two spatial derivatives yield 1/η 2 0 the time integrals give η 2 0 . This shows that in ⟨Q 1 Q 2 ⟩ and ⟨U 1 U 2 ⟩, the order of magnitude contributions of the tensors and the scalars are similar to the amplitudes of the expectation values ⟨γγ⟩ and ⟨ΦΦ⟩, respectively. Hence the tensors are suppressed by the slow-roll parameter ε and only the BB-correlator is relevant for the gravitational waves.
As usual in the two-point function ⟨B(l 1 )B(l 2 )⟩ the oscillating subhorizon modes give negligible contributions when θ obeys (34). Thus, to a very good approximation one can use the superhorizon spectrum (which can be obtained from (31) and (29) when kη r ≪ 1). In that case, the momentum integral can be calculated exactly without any issues (the integral is convergent in the IR as k → 0 and its UV behavior is cured by the iε-prescription). The result contains many terms when ε ≠ 0, but in the limit ε → 0 one gets a remarkably simple final formula. Note that B(l) is not a positive operator, hence a negative expectation value on scales (34) is conceivable.
V. CONCLUSIONS
Eq. (36) is the main result of this work. It gives a distinctive superhorizon signal that starts from zero at θ = π and increases in magnitude with decreasing θ. Indeed, (36) greatly enlarges as θ approaches the subhorizonsuperhorizon border (of this simple model) at θ = 1/z r . Of course, one would not expect (36) to be correct up to that order since the subhorizon corrections to (35) become more and more important.
The amplitude in (36) depends directly on the scale of inflation, which is encoded by the Hubble parameter H I . This is an expected feature for a power spectrum involving gravitational waves. Using the typical upper limit H I 10 −5 M p , which can be obtained from the upper observational limit on the tensor-to-scalar ratio, one finds a very small amplitude, of the order of 10 −10 . Of course, this is the largest estimate since H I can be much smaller. Note that as they are defined, Q, U , E and B are all dimensionless variables and they measure relative magnitudes, e.g. if I V and I H are the intensities corresponding to the vertical and horizontal slabs in Fig. 2, then Q = (I V − I H )/((I V + I H )/2). Therefore the figure 10 −10 estimates a dimensionless signal (related to the variations of Q and U on the sphere), which can be compared to the usual temperature fluctuations having relative order of magnitude 10 −5 .
In any case, the result is encouraging for further investigations in a realistic model including small angles. Note that (36) does not depend on the photon frequency and it can be determined from flux measurements as discussed above. These are technical advantages in terms of observability but detecting the corresponding signal will be hard if not impossible. Nevertheless, it is valuable to have an (even in principle) alternative to the CMB polarization experiments, as observing a quantum gravitational wave effect is already expected to be quite difficult. | 2021-05-07T01:15:56.125Z | 2021-05-05T00:00:00.000 | {
"year": 2021,
"sha1": "b281af5616f0f964d29303bda75d6020ed5714b8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2021.136353",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "cb9d4606b8d46959e8958d338afb4af3e4ec5ff3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
13246893 | pes2o/s2orc | v3-fos-license | Of Hags and bitches. Ageist attitudes in 2016 presidential debate on twitter
In this article we present our exploratory research into the occurrence of ageist attitudes within the discussion related to the US 2016 presidential election. We use natural language processing techniques to analyze the content of tweets related to Hillary Clinton and Donald Trump. Content analysis shows that although ageist attitudes are scarce in the discussion, they are mostly focused on Hillary Clinton rather than Donald Trump. Also, ageist arguments against Donald Trump appear mostly as a reply to controversies connected with the health of Hillary Clinton.
Introduction
In this article we present how ageist attitudes can manifest in social media. As an example we used tweets connected with the US 2016 presidential campaign. The choice of this particular set of data was based on two main reasons. Firstly, both presidential candidates are considerably older than the current president of the US. Both are older than 65 years, with Hillary Clinton being 69 and Donald Trump being 70, which also makes them the oldest presidential candidates in US history. Secondly, due to the controversies raised during the current presidential campaign it is possible to compare how ageist attitudes relate to other topics that are raised in relation to the presidential candidates. Ageism is a form of social discrimination in which people, mostly older adults, are subject to prejudice and unfair treatment based on their age alone [2]. Just like other forms of social exclusion (such as racism and sexism), ageism is an obstacle to creating a fair and inclusive society. Our research is mostly exploratory in nature: we want to learn what kind of ageist attitudes are apparent in relation to Hillary Clinton and Donald Trump. We also proposed two research objectives:
- See if the presidential candidates are subject to ageist attitudes and comments, and if so, which of the candidates is more often the target of such comments.
- Research if ageism in the tweets is connected with other forms of discriminatory or offensive language (i.e. racist or sexist).
By setting these objectives we want to research the interaction between objective age, and the perception of somebody's age, as well as the relation ageism has to other forms of social exclusion.
In order to research the occurrence of ageist attitudes in social media, one needs to set up a framework for how ageism is represented in language. Such a comprehensive framework of ageism has been proposed by [4]. In their work they present a multidimensional model for representing ageist attitudes in language. These dimensions varied from negative sentiment towards older adults, through judgments and stereotypical assumptions, to infantilization of older adults when creating a narrative. Within this paper we focus mostly on one of the aspects of this framework, which is negative associations with being old. In studies related to the perception of older adults it has been stated that such negative associations play an important role. Work by [3], where perceptions of aging among college students were tested, is a prime example of that. Other works related to the representation of older adults in media and social media were focused on content analysis (such as work done by [1], where a content analysis of the US Superbowl commercials was made in order to review how they represent older adults), or on the potential for social advocacy [6]. Automated detection of social bias in social media has been attempted mostly in the field of detecting hate speech [7] and racism against minorities [5]. Those approaches were based on machine learning and natural language processing, with the aim of effectively detecting racism in social media. Most of the research was done on Twitter. To the best of the authors' knowledge no similar approach to detecting ageism has been proposed so far.
Data set
We collected over 300 000 tweets in English which contained any reference to either of the presidential candidates in the 2016 campaign: Donald Trump and Hillary Clinton. In order to collect those tweets we prepared a script using the Python Tweepy library, which allows for automated use of the Twitter streaming API. The tweets were collected during the last week of August. Table 1 presents a more detailed description of the content of the corpus. The numbers of tweets relating to Hillary Clinton and Donald Trump are comparable, although the Trump tweets are more numerous.
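A minimal sketch of the kind of collector described above is given below, written against the pre-4.0 Tweepy streaming interface that was current in 2016. The credentials, output file name, and track keywords are placeholders, not the exact configuration used in the study.

import json
import tweepy

# Placeholder credentials; real API keys are required to run this.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class ElectionListener(tweepy.StreamListener):
    # Append every matching status to a local JSON-lines file.
    def on_status(self, status):
        with open("election_tweets.jsonl", "a") as fh:
            fh.write(json.dumps(status._json) + "\n")
    def on_error(self, status_code):
        return status_code != 420  # disconnect on rate-limit errors

stream = tweepy.Stream(auth=auth, listener=ElectionListener())
stream.filter(track=["Hillary Clinton", "Donald Trump"], languages=["en"])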
Analysis tools
In order to analyze the collected materials we wrote a set of scripts in Python with the use of the NLTK (version 3.2.1) library. Firstly, we created a word list representing the frequency of words used in the corpus. We limited the list to words that appear more than 5 times in the corpus and are longer than two characters. We also removed English stop words. Later we generated separate word lists for the two candidates in order to compare which words appear more frequently in relation to each candidate. Afterwards we selected offensive words, out of which we created a lexicon. We selected all of the tweets that contained at least one of those words and manually tagged them as either targeting Hillary Clinton or Donald Trump. In total 679 occurrences were observed and tagged.
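The following short sketch illustrates the frequency-list construction described above (lower-cased alphabetic tokens longer than two characters, English stop words removed, words kept if they appear more than five times). The tokenizer choice and the toy input are assumptions for illustration; the real input was the corpus of collected tweets.

from collections import Counter
import nltk
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)
STOP = set(stopwords.words("english"))

def word_frequencies(tweets, min_count=5, min_len=3):
    # Frequency list as described in the text: alphabetic tokens longer than
    # two characters, English stop words removed, kept if seen more than
    # min_count times.
    counts = Counter()
    for text in tweets:
        for tok in nltk.word_tokenize(text.lower()):
            if tok.isalpha() and len(tok) >= min_len and tok not in STOP:
                counts[tok] += 1
    return {w: c for w, c in counts.items() if c > min_count}

# Hypothetical usage with a small list of tweet strings.
freqs = word_frequencies(["Example tweet about the debate", "another example tweet"])
print(sorted(freqs.items(), key=lambda x: -x[1])[:20])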
Lexical representations of ageism
In our manual review of the word list for all tweets, the only word that clearly conveys a negative attitude towards older adults was the word 'hag'. As shown in Table 2, the word appeared only 36 times, which is extremely rare. When compared with other derogatory terms such as 'moron', 'dumb', 'bitch' and 'idiot', this word also seems to be rare. However, when the word occurrence is compared between the word lists of tweets referring to both candidates, the word 'hag' appears only in tweets that target the Democratic candidate. What is more, this is the only term exclusive to attacks on Hillary Clinton. It is noteworthy that the gendered term 'bitch' appears with almost equal frequency in tweets relating to both candidates. It is also noteworthy that, although Donald Trump is the older of the two candidates, tweets that attack him mostly focus on his intelligence. When analyzing the word list we also tested some other candidate words that would suggest ageist attitudes. The prime candidates were the words 'age' and 'old'. However, upon analysis of bigrams and trigrams containing these words, the context in which they appeared was related mostly to matters other than age (such as old emails by Hillary Clinton and old tweets made by Donald Trump).
In order to research the occurrence of ageist attitudes within the 2016 US presidential election we decided to analyze the word frequency in tweets related to both presidential candidates. While in our research we found that ageist terminology and attitudes are extremely rare in the narration of the Twitter users interested in the presidential campaigns, a single important trend can be observed. Ageist comments and offensive remarks were aimed exclusively at the Democratic candidate, Hillary Clinton. The most visible example of this trend was referring to her as a 'hag' or an 'old witch'. These results are very intriguing when one considers the fact that Hillary Clinton is the younger of the two candidates. The fact that Hillary Clinton is the target of ageist remarks on Twitter, as well as the fact that the word most often used in those remarks is the gendered term 'hag' while another gendered slur, 'bitch', is represented equally in tweets attacking both presidential candidates, leads to the suggestion that ageist attitudes can exist independently from other forms of discriminatory language.
Application of findings for social media design for older adults
Our findings suggest that when applying design principles for social media for older adults, consideration has to be made for the influence of various forms of discriminatory language. Since ageist attitudes can exist both independently and in conjunction with other forms of discrimination, the designer should include experiences of users representing various identities. This suggests that user experience can be a viable tool for designing social media for older adults.
Further work
In our future work we plan to extend our research into detecting ageist attitudes.
We still aim at the use of data from social media, including Twitter.Our main objective is to create an unsupervised model for mapping and detecting ageist attitudes on-line.We plan to utilize the framework created by [4] in order to build a model that would apply their theoretical findings into a computer algorithm for screening ageist materials in social media.
Table 2 .
Offensive words occurrence | 2016-11-11T08:38:07.000Z | 2016-11-11T00:00:00.000 | {
"year": 2016,
"sha1": "63afa3d1df2ae6ecc953fb50d8a8f2a7d50557de",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "63afa3d1df2ae6ecc953fb50d8a8f2a7d50557de",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology",
"Computer Science"
]
} |
34283132 | pes2o/s2orc | v3-fos-license | The dominant role of mergers in the size evolution of massive early-type galaxies since z ~ 1
In this paper we measure the merger fraction and rate, both minor and major, of massive early-type galaxies (M_star>= 10^11 M_Sun) in the COSMOS field, and study their role in mass and size evolution. We use the 30-band photometric catalogue in COSMOS, complemented with the spectroscopy of the zCOSMOS survey, to define close pairs with a separation 10h^-1 kpc<= r_p<= 30h-1 kpc and a relative velocity Delta v<= 500 km s^-1. We measure both major (stellar mass ratio mu = M_star,2/M_star,1>= 1/4) and minor (1/10<= mu<1/4) merger fractions of massive galaxies, and study their dependence on redshift and on morphology. The merger fraction and rate of massive galaxies evolves as a power-law (1+z)^n, with major mergers increasing with redshift, n_MM = 1.4, and minor mergers showing little evolution, n_mm ~ 0. When split by their morphology, the minor merger fraction for early types is higher by a factor of three than that for spirals, and both are nearly constant with redshift. Our results show that massive early-type galaxies have undergone 0.89 mergers (0.43 major and 0.46 minor) since z ~ 1, leading to a mass growth of ~30%. We find that mu>= 1/10 mergers can explain ~55% of the observed size evolution of these galaxies since z ~ 1. Another ~20% is due to the progenitor bias (younger galaxies are more extended) and we estimate that very minor mergers (mu<1/10) could contribute with an extra ~20%. The remaining ~5% should come from other processes (e.g., adiabatic expansion or observational effects). This picture also reproduces the mass growth and velocity dispersion evolution of these galaxies. We conclude from these results that merging is the main contributor to the size evolution of massive ETGs at z<= 1, accounting for ~50-75% of that evolution in the last 8 Gyr. Nearly half of the evolution due to mergers is related to minor (mu<1/4) events.
Introduction
The history of mass assembly is a major component of the galaxy formation and evolution scenario.The evolution in the number of galaxies of a given mass, as well as the size and shapes of galaxies building the Hubble sequence, provides strong input to this scenario.The optical colour -magnitude diagram of local galaxies shows two distinct populations: the red sequence, consisting primarily of old, spheroid-dominated, quiescent galaxies, and the blue cloud, formed primarily by spiral and irregular star-forming galaxies (e.g., Strateva et al. 2001;Baldry et al. 2004).This bimodality has been traced at increasingly higher redshifts (e.g., Ilbert et al. 2010), showing that the most massive galaxies were the first to populate the red sequence as a result of the so-called "downsizing" (e.g., Bundy et 2006; Pérez-González et al. 2008;Pozzetti et al. 2010).These properties result from several physical mechanisms for which it is necessary to evaluate the relative impact.In this paper we examine the contribution of major and minor mergers to the mass growth and size evolution of massive early-type galaxies (ETGs), based on new measurements of the pair fraction from the COSMOS1 (Cosmological Evolution Survey, Scoville et al. 2007) and zCOSMOS2 (Lilly et al. 2007) surveys.
The number density of red massive galaxies with M ⋆ 10 11 M ⊙ is roughly constant since z ∼ 0.8 (Pozzetti et al. 2010, and references therein), with major mergers (mass or luminosity ratio higher than 1/4) common enough to explain their number evolution since z = 1 (Eliche- Moral et al. 2010;Robaina et al. 2010;Oesch et al. 2010).However, and despite that they seem "dead" since z ∼ 0.8, two observational facts rule out the pas-sive evolution of these massive galaxies after they have reached the red sequence: the presence of Recent Star Formation (RSF) episodes and their size evolution.In the former, the study of red sequence galaxies in the NUV-optical colour vs magnitude diagram reveals that ∼30% have undergone RSF, as seen from their blue NUV − r colours, both locally (Kaviraj et al. 2007) and at higher redshifts (z ∼ 0.6, Kaviraj et al. 2011).This RSF typically involves 5 − 15% of the galaxy stellar mass (Scarlata et al. 2007;Kaviraj et al. 2008Kaviraj et al. , 2011)).Some authors suggest that minor mergers, i.e., the merger of a massive red sequence galaxy with a less massive (mass or luminosity ratio lower than 1/4), gasrich satellite, could explain the observed properties of galaxies with RSF (Kaviraj et al. 2009;Fernández-Ontiveros et al. 2011;Desai et al. 2011).
Regarding size evolution, it is now well established that massive ETGs have, on average, lower effective radius (r e ) at high redshift than locally, being ∼ 2 and ∼ 4 times smaller at z ∼ 1 and z ∼ 2, respectively (Daddi et al. 2005;Trujillo et al. 2006Trujillo et al. , 2007Trujillo et al. , 2011;;Buitrago et al. 2008;van Dokkum et al. 2008van Dokkum et al. , 2010;;van der Wel et al. 2008;Toft et al. 2009;Williams et al. 2010;Newman et al. 2010Newman et al. , 2012;;Damjanov et al. 2011;Weinzirl et al. 2011;Cassata et al. 2011, but see Saracco et al. 2010 andValentinuzzi et al. 2010b for a different point of view).ETGs as compact as observed at high redshifts are rare in the local universe (Trujillo et al. 2009;Taylor et al. 2010;Cassata et al. 2011), suggesting that they must evolve since z ∼ 2 to the present.It has been proposed that high redshift compact galaxies are the cores of present day ellipticals, and that they increased their size by adding stellar mass in the outskirts of the galaxy (Bezanson et al. 2009;Hopkins et al. 2009a;van Dokkum et al. 2010).Several studies suggest that merging, especially the minor one, could explain the observed size evolution (Naab et al. 2009;Bezanson et al. 2009;Hopkins et al. 2010b;Shankar et al. 2011;Oser et al. 2012), while other processes, as adiabatic expansion due to AGNs or to the passive evolution of the stellar population, should have a mild role at z 1 (Fan et al. 2010;Ragone-Figueroa & Granato 2011;Trujillo et al. 2011).In addition, a significant fraction of local ellipticals present signs of recent interactions (van Dokkum 2005;Tal et al. 2009).
While minor mergers are expected to contribute significantly to the evolution of massive ETGs, there is no direct observational measurement of their contribution yet.As a first attempt, the minor merger fraction of the global population of L B L * B galaxies in VVDS-Deep 3 (VIMOS VLT Deep Spectroscopic Survey, Le Fèvre et al. 2005) has been studied by López-Sanjuan et al. (2011, LS11 hereafter), who show that minor mergers are quite common, that their importance decrease with redshift (see also Lotz et al. 2011), and that participate to about 25% of the mass growth by merging of such galaxies.Focusing on massive galaxies, Williams et al. (2011), Mármol-Queraltó et al. (2012), or Newman et al. (2012) study their total (major + minor) merger fraction to z ∼ 2, finding that it is nearly constant with redshift.In this paper we present the detailed merger history, both minor and major, of massive (M ⋆ ≥ 10 11 M ⊙ ) ETGs since z ∼ 1 using close pair statistics in the COSMOS field, and use it to infer the role of major and minor mergers in the mass assembly and in the size evolution of these systems in the last ∼ 8 Gyr.
The paper is organized as follow.In Sect. 2 we present our photometric catalogue in the COSMOS field, while in Sect. 3 we review the methodology used to measure close pairs merger fractions when photometric redshifts are used.We present our merger fractions of massive galaxies in Sect.4, and the inferred 3 http://cesam.oamp.fr/vvdsproject/vvds.htm merger rates for ETGs in Sect. 5.The role of mergers in the mass assembly and in the size evolution of massive galaxies is discussed in Sect.6, and in Sect.7 we present our conclusions.Throughout this paper we use a standard cosmology with Ω m = 0.3, Ω Λ = 0.7, H 0 = 100h Km s −1 Mpc −1 and h = 0.7.Magnitudes are given in the AB system.
The COSMOS photometric catalogue
We use the COSMOS catalogue with photometric redshifts derived from 30 broad and medium bands described in Ilbert et al. (2009) and Capak et al. (2007), version 1.8.We restrict ourselves to objects with i + ≤ 25.The detection completeness at this limit is higher than 90% (Capak et al. 2007).In order to obtain accurate colours, all the images were degraded to the same point spread function (PSF) of 1.5 ′′ .At i + ∼ 25, the rms accuracy of the photometric redshifts (z phot ) at z 1 is ∼ 0.04 in (z spec − z phot )/(1 + z spec ), where z spec is the spectroscopic redshift of the sources (Fig. 9 in Ilbert et al. 2009).At z > 1 the quality of the photometric redshifts quickly deteriorates.Additionally, and because we are interested on minor companions, we require a detection in the K s band to ensure that the stellar mass estimates are reliable, thus we add the constraint K s ≤ 24.
Stellar masses of the photometric catalogue have been derived following the same approach than in Ilbert et al. (2010).We used stellar population synthesis models to convert luminosity into stellar mass (e.g., Bell et al. 2003;Fontana et al. 2004).The stellar mass is the factor needed to rescale the best-fit template (normalized at one solar mass) for the intrinsic luminosities.The Spectral Energy Distribution (SED) templates were generated with the stellar population synthesis package developed by Bruzual & Charlot (2003, BC03).We assumed a universal initial mass function (IMF) from Chabrier (2003) and an exponentially declining star formation rate, S FR ∝ e t/τ (τ in the range 0.1 Gyr to 30 Gyr).The SEDs were generated for a grid of 51 ages (in the range 0.1 Gyr to 14.5 Gyr).Dust extinction was applied to the templates using the Calzetti et al. (2000) law, with E(B − V) in the range 0 to 0.5.We used models with two different metallicities.Following Fontana et al. (2006) and Pozzetti et al. (2007), we imposed the prior E(B − V) < 0.15 if age/τ > 4 (a significant extinction is only allowed for galaxies with a high S FR).The stellar masses derived in this way have a systematic uncertainty of ∼0.3 dex (e.g., Pozzetti et al. 2007;Barro et al. 2011).
We supplement the previous photometric catalogue with the spectroscopic information from zCOSMOS survey, a large spectroscopic redshift survey in the central area of the COSMOS field.In this analysis we use the final release of the bright part of this survey, called the zCOSMOS-bright 20k sample.This is a pure magnitude selected sample with I AB ≤ 22.5.For a detailed description and relevant results of the previous 10k release, see Lilly et al. (2009); Tasca et al. (2009); Pozzetti et al. (2010) or Peng et al. (2010).A total of 20604 galaxies have been observed with the VIMOS spectrograph (Le Fèvre et al. 2003) in multi-slit mode, and the data have been processed using the VIPGI data processing pipeline (Scodeggio et al. 2005).A spectroscopic flag has been assigned to each galaxy providing an estimate of the robustness of the redshift measurement (Lilly et al. 2007).If a redshift has been measured, the corresponding spectroscopic flag value can be 1, 2, 3, 4 or 9. Flag = 1 meaning that the redshift is 70% secure and flag = 4 that the redshift is ∼ 99% secure.Flag = 9 means that the redshift measurement relies on one single narrow emission line (O ii or Hα mainly).The information about the consistency between photometric and Fig. 1.Stellar mass as a function of redshift in the COSMOS field.Red dots are principal galaxies (M ⋆ ≥ 10 11 M ⊙ ) with z phot in the zCOSMOS area, blue dots are companion galaxies (M ⋆ ≥ 10 10 M ⊙ ) with z phot in the COSMOS area, and black dots are the red galaxies (NUV −r + ≥ 3.5) with z phot in the COSMOS area.We only show a random 15% of the total populations for visualisation purposes.Green squares mark those galaxies in previous populations with a spectroscopic resdhift.The vertical lines marks the lower and upper redshift in our study, while the horizontal ones the mass selection of the principal (solid) and the companion (dashed) samples.spectroscopic redshifts has also been included as a decimal in the spectroscopic flag.In this study we select the highest reliable redshifts, i.e. with confidence class 4.5, 4.4, 3.5, 3.4, 9.5, 9.3, and 2.5.This flag selection ensures that 99% of redshifts are believed to be reliable based on duplicate objects (Lilly et al. 2009).
Our final COSMOS catalogue comprises 134028 galaxies at 0.1 ≤ z < 1.1, our range of interest (see Sect. 2.1).Nearly 35% of the galaxies with i + 22.5 have a high reliable spectroscopic redshift.For consistency and to avoid systematics, we always use the stellar masses and other derived quantities from the photometric catalogue.We checked that the dispersion when comparing stellar masses from z phot and z spec is ∼ 0.15 dex, lower than the typical error in the measured stellar masses (∼ 0.3 dex).Thanks to the methodology developed in López-Sanjuan et al. (2010a) we are able to obtain reliable merger fractions from photometric catalogues under some quality conditions (Sect.3).We check that the COSMOS catalogue is adequate for our purposes in Sects.3.2 and 3.3.
Definition of the mass-selected samples
We define two samples selected in stellar mass. The first one comprises 2047 principal massive galaxies in the zCOSMOS area, where spectroscopic information is available, with M ⋆ ≥ 10 11 M ⊙ and 0.1 ≤ z < 1.1. The second sample comprises the 23992 companion galaxies with M ⋆ ≥ 10 10 M ⊙ in the full COSMOS area and in the same redshift range. The mass limit of the companion sample ensures completeness for red galaxies up to z ∼ 0.9 (Drory et al. 2009; Ilbert et al. 2010). Because of that, we set z up = 0.9 as the upper redshift in our study, while z down = 0.2 to probe enough cosmological volume. However, our methodology takes into account the photometric redshift errors (see Sect. 3 for details), so we must include in the samples not only the sources with z < z up , but also those sources with z − 2σ phot < z up in order to ensure completeness in redshift space. Because of this, we set the maximum and minimum redshift of the catalogues to z min = 0.1 and z max = 1.1. We show the mass distribution of our samples as a function of z in Fig. 1, and we assume our samples as volume-limited mass-selected in the following.
Fig. 2. ∆ z as a function of redshift in the mass-selected sample, from M ⋆ ≥ 10 11 M ⊙ (thinner line) to 10 10 M ⊙ < M ⋆ ≤ 10 10.2 M ⊙ (thicker line) galaxies in bins of 0.2 dex. The black solid line marks the photometric errors of blue galaxies in the lower mass bin, while the dashed line is for red galaxies in the same mass bin. The vertical line marks the higher redshift in our samples, z max = 1.1. The horizontal line marks the median ∆ z for low-mass galaxies at 1 ≤ z < 1.1, ∆ z = 0.015.
Our final goal is to measure the merger fraction and rate of massive ETGs, but the our principal sample comprises ETGs, spirals and irregulars.We segregate morphologically our principal sample thanks to the morphological classification defined in Tasca et al. (2009).Their method use as morphological indicator the distance of the galaxies in the multi-space C − A − G (Concentration, Asymmetry and Gini coefficient) to the position in this space of a training sample of ∼500 eye-ball classified galaxies.These morphological indices were measured in the HST/ACS images of the COSMOS field, taken through the wide F814W filter (Koekemoer et al. 2007).We refer the reader to Tasca et al. (2009) for further details.The morphological classification in the COSMOS field is reliable for galaxies brighter than i + < 24, and all our principal galaxies are brighter than i + < 23.5 up to z = 1.According to the classification presented in Tasca et al. (2009) our principal sample comprises 1285 (63%) ETGs (E/S0) and 632 (31%) spiral galaxies.The remaining 6% sources are half irregulars (65 sources) and half massive galaxies without morphological classification (65 sources).The mean mass of both ETGs and spirals is similar, M ⋆ ∼ 10 11.2 M ⊙ .
Dependence of photometric errors on stellar mass
The quality of the photometric redshifts in COSMOS decreases for faint objects in the i + band (Ilbert et al. 2009).In this section we study in details how redshift errors depend on the mass of the sources, since this imposes limits on our ability of measure reliable merger fractions in photometric catalogues (Sect.3.2).As shown by Ilbert et al. (2010), we can estimate the photometric redshift error (σ z phot ) from the Probability Distribution Function of the photometric redshift fit.In Fig. 2 we show the median ∆ z, phot ≡ σ z phot /(1 + z phot ) of galaxies with differ-ent stellar masses, from M ⋆ ≥ 10 11 M ⊙ (massive galaxies) to 10 10 M ⊙ < M ⋆ ≤ 10 10.2 M ⊙ (low-mass galaxies) in bins of 0.2 dex.
Massive galaxies are bright in the whole redshift range under study, thus their photometric errors are small up to z ∼ 1, ∆ z, phot ∼ 0.005.On the other hand, low-mass galaxies are fainter at high redshift than their local counterparts, so their their photometric errors increase with z and reach ∆ z, phot ∼ 0.015 at z ∼ 1.We study separately the photometric errors of low-mass red and blue galaxies.We take as red galaxies those with SED colour NUV − r + ≥ 3.5, while as blue those with NUV − r + < 3.5 (see Ilbert et al. 2010, for details).Blue galaxies also have ∆ z, phot ∼ 0.015 up to z ∼ 1, while red galaxies have higher photometric redshift errors, with ∆ z, phot ∼ 0.020 at z = 0.95 and ∆ z, phot ∼ 0.040 at z = 1.05.This different behaviour can be explained by the different mass-to-light ratio (M ⋆ /L) of both populations.Faint (i + ∼ 25) blue galaxies, which photometric errors are higher, reach masses as low as M ⋆ ∼ 10 8.5 M ⊙ at z ∼ 1.On the other hand, we are in the detection limit for red galaxies at these redshift (these red galaxies have i + ∼ 25, Sect.2.1), explaining their high photometric redshift errors.In Sect.3.2 we prove that our methodology is able to recover reliable merger fractions in COSMOS samples with ∆ z, phot 0.040, as those in our study.
Close pairs using photometric redshifts
The linear distance between two sources can be obtained from their projected separation, r p = θd A (z i ), and their rest-frame relative velocity along the line of sight, ∆v = c |z j − z i |/(1+z i ), where z i and z j are the redshift of the principal (more luminous/massive galaxy in the pair) and companion galaxy, respectively; θ is the angular separation, in arcsec, of the two galaxies on the sky plane; and d A (z) is the angular scale, in kpc/arcsec, at redshift z.Two galaxies are defined as a close pair if r min p ≤ r p ≤ r max p and ∆v ≤ ∆v max .The lower limit in r p is imposed to avoid seeing effects.We used r min p = 10h −1 kpc, r max p = 30h −1 kpc, and ∆v max = 500 km s −1 .With these constraints 50%-70% of the selected close pairs will finally merge (Patton et al. 2000;Patton & Atfield 2008;Lin et al. 2004;Bell et al. 2006).The PSF of the COSMOS ground-based images is 1.5 ′′ (Capak et al. 2007), which corresponds to ∼ 8h −1 kpc in our cosmology at z ∼ 0.9.To ensure well deblended sources and minimize colour contamination, we fixed r min p to 10h −1 kpc (θ 2 ′′ ).On the other hand, we set r max p to 30h −1 kpc to ensure reliable merger fractions in our study (see Sect. 3.2 for details).
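For illustration, the sketch below applies this close-pair criterion to a single candidate pair using the cosmology adopted in the text (Ω m = 0.3, Ω Λ = 0.7, h = 0.7). The function name and the example pair are assumptions; it is not the selection code used by the authors.

import numpy as np
from astropy.cosmology import FlatLambdaCDM

H0, h = 70.0, 0.7
cosmo = FlatLambdaCDM(H0=H0, Om0=0.3)
C_KMS = 299792.458

def is_close_pair(theta_arcsec, z1, z2,
                  rp_min=10.0, rp_max=30.0, dv_max=500.0):
    # Apply the criterion 10 h^-1 kpc <= r_p <= 30 h^-1 kpc and
    # Delta v <= 500 km/s used in the text (r_p limits in h^-1 kpc).
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z1).value / 60.0
    rp = theta_arcsec * kpc_per_arcsec * h   # proper kpc -> h^-1 kpc
    dv = C_KMS * abs(z2 - z1) / (1.0 + z1)
    return (rp_min <= rp <= rp_max) and (dv <= dv_max)

# Hypothetical pair: 3 arcsec apart at z ~ 0.7 with a small velocity offset.
print(is_close_pair(theta_arcsec=3.0, z1=0.70, z2=0.7012))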
To compute close pairs we defined a principal and a companion sample (Sect. 2.1). The principal sample contains the more massive galaxy of the pair, and we looked for those galaxies in the companion sample that fulfil the close pair criterion for each galaxy of the principal sample. If one principal galaxy has more than one close companion, we take each possible pair separately (i.e., if the companion galaxies B and C are close to the principal galaxy A, we study the pairs A-B and A-C as independent). In addition, we impose a mass difference between the pair members. We denote the ratio between the mass of the principal galaxy, M ⋆,1 , and the companion galaxy, M ⋆,2 , as µ = M ⋆,2 /M ⋆,1 , and looked for those systems with M ⋆,2 ≥ µM ⋆,1 . We define as major companions those close pairs with µ ≥ 1/4, while minor companions those with 1/10 ≤ µ < 1/4.
With the previous definitions the merger fraction is where N 1 is the number of sources in the principal sample, and N p the number of principal galaxies with a companion that fulfil the close pair criterion for a given µ.This definition applies to spectroscopic volume-limited samples.Our samples are volumelimited, but combine spectroscopic and photometric redshifts.In a previous work, López-Sanjuan et al. (2010a) developed a statistical method to obtain reliable merger fractions from photometric catalogues.We remind the main points of this methodology below, while we study its limits when applied to our COSMOS photometric catalogue in Sect.3.2.We use the following procedure to define a close pair system in our photometric catalogue (see López-Sanjuan et al. 2010a, for details): first we search for close spatial companions of a principal galaxy, with redshift z 1 and uncertainty σ z 1 , assuming that the galaxy is located at z 1 − 2σ z 1 .This defines the maximum θ possible for a given r max p in the first instance.If we find a companion galaxy with redshift z 2 and uncertainty σ z 2 in the range r p ≤ r max p and with a given mass with respect to the principal galaxy, then we study both galaxies in redshift space.For convenience, we assume below that every principal galaxy has, at most, one close companion.In this case, our two galaxies could be a close pair in the redshift range Because of variation in the range [z − , z + ] of the function d A (z), a sky pair at z 1 − 2σ z 1 might not be a pair at z 1 + 2σ z 1 .We thus impose the condition and redefine this redshift interval if the sky pair condition is not satisfied at every redshift.After this, our two galaxies define the close pair system k in the redshift interval where the index k covers all the close pair systems in the sample.
The next step is to define the number of pairs associated at each close pair system k.For this, we suppose in the following that a galaxy i in whatever sample is described in redshift space by a probability distribution P i (z i | η i ), where z i is the source's redshift and η i are the parameters that define the distribution.If the source i has a photometric redshift, we assume that while if the source has a spectroscopic redshift where δ(x) is delta's Dirac function.With this distribution we are able to statistically treat all the available information in z space and define the number of pairs at redshift z 1 in system k as where , the integration limits are the subindex 1 [2] refers to the principal [companion] galaxy in k system, and the constant C k normalizes the function to the total number of pairs in the interest range The function ν k (Eq.[6]) tells us how the number of pairs in the system k, N k p , are distributed in redshift space.The integral in Eq. ( 6) spans those redshifts in which the companion galaxy has ∆v ≤ ∆v max for a given redshift of the principal galaxy.
With previous definitions, the number of pairs in the interval where the index l spans the redshift intervals defined over the redshift range under study.If we integrate over the whole redshift space, z r = [0, ∞], Eq. ( 10) becomes where k N k p is analogous to N p in Eq. ( 2).In order to estimate the statistical error of f m,l , denoted σ stat,l , we use the jackknife technique (Efron 1982).We compute partial standard deviations, δ k , for each system k by taking the difference between the measured f m,l and the same quantity with the kth pair removed for the sample, For a redshift range with N p systems, the variance is given by When N p ≤ 5 we used instead the Bayesian approach of Cameron (2011), that provides accurate asymmetric confidence intervals in these low statistical cases.We check that for N p > 5 both jackknife and Bayesian methods provide similar statistical errors within 10%.
Dealing with border effects
When we search for close companions near to the edges of the images it may happen that a fraction of the search volume is outside of the surveyed area, lowering artificially the number of companions.To deal with this we select as principal galaxies those in the zCOSMOS area, i.e., in the central 1.6 deg 2 , while we select as companions those in the whole photometric COSMOS area.This maximize the spectroscopic fraction of the principal sample and ensures that we have companions inside all the searching volume.
Testing the methodology with 20k spectroscopic sources
Following López-Sanjuan et al. (2010a), we test in this section if we are able to obtain reliable merger fractions from our COSMOS photometric catalogue.For this, we study the merger fraction f m in the zCOSMOS-bright 20k sample.The merger fraction in the 10k sample is studied in details by de Ravel et al. (2011) and Kampczyk et al. (2011).We defined f spec as the fraction of sources on a given sample with spectroscopic redshift.The 20k sample has f spec = 1, while the COSMOS photometric catalogue has f spec = 0.34 for i + ≤ 22.5 galaxies.In this section we only use the N = 10542 sources at 0.2 ≤ z < 0.9 with a high reliable spectroscopic redshift from the 20k sample.
To test our method at intermediate f spec , we created synthetic catalogues by assigning their measured z phot and σ z phot to N(1 − f spec ) random sources of the 20k sample (we denote this case as A = 1 in the following).To explore different values of ∆ z , we assigned to the previous random sources a redshift as drawn for a Gaussian distribution with median z phot and σ 2 = (A 2 −1) σ 2 z phot , where A > 1 is the factor by which we increase the initial ∆ z of the sample.In this case, the redshift error of the source is A σ z phot .Then, we measure where f 20k m is the measured merger fraction in the 20k spectroscopic sample at 0.2 ≤ z < 0.9 without imposing any mass or luminosity difference and f syn m is the merger fraction from the synthetic samples in the same redshift range.When A > 1, we repeated the process ten times and averaged the results.
We explore several cases with our synthetic catalogues.For example, we assume that all sources in the synthetic principal catalogue (subindex 1) and in the companion one (subindex 2) have a photometric redshift, f spec,1 = f spec,2 = 0, and that ∆ z,1 = ∆ z,2 = 0.007 (A 1 = A 2 = 1).We also contemplate more realistic cases, as f spec,1 = 0.3 and ∆ z,1 = 0.007 (A 1 = 1) for principals, and f spec,2 = 0 and ∆ z,2 = 0.042 (A 2 = 6) for companions.We found that δ f m is higher than 10% for r max p = 30h −1 kpc close pairs for ∆ z,2 0.05 (A 2 7) and realistic values of ∆ z,1 .We checked that | δ f m | 10% for ∆ z,2 ≤ 0.04 and r max p = 30h −1 kpc, justifying the upper limit ∆ z = 0.04 imposed in Sect.2.2.For higher r max p the method overestimates the merger fraction by about 50% in the ∆ z,2 = 0.04 case.Because we are interested on faint companions, we set r max p = 30h −1 kpc in the following to ensure reliable merger fractions.
On the other hand, we found that the σ stat of the f syn m is ∼ 5% of the measured value, i.e., two times lower than the estimated | δ f m | ∼ 10%.Because of this, and to ensure reliable uncertainties in the merger fractions, we impose a minimum error in f m of 10%, and we take as final merger fraction error σ f m = max(0.1 f m , σ stat ).
In the next section we further test our methodology by comparing the merger fraction from a spectroscopic survey (f_spec = 1) against that measured in COSMOS from our photometric catalogue.
Comparison with merger fractions in VVDS-Deep: cosmic variance effect
In a previous work in VVDS-Deep, LS11 measure the merger rate of M_B^e ≤ −20 galaxies with spectroscopic redshifts, where M_B^e = M_B + Qz and Q = 1.1 accounts for the evolution of the luminosity function with redshift, as a function of the luminosity difference in the B−band, µ_B = L_B,2/L_B,1. As an additional test of our methodology, in this section we compare the merger fraction in the COSMOS photometric catalogue with that measured by LS11 down to µ_B = 1/10, reaching the minor merger regime in which we are interested. To minimize systematic biases, we use the same redshift ranges, z_r,1 = [0.2, 0.65) and z_r,2 = [0.65, 0.95), close pair definition (r_p^max = 30h⁻¹ kpc), principal sample (M_B^e ≤ −20), and companion sample (M_B^e ≤ −17.5) as LS11. We check that the photometric errors are ∆_z ≲ 0.04 up to z ∼ 0.95 for faint companion galaxies (see Sect. 3.2). Note that LS11 use r_p^min = 5h⁻¹ kpc, while we take r_p^min = 10h⁻¹ kpc. Hence, we recompute the merger fractions in VVDS-Deep for r_p^min = 10h⁻¹ kpc. We show the merger fractions from COSMOS and VVDS-Deep for different values of µ_B in Fig. 3.
We find that the VVDS-Deep and COSMOS merger fractions are in excellent agreement in the first redshift range, while in the second redshift range some discrepancies exist, with the merger fraction in COSMOS being higher than in VVDS-Deep at µ_B ≲ 1/5. However, both studies are compatible within the error bars. Note that the merger fraction uncertainties in COSMOS are ∼ 3 times lower than in VVDS-Deep because of the higher number of principals in COSMOS. We checked the effect of cosmic variance in this comparison. For that, we split the zCOSMOS area into several VVDS-Deep-sized (∼0.5 deg²) subfields and measured the merger fraction in these subfields. The maximum and minimum values of f_m in these subfields, including 1σ_fm errors, are marked in Fig. 3 with solid lines. We find that, within 1σ_fm, there is a zCOSMOS subfield with merger properties similar to the VVDS-Deep field. Because the zCOSMOS subfields are contiguous, this exercise provides a lower limit to the actual cosmic variance in the COSMOS field (e.g., Moster et al. 2011). Hence, we conclude that our methodology is able to recover reliable minor merger fractions from photometric samples in the COSMOS field.
The merger fraction of massive ETGs in the COSMOS field
The final goal of the present paper is to estimate the role of mergers (minor and major) in the mass assembly and size evolution of massive ETGs. To facilitate future comparisons, we first present the merger properties of the global massive population in Sect. 4.1. Then, we focus on the ETG population in Sect. 4.2. The evolution of the merger fraction with redshift up to z ∼ 1.5 is well parametrized by a power-law function (e.g., Le Fèvre et al. 2000; López-Sanjuan et al. 2009; de Ravel et al. 2009), so we adopt this parametrization in the following.
The merger fraction of the global massive population
We summarize the minor, major and total merger fractions for M⋆ ≥ 10^11 M⊙ galaxies in the COSMOS field in Table 1 and we show them in Fig. 4. We defined five redshift bins between z_down = 0.2 and z_up = 0.9 both for minor and major mergers. The ranges 0.3 < z < 0.375, 0.7 < z < 0.75 and 0.825 < z < 0.85 are dominated by Large Scale Structures (LSS, Kovač et al. 2010), so we use these LSS as natural boundaries in our study. This minimizes the impact of LSS in our measurements, since the merger fraction depends on environment (Lin et al. 2010; de Ravel et al. 2011; Kampczyk et al. 2011). We identify a total of 56.2 major mergers and 71.1 minor ones at 0.2 ≤ z < 0.9. Note that the number of mergers can be non-integer because of the weighting scheme used in our methodology (Sect. 3). We find that
• The minor merger fraction is nearly constant with redshift, f_mm ∼ 0.051. The least-squares fit to the minor merger fraction data yields a power-law index m_mm = −0.1. The negative value of the power-law index implies that the minor merger fraction decreases slightly with redshift, but it is consistent with a null evolution (m_mm = 0). This confirms the trend found by LS11 for bright galaxies and by Lotz et al. (2011) for less massive (M⋆ ≥ 10^10 M⊙) galaxies, and extends it to the high-mass regime.
• The major merger fraction of massive galaxies increases with redshift with a power-law index m_MM = 1.4 (Eq. 15). This increase with z contrasts with the nearly constant minor merger fraction. In Fig. 5 we compare our measurements with those from the literature for massive galaxies and for r_p^max ∼ 30h⁻¹ kpc close pairs. de Ravel et al. (2011) use the zCOSMOS 10k sample, so their sample is included in ours, and r_p^min = 0, r_p^max = 30h⁻¹ kpc spectroscopic close pairs. Because they assume a different inner radius than us, we apply a factor 2/3 to their original values (see Sect. 5, for details). Both merger fractions are in good agreement, supporting our methodology. Note that our uncertainties are lower by a factor of three than those in de Ravel et al. (2011) because our principal sample is a factor of four larger than theirs. Xu et al. (2012) measure the merger fraction from photometric close pairs also in the COSMOS field. They provide the fraction of galaxies in close pairs with µ ≥ 1/2.5, so we apply a factor 0.7 to obtain the number of close pairs (this is the fraction of principal galaxies in their massive sample) and a factor 1.6 to estimate the number of µ ≥ 1/4 systems (the merger fraction depends on µ as f_m ∝ µ^s, as shown by LS11, and s = −0.95 for massive galaxies in COSMOS, Sect. 6.2). On the other hand, Bundy et al. (2009) and Bluck et al. (2009) measure the major (µ ≥ 1/4) merger fraction in the GOODS 4 (Great Observatories Origins Deep Survey, Giavalisco et al. 2004) and Palomar/DEEP2 (Conselice et al. 2007) surveys, respectively. These studies are also in good agreement with our values, with the point at z = 0.8 from Bluck et al. (2009) being the only discrepancy. The least-squares fit to all the close pair studies in Fig. 5 yields similar parameters to those from our COSMOS data alone, Eq. (15). Regarding morphological studies, Bridge et al. (2010) provide the major merger fraction of M⋆ ≳ 5 × 10^10 M⊙ galaxies in two CFHTLS 5 (Canada-France-Hawaii Telescope Legacy Survey, Coupon et al. 2009) Deep fields, including the COSMOS field. They perform a visual classification of the sources, finding 286 merging systems of that mass. We cannot compare their merger fractions directly with ours because of the different methodologies (e.g., Bridge et al. 2010; Lotz et al.
2011). Thus, we translate their merger rates into the expected close pair fraction following the prescriptions in Sect. 5. Given the uncertainties in the merger time scales of both methods and the difficulty of assigning a precise mass ratio µ to the merger candidates in morphological studies, the merger fractions from Bridge et al. (2010, stars in Fig. 5) are in good agreement with our results.
• The fit to the total merger fraction yields f_m ∝ (1 + z)^0.6. This evolution is slower than that of the major merger fraction, reflecting the different properties of minor and major mergers. We compare our total merger fractions with others in the literature in Fig. 6. Mármol-Queraltó et al. (2012) study massive galaxies, while Newman et al. (2012) measure the merger fraction of M⋆ ≥ 5 × 10^10 M⊙ galaxies from r_p^max = 30h⁻¹ kpc close pairs. The values from both close pair studies are consistent with ours. Also the results of Williams et al. (2011) suggest a slow/null evolution in the total (µ ≥ 1/10) merger fraction of massive galaxies up to z ∼ 2. Regarding morphological studies, Jogee et al. (2009) estimate the total (µ ≥ 1/10) merger fraction of M⋆ ≥ 2.5 × 10^10 M⊙ galaxies in the GEMS 6 (Galaxy Evolution From Morphology And SEDs, Rix et al. 2004) survey. Their values, f_m ∼ 0.08, are consistent with ours. We also show in Fig. 6 the merger fraction from Lotz et al. (2011) for M⋆ ≥ 10^10 M⊙ galaxies in the AEGIS7 (All-Wavelength Extended Groth Strip International Survey, Davis et al. 2007) survey. The different methodologies between these works and ours and the different stellar mass regimes probed make direct comparisons difficult (see Bridge et al. 2010; Lotz et al. 2011, for a review of this topic). In summary, previous work is compatible with a mild evolution of the total merger fraction, as we observe.
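The power-law fits quoted in this section, f_m ∝ (1 + z)^m, can be reproduced with a weighted least-squares fit; the sketch below uses purely illustrative data values (the actual binned fractions are those listed in Table 1), with the 10% minimum error adopted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(z, f0, m):
    return f0 * (1.0 + z)**m

# illustrative binned measurements: mean redshift, merger fraction, error
z_mean  = np.array([0.28, 0.43, 0.58, 0.71, 0.84])
f_m     = np.array([0.02, 0.02, 0.03, 0.03, 0.04])
sigma_f = 0.1 * f_m                        # minimum 10% error on f_m

popt, pcov = curve_fit(power_law, z_mean, f_m, p0=[0.02, 1.0], sigma=sigma_f)
f0, m = popt
m_err = np.sqrt(pcov[1, 1])
print(f"f_m = {f0:.3f} (1+z)^({m:.1f} +/- {m_err:.1f})")
```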
The merger fraction of ETGs
We summarize the minor and major merger fractions for both ETGs and spiral galaxies in the COSMOS field in Tables 2 and 3, respectively, and show them in Fig. 7. We defined five redshift bins between z_down = 0.2 and z_up = 0.9 for ETGs, as for the global population, but only three in the case of spirals because of the lower number of principal sources. We assume m_mm = 0 in the following for the minor merger fraction, as for the global population (Sect. 4.1). The mean minor merger fraction of ETGs is f_mm^ET = 0.060, while f_mm^spiral = 0.023 for spirals. There is therefore a factor of three difference between the merger fractions of the early-type and late-type populations. LS11 find a similar result when comparing the minor merger fractions of red and blue bright galaxies.
On the other hand, the major merger fraction of ETGs is also higher than that of spirals, by a factor of two. The fit to the major merger data indicates a faster evolution for spirals (power-law index m_MM^spiral ∼ 4) than for ETGs (m_MM^ET = 1.8). Because we have only three data points for spirals, and because of the high uncertainty in the first redshift bin, the value of m_MM found for massive spirals is only tentative. Nevertheless, the fact that the major merger fraction of spirals evolves faster than that of ETGs is in agreement with previous studies which compare early-type/red and late-type/blue galaxies (e.g., Lin et al. 2008; de Ravel et al. 2009; Bundy et al. 2009; Chou et al. 2011; LS11). In summary, the merger fraction of massive (M⋆ ≥ 10^11 M⊙) ETGs, both major and minor, is higher by a factor of 2-3 than that of massive spirals (see also Mármol-Queraltó et al. 2012, for a similar result). We estimate the merger rate of ETGs in Sect. 5.
Fig. 7. Major (upper) and minor (lower) merger fractions of M⋆ ≥ 10^11 M⊙ galaxies as a function of redshift and morphology. Dots are for ETGs, while squares are for spirals. Dashed (solid) lines are the best fit to the early-type (spiral) data, while dotted lines are the fits for the global population.
Colour properties of companion galaxies
In this section we attempt to identify the types of galaxies in the companion population.As the morphological classification is not reliable for all companions because they are faint, we instead use a colour selection.We split our companion galaxies into red (NUV −r + ≥ 3.5) and blue (NUV −r + < 3.5, Ilbert et al. 2010), and measure the fraction of red companions ( f red ) of massive galaxies at 0.2 ≤ z < 0.9.
We find that 62% of the companions of the whole principal sample are red, while ∼ 38% are blue.Furthermore, the red fraction remains nearly the same for minor ( f red = 60%) and major ( f red = 64%) companions.When we repeat the previous study focusing on massive ETGs, we find f red ∼ 65%, both for minor and major companions.This implies that most of our close pairs are "dry" (i.e., red -red).
The merger rate of massive ETGs in the COSMOS field
In this section we estimate the minor (R_mm) and major (R_MM) merger rates, defined as the number of mergers per galaxy and Gyr, of massive ETGs. We recall here the steps to compute the merger rate from the merger fraction, focusing first on the major merger rate. Following de Ravel et al. (2009), we define the major merger rate as R_MM = C_p C_m f_MM / T_MM, where the factor C_p takes into account the lost companions in the inner 10h⁻¹ kpc (Bell et al. 2006) and the factor C_m is the fraction of the observed close pairs that finally merge in a typical time scale T_MM. We take C_p = 3/2. The typical merger time scale depends on r_p^max and can be estimated from cosmological and N-body simulations. In our case, we compute the major merger time scale from the cosmological simulations of Kitzbichler & White (2008), based on the Millennium simulation (Springel et al. 2005). This time scale refers to major mergers (µ > 1/4 in stellar mass), and depends mainly on r_p^max and on the stellar mass of the principal galaxy, with a weak dependence on redshift in our range of interest (see de Ravel et al. 2009, for details). Taking log(M⋆/M⊙) = 11.2 as the average stellar mass of our principal galaxies with a close companion, we obtain T_MM = 1.0 ± 0.2 Gyr for r_p^max = 30h⁻¹ kpc and ∆v^max = 500 km s⁻¹. We assume an uncertainty of 0.2 dex in the average mass of the principal galaxies to estimate the error in T_MM. This time scale already includes the factor C_m (see Patton & Atfield 2008; Bundy et al. 2009; Lin et al. 2010, LS11), so we take C_m = 1 in the following. In addition, LS11 show that the time scales from Kitzbichler & White (2008) are equivalent to those from the N−body/hydrodynamical simulations of Lotz et al. (2010b), and that they properly account for the observed increase of the merger fraction with r_p^max (see also de Ravel et al. 2009). We stress that these merger time scales have an additional factor of two uncertainty in their normalization (e.g., Hopkins et al. 2010c; Lotz et al. 2011).
The minor merger rate is defined analogously, R_mm = C_p C_m f_mm / T_mm, where T_mm = Υ × T_MM. Following LS11, we take Υ = 1.5 ± 0.1 from the N−body/hydrodynamical simulations of major and minor mergers performed by Lotz et al. (2010b,a, see also Lotz et al. 2011). As for major mergers, we assume C_p = 3/2 and C_m = 1.
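The fraction-to-rate conversion defined above can be summarized in a few lines; the sketch below uses the values quoted in the text (C_p = 3/2, C_m = 1, T_MM = 1.0 Gyr, Υ = 1.5) together with a purely illustrative merger fraction:

```python
def merger_rate(f_m, T_merge, C_p=1.5, C_m=1.0):
    """Number of mergers per galaxy and Gyr from an observed close-pair
    merger fraction, R = C_p * C_m * f_m / T_merge (T_merge in Gyr)."""
    return C_p * C_m * f_m / T_merge

T_MM = 1.0           # Gyr, major merger time scale (Kitzbichler & White 2008)
T_mm = 1.5 * T_MM    # minor merger time scale, Upsilon = 1.5

# illustrative example with a merger fraction of 4%:
R_MM = merger_rate(0.04, T_MM)
R_mm = merger_rate(0.04, T_mm)
```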
We summarize the merger rates of massive ETGs in Table 4, and show them in Fig. 8. We parametrize their redshift evolution as a power law, R ∝ (1 + z)^n. Assuming n_mm = 0 for minor mergers, as for the merger fraction (Sect. 4.2), we find R_mm^ET = 0.060 ± 0.008 Gyr⁻¹. The fit of the major merger rate of massive ETGs yields n_MM^ET = 1.8. Our results imply that the minor merger rate is higher than the major merger one at z ≲ 0.5. In addition, the minor and major merger rates of massive ETGs are ∼ 20% higher than for the global population.
In Fig. 8 we also show the minor and major merger rates of red bright galaxies measured by LS11. We find that red galaxies have merger rates, both minor and major, similar to those of our massive ETGs. This suggests that massive red sequence galaxies have similar merger properties: nearly 95% of our ETGs are red, while the mean mass of the red galaxies in LS11 is M⋆,red ∼ 10^10.8 M⊙, a factor of two less massive than our ETGs, M⋆,ET ∼ 10^11.2 M⊙. The study of the merger properties of red sequence galaxies as a function of stellar mass is beyond the scope of this paper, and we will explore this issue in future work.
ETGs since z = 1
In this section we use the previous merger rates to estimate the number of minor and major mergers per massive (M ⋆ ≥ 10 11 M ⊙ ) ETG since z = 1 (Sect.6.1) and the impact of mergers in their mass growth (Sect.6.2) and size evolution (Sect.6.3) in the last ∼8 Gyr.
Number of minor mergers since z = 1
We can obtain the average number of minor mergers per ETG between z_2 and z_1 < z_2 as N_mm^ET = ∫_{z_1}^{z_2} R_mm^ET(z) (dt/dz) dz, where dt/dz = 1 / [H_0 (1 + z) √(Ω_m (1 + z)^3 + Ω_Λ)] in a flat universe. The definition of N_MM^ET for major mergers is analogous. Using the merger rates in the previous section, we obtain N_m^ET = 0.89 ± 0.14, with N_MM^ET = 0.43 ± 0.13 and N_mm^ET = 0.46 ± 0.06 between z = 1 and z = 0. The number of minor mergers per massive ETG since z = 1 is therefore similar to the number of major ones. Note that these values and those reported in the following have an additional factor of two uncertainty due to the uncertainty on the merger time scales derived from simulations (Sect. 5).
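A short numerical sketch of this integral, assuming a flat ΛCDM cosmology with illustrative parameters (H_0 = 70 km s⁻¹ Mpc⁻¹, Ω_m = 0.3) and a power-law merger-rate history R(z) = R_0 (1 + z)^n:

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0 * 1.02271e-3      # km/s/Mpc -> Gyr^-1 (1 km/s/Mpc ~ 1.02e-3 Gyr^-1)
OM, OL = 0.3, 0.7

def dt_dz(z):
    """dt/dz in Gyr for a flat universe."""
    return 1.0 / (H0 * (1.0 + z) * np.sqrt(OM * (1.0 + z)**3 + OL))

def n_mergers(R0, n, z1=0.0, z2=1.0):
    """Average number of mergers per galaxy between z1 and z2,
    assuming a merger-rate history R(z) = R0 * (1+z)**n in Gyr^-1."""
    integrand = lambda z: R0 * (1.0 + z)**n * dt_dz(z)
    return quad(integrand, z1, z2)[0]

# e.g. minor mergers of ETGs: R0 = 0.060 Gyr^-1, no evolution (n = 0)
N_mm = n_mergers(0.060, 0.0)   # ~0.46 mergers per ETG between z = 1 and z = 0
```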
The number of major mergers per red bright galaxy measured by LS11 is N red MM = 0.7±0.2,higher than our measurement, while the number of minor mergers is similar, N red mm = 0.5 ± 0.2.The discrepancy in the major merger case can be explained by the evolution of the merger rate in both studies, since LS11 assumed n red MM = 0 and we measure n ET MM = 1.8.On the other hand, spiral galaxies have a significantly lower number of mergers, N ∼ 0.20.We refer the reader to LS11 for the discussion about the role of major and minor mergers in the evolution of spiral galaxies.In their work, Pozzetti et al. (2010) find that almost all the evolution in the stellar mass function since z ∼ 1 is a consequence of the observed star formation (see also Vergani et al. 2008), and estimate that N m ∼ 0.7 mergers since z ∼ 1 per log (M ⋆ /M ⊙ ) ∼ 10.6 galaxy are needed to explain the remaining evolution.Their result is similar to our direct estimation for the global massive population (ETGs + spirals), N m = 0.75 ± 0.14, but they infer N MM < 0.2.This value is half of ours, N MM = 0.36 ± 0.13, pointing out that close pair studies are needed to understand accurately the role of major/minor mergers in galaxy evolution.
Mass assembled through mergers since z = 1
Following LS11, we estimate the mass assembled due to mergers by weighting the number of mergers in the previous section with the average major (µ_MM) and minor (µ_mm) merger mass ratios, δM⋆ = µ_MM N_MM^ET + µ_mm N_mm^ET. To obtain the average mass ratios we measure the merger fraction of massive ETGs at 0.2 ≤ z < 0.9 for different values of µ, from µ = 1/2 to 1/10. Then, we fitted a power law, f_m(µ) ∝ µ^s, to the data and used the prescription in LS11 to estimate the average merger mass ratio from the value of s. Following those steps we find s = −0.95 for massive ETGs in COSMOS, while the average merger mass ratios are µ_MM = 0.48 and µ_mm = 0.15, similar to the values reported by LS11. With all the previous results we obtain that mergers with µ ≥ 1/10 have increased the stellar mass of massive ETGs by δM⋆ = 28 ± 8% since z = 1. LS11 find δM⋆(1) = 40 ± 10% for red bright galaxies in VVDS-Deep, consistent with our measurement within errors. We note that they use B−band luminosity as a proxy for stellar mass, so their value is an upper limit due to the lower mass-to-light ratio of blue companions. Bluck et al. (2011) study the major and minor (µ ≥ 1/100) merger fraction of massive galaxies at 1.7 < z < 3 in GNS 8 (GOODS NICMOS Survey, Conselice et al. 2011). They extrapolate their results to lower redshifts, estimating δM⋆(1) = 30 ± 25% for µ ≥ 1/10 mergers. Their value is in good agreement with our measurement, but its large uncertainty prevents a quantitative comparison. The relative contribution of major/minor mergers to our inferred mass growth is 75%/25%, because the average major merger is three times more massive than the average minor one, as already pointed out by LS11. In their cosmological model, Hopkins et al. (2010a) predict that the relative contribution of major and minor mergers to the spheroid assembly of log(M⋆/M⊙) ∼ 11.2 galaxies is ∼ 80%/20%, in good agreement with our observational result.
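The LS11-style prescription referred to above can be sketched as follows. Assuming that the cumulative merger fraction scales as f_m(≥ µ) ∝ µ^s, so that the differential distribution of mass ratios is dN/dµ ∝ µ^(s−1) (our reading of the prescription, which is not spelled out explicitly in this version of the text), the mean mass ratio in a given µ range follows by direct integration; with s = −0.95 this reproduces the quoted µ_MM ≃ 0.48 and µ_mm ≃ 0.15, and combining them with the merger numbers of Sect. 6.1 recovers the ∼28% mass growth:

```python
from scipy.integrate import quad

def mean_mu(s, mu_min, mu_max):
    """Mean merger mass ratio assuming f_m(>= mu) ~ mu**s,
    i.e. a differential distribution dN/dmu ~ mu**(s - 1)."""
    num = quad(lambda mu: mu * mu**(s - 1.0), mu_min, mu_max)[0]
    den = quad(lambda mu: mu**(s - 1.0), mu_min, mu_max)[0]
    return num / den

s = -0.95
mu_MM = mean_mu(s, 0.25, 1.0)    # ~0.48 for major mergers (mu >= 1/4)
mu_mm = mean_mu(s, 0.10, 0.25)   # ~0.15 for minor mergers (1/10 <= mu < 1/4)

N_MM, N_mm = 0.43, 0.46          # mergers per ETG since z = 1 (Sect. 6.1)
delta_M = mu_MM * N_MM + mu_mm * N_mm   # ~0.28, i.e. ~28% mass growth
```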
On the other hand, several authors have studied luminosity functions and clustering to constrain the evolution of luminous red galaxies (LRGs) with redshift, finding that LRGs have increased their mass by δM⋆ ∼ 30%−50% through merging since z = 1 (Brown et al. 2007, 2008; Cool et al. 2008). Their results are similar to our direct estimation, but we must take this agreement with caution. Tal et al. (2012) show that LRGs have a lack of major companions, excluding major mergers as an important growth channel (see also De Propris et al. 2010). Typically LRGs have L ≳ 3L*, and a low impact of major mergers in these systems is indeed expected from cosmological models, where the contribution of major mergers to galaxy mass assembly peaks at ∼ M*⋆ (Khochfar & Silk 2009; Hopkins et al. 2010a; Cattaneo et al. 2011). Thus, even if the values of δM⋆ are similar for LRGs and our massive galaxies, they could have a different origin. A better approach to estimate indirectly the impact of mergers on mass growth is to study the evolution of massive red galaxies at a fixed number density: because they are red (i.e., they have low star formation), their mass is expected to grow only by merging. Following this approach, van Dokkum et al. (2010) and Brammer et al. (2011) estimate δM⋆(1) ∼ 40% for massive galaxies in the NEWFIRM Medium-Band Survey 9 (van Dokkum et al. 2009). Their result represents the integral over all possible µ values, so in combination with our δM⋆(1) ∼ 30% for µ ≥ 1/10, this would imply that (i) µ ≥ 1/10 mergers dominate the mass assembly of massive galaxies since z = 1 and (ii) there is room for an extra δM⋆ ∼ 10% growth due to very minor mergers (µ < 1/10).
Size growth due to mergers since z = 1
Since the first results of Daddi et al. (2005) and Trujillo et al. (2006), several authors have studied in detail the size evolution of massive ETGs with cosmic time. It is now well established that ETGs were smaller, on average, than their local counterparts of a given stellar mass by a factor of two at z = 1 and of four at z = 2 (Sect. 1). The size evolution is usually parametrized as δr_e ≡ r_e(z)/r_e(0) = (1 + z)^−α, where r_e is the effective radius of the galaxy. Despite all observational efforts, the value of α is still under debate, spanning the range α = 0.9 − 1.5 (see references in Sect. 1), as is its dependence on stellar mass (massive galaxies evolve faster, Williams et al. 2010, or not, Damjanov et al. 2011). In the following we assume as fiducial α value the value reported by van der Wel et al. (2008) from a combination of several analyses, α = 1.2 (δr_e = 0.43 at z = 1), with an uncertainty of 0.2 (dot-dashed line in Fig. 9). Two main effects could explain the size evolution of ETGs: the progenitor bias and genuine size growth. The number density of massive (red) galaxies at z = 2 is ∼ 15 − 30% of that in the local universe (e.g., Arnouts et al. 2007; Pérez-González et al. 2008; Williams et al. 2010; Ilbert et al. 2010), and those ETGs that have reached the red sequence at later times are systematically more extended than those which did so at high redshift. This effect is called the progenitor bias and mimics a size growth (see van der Wel et al. 2009a; Valentinuzzi et al. 2010a,b; Cassata et al. 2011, for further details). Both van der Wel et al. (2009a) and Saglia et al. (2010) estimate that the progenitor bias of massive ETGs accounts for a factor 1.25 (δr_e = 0.8) of the size evolution since z = 1, and we assume this value in the following.
Regarding size growth, several authors have suggested that compact galaxies at z ∼ 2 are the cores of present-day massive ellipticals, and that they increase their size by adding stellar mass in the outskirts of the compact high-redshift galaxy (Bezanson et al. 2009; Hopkins et al. 2009a; van Dokkum et al. 2010). The fact that the more compact galaxies at z ∼ 0.1 (Trujillo et al. 2009) and z ∼ 1 (Martinez-Manso et al. 2011) have similarly young ages (∼ 1 − 2 Gyr), combined with their paucity in the local universe (Trujillo et al. 2009; Taylor et al. 2010; Cassata et al. 2011), also supports the size evolution of these systems along cosmic time. Mergers, especially minor ones, have been proposed to explain this evolution (e.g., Naab et al. 2009; Bezanson et al. 2009; Hopkins et al. 2010b). Adiabatic expansion due to AGN activity (Fan et al. 2010) or stellar evolution (Damjanov et al. 2009) could also play a role. Thanks to our direct measurements of the minor and major merger rates of massive ETGs, we are able to explore the contribution of mergers to the size growth of these galaxies in the last ∼ 8 Gyr.
Theory and simulations show that equal-mass mergers between two spheroidal galaxies are less effective in increasing the size of ETGs than a major/minor merger with a less dense galaxy, either spiral or spheroidal. In the first case the increase in size is proportional to the accreted mass, r_e ∝ M⋆^β with β = 1, while in the second case the index is higher and spans a wide range, β ∼ 1.5 − 2.5 (e.g., Bezanson et al. 2009; Hopkins et al. 2010b). In our case, we estimate β for a given µ from the relation between the initial (r_e,i) and the final effective radius (r_e,f) of an ETG in a merger process derived by Fan et al. (2010) (our Eq. (26)), where ǫ is the slope of the stellar mass–size relation. In their work, Damjanov et al. (2011) find ǫ = 0.51 for early-type galaxies in the range 0.2 < z < 2 (see also Williams et al. 2010; Newman et al. 2012), similar to the ǫ = 0.56 from Shen et al. (2003) in SDSS or the ǫ ∼ 0.5 expected from the Faber-Jackson relation (Faber & Jackson 1976). However, not all the observed mergers are between two early-type galaxies. Using colour as a proxy for the morphology of our companion galaxies, we find that 65% of the mergers are "dry" (red − red), while 35% are "mixed" (red − blue), for both major and minor mergers (Sect. 4.3). In the mixed case we use ǫ = 0.27, a value estimated from the data of Shen et al. (2003) for late-type galaxies in our mass range of interest. Finally, we obtain the β for a given µ as 0.65β_dry + 0.35β_mixed. Using the average mass ratios µ_MM and µ_mm of Sect. 6.2, we find β_MM = 1.30 for major mergers and β_mm = 1.65 for minor ones. Following Eq. (24), we trace the mass growth of massive ETGs with redshift for both minor, δM⋆,mm(z), and major mergers, δM⋆,MM(z). Then, we translate these mass growths into a size growth with the previous values of β. Finally, we estimate the contribution of mergers to the total size evolution since z = 1.
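Since the explicit form of Eq. (26) is not reproduced in this version of the text, the sketch below assumes the standard dissipationless, parabolic-orbit relation r_e,f / r_e,i = (1 + µ)² / (1 + µ^(2−ǫ)); with the ǫ values and dry/mixed fractions given above, this assumption reproduces the quoted β_MM = 1.30 and β_mm = 1.65, as well as the merger-driven size evolution δr_e^m(1) ≃ 0.70 discussed below:

```python
import numpy as np

def beta(mu, eps):
    """Effective size-growth exponent r_e ~ M**beta for a merger of mass
    ratio mu, assuming r_ef/r_ei = (1 + mu)**2 / (1 + mu**(2 - eps))."""
    growth = (1.0 + mu)**2 / (1.0 + mu**(2.0 - eps))
    return np.log(growth) / np.log(1.0 + mu)

def beta_mix(mu, f_dry=0.65, eps_dry=0.51, eps_mixed=0.27):
    # 65% dry (red-red) and 35% mixed (red-blue) mergers, Sect. 4.3
    return f_dry * beta(mu, eps_dry) + (1.0 - f_dry) * beta(mu, eps_mixed)

beta_MM = beta_mix(0.48)   # ~1.30 for the average major merger
beta_mm = beta_mix(0.15)   # ~1.65 for the average minor merger

# size evolution due to mergers, normalized to z = 0
dM_MM, dM_mm = 0.48 * 0.43, 0.15 * 0.46          # mass growth since z = 1
dr_mergers = 1.0 / ((1.0 + dM_MM)**beta_MM * (1.0 + dM_mm)**beta_mm)   # ~0.70

# fraction of the observed evolution, delta_r_e = (1+z)**-1.2, explained
dr_obs = 2.0**-1.2                               # ~0.43 at z = 1
explained = (1.0 - dr_mergers) / (1.0 - dr_obs)  # ~0.5, cf. the ~55% in the text
```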
This model yields a size evolution due to mergers of δr m e (1) = 0.70 (α = 0.52±0.12,solid line in Fig. 9).This implies that observed major and minor mergers can explain ∆r e ∼ 55% of the size evolution in massive ETGs since z ∼ 1.In the following, all quoted ∆r e have a typical ∼ 15% uncertainty due to the errors in the merger rates and in the observed size evolution.
We take into account the progenitor bias by applying a linear function of redshift (corresponding to the factor of 1.25 at z = 1 quoted above) to the previous size growth due to mergers (dotted line in Fig. 9). We obtain δr_e^PB(1) = 0.56 (α = 0.84 ± 0.12), thus explaining ∆r_e ∼ 75% of the size evolution with our current observations. We note that this value is similar to the δr_e(1) = 0.63 estimated by the simple model of van der Wel et al. (2009a), which only includes the progenitor bias and a merger prescription from cosmological simulations. The remaining ∆r_e ∼ 25% of the evolution should be explained by other physical processes (e.g., very minor mergers with µ ≤ 1/10 or adiabatic expansion) or by systematic errors in the measurements (e.g., shorter merger time scales or an overestimation of the size evolution). We explore these processes/systematics in the following.
• Very minor mergers (µ < 1/10). Cosmological simulations find that µ ≥ 1/10 mergers are not the most common ones, the merger history of massive galaxies being dominated by µ < 1/10 mergers (Shankar et al. 2010; Jiménez et al. 2011; Oser et al. 2012). However, in these simulations the mass accretion is dominated by µ ≥ 1/10 events because of the low mass of the very minor companions. As we show in Sect. 6.2, a mass growth of δM⋆ ∼ 10% due to very minor mergers since z = 1 is compatible with the observed mass assembly of massive galaxies (van Dokkum et al. 2010; Brammer et al. 2011). This translates into N_vm ∼ 4 very minor mergers per massive ETG since z ∼ 1 (we assume that very minor mergers have 1/100 ≤ µ < 1/10 and estimate µ_vm ∼ 0.025 = 1/40 following the prescriptions in Sect. 6.2). Note that we can arbitrarily increase the number of very minor mergers by lowering µ_vm, but not their contribution to the mass growth, which is fixed. We checked that the conclusions in this section are independent of µ_vm.
We estimate β_vm = 1.85 for very minor mergers, thus obtaining an extra size growth of ∆r_e ∼ 20% due to mergers, with δr_e^m(1) = 0.58 and α = 0.78 ± 0.12 when all µ values are taken into account. Hence, mergers since z ∼ 1 may explain ∆r_e ∼ 75% of the observed size evolution, rising to ∆r_e ∼ 95%, with δr_e^PB(1) = 0.47 and α = 1.1, when the progenitor bias is taken into account (dashed line in Fig. 9). In this picture, nearly half of the evolution due to mergers is related to minor (µ < 1/4) events. This result reinforces our conclusion that mergers are the main contributors to the size evolution of massive ETGs since z = 1, but observational estimates of the very minor merger rate (µ < 1/10) are needed to constrain their role. As a first attempt, Mármol-Queraltó et al. (2012) find that the merger fraction of massive galaxies at z ≲ 1 for µ ≥ 1/100 satellites is two times that of µ ≥ 1/10 satellites. That suggests N_vm ∼ 1, and an additional contribution from even smaller satellites (µ < 1/100) could be possible.
• Adiabatic expansion. This will occur in a relaxed system that is losing mass. As mass is lost the potential becomes shallower, so the system expands into a new stable equilibrium. The amount by which a system expands depends both on the ejected mass (M⋆,eject) and on the time scale of the process (T_eject). Fan et al. (2008, 2010) suggest adiabatic expansion due to quasar activity and/or supernova winds as an alternative process to explain the size growth of massive early-types, especially at z ≳ 1. These processes occur on very short time scales after the formation of the spheroid (T_eject ≲ 0.5 Gyr, Ragone-Figueroa & Granato 2011), so we expect those galaxies with stellar populations older than ∼ 1 Gyr to be already located on the local stellar mass-size relation. This is not supported by observations, in which galaxies older than 3 Gyr at z ∼ 1 are still smaller than the local ones (see Trujillo et al. 2011, for details). Interestingly, minor mergers with gas-rich satellites (∼ 35% of our observed mergers) could trigger recent star formation and AGN activity in massive early-types (e.g., Kaviraj et al. 2009; Fernández-Ontiveros et al. 2011), therefore favouring some degree of adiabatic expansion and adding an extra size growth to the merging process. Devoted N-body simulations are needed to explore this topic in detail. It is also to be noted that the mass loss due to stellar winds from the passive evolution of stellar populations in a galaxy may lead to adiabatic expansion (Damjanov et al. 2009). Ragone-Figueroa & Granato (2011) show that a typical massive galaxy is able to eject enough mass through galactic winds to increase its size by a factor of 1.2 in ∼ 8 Gyr. This result assumes that the potential of the galaxy is not able to retain any of the ejected mass, so this indicates that at most ∆r_e ∼ 20% of the size evolution since z = 1 could be explained by stellar winds.
• Overestimation of the size evolution. Results from Martinez-Manso et al. (2011) suggest that the photometric stellar masses of Trujillo et al. (2007) are an order of magnitude higher than those estimated from velocity dispersion measurements. This does not erase the size evolution, but makes it smaller (massive galaxies are more extended than less massive ones at a given redshift, e.g., Damjanov et al. 2011). Taking dynamical masses (M_dyn) as a reference instead of photometric ones, van der Wel et al. (2008) find α = 0.98 ± 0.11, smaller than the α = 1.20 found by the same authors from photometric studies. The same trend is found by Saglia et al. (2010) from the ESO Distant Cluster Survey 10 (EDisCS; White et al. 2005) galaxies: α ∼ 0.65 from dynamical masses vs α ∼ 0.85 from stellar masses after the progenitor bias is accounted for. Finally, Newman et al. (2010) find α ∼ 0.75 for M_dyn ≥ 10^11 M⊙ galaxies. Assuming these smaller α values from dynamical masses, major and minor mergers account for ∆r_e ∼ 65% of the size evolution, and all the evolution is explained when the progenitor bias and very minor mergers are taken into account. It is also possible that the extended, low-surface brightness envelopes of high-z galaxies were missed and their r_e were correspondingly underestimated. However, deep observations in the near infrared (optical rest-frame) from space (Szomoru et al. 2010; Cassata et al. 2010) and from ground-based facilities with adaptive optics (Carrasco et al. 2010) confirm the compactness of 1 ≲ z ≲ 2.5 massive galaxies.
On the other hand, values of α higher than our fiducial α = 1.2 ± 0.2 are also present in the literature. For example, Buitrago et al. (2008) find α = 1.51 ± 0.04 at z < 2, while Damjanov et al. (2011) find α = 1.62 ± 0.34. Assuming α = 1.5, µ ≥ 1/10 mergers would explain ∆r_e ∼ 45% of the size evolution, while the addition of very minor mergers would increase the role of mergers up to ∆r_e ∼ 65%. In that case, the contribution of other processes would increase to ∆r_e ∼ 20%. Thus, even if the size evolution is faster than our fiducial α value, mergers would still be the dominant mechanism.
• Merger time scale. The main uncertainty in our merger rates is the assumed merger time scale, which typically has a factor of two uncertainty in its normalization (e.g., Hopkins et al. 2010c). The T_MM from Kitzbichler & White (2008) are typically longer than others in the literature (e.g., Patton & Atfield 2008; Lin et al. 2010) or similar to those from N-body/hydrodynamical simulations (Lotz et al. 2010b,a). Thus, we expect, if anything, a shorter T_MM, which implies a larger role of mergers in the size evolution (i.e., higher merger rates and numbers of mergers since z ∼ 1). In fact, a T_MM shorter by a factor of 1.5 is enough to explain the observed mass growth and size evolution without the contribution of very minor mergers.
• Uncertainties in β. Equation (26), which we used to derive the values of β in our model, assumes parabolic orbits and dissipationless (gas-free) mergers. Regarding the first assumption, Khochfar & Burkert (2006) and Wetzel & White (2010) show that most dark matter halos in cosmological simulations merge on parabolic orbits. On the other hand, we find that ∼ 65% of our mergers are dry, but the other ∼ 35% are mixed and an extra dissipative component is present. In these cases simulations suggest that β should be higher than derived from Eq. (26), even reaching β ∼ 2.5 (Hopkins et al. 2010b). This does not change our conclusions because it translates into a higher size evolution due to mergers. In addition, Oser et al. (2012) show that the size growth expected from Eq. (26) is in good agreement with the growth measured in hydrodynamical simulations set in a cosmological context.
In summary, our results suggest that merging is the main contributor to the size evolution of massive ETGs, accounting for ∆r e ∼50-75% of the observed evolution since z ∼ 1.Nearly half of the evolution due to mergers is related to minor (µ < 1/4) events.
Additional constraints from velocity dispersion evolution
In addition to their mass and size, the velocity dispersion (σ⋆) of ETGs evolves with redshift as δσ⋆ = (1 + z)^a. We assume a = 0.4 ± 0.1 in the following (δσ⋆ = 1.32 at z = 1, Cenarro & Trujillo 2009; Cappellari et al. 2009; Saglia et al. 2010; van de Sande et al. 2011). When we apply our simple model using the prescriptions of Fan et al. (2010) for the evolution of σ⋆ in merger events, µ ≥ 1/10 mergers are only able to explain 15% of the observed evolution, δσ⋆^m(1) = 1.05. Hopkins et al. (2009b) propose another prescription to trace the evolution of σ⋆ from the evolution in size, one that takes into account the dark matter component of the galaxy through a parameter γ ∼ 1 for M⋆ ∼ 10^11 M⊙ galaxies. Using this prescription, the evolution of σ⋆ is faster, but we still explain only ∼ 35% of the observed evolution, δσ⋆^m(1) = 1.10 (solid line in Fig. 10). The addition of very minor mergers increases the contribution to ∼ 50%, δσ⋆^m(1) = 1.16 (dotted line in Fig. 10). However, a small change of σ⋆ due to mergers is consistent with the picture from Bernardi et al. (2011). They study in detail the colour−M⋆ and colour−σ⋆ relations of ETGs in SDSS, finding that M⋆ ∼ 2 × 10^11 M⊙ is a transition mass (M_tran) at which the curvature of the colour−M⋆ relation changes, while no deviation is present in the colour−σ⋆ relation. These authors claim that (dry) mergers are the main process in the evolution of those ETGs with M⋆ ≳ M_tran ∼ M*⋆ (see also van der Wel et al. 2009b; López-Sanjuan et al. 2010b; Oesch et al. 2010; Eliche-Moral et al. 2010; Méndez-Abreu et al. 2012, for a similar conclusion), as our results also suggest.
One missing ingredient in the model described in this section is the progenitor bias: new early-types which have appeared since z ∼ 1 are not only more extended than previous ones, but also have a lower velocity dispersion (van der Wel et al. 2009a). Thus, the progenitor bias also mimics a decrease of σ⋆ with cosmic time. The results of Saglia et al. (2010) suggest that a factor of 1.1 in the σ⋆ evolution is due to the progenitor bias. Applying this extra evolution as a factor 1 + 0.1z to that from mergers (very minor ones included), we are able to explain 90% of the increase in velocity dispersion, δσ⋆^PB(1) = 1.28 (dashed line in Fig. 10). Including this, our model is compatible with the observed evolution and suggests that mergers and the progenitor bias have a similar contribution to the σ⋆ evolution, somewhat different from the dominant role of mergers in the size evolution.
Additional constraints from scaling relations
Nipoti et al. (2009) point out that the tightness of the local scaling laws of ETGs poses an important limit on the growth of these systems by (dry) merging (see also Nair et al. 2011). Using these local scaling laws, they conclude that typical present-day massive ETGs could not have assembled more than ∼ 45% of their present stellar mass, nor grown by more than a factor of ∼1.9 in size, via merging. Even if uncertain, we can extrapolate our observed trends up to z ∼ 2 and compare the inferred mass and size growths with these upper limits provided by Nipoti et al. (2009). We obtain a mass growth by merging (including very minor mergers) of δM⋆ ∼ 60% since z = 2, which implies that δM⋆(2) M⋆(2)/M⋆(0) ∼ 40% of the total mass at z = 0 was assembled by merging since z = 2. The size grows by a factor of ∼ 2 due to this merging in the same cosmic time lapse. Therefore, merging seems compatible with the upper limits in mass assembly and size growth imposed by the tightness of the local scaling laws, although a more complex model is needed to fully explore how these laws evolve given our observed merger history.
Comparison with previous studies
In a previous work, Trujillo et al. (2011) use a model similar to ours to estimate the number of mergers needed since z ∼ 1 to explain the size evolution if merging is the only process involved. They conclude that N_m = 5.0 ± 1.64 mergers with µ = 1/3 are needed. This number of mergers is higher than our direct measurement by a factor of five, N_m = 0.89 ± 0.14 (our average merger with µ ≥ 1/10 has µ ∼ 1/3). If we take into account our estimated very minor mergers, our numbers are N_m ∼ 5 and µ ∼ 1/10. For this value of µ they infer N_m = 11.20 ± 3.66, still higher than our estimation. The model of Trujillo et al. (2011) also estimates the mass growth due to mergers since z ∼ 1, which is a factor of 3 − 5, again higher than any observational estimation or constraint (a factor of ∼ 1.4, Sect. 6.2). Newman et al. (2012) study the size evolution of red galaxies in the CANDELS 11 (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey, Grogin et al. 2011) survey and the role of mergers with µ ≥ 1/10 in this size growth at 0.4 < z < 2. Applying a model similar to ours to translate their observed total merger fraction into a size growth, they conclude that merging can reasonably account for the size evolution observed at z ≲ 1 once the progenitor bias is taken into account, while at z ≳ 1 mergers are not common enough. Although they only have one merger fraction data point at 0.4 < z < 1 (Fig. 6), their conclusion is consistent with our more detailed study at z ≲ 1.
Expectations from cosmological models
Several theoretical efforts have been conducted to explain the size evolution of ETGs.In this section we compare the predicted size evolution from cosmological models with our best model, which suggests that ∆r e ∼ 75% of the evolution in size is due to mergers, ∆r e ∼ 20% to the progenitor bias and ∆r e ∼ 5% to other processes (e.g., adiabatic expansion).
The model of Hopkins et al. (2010b) predicts that, since z = 2, unequal-mass mergers explain ∆r_e ∼ 60% of the observed size evolution, in agreement with our result. However, these authors only track the evolution of compact galaxies since z = 2 and do not take into account the possible contribution of the progenitor bias, although they argue that it should impact their predictions. In fact, they predict that ∼ 45% of the size evolution since z = 1 is due to unequal-mass mergers, another ∼ 45% is accounted for by systematics in the size measurements, and the extra ∼ 10% is due to adiabatic expansion, probably reflecting their biased population.
The model of Shankar et al. (2011) predicts δr e ∼ 0.7 for massive galaxies, in agreement with our observational derivation due only to mergers (see also Khochfar & Silk 2006).Interestingly, the evolution increases to δr e ∼ 0.5 when individual galaxies are tracked along their evolution without any stellar mass selection.They predict that ∼ 40% of the mass accreted by merging in massive galaxies is due to major mergers with µ ≥ 1/3.Our best model implies that ∼ 47% of mass and size growth is due to major mergers with µ ≥ 1/3.The qualitative agreement between both works is remarkable.
On the other hand, Oser et al. (2012) find α = 1.12 (α = 1.44 for passive galaxies) and a ∼ 0.4 by re-simulating at high resolution a set of 40 galaxies with M⋆ ≥ 6.3×10^10 M⊙ in a cosmological context. They find that the number-averaged merger has µ = 1/16, while the mass-averaged merger has µ⋆ = 1/5. From our model we estimate µ⋆ = 1/3 and µ = 1/13. We check that µ⋆ is independent of the assumed number of very minor mergers N_vm, while we can vary µ arbitrarily by changing N_vm. Thus, only the comparison with µ⋆ is representative. The predicted value is lower than our measurement, but they find a larger role of major mergers at M⋆ ∼ 10^11.1 M⊙ (see also Khochfar & Silk 2009; Hopkins et al. 2010a; Cattaneo et al. 2011), with µ⋆ ∼ 1/3 and a large dispersion due to the low statistics (see their Fig. 6). Future simulations with larger numbers of galaxies are needed to explore this issue in more detail.
In summary, our result that merging is the main process involved in size evolution mostly agrees with simulations, but more observational and theoretical studies are needed to understand the remaining discrepancies.
Conclusions
We have measured the minor and major merger fraction and rate of massive (M ⋆ ≥ 10 11 M ⊙ ) galaxies from close pairs in the COSMOS field, and explored the role of mergers in the mass growth and size evolution of massive ETGs since z ∼ 1.
We find that the merger fraction and merger rate of massive galaxies evolve as a power law, (1 + z)^n, with no or only small evolution of the minor merger rate, n_mm ∼ 0, in contrast with the increase of the major merger rate, n_MM = 1.4. The total (major + minor) merger rate evolves more slowly than the major one, with n_m = 0.6. When splitting galaxies according to their HST morphology, the minor merger fraction for ETGs is higher by a factor of three than that for spirals, and both are nearly constant with redshift. The fraction of major mergers for massive spirals evolves faster (n_MM^spiral ∼ 4) than for ETGs (n_MM^ET = 1.8). Our results imply that massive ETGs have undergone 0.89 mergers (0.43 major and 0.46 minor) since z ∼ 1, leading to a mass growth of ∼ 30% (75%/25% due to major/minor mergers). We use a simple model to translate the estimated mass growth due to mergers into an effective radius growth. With this model we find that µ ≥ 1/10 mergers can explain ∼ 55% of the observed size evolution since z ∼ 1. We infer that another ∼ 20% is due to the progenitor bias (the new ETGs that have appeared since z = 1 are more extended than their high-z counterparts) and we estimate that very minor mergers (µ < 1/10) could contribute an additional ∼ 20%. The remaining ∼ 5% could come from adiabatic expansion due to stellar winds or from observational effects. In addition, our picture also reproduces the mass growth and the velocity dispersion evolution of these massive ETGs since z ∼ 1.
We conclude from these results, and after exploring all the possible uncertainties in our model, that merging is the main contributor to the size evolution of massive ETGs at z 1, accounting for ∼ 50 − 75% of that evolution in the last 8 Gyr.Nearly half of the evolution due to mergers is related to minor (µ < 1/4) events.
Studies in larger sky areas are needed to improve the statistics, especially at lower redshifts when the cosmological volume probed is still the main source of uncertainty.We point out that a local measurement of the minor merger fraction and rate is needed to better constrain its evolution with redshift.Understanding the dependency of the minor merger rate on stellar mass, as well as extending observations to the very minor merger regime (µ ≤ 1/10) will be important to further improve this picture.In addition, extending the observational work at z > 1, when the massive red sequence seems to emerge, will be necessary to probe the early epochs of mass assembly.
Fig. 3 .
Fig. 3. Merger fraction of M e B ≤ −20 galaxies as a function of luminosity difference in the B−band, µ B , at z ∈ [0.2, 0.65) (top) and z ∈ [0.65, 0.95) (bottom) for 10h −1 kpc ≤ r p < 30h −1 kpc close pairs.Diamonds are from present work in COSMOS (photometric catalogue) while dots are from VVDS-Deep (LS11, spectroscopic catalogue).The black solid lines in both panels show the maximum and minimum merger fractions, including 1σ f m errors, when we split the COSMOS field in VVDS-Deep size subfields (∼0.5 deg 2 ).
Fig. 4 .
Fig. 4. Major (dots), minor (squares) and total (major + minor, triangles) merger fraction of M⋆ ≥ 10^11 M⊙ galaxies as a function of redshift in the COSMOS field. Dashed, solid and dot-dashed curves are the least-squares best fit of a power-law function, f_m ∝ (1 + z)^m, to the major (m_MM = 1.4), minor (m_mm = −0.1) and total (m_m = 0.6) merger fraction data, respectively.
Fig. 5 .
Fig. 5. Major (µ ≥ 1/4) merger fraction for M⋆ ≥ 10^11 M⊙ galaxies from r_p^max ∼ 30h⁻¹ kpc close pairs. The dots are from the present work, triangles are from de Ravel et al. (2011) in the zCOSMOS 10k sample, squares from Xu et al. (2012) in the COSMOS field, pentagons from Bluck et al. (2009) in the Palomar/DEEP2 survey, and diamonds from Bundy et al. (2009) in the GOODS fields. The stars are from Bridge et al. (2010) in the CFHTLS by morphological criteria. Some points are slightly shifted when needed to avoid overlap. The dashed line is the least-squares best fit of a power-law function, f_MM ∝ (1 + z)^1.4, to the major merger fraction data in the present work.
Fig. 6 .
Fig. 6.Total (major + minor, µ ≥ 1/10) merger fraction as a function of redshift.Triangles are from the present work in the COSMOS field for M ⋆ ≥ 10 11 M ⊙ galaxies, diamonds are from Mármol-Queraltó et al. (2012) for massive galaxies, squares are from Newman et al. (2012) for M ⋆ ≥ 5 × 10 10 M ⊙ galaxies, crosses are from Jogee et al. (2009) for M ⋆ ≥ 2.5 × 10 10 M ⊙ galaxies by morphological criteria, and dots are from Lotz et al. (2011) for M ⋆ ≥ 10 10 M ⊙ galaxies by morphological criteria.The solid line is the least-squares best fit of a power-law function, f m ∝ (1+z) 0.6 , to the total merger fraction data in the present work.
Table 4 .Fig. 8 .
Fig. 8. Major (upper) and minor (lower) merger rate of M ⋆ ≥ 10 11 M ⊙ ETGs as a function of redshift.Filled symbols are from the present work, while open ones are from LS11 in VVDS-Deep for red galaxies.Dashed lines are the best fit to the ETGs data, while dotted lines are the fits for the global population.
Fig. 9 .
Fig. 9. Effective radius normalized to its local value, δr_e, as a function of redshift. The dot-dashed line is the observational evolution from van der Wel et al. (2008), δr_e = (1 + z)^−1.2. The solid line is the evolution due to major and minor mergers (µ ≥ 1/10) expected from our results. The shaded areas in both cases mark the 68% confidence interval. The dotted line is the expected evolution when the progenitor bias (PB) is taken into account. The dashed line is the expected evolution when PB and very minor mergers (µ < 1/10) are included (see text for details).
Fig. 10 .
Fig. 10. Velocity dispersion normalized to its local value, δσ⋆, as a function of redshift. The dot-dashed line is the observed evolution, δσ⋆ = (1 + z)^0.4. The shaded area marks the 68% confidence interval. The solid line is the evolution due to major and minor mergers (µ ≥ 1/10) expected from our results. The dotted line is the expected evolution when very minor mergers (µ < 1/10) are taken into account. The dashed line is the expected evolution when very minor mergers and the progenitor bias (PB) are included (see text for details).
Table 3 .
Minor and major merger fraction of spiral galaxies with M ⋆ ≥ 10 11 M ⊙ | 2012-11-05T10:31:40.000Z | 2012-02-21T00:00:00.000 | {
"year": 2012,
"sha1": "3b494d6697b1ebef1ec2c1cffdbb492609077663",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2012/12/aa19085-12.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "3b494d6697b1ebef1ec2c1cffdbb492609077663",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119108074 | pes2o/s2orc | v3-fos-license | Flavor Mediation Delivers Natural SUSY
If supersymmetry (SUSY) solves the hierarchy problem, then naturalness considerations coupled with recent LHC bounds require non-trivial superpartner flavor structures. Such"Natural SUSY"models exhibit a large mass hierarchy between scalars of the third and first two generations as well as degeneracy (or alignment) among the first two generations. In this work, we show how this specific beyond the standard model (SM) flavor structure can be tied directly to SM flavor via"Flavor Mediation". The SM contains an anomaly-free SU(3) flavor symmetry, broken only by Yukawa couplings. By gauging this flavor symmetry in addition to SM gauge symmetries, we can mediate SUSY breaking via (Higgsed) gauge mediation. This automatically delivers a natural SUSY spectrum. Third-generation scalar masses are suppressed due to the dominant breaking of the flavor gauge symmetry in the top direction. More subtly, the first-two-generation scalars remain highly degenerate due to a custodial U(2) symmetry, where the SU(2) factor arises because SU(3) is rank two. This custodial symmetry is broken only at order (m_c/m_t)^2. SUSY gauge coupling unification predictions are preserved, since no new charged matter is introduced, the SM gauge structure is unaltered, and the flavor symmetry treats all matter multiplets equally. Moreover, the uniqueness of the anomaly-free SU(3) flavor group makes possible a number of concrete predictions for the superpartner spectrum.
symmetry, of which the SU(2) subgroup is gauged, which shields first-two-generation scalars from the hierarchy in first-two-generation Yukawas. In this way, flavor mediation can deliver all the desired features of natural SUSY. A complete model of flavor mediation is shown in Fig. 1, where both the flavor gauge group and SM gauge groups participate in (Higgsed) gauge mediation to the supersymmetric standard model (SSM). Since the SM Higgs multiplets do not carry flavor quantum numbers, they are naturally lighter than the flavored sfermions, as needed to minimize fine-tuning. Since SM gauginos only get their masses from SM gauge mediation, they are also typically light. After accounting for renormalization group (RG) effects, the gluinos end up being a bit heavier than the third-generation squarks, perfect for a natural SUSY spectrum.
The uniqueness of the anomaly-free SU(3) F leads to a number of interesting predictions. First, because the flavor gauge group is broken by SM Yukawa matrices, the hierarchy between the third-generation squarks and the first-and second-generation squarks cannot be made arbitrarily large. Thus, a discovery of light stops and sbottoms would yield an upper bound for the masses of the remaining squarks. Second, in order for SU(3) F to be anomaly-free, both leptons and quarks must be charged under the flavor symmetry, so one expects light staus and third-generation sneutrinos to be accessible at LHC energies. Third, while generic natural SUSY models do not require a right-handed sbottom in the spectrum, flavor mediation treats right-handed stops and sbottoms democratically, with the only splitting arising from SM gauge mediation and RG effects. Finally, flavor mediation preserves many of the desired features of SUSY grand unified theories (GUTs). Since the anomaly-free SU(3) F does not require any new SM-charged chiral matter and treats all matter multiplets equally, SUSY gauge coupling unification is preserved. Assuming gauge mediation is the dominant source for gaugino masses, then SM gaugino masses also unify.
The outline for the remainder of this paper is as follows. In Sec. II, we introduce the anomaly-free SU(3) F flavor gauge group and describe how it is broken. In Sec. III, we describe the physics of flavor mediation, and how the massive flavor gauge bosons contribute to the sfermion spectra via Higgsed gauge mediation. We outline a complete model in Sec. IV, detailing the generation of gaugino masses in Sec. IV A, the Higgs sector in Sec. IV B, and typical sparticle spectra in Sec. IV C. We verify in Sec. V that flavor bounds are satisfied in this model. We sketch the key predictions of our model in Sec. VI and conclude in Sec. VII.
A. Motivating SU(3)F
A wide range of flavor symmetries have been proposed to explain some or all features of the quark and lepton mass matrices and mixings. As our goal is to link SM flavor structures with a natural SUSY soft mass spectrum, we must employ some additional guiding (or at least simplifying) principles to select a preferred gauged flavor symmetry.
First, the flavor symmetry should act equally on all three generations. There are SUSY models employing additional gauged U(1), SU(2), or U(2) flavor symmetries that can achieve a natural SUSY spectrum [5,6,8,10,11,13,14,26]. However, it is somewhat ad hoc to treat the first two generations separately from the third without some underlying reason. By treating all generations on an equal footing, one can more easily obtain the SM mass and mixing structure.
Second, the flavor symmetry should act equally on lepton and quark multiplets in order to allow for a GUT structure in the ultraviolet (UV). This is further motivation to treat all three generations equally, since U(1), SU (2), or U(2) flavor symmetries make it difficult to explain the near maximal neutrino mixing between the second and third generations. Of course, one can always imagine split GUT multiplets where quarks and leptons do not live in the same GUT multiplet, but we choose not to consider that possibility.
Third and finally, we wish to avoid adding additional chiral matter with SM gauge charges in the infrared (IR), in order to maintain SUSY gauge coupling unification in the UV. Many candidate flavor symmetries, particularly ones involving U(1)s, are anomalous, requiring the addition of further matter with SM charges to cancel the anomalies. We can avoid extra charged multiplets if the flavor symmetry has no SM gauge anomalies. In essence, this corresponds to taking the MSSM in the limit of zero Yukawas and gauging any anomaly-free symmetry compatible with GUT structures. 2 Fortunately, there is a well-known flavor symmetry satisfying all of these requirements: an SU(3) F flavor symmetry under which all SSM matter supermultiplets are fundamentals. The charge assignments of SSM supermultiplets under SU(3) F are shown in Table I. In fact, this is the maximal group involving SU(3) factors that is anomaly-free and treats all matter multiplets equally. 3 A number of successful flavor models employing this symmetry have been constructed [43][44][45][46][47][48][49][50][51][52][53][54], including models that allow for GUT multiplets in the UV.
B. Yukawa Couplings
In order to generate SM Yukawa couplings, the flavor group must be spontaneously broken. Clearly, the Yukawas must transform as a 3 ⊗ 3 under the SU(3) F symmetry. They could arise as the sum of pairs of fundamental representations, in which case the SM Yukawa coupling will be generated through a dimension-six operator and will depend on the square of vacuum expectation values (vevs). Alternatively, the Yukawas could arise from a dimensionfive operator through a symmetric or antisymmetric two-index representation.
As we will see, the effects of flavor mediation are enhanced by having a large hierarchy between the flavor boson masses, which is desirable to achieve a natural SUSY spectrum. With pairs of fundamentals, the flavor boson masses will be parametrically proportional to the square root of the SM Yukawas. With a two-index representation, the flavor boson masses will be linear in the SM Yukawas. Therefore, in order to generate the largest possible mass hierarchy, we will employ two-index representations. This also comes with the advantage of requiring fewer messenger superfields in order to generate the Yukawas. A single antisymmetric representation does not have sufficient rank to generate the SM Yukawas, so from now on we will consider symmetric representations.
First considering just quark superfields, we add two symmetric 6 representations of SU(3) F to the SSM, which we denote as S u,d . The cubic SU(3) F anomalies vanish if we also add right-handed neutrino superfields N c transforming as a 3. The SM quark Yukawas can then be generated through higher-dimensional superpotential operators coupling S u,d to the quark and Higgs superfields (with flavor indices suppressed). These operators can arise by integrating out heavy vector-like Higgs pairs in the 6 ⊕ 6̄ of SU(3) F with mass M S ; unification is preserved in the usual way if these Higgses live in complete multiplets of SU(5) GUT . In particular, all SM quark Yukawas may be generated by integrating out fields transforming as (5, 6) ⊕ (5̄, 6̄) ⊕ (5, 6̄) ⊕ (5̄, 6) under SU(5) GUT × SU(3) F . This suggests the scale M S should be high enough to avoid inducing Landau poles in the SM gauge couplings below the unification scale, and an O(1) top Yukawa then implies that the flavor symmetry breaking scale should also be high. One can also introduce additional anomaly-free representations of SU(3) F in order to generate charged lepton Yukawas and neutrino masses. However, without committing to a particular model of neutrino mass generation, the less constrained leptonic flavor structure means that it is difficult to extract general features of the role lepton masses and mixings play in the breaking of SU(3) F , and subsequently the flavor-mediated soft masses. Henceforth, we will assume that the dominant breaking of the flavor symmetry lies in the generation of the quark Yukawas. We will see in Sec. III B that this is in fact a good approximation, and so for now it suffices to only consider the quark sector.
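The dimension-five operators themselves are not reproduced in this extraction. A schematic form consistent with the description above would be (this is a sketch rather than the paper's own equation; the couplings λ_u, λ_d and the precise SU(3) F index contractions are placeholders)

W \supset \frac{\lambda_u}{M_S}\,(S_u)_{ij}\, Q_i\, u^c_j\, H_u \;+\; \frac{\lambda_d}{M_S}\,(S_d)_{ij}\, Q_i\, d^c_j\, H_d ,

so that the physical Yukawa matrices scale as y_{u,d} ∼ λ_{u,d} ⟨S_{u,d}⟩/M_S, i.e., linearly in the flavon vevs as stated above for a two-index representation.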
C. Flavor Breaking
To have a realistic model, the vevs of S u,d must generate the SM quark flavor structures. As the goal of this work is to connect the SM fermion flavor structure to sfermion flavor structure, and numerous patterns of SU(3) F breaking already exist in the literature (see e.g. [51]), we simply treat S u,d as flavor spurions which obtain vevs in a supersymmetric manner along a D-flat direction. After performing an SU(3) F rotation, we can take ⟨S u ⟩ to be diagonal; the CKM mixing matrix V CKM then appears in ⟨S d ⟩ due to the initial misalignment of the two vevs. In this vacuum, the gauge symmetry is fully broken and the SSM quark Yukawa couplings are generated. There is some freedom in choosing the relative scales of the up and down flavor symmetry breaking vevs. In particular, M Su need not equal M Sd , and there is freedom in the Yukawas through the ratio of the up- and down-type Higgs vevs tan β ≡ ⟨H u ⟩/⟨H d ⟩. We can parameterize both freedoms with a single parameter α; varying α then leads to different flavor boson spectra. For small α the breaking of SU(3) F → SU(2) F → ∅ is determined dominantly by the hierarchies in the up-quark mass matrix, whereas for large α the down-quark mass matrix dominates the breaking pattern. In Fig. 2, we plot how the relative spectrum of flavor boson masses varies as a function of α. To achieve the largest hierarchy in the sfermion masses, α ≲ 100 is preferred for the generation of a natural SUSY soft spectrum. For this reason, and to limit the free parameters, we choose to set α = 1 for the remainder of this paper, simply noting that other values are also valid.
To understand the flavor boson hierarchies, consider only the flavor breaking from S u . To second order in v u2 , the spectrum of flavor bosons is determined by the vevs and the flavor gauge coupling g F . The hierarchy v u3 ≫ v u2 leads to a flavor gauge boson hierarchy with five heavy gauge bosons (corresponding roughly to the generators of SU(3) F /SU(2) F ) and three light gauge bosons (the remaining SU(2) F generators).
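As an illustration of this five-plus-three structure, the following numerical sketch (not taken from the paper) diagonalizes the flavor gauge boson mass matrix for a symmetric-tensor vev. It assumes the flavon transforms as S → U S U^T, ignores the overall normalization convention, and uses purely illustrative values for g_F and the vev entries:

import numpy as np

# Gell-Mann matrices; SU(3) generators are T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

g_F = 1.0                          # flavor gauge coupling (illustrative)
S = np.diag([0.0, 0.007, 1.0])     # <S_u>/v_F ~ diag(0, m_c/m_t, 1); v_u1 set to zero (illustrative)

def shift(a):
    # variation of the symmetric-tensor vev under the a-th generator: T^a S + S (T^a)^T
    return T[a] @ S + S @ T[a].T

# Gauge boson mass-squared matrix, up to an overall normalization convention
M2 = np.array([[2 * g_F**2 * np.real(np.trace(shift(a).conj().T @ shift(b)))
                for b in range(8)] for a in range(8)])

masses = np.sqrt(np.linalg.eigvalsh(M2))
print(np.round(masses, 4))   # three light states of order g_F*v_u2, five heavy states of order g_F*v_u3

Running this produces three masses of order g_F v_u2 and five of order g_F v_u3, reproducing the stated pattern.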
(Figure 2 caption) The hierarchy is greatest in the limit of small α, where the breaking is dominated by the up-quark Yukawas. Furthermore, the two lightest bosons are more degenerate in the small α limit since m c /m t < m s /m b . Thus, α ≲ 100 is favored for a natural SUSY spectrum.

For simplicity, we will use the notation v u3 ≡ v F , since the dominant breaking is by the top Yukawa coupling. Including the down-type Yukawas and fixing α = 1, the other parameters are now fixed by the measured fermion masses. The relative spectrum of flavor bosons is then set, with the overall mass scale depending only on g F v F . Listing the gauge boson masses in descending order, one can again clearly see the approximate symmetry breaking pattern SU(3) F → SU(2) F followed by SU(2) F → ∅, where the SU(2) F is broken a further two orders of magnitude below. In the next section, we demonstrate why such a structure is highly appealing for the generation of a natural SUSY spectrum.
III. SFERMION SPECTRUM
In flavor mediation, the flavor gauge symmetry is used to mediate SUSY breaking to the SSM. Because the flavor gauge group is spontaneously broken, this leads to a model of Higgsed gauge mediation [36,42], which depends both on the messenger scale and the gauge breaking scale. Following the usual procedure for gauge-mediated scenarios, we add a pair of messenger superfields Φ/Φ c in a vector-like representation of SU(3) F and charged under a messenger-parity symmetry. We couple these fields to a SUSY-breaking spurion X through a superpotential coupling, with the assumed vev ⟨X⟩ = M + θ²F. At two loops, any scalars charged under SU(3) F will obtain soft masses, in particular the squarks and sleptons of the SSM.
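The superpotential coupling itself is not reproduced in this extraction; the standard minimal form assumed in gauge-mediated models (with λ_X a hypothetical O(1) coupling) is

W \supset \lambda_X\, X\, \Phi\, \Phi^c , \qquad \langle X \rangle = M + \theta^2 F ,

so that the messenger fermions acquire a supersymmetric mass of order M while the messenger scalars are split by the SUSY-breaking F-term, which is the usual starting point for the two-loop soft masses discussed below.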
A. Mass Eigenstates
The calculation of two-loop soft mass contributions in Higgsed gauge mediation was first performed in Ref. [42]. A more transparent derivation using analytic continuation into superspace was shown in Ref. [36]. This method gives results valid to lowest order in F/M , for any mediating gauge group with arbitrary breaking pattern, and yields compact expressions which we now review.
(Figure 3 caption) The mass-squared is plotted linearly in the left panel, and logarithmically in the right. There is an overall suppression which occurs whenever the flavor-breaking scale becomes comparable to the messenger scale. The suppression of the third-generation sfermion soft masses relative to the first two generations is also clear, arising from the fact that the dominant flavor symmetry breaking lies in the top-quark direction. The first two generations are highly degenerate, as expected from the SU(3)F → SU(2)F → ∅ breaking structure.
Once the gauge symmetry is broken, we can simultaneously diagonalize the (SUSY) gauge boson mass matrix and the corresponding group generators, such that an eigenstate with mass M a V is associated with the generator T a . After performing this diagonalization, the resulting two-loop expression for the sfermion soft masses is a sum over gauge boson eigenstates in which the {ij} flavor indices are symmetrized, α F ≡ g 2 F /4π is the fine structure constant for the flavor gauge group, and C(Φ) is the Dynkin index of the messenger superfield representation. The suppression factor f (δ a ) tracks the difference between Higgsed gauge mediation and ordinary gauge mediation. When δ = 0, f (0) = 1 reproduces the results of ordinary gauge mediation, while for large δ, f (δ) ≈ 2(log δ − 1)/δ. Applying these results to the flavor group and breaking pattern described in Sec. II, the soft mass-squared for the i-th flavor of squark or slepton in a given representation of SU(3) F is proportional to the quadratic Casimir C 2 (q) of the quark superfield q and to a generation-dependent suppression factor γ i (δ) arising from the breaking of the mediating gauge group. In Fig. 3, we plot the suppression of the sfermion soft masses compared to the case where the gauge group is unbroken, for a range of values of δ. In the limit where the gauge group is largely unbroken, the suppression of all three soft masses is negligible. Whenever the scale at which the gauge group is broken becomes comparable to or greater than the messenger mass scale, the suppression becomes quite significant.

(Figure 4 caption) In the left panel, the mass of the first-two-generation sfermions is shown relative to the third generation. This splitting becomes important whenever the messenger mass scale approaches the scale at which SU(3)F is broken to an approximate SU(2)F . The splitting endures even when the scale of SU(2)F breaking is greater than the messenger mass scale, however it never exceeds a ratio of ∼ 100. In the right panel, we show the relative mass-squared splitting of the first-two-generation squarks, δ̃ 12 = (m̃ 2 2 − m̃ 2 1 )/m̃ 2 , where m̃ 2 = (m̃ 2 2 + m̃ 2 1 )/2. This splitting remains below 2 × 10 −5 regardless of the relative scales of flavor breaking and messenger masses.
In Fig. 4, we plot the various splittings between sfermion generations. When the gauge group is largely unbroken, the third generation is almost degenerate with the first two. As the breaking of SU(3) F → SU(2) F becomes significant, a large splitting between the third- and first-two-generation sfermion masses emerges. This splitting remains even when the breaking of the remaining SU(2) F becomes comparable to the messenger scale, although it never exceeds a ratio of ∼ 100. The splitting between the first-two-generation sfermions is also shown, which first appears at order (m c /m t ) 2 . Flavor-changing neutral current processes are very sensitive to this splitting, and hence it is important for a successful model that this splitting is small. Regardless of the relative hierarchy between the flavor-breaking scale and the messenger scale, the squared-mass splitting never exceeds a fractional value of 2 × 10 −5 .
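The generation dependence of this suppression can be illustrated with a crude numerical sketch. It is not the full expression of Ref. [36]: it uses only the limiting behaviors quoted above (f(0) = 1 and f(δ) ≈ 2(log δ − 1)/δ at large δ), assumes the three light SU(2) F bosons sit far below the messenger scale while the five heavy bosons share a common δ, and keeps only the SU(3) group-theory weights for a fundamental:

import numpy as np

# Sum of (T^a T^a)_ii over the three light SU(2)_F generators and over the
# five heavy SU(3)_F/SU(2)_F generators, for a fundamental of SU(3)_F;
# together they add up to the quadratic Casimir C_2(3) = 4/3 for each generation i
light = np.array([3/4, 3/4, 0.0])
heavy = np.array([7/12, 7/12, 4/3])

def f_large(delta):
    # large-delta asymptote quoted in the text; f(0) = 1 in ordinary gauge mediation
    return 2 * (np.log(delta) - 1) / delta

delta_heavy = 30.0    # assumed to mean (heavy flavor boson mass / messenger mass)^2; illustrative value
msq = 1.0 * light + f_large(delta_heavy) * heavy   # proportional to the soft mass-squared per generation
print(np.sqrt(msq / msq[2]))   # mass of each generation relative to the third

Because the third generation couples only to the heavy bosons broken in the top direction, its soft mass is suppressed, while the first two generations stay heavy and degenerate, in line with the behavior shown in Figs. 3 and 4.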
The soft mass spectrum generated via SU(3) F flavor mediation is extremely attractive from the perspective of natural SUSY. Relative mass splittings of O(10) between the third and first two generations can be easily accommodated, while mass splittings between the first two generations remain very small. All three generations are treated on an equal footing, and the mass splittings arise solely due to the flavor-symmetry breaking implied by the quark masses. The fact that only an SU(3), rather than U(3), symmetry is anomaly free means that the symmetry breaking structure is SU(3) F → SU(2) F → ∅, protecting the first two generations from mass splittings. Were it possible to gauge a U(3) symmetry, the remaining U(1) in the symmetry breaking structure U(3) F → U(2) F → U(1) F → ∅ would have generated additional splittings between the first two generations.
B. Mixing Angles
An interesting feature of flavor mediation is that phases and mixing angles from the SM CKM matrix are transmitted to the scalar soft mass matrix via the gauged flavor symmetry. In the model presented, phases and mixing angles in the symmetry breaking vevs generate mixings in the flavor boson mass matrix. These then show up in the scalar soft mass matrices through Eq. (7), since gauge boson mass eigenstates have off-diagonal elements in the generator basis.
The full soft mass matrix is a non-trivial function of the vevs and angles in Eq. (2), and includes a dependence on the function f (δ a ). To gain insight into the magnitude of these terms, we will perform a perturbative calculation valid when the mixing angles are relatively small and the SU(2) F gauge bosons are much lighter than the heavy SU(3) F /SU(2) F gauge bosons. For the case at hand, it is a good approximation to work in the limit of vanishing v u1 , v d1 . To first order in the mixing angles and zeroth order in v u2 /√(v 2 u3 + v 2 d3 ) and v d2 /√(v 2 u3 + v 2 d3 ), the magnitudes of the resulting soft mass matrix elements are given by Eq. (12), in which the approximate degeneracy of the first-two-generation scalar masses has been taken into account in the diagonal components.
There are a number of interesting features of Eq. (12). While V 12 is the largest mixing in the CKM matrix by a considerable amount, terms proportional to V 12 are absent from our expansion since they come suppressed by a factor which is always small. In the limit v d3 ≪ v u3 , the off-diagonal components are suppressed and the soft masses become approximately diagonal. In the limit v d3 ≫ v u3 , where the flavor-symmetry breaking is driven dominantly by the down sector, the off-diagonal components become large and correspond to a rotation determined by the CKM matrix. Comparing Eq. (12) to numerical calculations we find good agreement, with the dominant m̃ 2 23 component agreeing to within 1%. The subdominant m̃ 2 13 component agrees to within 10%, where the increased discrepancy comes from higher orders in the larger mixing angles. The m̃ 2 12 terms are never relevant. In this work, we are focusing on the case where the flavor symmetry breaking is dominated by the up-type vevs, hence v d3 ≪ v u3 and the off-diagonal elements in the scalar soft mass matrix are small. As we will discuss further in Sec. V, much larger off-diagonal elements arise by rotating to the fermion mass eigenbasis, so that the off-diagonal terms in the gauge interaction basis are essentially irrelevant for experimental bounds.
Revisiting lepton flavor structures, the generation of lepton masses and mixings should arise through vevs of SU(3) F charged fields, in analogy with the quarks. However, as long as these vevs are sufficiently subdominant to S u , the squark and slepton soft masses will remain predominantly determined by the structure of the up-type quark Yukawas. In particular, large mixing angles in the neutrino sector will feed through to off-diagonal elements in the squark and slepton soft mass-squared matrices with a suppression of order v 2 3 /v 2 u3 . Thus for the case at hand, it suffices to consider the quark flavor structure alone when considering the scalar soft mass spectrum, and one is not forced to commit to a particular model of neutrino masses. Of course, one could imagine cases where v 3 ≳ v u3 and the leptonic mass and mixing matrices would be relevant.
A. Gaugino Masses
Flavor mediation is an elegant way to generate hierarchical soft masses in the squark sector. Even if messenger fields are uncharged under SM gauge groups, flavor mediation also contributes to SM gaugino masses at the three-loop level, as we show in App. A. While this naturally generates a hierarchy between gauginos and the first-two-generation squarks, the hierarchy is too big, as the gauginos are typically lighter than the third-generation squarks. Thus, to obtain phenomenologically acceptable gluino masses, we must augment the flavor-mediated soft masses with an additional source of SUSY breaking.
A number of different mediation mechanisms could raise the gaugino masses. Gaugino mediation [55,56] is in some sense the most minimal option, since additional contributions to the stop masses are suppressed relative to gaugino masses, keeping stops within naturalness bounds. Anomaly mediation or gravity mediation could also be employed, though this would raise the additional question of the coincidence in scales between these soft mass contributions and those from flavor mediation.
Our preferred option is depicted in Fig. 1, where gauge mediation via SM gauge groups is the additional source of gaugino masses. From a UV perspective, this situation might even be expected. If one takes the point of view that all SM gauge groups-including gauged flavor groups-should be treated on an equal footing, then one would expect all gauge groups to transmit SUSY breaking to the SSM. In this way, the gauged flavor symmetry should really be considered as an additional SM gauge group, which happens to be broken well above the weak scale. We will explore the spectrum of flavor mediation plus SM gauge mediation in Sec. IV C.
B. The Higgs Sector
Gauge mediation by itself does not resolve issues in the Higgs sector such as the µ problem or generating a B µ term of the correct size. On the other hand, because the Higgs bosons are uncharged under SU(3) F and SU(3) C , they do not feel the effects of flavor mediation or color gauge mediation. Thus, the Higgses are naturally lighter than the squarks, lessening fine-tuning tensions in the Higgs sector. That said, recent hints of a 125 GeV Higgs exacerbate fine-tuning tensions in the MSSM, especially in gauge mediation [57]. In our framework, the details of the Higgs sector are largely irrelevant for understanding flavor mediation, but we wish to give an existence proof that a 125 GeV Higgs is compatible with our flavor structure without fine tuning. This is most easily accomplished in the so-called S-MSSM [58], which is a singlet extension of the MSSM that differs from the NMSSM by the presence of explicit mass terms for the higgsinos and singlinos. Of course, this superpotential does not address the µ problem. It does, however, alleviate fine tuning because the coupling λ generates a quartic coupling which is relevant at small tan β, which is also favored by trying to minimize α in Eq. (3). Such an extension of the Higgs sector can accommodate a Higgs mass in the vicinity of 125 GeV, and may arise dynamically as in Ref. [59]. Because we are agnostic as to the structure of the Higgs sector, we will not be able to give any precise predictions for the masses of electroweak-ino or singlino states. Following the philosophy of natural SUSY, though, we do expect µ H to be in the neighborhood of 200 GeV to minimize cancellations between SUSY and SUSY-breaking Higgs mass contributions.
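For orientation, the singlet-extended superpotential referred to here is conventionally written (in the notation usually attributed to Ref. [58], up to possible tadpole terms; this sketch is not reproduced from the present paper) as

W \supset \lambda\, S\, H_u H_d \;+\; \mu_H\, H_u H_d \;+\; \tfrac{1}{2}\,\mu_S\, S^2 ,

where the explicit µ_H and µ_S mass terms for the higgsinos and singlino distinguish it from the NMSSM, and the λ coupling supplies the additional tree-level quartic that helps accommodate a 125 GeV Higgs at small tan β.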
C. Example Spectra
The addition of SM gauge mediation to flavor mediation introduces a number of new features. One appealing byproduct is that sfermion mass matrices remain approximately diagonal and the first two generations remain highly degenerate, ameliorating flavor constraints. Gauge mediation does introduce new representation-dependent splittings, such that sleptons and squarks are no longer degenerate, nor are left- and right-handed sfermions. Though the third-generation sfermions do receive additional contributions from SM gauge mediation, they remain much lighter than the first two generations. Since three-loop flavor-mediated gaugino masses are small, the gaugino mass spectrum retains the highly predictive pattern of gauge mediation.
A less obvious feature arises when we consider RG running. In the simplest examples of gauge-mediated spectra, the stops are typically as heavy as the gluino, and even when RG running drives a stop lighter than the gluino, often only one stop runs lighter and still remains close in mass to the gluino. This is undesirable for a natural SUSY spectrum, which typically requires the gluino to be heavier than both stops. When SM gauge mediation is combined with flavor mediation, though, the stops are not only lighter than the first-two-generation squarks at the messenger scale, but RG effects due to the heavy first two generations drive the stop masses down at two loops. This is in fact the dominant effect that sets an upper bound on the mass of the first-two-generation squarks in natural SUSY models, since if they are too heavy, they drive the stops tachyonic [4,60]. In our context, this RG effect typically drives both stops lighter than the gluinos, as desired for natural SUSY. Although a complete study of RG behavior in the full parameter space of flavor mediation plus SM gauge mediation is beyond the scope of this work, we will study two benchmark points with an acceptable natural SUSY spectrum of states. For simplicity, we assume that the same SUSY-breaking spurion X = M + θ²F appears in both flavor mediation and SM gauge mediation, and we fix the ratio F/M = 100 TeV. We assume that the messengers consist of one vector-like pair of SU(3) F fundamentals as in Sec. III, and one vector-like pair of SU(5) GUT fundamentals for SM gauge mediation. Of course one could imagine loosening these assumptions, but it is quite satisfying that the same SUSY-breaking spurion can be used for both kinds of gauge mediation.

(Figure 5 caption, fragment) ... Table II. Because we are agnostic as to the Higgs sector, we only show the expected mass range for the charginos and neutralinos, guided by naturalness considerations and gauge-mediated expectations. As expected, the first-two-generation sfermions are largely decoupled, while the third-generation sfermions and gauginos are light. These spectra are chosen to lie in close proximity to recent collider bounds [22][23][24][25], representing optimistic scenarios for LHC observability.
In Table II, we show the UV parameters for a low-scale (M = 10 8 GeV) and a high-scale (M = 10 14 GeV) benchmark. These benchmarks have been chosen to be representative of the spectra possible within models of flavor mediation. Both benchmarks are minimal gauge-mediated scenarios with additional contributions from flavor mediation, modeled by adding universal soft masses m̃ F1,2 and m̃ F3 to the first-two- and third-generation sfermions, respectively. Using SuSpect 2.41 [61], we RG evolve the UV parameters to achieve the IR spectra depicted in Fig. 5. These parameters have been cross-checked with SOFTSUSY 3.3.0 [62] and are found to agree within O(2%) for most parameters.
The Higgs and electroweak spectra are highly dependent on the specifics of the SUSY Higgs sector, which may involve extra singlets or gauge groups, so we do not specify Higgs sector parameters in order to avoid committing to any specific model. As the neutralinos and charginos are not charged under the flavor symmetry, we can still estimate their masses. As stated in Sec. IV B, one expects the higgsinos to be around 100-300 GeV from naturalness considerations. The bino and wino obtain masses from gauge-mediated SUSY breaking, which gives a characteristic 1 : 2 : 6 mass ratio between the bino, wino, and gluino. For an 800-900 GeV gluino mass, as is the case for both benchmark models, we expect the bino and wino Majorana masses to also be 100-300 GeV, so it is safe to assume that most (if not all) of the electroweak-inos have masses in this range.
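A quick numerical check of the quoted ratio (a back-of-the-envelope sketch, not a spectrum calculation):

for m_gluino in (800.0, 900.0):                        # benchmark gluino masses in GeV
    m_bino, m_wino = m_gluino / 6.0, m_gluino / 3.0    # 1 : 2 : 6 gauge-mediated pattern
    print(f"gluino {m_gluino:.0f} GeV -> bino ~ {m_bino:.0f} GeV, wino ~ {m_wino:.0f} GeV")

which indeed lands the bino and wino in the 100-300 GeV window quoted above.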
As shown in Fig. 5, the combination of flavor mediation and SM gauge mediation delivers a natural SUSY spectrum. The first-and second-generation sfermions do not evolve very much from the messenger scale and remain heavy. The third-generation sfermions are light because of the suppressed flavor mediation, and exhibit mass splittings from SM gauge mediation between squarks and sleptons as well as between left-and right-handed modes. For this choice of parameters, the gluino is around a factor of 2 heavier than the lightest stops, which is desirable for natural SUSY and is a result of the RG flow from the messenger scale.
V. FLAVOR CONSTRAINTS
Flavor mediation leads to non-universal interactions and soft masses, requiring a careful consideration of constraints from precision flavor measurements. There are two separate sources of flavor-changing neutral currents (FCNCs) in these theories: a set of tree-level contributions coming from the massive SU(3) F gauge bosons, and the familiar set of one-loop box diagrams involving squarks and gluinos proportional to the off-diagonal terms in the soft mass matrices.
We will focus only on quark flavor violation below. We will not give detailed consideration to leptonic FCNCs, in large part because the leptonic mixing angles are not strongly constrained and depend somewhat on the details of the neutrino sector; thus there are no irreducible limits. However, we note that the leptonic sector enjoys the same U(2) sflavor symmetry as the quark sector, which guarantees that contributions to the most strongly constrained leptonic FCNCs (such as µ → eγ) will be small even in the presence of large mixing angles in the lepton mass matrix.
A. Tree-Level Flavor Boson Exchange
Integrating out the massive SU(3) F gauge bosons gives rise to a variety of four-fermion dimension-6 operators suppressed by the flavor boson masses, with coefficients proportional to g 2 F /M 2 Va . These dimension-6 operators lead to various possible sources of concern. The first comes from flavor-conserving operators that mix fermions of different species, e.g., operators of the form (q̄ γ µ q)( f̄ γ µ f ). In general, these operators are poorly constrained, and limits require only that M Va /g F ≳ few TeV. These dimension-6 operators preserve baryon and lepton number, so there is no additional source of proton decay. Also note that there are none of the usual flavor-conserving operators that contribute to precision electroweak observables, such as |h † D µ h| 2 and (h † D µ h) q̄ γ µ q.
The strongest constraint arises from the 1 ↔ 2 dimension-6 four-quark operator contributing to K 0 −K̄ 0 mixing. The latest limits on this operator require Λ ≳ 10 4 TeV assuming no complex phase [63]. In the presence of an O(1) CP-violating phase, the bound strengthens to Λ ≳ 1.4 × 10 5 TeV. In terms of the model parameters here, this implies v u2 ≳ 10 4 TeV (1.4 × 10 5 TeV) without (with) O(1) CP violation. Assuming these limits on v u2 are satisfied, the rates for all other 1 ↔ 3 and 2 ↔ 3 FCNC processes are well below their experimental limits.
B. Gluino-Squark Box Diagrams
Beyond the flavor gauge bosons, the principal constraints arise from off-diagonal sfermion soft masses in the basis where both the fermion masses and the gluino couplings are diagonal. These off-diagonal sfermion soft masses lead to one-loop contributions to FCNC processes dominated by box diagrams involving squark and gluino exchange. There are two such contributions to these off-diagonal soft mass terms. One contribution appears in the gauge interaction eigenbasis from i ≠ j terms in Eq. (7) (whose parametric behavior is given by the second term in Eq. (12)). The second contribution arises upon going to the fermion mass eigenbasis by diagonalizing the Yukawa textures in Eq. (2). The gauge interaction eigenbasis contributions are typically much smaller than the contributions coming from diagonalization of the fermion mass matrix, since the former are suppressed by v 2 d3 /v 2 u3 . Thus, to leading order, we may focus on the off-diagonal terms arising solely from rotating to the fermion mass eigenbasis.
For the choice of SU(3) F vevs in Eq. (2), the up-type fermion mass matrix is already diagonal in the gauge interaction eigenbasis. The down-type mass matrix may be diagonalized by the transformation M d → V CKM M d V T CKM . SUSY relates this rotation on the fermions to a rotation on the corresponding sfermions. As such, the up-type scalar mass matrices m̃ 2 uL , m̃ 2 uR are unchanged, while the down-type scalar mass matrices are rotated by V CKM , generating both shifted diagonal entries and new off-diagonal entries in the fermion mass eigenbasis. The leading contribution to K 0 −K̄ 0 mixing arises from a gluino-squark box diagram with two insertions of the off-diagonal soft masses, with additional suppressions coming from the heaviness of first- and second-generation scalars. However, this contribution is far below current limits, since it is proportional to (sin θ c × δ̃ 12 ) 2 ≲ 10 −11 , where θ c is the Cabibbo angle and δ̃ 12 ≡ (m̃ 2 2 − m̃ 2 1 )/((m̃ 2 2 + m̃ 2 1 )/2) is shown in the right panel of Fig. 4. A more important contribution arises from a box diagram with four insertions of the off-diagonal soft masses, at order (m̃ 2 13 m̃ 2 23 ) 2 . Although this diagram is suppressed by additional small mixings, the momentum running in the loop diagram is dominated by lighter third-generation squarks. Constraints on such processes were studied in Ref. [60]. Here we may directly compute the dominant gluino-mediated contributions to meson mixing appropriate to a U(2)-symmetric hierarchical soft spectrum, which we obtain from Ref. [64] using the techniques of Ref. [60]. This gives contributions to meson mixing at the scale of the soft masses; limits are computed by RG evolving these contributions to the relevant IR scale as in Ref. [63].
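A one-line numerical check of the quoted suppression (using an approximate Cabibbo angle and the maximal splitting from Fig. 4; the loop-function prefactor of the box diagram is not included):

sin_theta_c = 0.22      # approximate Cabibbo angle
delta_12 = 2e-5         # maximal fractional first-two-generation mass-squared splitting (Fig. 4)
print((sin_theta_c * delta_12) ** 2)   # ~2e-11, i.e. of order 10^-11 as stated in the text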
We find that the sbottom-mediated FCNC contributions to the real part of K 0 −K̄ 0 mixing are an order of magnitude below experimental constraints. Similarly, contributions to the imaginary part of K 0 −K̄ 0 mixing from the SM CKM phase are below experimental limits. However, if there is an O(1) new CP-violating phase in the SUSY-breaking soft parameters, the sbottom-mediated contributions to the imaginary part of K 0 −K̄ 0 mixing would be an order of magnitude larger than allowed by measurements of ε K . In flavor mediation, if the flavor fields S u , S d do not have significant F -term expectation values and the SSM gaugino masses are universal, the only potentially relevant new source of CP violation comes from a relative phase between gaugino masses and the B µ parameter. This phase does not enter into the leading gluino-mediated contributions to ∆F = 2 FCNCs, and its contribution to, e.g., electric dipole moments may be rendered negligible if B µ = 0 at the scale of SUSY breaking. Both real and imaginary contributions to analogous up-type squark processes such as D 0 −D̄ 0 mixing are much less tightly constrained and have no large irreducible contributions in flavor mediation.
For the immediate experimental horizon, the most interesting constraints are from 2 ↔ 3 FCNC processes such as B-meson mixing, which arise with two insertions of the off-diagonal soft masses and are suppressed only by third-generation scalar masses. We find that contributions to B 0 d −B̄ 0 d mixing are of the same order as current limits, and remain consistent with experimental limits for reasonable values of m̃ b and m g̃ . Representative limits are shown in Fig. 6. However, the natural prediction for 2 ↔ 3 FCNC processes is within reach of future super-B factories provided m̃ b ≲ 1 TeV [65]. Similar contributions to the rare decay B → X s γ [66] do not provide significant constraints unless tan β is particularly large.
Finally, the above discussion assumed α ∼ 1. Had we chosen α ≳ 100, the gauged flavor symmetry breaking would be driven dominantly by S d . In this case the down-type quarks and squarks would be flavor-aligned and the misalignment would instead be between up-type quarks and squarks. This scenario would also be interesting to consider, as constraints are generally weaker for up-type rather than down-type flavor violation.
VI. PREDICTIONS
The degeneracy of the first-two-generation scalars arises as a non-trivial consequence of the mechanism of Higgsed gauge mediation combined with the structure of the SM Yukawas. Hence, this degeneracy is a key prediction of the model. This has considerable aesthetic appeal when compared to models that impose the degeneracy by hand. In models that invoke additional SU(2) or U(1) symmetries, the degeneracy between the first-two-generation scalars is an input rather than an output of the model. In flavor mediation, the custodial U(2) symmetry is tied directly to the peculiar structure of the up-type Yukawa matrix with hierarchical entries. By requiring that the gauged flavor symmetry is anomaly-free, treats all three flavors equally, and is consistent with GUT structures in the UV, we were led to an SU(3) F flavor symmetry under which all matter multiplets are fundamentals. These restrictions lead to a rather predictive structure for the sfermion soft masses. In particular, the entire third generation, including leptonic multiplets and the right-handed sbottom, should be energetically accessible at the 8 TeV LHC. This is to be contrasted with the most extreme natural SUSY spectrum possible in which the stops and left-handed sbottom are the only flavored scalars within LHC reach. In our case, the requirement of anomaly cancellation thus leads to the additional prediction of a light right-handed sbottom, staus, and a sneutrino. If SM gauge groups also mediate SUSY breaking, then the degeneracy of third-generation scalars is lifted, and the soft masses are dependent on the SM representation. Majorana gaugino masses follow the predictive 1 : 2 : 6 gauge-mediated pattern. Gluinos cannot be much more massive than the left-handed stop and should be close to current gluino mass bounds, within immediate LHC reach. As gaugino masses are generated via gauge mediation, the winos and bino should be close to the weak scale. As the Higgs multiplets are uncharged under SU(3) F and SU(3) C they are lighter than the stops. Consequently, all charginos and neutralinos should be close to the weak scale, and lighter than the stops and gluinos. To summarize, predictions for the LHC include potential observation of the electroweak sector and gauginos of the SSM, along with the entire third generation of sfermions.
As in many theories of natural SUSY, contributions to flavor-changing neutral currents are largest among B mesons. While the contributions are consistent with current limits for a wide range of soft masses, they are likely to be measurable at future B factories, particularly via contributions to B 0 d −B̄ 0 d mixing and the decays B → X s γ. Moderate new sources of CP violation may also be first apparent in the B meson system while remaining consistent with limits coming from lighter mesons. If the sbottom and stop are heavy enough that their direct production rate is suppressed, these indirect measurements may provide the first indication of new physics.
Cosmological considerations are similar to those for standard gauge mediation. One expects the gravitino to be the lightest SUSY particle, and the gravitino phenomenology should remain much the same as in standard gauge mediated scenarios. For low mediation scales, the gravitino poses no cosmological hazard. For higher mediation scales, the thermal abundance of the gravitino may be too large, but all of the standard mechanisms to alleviate this challenge in gauge mediation may be employed, such as multiple SUSY breaking [67][68][69] or gravitino decoupling [70,71].
VII. CONCLUSION
Independent of the motivations from natural SUSY, flavor mediation is a compelling mechanism to link SUSY flavor to SM flavor. By gauging the maximum non-Abelian non-anomalous flavor group in the SM (consistent with GUT structures in the UV), we achieve a viable SUSY flavor structure that is neither flavor blind nor minimally flavor violating. The fact that a custodial U(2) flavor symmetry is present in these models is a non-trivial feature of SU(3) F with large top Yukawa breaking. The fact that such models are consistent with (and perhaps hinted by) LHC data highlights the importance of flavor structures for physics beyond the SM.
A number of open questions and directions for future investigation remain. This framework is quite restrictive, with the flavor gauge group and matter superfield representations determined almost entirely by anomaly cancellation, flavor universality, and the desire for gauge coupling unification. Hence only a few parameters are added in addition to a standard gauge-mediated model, and a full exploration of the parameter space and RG behavior is possible, along with consideration of the residual fine-tunings present in the model.
In this work, we have remained agnostic about the details of leptonic flavor, which is a reasonable assumption if the size of spontaneous flavor symmetry breaking in the lepton sector is small compared to the quark sector. Nonetheless, it is also possible to construct viable models of flavor mediation with large spontaneous flavor breaking in the lepton sector, which may lead to additional novel features in the slepton spectrum related to neutrino mixing angles. We have also remained agnostic about the Higgs sector, and it would be interesting to study whether flavor mediation might provide a different perspective on the µ and B µ problems. Another area relevant to current LHC SUSY searches is the impact of a gauged flavor symmetry on the size and structure of possible R-parity-violating operators in the SSM.
Looking towards the UV, it is necessary for the flavor symmetry breaking scale to be close to the messenger masses. This could be simply a coincidence, or it could be the case that the SUSY-breaking and messenger sectors also determine SM flavor structures. With this in mind, it would be interesting to embed all of these features within a single dynamical SUSY breaking sector, such as an ISS sector [72]. More ambitiously, it would be intriguing if all four gauge groups (or corresponding dual descriptions) could be unified into a single group in the UV. After all, even though the gauged flavor symmetry is broken at scales well above the electroweak scale, it is fundamentally no different from the other SM gauge groups. Finally, it may be possible to generalize features of flavor mediation beyond the specific model detailed here. For example, one could cast the physics in terms of correlation functions for Higgsed gauge mediation [73], much as was achieved in the "general gauge mediation" program [74]. Such studies would highlight the robustness of flavor mediation as a source of SUSY breaking.

Appendix A

If there are no messengers charged under the SM, then S(µ) does not contain a θ² component, and we need only calculate the θ² component of the two-loop Z r (µ) to extract the three-loop gaugino mass. For an SSM superfield charged under the mediating gauged flavor symmetry SU(3) F , we can use the full two-loop effective Kähler potential calculated in Refs. [36,77]. Taking appropriate derivatives yields an expression in which ij are (symmetrized) flavor indices, α F is the flavor group fine structure constant, and C F (Φ) is the Dynkin index of the messenger superfields. The suppression factor is h(δ) = 2[(δ − 4)δ log δ − Ω(δ)]/[(4 − δ)²δ], with Ω(δ) defined in Eq. (9). To find the total contribution to gluino soft masses, we must sum over contributions from all SSM quark superfields. The superfields Q, U c , D c are all in the same representation of SU(3) F and hence contribute in the same way to gluino masses. Summing over these fields we generate an additional factor of 4. We must also sum over the individual flavors, which contribute differing amounts. The combined result again involves the suppression factor h(δ a ), whose small-δ behavior generates large logarithms. These logarithms can be thought of as arising due to running effects between the scale of the messenger masses M and the scale at which the flavor symmetry is broken M V . Whenever these logarithms become large, this signals a breakdown of perturbation theory and of the validity of the two-loop calculation. In this regime, these large logs must be resummed appropriately. This is a nontrivial exercise, so here we limit α F and δ such that these terms are small and perturbation theory still holds.

Back in Fig. 3, we compared the flavor-mediated squark soft masses to the soft masses that would be generated in the limit of unbroken flavor mediation. We now compare the three-loop gaugino masses to the same parameter, in order to ascertain whether these three-loop contributions could be great enough to generate an acceptable gaugino spectrum. The ratio of squared masses can then be formed.
Once the flavor breaking parameters δ a have been chosen in order to generate a particular squark soft mass hierarchy, the relative soft masses of the gauginos can only be adjusted by varying α F and the matter representations. In Fig. 7 we show the relative scales of the three-loop gluino masses in comparison to the sfermion masses, choosing the most optimistic value α F = 1. Even by maximizing the three-loop contributions from flavor mediation, it would be difficult to achieve the desired gluino and stop masses for a natural SUSY spectrum using flavor mediation alone. One could choose larger representations of flavor messengers to raise the relative gaugino masses, however it is clear that the three-loop contributions to gaugino masses alone are generally not sufficient to allow for a natural SUSY spectrum with m t̃ ∼ 400 GeV and m g̃ ∼ 600 GeV. In Sec. IV A, we therefore introduced an additional contribution from SM messengers. Of course, an alternative strategy would be to introduce messengers charged under both SU(3) F and the SM gauge group.
"year": 2012,
"sha1": "97c6ed18ade16672d588373b2c1dadb13f90a952",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1203.1622",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "97c6ed18ade16672d588373b2c1dadb13f90a952",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
QT Assessment in Early Drug Development: The Long and the Short of It
The QT interval occupies a pivotal role in drug development as a surface biomarker of ventricular repolarization. The electrophysiologic substrate for QT prolongation coupled with reports of non-cardiac drugs producing lethal arrhythmias captured worldwide attention from government regulators eventuating in a series of guidance documents that require virtually all new chemical compounds to undergo rigorous preclinical and clinical testing to profile their QT liability. While prolongation or shortening of the QT interval may herald the appearance of serious cardiac arrhythmias, the positive predictive value of an abnormal QT measurement for these arrhythmias is modest, especially in the absence of confounding clinical features or a congenital predisposition that increases the risk of syncope and sudden death. Consequently, there has been a paradigm shift to assess a compound’s cardiac risk of arrhythmias centered on a mechanistic approach to arrhythmogenesis rather than focusing solely on the QT interval. This entails both robust preclinical and clinical assays along with the emergence of concentration QT modeling as a primary analysis tool to determine whether delayed ventricular repolarization is present. The purpose of this review is to provide a comprehensive understanding of the QT interval and highlight its central role in early drug development.
History of The QT Interval and Its Importance in Early Drug Development
In 1895, Einthoven developed the human electrocardiogram (ECG) and described the PR segment, QRS complex and the T wave while a decade later he and Lewis also recognized the existence of U waves [1]. In subsequent years, while T wave morphology changes were deemed pathologic and QT lengthening was noted in the setting of myocardial infarction, routine measurement of this interval was not deemed warranted. However, in 1957 Jervell and Lange-Nielsen identified a family in which 4 of 6 children had QT prolongation associated with deafness in the absence of any structural heart disease [2]. The affected persons were noted to have syncopal episodes and 3 of the children subsequently died before the age of 10 without any autopsy evidence of cardiac pathology. In the ensuing years multiple families were discovered with long QT intervals often without associated deafness, who had a proclivity to syncope and sudden death. In the early 1960s Romano and Ward [3,4] both described young individuals with long QT intervals and a history of fainting and ventricular fibrillation, frequently mediated by exercise and ameliorated by beta blockers. In light of these observations, cardiologists and other physicians became sensitized to the importance of evaluating the QT interval and the recognition that abnormal lengthening may predispose to serious ventricular arrhythmias and sudden death.
In 1964 [5], a small series of patients being treated with quinidine was reported to have syncope due to ventricular tachycardia in the setting of a prolonged QT interval. The morphology of the ventricular tachycardia had a peculiar undulating or twisting appearance which in 1966 was termed Torsades de Pointes (TdP) by Dessertenne [6]. In the subsequent decades, attention was consequently directed to routine measurement of the QT interval in patients receiving cardiac antiarrhythmic drugs such as procainamide, quinidine and disopyramide, but routine surveillance for other classes of medicines was not performed as there was no known link between those agents and QT prolongation or TdP. The landscape changed in 1988, when prenylamine (Segontin) [7], a calcium channel blocking analogue of amphetamine used in the treatment of angina pectoris, was the first drug to be withdrawn from the market due to sudden death associated with QT prolongation. Several years later a case report of TdP and QT lengthening with the antihistamine terfenadine (Seldane) was published and this drug was eventually removed from the market in 1997 [8]. In the following years, additional classes of medications including antibiotics and psychotropic drugs were linked to TdP and a number of these agents were subsequently withdrawn by the Food and Drug Administration (FDA) [9]. In response to these events, government regulators and the pharmaceutical industry realized that a comprehensive evaluation of the QT interval and arrhythmia risk should be incorporated into a new compound's development program.
The occurrence of life threatening ventricular arrhythmias in the presence of a prolonged QT interval also garnered the interest of British regulators who in 1999 drafted a paper entitled "Points to consider: the assessment of the potential for QT interval prolongation by non-cardiovascular medicinal products" [10]. This document was written in response to multiple reports of non-antiarrhythmic drugs prolonging the QT interval on the ECG resulting in TdP. It formed the basis for regulatory agencies worldwide to focus attention on the arrhythmic potential of drugs that were previously deemed to be safe.
Shortly thereafter Canada followed suit and developed a similar document in 2001 entitled "Assessment of the QT Prolongation Potential of Non Antiarrhythmic Drugs" [11] but no formal testing protocol was enumerated. In 2003 the International Conference on Harmonization (ICH) S7B document was published [12], which delineated the preclinical strategy to assess the potential of non-cardiovascular compounds to alter ventricular repolarization. This was followed in 2005 by regulators from the United States, Europe and Japan updating ICH S7B while also drafting the seminal clinical guidance document ICH E14 [13], which outlined recommendations to assess the arrhythmic risk of investigational human non-cardiac pharmaceuticals and their effect on ventricular repolarization. To date ICH S7B has not been formally amended, whereas ICH E14 has been updated with Q&A responses on multiple occasions in 2008, 2012, 2014 and 2015 [12] but has not been officially revised in entirety.
Overview of Ventricular Repolarization Electrophysiology
In order to properly understand the QT interval's role in the assessment of ventricular repolarization as well as contextualize its evolution in regulatory guidance, a basic review of the cardiac action potential and principal ion channels is warranted.
The electrical activity in mammalian cardiac cells is mediated by regenerative action potentials resulting from numerous voltage gated and ligand or receptor bound gated channels. These channels are proteins which change configuration so as to regulate the movement of various ions across the cell membrane. The channels may be open, closed or inactive and are dependent upon changes in myocyte membrane potential. Coordination of these action potentials in the ventricles is responsible for ventricular depolarization while their recovery and return to their baseline resting membrane potential results in repolarization. The cardiac action potential is divided into 5 phases as noted in Figure 1.

Figure 1. Cardiac Action Potential Phases. Major inward and outward cardiac ion channels affecting the five phases of the cardiac action potential. Note that these phases represent time dependent intervals based upon the ingress and egress of the various ions and their impact on transmembrane voltage. Reproduced from [14].
Phase 0: The sharp upstroke of the action potential is primarily the result of a transient and rapid influx of Na + (I Na ) through opening of Na + channels.
Phase 1: The termination of the action potential upstroke and initiation of the early repolarization phase are mediated by inactivation of Na + channels and the transient outward movement of K + (I to ) through K + channels and chloride (Cl − ) ions.
Phase 2: The action potential plateau is ascribed to the balance between the influx of Ca 2+ (I Ca ) through long opening L-type Ca 2+ channels and the efflux of repolarizing K + currents.
Phase 3: The sustained downward slope of the action potential and the late repolarization phase are due to the egress of K + (I Kr and I Ks ) through delayed rectifier K + channels.
Phase 4: The resting membrane potential is supported by the inward rectifier K + current (I K1 ), the sodium potassium ATPase pump, and the Na + /Ca 2+ exchanger.
Potassium channels are the most abundant of all mammalian ion channels. There are four physiologic subtypes: those which are calcium mediated, those which are known as tandem pore domain channels seen in neurons, those which are inwardly rectifying, and those which are voltage gated. The latter two subtypes are the main channels governing cardiac cell function (reviewed in [15]). Voltage gated cardiac potassium channels are pores in the membrane of excitable cells that open and close due to alterations in transmembrane voltage and electrochemical gradients thereby allowing the egress of potassium out of the cell. Of the multiple genes that have been identified that encode potassium currents, the inward rapidly delayed potassium rectifier current I Kr encoded by the human ether a-go-go related gene (hERG) on chromosome 7 termed KCNH2 and the slowly activating delayed rectifier current I Ks encoded by KCNQ1 and KCNE1 genes are the biggest contributors that influence action potential duration (APD) [16,17]. The inner region of the potassium pores is dense with aromatic and polar residues, which act as promiscuous target binding sites for both cardiac and non-cardiac pharmacological drugs [18]. Genes that encode the potassium channels are responsible for the formation of both alpha and beta subunits which combine into hetero-oligomeric complexes that affect the gating function of these channels. Blockade of the alpha
In the absence of dispersion of depolarization, changes in phase 3 of the action potential are primarily responsible for QT prolongation and increases in the effective refractory period of cardiac tissue. This lengthening, in turn, may be related to modification or blockade of various cation voltage gated channels. To illustrate, prolongation of the action potential can result from inhibition of one or more of the outward K + currents, decreased inactivation of the inward Na + or Ca 2+ currents, increased activation of the Ca 2+ current, alteration of potassium channel trafficking or disruptions in protein synthesis [19], see Figure 1. Less commonly, QT lengthening may also be mediated by drugs such as quinidine that prolong APD and the effective refractory period of cardiac cells via blockade of the fast inward Na + current channels corresponding to the upstroke (phase 0) of the action potential. The resultant impact on cardiac tissue is to produce a transmural gradient of repolarization which provides the substrate for re-entrant arrhythmias.
How Is the QTc Calculated: Popular Correction Formulae for QT Values
There is typically an inverse relationship between the heart rate (equivalently, the RR interval) and the QT interval, with shorter QT intervals at higher heart rates and longer QT intervals at lower heart rates. As such, multiple efforts have been advanced in an attempt to correct for this relationship throughout a range of heart rates, ideally demonstrating that the regression line comparing the corrected QT (QTc) value against the RR interval is a flat line with a slope of zero. In 1920 Bazett determined that the duration of ventricular systole on the ECG was related to the square root of the RR interval and proposed a correction formula based upon measurements in only 12 subjects [20], see Table 1. In the same year, Fridericia posited that this relationship was a function of the cube root of the RR interval in a small sample of ECGs from a presumably normal population of individuals [21]. Since then numerous correction formulae involving exponential, linear, and logarithmic equations have been advanced in an effort to identify which formula optimally negates the effect of heart rate on the QT interval. In general, each of these population-based formulae, built on various mathematical models, was applied to the dataset rather than allowing the dataset to drive the best fit relationship between the heart rate and QT interval. Moreover, these mathematical models were derived from heterogeneous populations of individuals, thereby contributing to a lack of congruity especially when there is heart rate variability. As such, the most common population-based formulae have all demonstrated that the plot of QTc vs. the RR interval is usually imperfect, with positive slopes that do not completely eliminate the impact of heart rate, and a single mathematical formula may not be appropriate to account for the QT-RR relationship when there is significant heart rate variability. The final entry of Table 1 is a cross-validated spline correction factor which is independent of heart rate [27]; the table is adapted from [28].
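As a concrete illustration of the two formulae described above (a minimal sketch, with QT in milliseconds and RR in seconds, the usual clinical convention):

def qtc_bazett(qt_ms, rr_s):
    # Bazett: QTc = QT / RR**(1/2)
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    # Fridericia: QTc = QT / RR**(1/3)
    return qt_ms / rr_s ** (1.0 / 3.0)

# Example: QT = 400 ms measured at 75 bpm (RR = 60/75 = 0.8 s)
qt, rr = 400.0, 60.0 / 75.0
print(round(qtc_bazett(qt, rr)), round(qtc_fridericia(qt, rr)))   # ~447 ms vs ~431 ms

The larger Bazett value at this heart rate above 60 bpm illustrates the overcorrection described below.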
Historically, the most commonly used correction formula is Bazett's, which over the years was integrated into the output of most standalone digital ECG acquisition carts and is still utilized today by many clinicians worldwide. It is most accurate between heart rates of 60-100 beats per minute (bpm). At slower heart rates of less than 60 bpm, the formula undercorrects the QTc value, while at values over 100 bpm the formula overcorrects the QTc interval. The 2005 ICH E14 guidance document originally relied upon the Bazett calculation for reporting of cardiodynamic and safety ECG data in early clinical trials. Subsequently, due to the shortcomings noted above, Bazett-corrected data has been shown to be inferior to the Fridericia correction method and is no longer routinely warranted for reporting by the FDA, as noted in the FDA Q&A guidance [29]. It still, however, does enjoy a primary role in screening and management of patients with congenital long QT syndromes, especially in the pediatric population. To amplify this point, there is some evidence that in a very young pediatric population, Bazett's formula best corrects for heart rate. In support of this proposal, Phan et al. [30] interrogated a database of 702 children under 2 years of age being screened for long QTc syndrome and compared regression slopes of heart rate to QTc employing the Bazett, Fridericia, Hodges and Framingham formulae (Table 1). In this specialized cohort, Bazett proved to yield the most consistent calculation of QTc and the upper limit of normal value (ULN) was 460 ms, which coincides with that recommended for congenital long QTc screening in this population.
Fridericia's correction (QTcF) is currently the standard adopted by the FDA when submitting QT data for review. When there is suboptimal QTc correction by Fridericia, consideration should be given to using an individual correction factor, especially when there is some variability in heart rate as described below. This approach requires determining an individual exponent that best eliminates the impact of heart rate on the QT interval. The magnitude of variability that has been proposed by regulators to perform this calculation is a documented heart rate change of 6-10 beats per minute. Morganroth [31] recommended that a number of ECGs be obtained for each individual under non-treatment conditions across a sufficient range of heart rates (35 to 50 ECGs covering heart rates from 50 to 80 bpm) in order to make this calculation adequately. Zareba and coworkers [32] have published data recommending that at least 400 predose QT-RR pairs for each individual subject are needed to compute an acceptable individual correction. Malik recently presented his viewpoint that QT/RR regression analysis in small cohorts of subjects, as seen in early phase human studies, is fraught with both false positive and false negative conclusions due to significant inter-individual variability [33,34]. In addition, he stated that a single mathematical QT/RR relationship that fits all persons is unobtainable and that the relationship should therefore be derived separately for each subject in a clinical trial. He further commented that the individual correction factor is superior to population-based formulae, although this position is not universally supported.
When electing to perform an individual correction (QTcI), one popular approach is to obtain a full day of Holter recording at baseline prior to any treatment (Day −1) and then examine all evaluable QT-RR pairs to calculate the correction factor according to the following protocol: all QT/RR pairs from each subject are used for that person's individual correction coefficient, which is derived from a linear regression model on the log scale, log(QT) = log(a) + b × log(RR). The coefficient of log(RR) for each subject, b, is then used to calculate QTcI for that subject as QTcI = QT/RR^b. Finally, the effect of any potential hysteresis needs to be considered as an important adjunct to this calculation [35].
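As a concrete illustration of this protocol, the sketch below fits the log-log regression to a subject's baseline QT/RR pairs and then applies the resulting exponent to correct an on-treatment measurement; the arrays are placeholder data, not values from any study.

```python
import numpy as np

def fit_individual_exponent(qt_ms: np.ndarray, rr_s: np.ndarray) -> float:
    """Fit log(QT) = log(a) + b*log(RR) by least squares and return b."""
    b, log_a = np.polyfit(np.log(rr_s), np.log(qt_ms), 1)
    return b

def qtci(qt_ms: float, rr_s: float, b: float) -> float:
    """Individually corrected QT: QTcI = QT / RR**b (normalizes to RR = 1 s)."""
    return qt_ms / rr_s ** b

# Placeholder baseline (Day -1) QT/RR pairs for one hypothetical subject.
rr_baseline = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05, 1.10])
qt_baseline = np.array([372.0, 380.0, 388.0, 395.0, 402.0, 408.0, 414.0])

b = fit_individual_exponent(qt_baseline, rr_baseline)
print(f"individual exponent b = {b:.3f}")
print(f"QTcI for QT = 390 ms at RR = 0.85 s: {qtci(390.0, 0.85, b):.1f} ms")
```

In practice hundreds of drug-free QT/RR pairs per subject, with hysteresis correction, are used as noted above; the seven pairs here are only to keep the example readable.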
There are a number of additional methodologies for calculating QTcI in the presence of an increased heart rate, as reported by Garnett et al. [35]. These authors reviewed the relative strengths and weaknesses of alternative techniques, including Holter bin analysis, beat-to-beat comparisons and one-stage fixed-effect pharmacodynamic-pharmacokinetic modeling, although there was no consensus as to which of these approaches was optimal. They further discussed the role of exercise and autonomic maneuvers, including postural changes designed to augment the heart rate, although these latter protocols are not standardized and routine acceptance has proven elusive.
Most recently, using a large US National Health and Nutrition Survey (NHANES) database of subjects from the general population, a cross validated spline correction factor was reported which essentially "obliterated" the relationship between the QTc and RR intervals and this effect was independent of gender [27]. The authors also compared this correction factor to 6 other well-known methods and found that the spline method showed superior performance. They deemed their approach "heart rate agnostic" as the regression lines of heart rate vs. QTc interval were virtually flat in all cases. Adoption of this novel approach, while intriguing, must await further validation and reproducibility in additional studies involving large populations of healthy subjects encompassing a spectrum of heart rates and ages.
What Is a Normal QTc Value
There is considerable debate as to what constitutes a prolonged QTc interval, and there have been a large number of published studies on this subject over many decades. In general, the studies are not homogeneous, as some included manual caliper measurement on paper while others involved the more precise method of utilizing on-screen digital calipers with digitally acquired ECGs. In addition, some protocols entailed measuring a single complex in a single lead, multiple complexes in a single lead and then averaging the results, choosing the longest lead on the 12-lead tracing, or using a representative global median beat for QTc determination. Moreover, studies were not always performed in healthy volunteers and comprised populations with different age cutoffs as well as those who may have had underlying cardiovascular or other clinical pathology. Lastly, not all studies employed the same QT correction method when reporting results, although the lower limit of normal (LLN) values were generally consistent across correction formulae and independent of gender, while the ULN values showed substantial variation [20].
In 2006, Goldenberg and colleagues examined 581 subjects of varying ages and found the ULN threshold for men was 450 ms while for women it was 470 ms [32]. Thereafter, in 2009 Rautaharju et al. published the American Heart Association (AHA)/American College of Cardiology (ACC)/Heart Rhythm Society (HRS) recommendations for normal QTc values, which have been the most widely accepted, with healthy males having an ULN value of 450 ms while for females the value was 460 ms [36]. In contrast, according to the AHA expert panel report by Drew and colleagues [37] on prevention of TdP, a prolonged QTc should be defined as a value that exceeds the 99th percentile of the population in question. For healthy postpubertal males this was 470 ms and for females this was 480 ms. Rijnbeek et al. studied 13,354 individuals ranging in age from 16 to 90 years and evaluated 12 lead QTc intervals employing a broad array of different correction formulae, with a value exceeding the 98th percentile considered abnormal [38]. They found that the ULN for men was 460 ms while for women it was 470 ms employing Bazett's correction, which was routinely 10-15 ms higher when compared to the Fridericia formula. They also determined that the QTc value increased slightly with age and was always higher in women by around 10 ms independent of the correction factor chosen. One criticism of this study is that it was a meta-analysis of data from 4 other trials which included heterogeneous populations of both healthy individuals and those with underlying disease.
Mason et al. derived reference ranges from 79,743 subjects enrolled in clinical drug trials with ages ranging from 3 months to 99 years [39]. Some of these individuals did have cardiovascular disease or other pathology, and the QTc was determined using both single lead median beat measurement as well as taking the average of 3 complexes measured in the lead with the longest QTc value. There were a total of 21,457 males and 24,562 females with suitable ECGs for QTc analysis using digital on-screen calipers. They employed the 98th percentile for the ULN threshold and found that for men that value using Fridericia's correction was 438 ms while for women the number was 450 ms. The corresponding QTcF LLN values were 355 ms for men and 365 ms for women. A final report of interest is that of Olbertz and colleagues, who reviewed a cohort of 39,303 ECGs from healthy volunteers using ULN cutoffs at the 97.5th percentile while the LLN cutoff was at the 2.5th percentile [40]. They showed that the ULN for the entire population was 450 ms, with females exhibiting an approximately 11 ms longer QTcF than males. The LLN cutoff for the entire population was <350 ms and there was no significant gender difference for this determination.
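The ULN/LLN thresholds in these studies are simply empirical percentiles of the QTc distribution computed separately by gender; a minimal sketch of that computation, using randomly generated placeholder data rather than any study's measurements, is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder QTcF values (ms) for a hypothetical reference population.
qtcf_male = rng.normal(405, 20, 5000)
qtcf_female = rng.normal(415, 20, 5000)

for label, qtcf in (("male", qtcf_male), ("female", qtcf_female)):
    lln = np.percentile(qtcf, 2)    # e.g. 2nd percentile as the LLN
    uln = np.percentile(qtcf, 98)   # e.g. 98th percentile as the ULN
    print(f"{label:>6}: LLN = {lln:.0f} ms, ULN = {uln:.0f} ms")
```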
The decision as to which set of criteria should be utilized depends upon the goal of screening. To best identify individuals from the general adult population who may have congenital long or short QTc syndromes, LLN values of <330 ms and ULN values of >460 ms have been suggested by Ackerman [41]. In contrast, for early clinical trials there is a tradeoff of using lower ULN values to reduce the risk of the QTc reaching a threshold of regulatory concern during study conduct while offering a better margin of subject safety, versus a higher QTc value which would facilitate the recruitment of healthy volunteers. In this regard, a value of >450 ms irrespective of gender as originally suggested by the FDA in their ICH E14 guidance document is most often utilized in first in human protocols [12]. Alternatively, given the known gender differences, a cutoff value of >450 ms for males and >460 ms for females as per the AHA/ACC/HRS consensus document would also seem reasonable for screening in early phase clinical trials. Whatever numbers are chosen, these thresholds should be reviewed with sponsors when designing protocols and modified when appropriate depending upon preclinical data, the compound under investigation, and the study design, acknowledging that subject safety is always paramount in these discussions.
Measurement of the QT Interval
The QT interval is measured from the beginning of the QRS complex, which depicts the surface representation of the initiation of ventricular depolarization, acknowledging that some myocytes will begin repolarizing during this phase of the cardiac cycle. The end of the T wave corresponds to the surface representation of the termination of ventricular repolarization (reviewed in [42,43]), as shown in Figure 2. Ventricular repolarization, in turn, is primarily demarcated from the end of the QRS (J point) to the end of the T wave, also referred to as the J-T interval. Measurement can be performed on multiple QT complexes in a single lead, on a single lead median beat, on a 12-lead representative complex or on a computer generated 12 lead global composite median beat. The ECG complex(es) used for interval measurement is not uniform and differs between core laboratories. When using a single lead for analysis, controversy exists as to which lead or leads to utilize for measurement, since the QT interval can vary by over 20 ms within a single lead or between leads in a 10 s tracing even in the presence of stable sinus rhythm [45]. Furthermore, an individual's QT and QTc demonstrate diurnal variability, with the longest values occurring during sleep and the shortest values occurring during the day. This individual variability due to circadian rhythm was reported to be as high as 95 ± 20 ms by Molnar and associates in a small group of 21 subjects, with the longest QT being recorded just after awakening [46]. In addition, the QT interval adapts to changes in heart rate with a delay known as QT hysteresis which may not be evident for several minutes, thereby underscoring the importance of measurements during periods of heart rate stability and in the absence of artifact or arrhythmias. As such, in assessing the impact of drugs it is imperative to measure the QT interval in the same lead or using the same composite lead methodology and at the same time of day during the study (time matched). Finally, standard and rigorous acquisition procedures should be adopted to control for the role of food, sleep, body position, gender, autonomic tone, electrolyte abnormalities and environmental factors, all of which can influence the QT measurement.
For decades, the QT interval was measured on a 10 s gridline paper ECG by manual caliper placement at a paper speed of 25 mm/s in which a 1 mV calibration pulse signal produced a 10 mm deflection. Higher paper speeds of either 50 mm/s or 100 mm/s were occasionally used to provide more accuracy especially when there was a need to discern the presence of heart block or arrhythmias. However, these higher recording speeds are not routinely advised as they can lead to distortion of U waves and other low amplitude waveforms thereby confounding accurate interval measurement.
Prior to the 2005 ICH E14 guidance document, 3 QT complexes were usually assessed in lead II and the three measured intervals were then averaged to provide a mean QT value. Thereafter, the QTc was calculated from this number based upon the associated RR intervals bracketing each complex. The rationale for selecting lead II for measurement was rooted in a number of factors. In the past, Bazett used lead II for QT analysis since at that time ECGs were devoid of any precordial leads, and this therefore became the de facto choice for many years. Secondly, in the majority of children with congenital long QT syndromes, lead II was recommended as it usually yielded the longest QT result in this group. Lastly, lead II has been the most popular lead since the P, QRS and T wave vector axes are mainly directed inferolaterally towards that lead and confounding U waves are not usually evident or as prominent as in the precordial leads.
The 2009 AHA/ACC/HRS article states that the QT should be measured in the lead demonstrating the longest interval, and they suggest using leads V2 and V3 as these leads tend to have earlier QRS initiation compared to the limb leads [36]. This recommendation is somewhat contrary to findings from an observational study of 4429 volunteer subjects enrolled in clinical trials (RML personal observation), where the shortest QT intervals were noted in leads I, V1, and V2 while the longest intervals were seen in leads V3, V4 and V5 (Figure 3). However, other authors have argued that mid-precordial leads should not be used as they may overestimate the true QT interval, and they suggest using either lead II or V5, which remain the most commonly employed by ECG core laboratories as well as in the congenital long QT registry. Lead II and a "precordial lead" are also those that were initially recommended by the FDA in their 2005 guidance and are therefore widely accepted for regulatory submission.
FDA guidelines from the 2005 ICH E14 and Q&A R3 documents state that ECGs should be measured by a "few skilled readers", ideally in a centralized core laboratory. They advocated for measurement by either a fully manual or a manual adjudication methodology, in which computers initially place the interval markers which are then adjusted by the skilled reader. In view of the variability between different algorithms and equipment manufacturers, fully automated measurements were only deemed appropriate for later phase trials with compounds that did not reveal any QT liability concerns in preclinical or first in human studies.
An important area for discussion in this recommendation is that the FDA left open who should be considered a "skilled reader." A landmark paper by Viskin et al. found that the majority of physicians could not identify a long QT interval when one was present [47]. They provided 2 normal ECGs and 2 ECGs from patients with long QT syndrome to a worldwide group of physicians including 25 QT experts, 102 arrhythmia specialists, 332 cardiologists and 442 non-cardiologists and asked them to classify the QT as normal or abnormal. Ninety-six percent of QT experts correctly identified the QT intervals as long or normal, whereas only 62% of arrhythmia experts and less than 42% of cardiologists and non-cardiologists were correct in their QT diagnoses. This study underscores the importance of utilizing a centralized core laboratory with highly experienced pharma QT experts as the "skilled reader" pool. Furthermore, these readers should ideally be blinded to the treatment, time and all subject identifiers when interpreting ECGs in clinical trials.
Another factor of considerable significance is that intra- and inter-reader variability should be minimized and preferably established for each clinical trial, while core laboratory quality metrics should be periodically obtained and tabulated to ensure reader certification and maintenance of their reading skills. As a standard operating procedure, when reading studies, assigning a single reader per subject or, if feasible, a single reader to the entire trial would optimize consistency and reduce variability. In this same vein, a comprehensive lexicon of diagnostic statements with associated definitions and criteria should be adopted by the core laboratory to establish uniformity in diagnostic interpretation and measurement. In core laboratories where the skilled readers only review computer generated outliers, it is further suggested that a 2% or greater blinded sampling rate of computer generated "inliers" should be procured and sent to the skilled reader(s) on a study specific basis to ensure the output of quality data.
As previously mentioned, the initial ICH E14 document stated that all ECGs in a clinical trial could be measured by the skilled reader pool using "manual adjudication" with or without computer assistance. In view of the high cost and considerable time involved in this scheme and pursuant to the 2014 FDA Questions and Answer (Q&A R2) guidance, there has been a shift away from using "skilled readers" to manually review all ECGs towards the use of computer assisted ECG algorithms [12]. These algorithms are designed to accelerate drug development timelines at reduced expense while maintaining data quality and integrity. These computerized platforms have been validated, their measurements are not statistically different from manually adjudicated ECGs, and datasets from these highly automated systems have been successfully submitted and accepted by the FDA for New Drug Applications (NDA). With the advent of on-screen digital calipers in combination with digital ECG acquisition, the ability to measure the QT interval precisely and accurately has dramatically improved. Today there are a number of software programs that permit the reader to evaluate this interval in a variety of presentations most commonly in a single lead II median beat format or in a representative superimposed 12 lead median beat. Finally, the approach of analyzing a limited number of single complexes may soon be supplanted by technology which permits automated measurement of all representative complexes over a 5 min or longer period of time so as to provide a more accurate temporal representation of the QT interval and repolarization lability.
An essential point to clarify is the fundamental difference between safety and cardiodynamic ECGs produced in clinical trials, both of which may be obtained from the same digital recording device. The former are typically single replicate 12 lead 10 s ECG tracings secured for screening of subjects prior to enrollment, again prior to drug dosing, and periodically after dosing so as to enable real time review by the principal investigator to evaluate subject safety. Safety ECGs are also used in conjunction with clinical and laboratory information as an integral part of dose escalation decisions. In contrast, cardiodynamic ECGs are also 12 lead 10 s tracings that are typically obtained over a 24 h period from a continuous Holter recording device at specific timepoints coinciding with pharmacokinetic drug determinations. These ECGs are most commonly extracted in triplicate (or higher multiples) over a 5 min time window when subjects have been supine for at least 10 min, with careful attention directed at minimizing any environmental stimuli that might affect data quality. The extracted cardiodynamic ECGs are then processed and blindly reviewed at a later time by the ECG core laboratory readers to determine if the test compound has any impact on ECG morphology or fiducial intervals related to cardiac conduction and ventricular repolarization. This information is then sent to the sponsor in a Statistical Analysis System (SAS) validated dataset for their review. The waveform data is concomitantly sent to the FDA warehouse in an HL7-XML format where the FDA provides ECG quality metrics to the core laboratory [13], a representative excerpt of which is noted in Figure 4. An added benefit of the government's ECG warehouse is that it serves as an open access repository for research projects by all stakeholders to further the science in this field. The traditional approach advocated by the FDA for cardiodynamic ECG sampling was to obtain ECGs in triplicate, and this recommendation has been supported by two analyses, both of which showed that obtaining more than 3 ECGs at each nominal timepoint yielded decreasingly small benefits in QT variability at increased cost [48,49].
However, within the past several years, there has been growing attention centered on the methodology known as high precision QT analysis [50], which involves extracting up to ten 10 s 12 lead ECGs at each protocol specific nominal timepoint as a means to enhance measurement precision. While this approach may indeed improve precision compared to conventional triplicate ECG tracings, it is unclear if the additional data points significantly reduce reader variability, justify the higher costs of this strategy, enhance the ability to detect a QT signal, or improve the likelihood that a new compound will be approved. Moreover, the FDA does not endorse any specific methodology, certification process, or core laboratory for early clinical trials, nor have they mandated that more than triplicate ECGs be obtained to assess proarrhythmic risk. Thus, the advantage(s) of obtaining higher numbers of replicates remains controversial.
Problematic and Challenging Issues in QT Assessment
Measurement of the QT interval can prove quite challenging and may be affected by multiple ECG abnormalities. As a general rule, if the QT interval is less than half of the RR interval in sinus rhythm, then the QTc is probably within normal limits and less than 460 ms. Another caveat is that computer measurements should not be assumed to be accurate, and careful inspection and manual adjudication are recommended whenever the computer-generated values are abnormal. This is especially true in the setting of any of the ECG findings enumerated below. In addition, it is imperative that all manually adjudicated QT measurements be executed in a highly magnified view with 1 ms resolution to enhance accuracy. This is especially critical given the FDA's 10 ms QTc threshold of regulatory concern and their requirement to submit individual categorical QTc analyses for values that increase by more than 30 ms and 60 ms from the baseline determination or breach a value that would be classified as an adverse event.
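As an illustration of what such a categorical analysis involves, the sketch below flags on-treatment QTc values by their change from baseline using the 30 ms and 60 ms categories mentioned above; the 500 ms absolute flag used here is an assumed illustrative parameter, not a value taken from this text.

```python
from typing import List, Tuple

def categorize_qtc(baseline_ms: float, on_treatment_ms: List[float],
                   absolute_flag_ms: float = 500.0) -> List[Tuple[float, str]]:
    """Label each on-treatment QTc by its change from baseline (categorical analysis)."""
    results = []
    for qtc in on_treatment_ms:
        delta = qtc - baseline_ms
        if qtc > absolute_flag_ms:
            label = f"absolute QTc > {absolute_flag_ms:.0f} ms"
        elif delta > 60:
            label = "increase > 60 ms from baseline"
        elif delta > 30:
            label = "increase > 30 ms from baseline"
        else:
            label = "no categorical flag"
        results.append((qtc, label))
    return results

# Illustrative values only.
for qtc, label in categorize_qtc(410.0, [425.0, 445.0, 478.0, 505.0]):
    print(f"QTc {qtc:.0f} ms: {label}")
```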
Artifact: Measurement of the QT interval should be performed in tracings without any artifact that may obscure the intervals and lead to erroneous values. As such, a segment of the extracted ECG devoid of artifact should be used for measurement, or additional "clean" ECGs as close as possible to the nominal timepoint specified in the protocol time and events schedule should be secured and used for interval assessment. To aid in this regard there are automated computer algorithms designed to ensure that extracted cardiodynamic ECGs are obtained without significant artifact, dysrhythmias or heart rate instability.

U waves: U waves are common, especially in young individuals with relatively slow heart rates, and often appear as distinct positive waves after the T wave, best seen in leads V2 and V3 (Figure 2). They are thought to represent a final phase of ventricular repolarization involving the summation of early afterdepolarizations or repolarization of mid-myocardial M cells, papillary muscles or Purkinje fibers [51]. U waves may be attenuated with filtering or indistinct when there is significant tachycardia. The U wave should be clearly identified as distinct from the T wave and should not be included in QT measurement, as normal QU values have not been established and inclusion of the U wave would lead to gross over-measurement of the QT interval. When it is unclear if U waves are present, inspection of neighboring leads on the ECG may be helpful in separating a discrete U wave from the T wave.

Bifid T waves: T waves may be notched or bifid in appearance, and the end of the T wave in these cases should be measured after the second peak. Also, careful inspection of notched T wave morphology and the distance between notches may be useful in distinguishing the T wave from a superimposed U wave. While differentiating a notched T wave from a superimposed U wave can be difficult, viewing alternate leads in the tracing should be performed to help make this distinction.

Flat T waves: When the T waves are flat, measurement of the QT interval should be carried out in a lead(s) where the T waves are positive, monophasic and best defined. In the absence of any positive unidirectional T waves, a clearly visible monophasic negative T wave would also suffice for this assessment. When a single lead median beat approach is being used and the designated lead is not suitable for measurement, the alternate lead utilized should be identified on the report, and subsequent ECGs from that subject should all be measured in that same lead in order to provide procedural consistency and reduce variability that may be misconstrued as drug mediated.

T-U fusion: U waves may be fused with the T wave, thereby artificially prolonging the QT interval on casual visual inspection. In this setting, as initially recommended by Lepeschkin and Surawicz [52], a tangent line should be drawn through the steepest portion of the T wave downslope until it intersects the isoelectric line (defined by the T-P segment), and that crossing point is designated as the end of the T wave.
Arrhythmias: Whenever the RR interval shows significant variability, as in the case of sinus arrhythmia or atrial fibrillation, multiple evaluable QT complexes should be measured and the QT value averaged over all complexes so as to avoid over- or underestimation of the QT interval. In the case of premature supraventricular or ventricular beats, measurement in the complex immediately following the premature beat should be avoided, as ventricular repolarization is altered in the complex after a premature beat.

Asymptotic prolonged downsloping T waves: This is a finding in which the QT interval can easily be overmeasured due to a prolonged T wave "tail". As such, when the end of the T wave approaches the isoelectric line asymptotically, a tangent function utilizing the steepest portion of the downslope of the T wave should be employed, as described for T-U fusion.

Wide QRS complexes: The presence of a widened QRS, such as with bundle branch block, ventricular pacing, pre-excitation, or intraventricular conduction delays, may contribute to a prolonged QTc interval which may not be a consequence of significantly altered ventricular repolarization. In these circumstances, the adjustment QTc = measured QTc − (QRS − 100 ms) has been suggested to provide a clinically useful determination of the true QTc interval [41] (a brief numerical sketch of this adjustment appears below). This approach has been advocated by those involved in assessing individuals with suspected or known congenital long QTc syndromes.

Misconnected limb leads: In cases where there is limb lead misconnection involving only reversed arm leads, measurement can still be affected in lead II if a single lead median beat approach is used. In cases where lead II is affected by misconnected limb leads, an alternative precordial lead such as V5 is suggested. Misconnected limb leads should not significantly alter the QT measurement when a representative 12 lead median beat is utilized.
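A minimal sketch, with illustrative numbers only, of the QRS adjustment referenced in the Wide QRS complexes item above:

```python
def qtc_adjusted_for_wide_qrs(qtc_ms: float, qrs_ms: float) -> float:
    """Adjust the measured QTc for QRS widening: QTc_adj = QTc - (QRS - 100 ms).

    The subtraction only applies when the QRS exceeds 100 ms
    (e.g. bundle branch block, ventricular pacing).
    """
    excess = max(qrs_ms - 100.0, 0.0)
    return qtc_ms - excess

# Example: measured QTc of 495 ms with a 150 ms QRS from left bundle branch block.
print(qtc_adjusted_for_wide_qrs(495.0, 150.0))  # -> 445.0 ms
```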
Some computer software, such as Cal ECG from Analyzing Medical Parameters for Solutions (AMPS), provides additional measurement tools to aid in determining the onset and offset of the PR, QRS, QT and J-T intervals. These include the ability to vertically separate all 12 leads when a representative median beat is used for measurement, as well as the use of a superimposed vector magnitude display (Figure 5). These tools in the hands of a QT expert may assist in improving precision and accuracy of all fiducial markings compared to systems that lack these capabilities. Another alternative involves assessing the QT/RR relationship over longer time periods to better account for possible hysteresis due to dynamic and autonomically mediated heart rate effects. Finally, there have also been advocates for using either a dynamic beat-to-beat or a Holter "bin" method to determine the QT interval, which has its greatest utility under non-steady state heart rate conditions [35,53,54]. To date none of these latter approaches, although appealing, has been universally supported or accepted by most stakeholders.
QTc Syndromes
The screening of subjects for enrollment in early phase clinical studies is predicated on their satisfying certain QTc electrocardiographic inclusion and exclusion criteria universally aimed towards those with a prolonged rather than a shortened QTc interval (see Figure 6). These abnormalities in the QTc may be either acquired or inherited and occasionally the two conditions may coexist. It is therefore incumbent upon principal investigators involved in subject screening to recognize individuals who manifest criteria for either the long QTc or short QTc syndrome, exclude them from study participation, and refer them for further diagnostic evaluation when indicated.
While conventional analysis of the QTc interval as previously described remains the standard for screening of individuals for both inherited and acquired QTc prolongation, a somewhat novel and intriguing approach to detect altered ventricular repolarization is that of root mean squared electrocardiography (RMS ECG) [55]. The basis for development of this technology is that, due to a low signal to noise ratio, the end of the T wave, being a low amplitude signal, may be difficult to detect, especially with confounding U waves or in the presence of a short T-P interval. RMS ECG is designed to address these concerns by measuring the QRS to T wave peak as a marker of mean ventricular APD and the RMS T wave width as a reflection of temporal dispersion of repolarization. The cellular basis of these measures has been confirmed in animal electrophysiologic studies but there is limited data in humans [56]. To address this shortcoming, Lux and coworkers derived RMS ECG signals from 24 h Holter tracings both in subjects with moxifloxacin induced QT prolongation and in a pediatric population with congenital long QTc-2 syndrome. They demonstrated there was high precision in identifying individuals with delayed ventricular repolarization with less variability in QT values than that obtained from a standard lead II ECG. They concluded that this technique is best suited for newborns, when rapid heart rates make distinguishing the end of the T wave from an encroaching P wave challenging. However, it has not been embraced or validated as a superior metric to conventional analysis in the healthy adult volunteer population and must await further studies in this group.
Acquired LQTS
There are multiple medications which are known to prolong the QT interval and predispose to ventricular dysrhythmias. While cardiovascular antiarrhythmic drugs have the greatest notoriety, multiple classes of non-cardiac drugs such as antipsychotics, antibiotics, antidepressants, and antihistamines have been implicated, and some of these have been withdrawn from the market due to their torsadogenic potential (Table 2). It is beyond the scope of this discussion to detail the more than 200 approved drugs which have the potential to affect the QT interval and predispose to TdP, although this risk is comprehensively addressed and continually updated on the public access website CredibleMeds, www.crediblemeds.org.
QTc determination and IKr block are sensitive surrogate markers for arrhythmia risk but are not specific and have only a modest positive predictive value. There is no QTc value that definitively predicts the occurrence of TdP, and the presence of a prolonged QTc interval does not in and of itself indicate that a malignant ventricular arrhythmia will occur. As a rough rule, it is estimated that the incidence of ventricular arrhythmias increases by 5%-7% for each 10 ms increment in the QTc value. Moreover, a QTc value >500 ms increases the risk of TdP by 2-3 fold, while an increase of >60 ms from the baseline measurement is also thought to confer increased risk of arrhythmias [37]. Shah computed the relative torsadogenic risk in a group of noncardiac drugs and found that the risk increased incrementally when the placebo corrected QTc change from baseline exceeded 10 ms and had the highest potential for TdP when the QTc increased beyond 26 ms [57]. These data are in keeping with the current FDA thresholds of regulatory concern.
Prolongation of the QTc value and the occurrence of TdP in the clinical setting may be influenced by pharmacogenetic properties of the drug and its metabolite(s), an individual's pharmacogenetic sensitivity and susceptibility, the compound's lipophilicity, its distribution between plasma and myocardial tissue, and negative use dependence (augmented effects with bradycardia). Moreover, the proclivity to ventricular arrhythmias may be amplified by numerous physiologic conditions and drug interactions. In this regard, Zeltser et al. have demonstrated that in 71% of cases of drug related TdP two or more identifiable risk factors were consistently present [58]. A list of risk factors is highlighted in Table 3. Most prevalent amongst these risk factors is female gender, as approximately 70% of cases are seen in women [22]. Also, the presence of polypharmacy and drug-drug interactions is frequently implicated in augmenting TdP risk, with inhibitors of the cytochrome P450 enzyme system most often being the culprits. It is further recognized that individuals who develop TdP may have an underlying congenital LQT gene polymorphism or a genetic condition which alters drug metabolism, either of which may predispose to this arrhythmia. Furthermore, in special populations such as cancer patients there is increased risk of drug interactions resulting in QTc prolongation; for example, greater prevalence is associated with breast and gastrointestinal cancer diagnoses as well as solid-based malignancies [59]. Finally, TdP may be influenced by an individual's repolarization reserve [60], which is genetically determined and involves compensatory mechanisms that modulate unfavorable changes in depolarizing or repolarizing currents which may contribute to arrhythmogenesis.
Congenital LQTS
Congenital prolongation of the QT interval comprises a group of inherited genetic mutations resulting in ion channelopathies that can lead to syncope, seizures and sudden death. To date there have been at least 13 different phenotypes related to 16 gene polymorphisms, with some overlapping of clinical features [61][62][63]. Approximately 75% of cases are comprised of genetic mutations in KCNQ1, KCNH2 and SCN5A, which are termed long QTc syndrome (LQTS) 1, LQTS 2 and LQTS 3, respectively. The prevalence of congenital long QT syndrome is approximately 1 in 2000 individuals, and the diagnosis is suggested by measurement of resting QTc values as well as inspection of T wave morphology. Although the AHA proposed ULN QTcF values of >450 ms for adult males and >460 ms for adult females, it has been determined that the positive predictive value (PPV) for LQTS using these cutoffs is less than 1%, and they are therefore not very helpful. Superior predictive recommendations for diagnosing LQTS have been promulgated by Ackerman and colleagues from their experience with the congenital LQTS registry and involve a point scoring set of diagnostic criteria [61], see Table 4. Key elements are that prepubertal individuals with a QTc value of >460 ms should be considered abnormal, while for adult males a value of >470 ms and for females a value of >480 ms increases the PPV of identifying this genetic disorder and may warrant referral for further diagnostic and possible genetic testing. Genetic testing would also be advised for any prepubertal individual with a QTc >480 ms on serial ECGs in the absence of any confounding factors. Testing is also appropriate for associated family members when genetic screening confirms the diagnosis of congenital LQTS in the affected individual. Finally, a resting QTc value >500 ms in a suspected subject dramatically increases the PPV for LQTS as well as for dysrhythmias and mandates electrophysiology referral.
When evaluating subjects for LQTS a number of provisos need to be emphasized. Gene carriers may exhibit incomplete penetrance which may not be readily evident on a standard ECG while the presence of a long QTc by itself does not establish the diagnosis. Drugs, comorbid conditions and other secondary factors may be responsible for QTc prolongation and should be diligently sought. Measurement of the QTc should be undertaken when heart rates are between 50-100 bpm as bradycardia underestimates and tachycardia overestimates the QTc value which may lead to misdiagnosis. Repeat 12 lead ECG measurements may be necessary and computer-generated values should be confirmed by manual interrogation of either lead II or V5 while avoiding evaluation in the right precordial leads (V1-V2). T wave morphology should be carefully examined and the diagnosis of LQTS 2 suspected when bifid T waves are present with the second peak being taller than the first. Finally, a QTc value below the aforementioned AHA ULN thresholds, especially in the setting of a positive family history or supporting symptoms, does not exclude the diagnosis of LQTS (concealed LQT 1) and warrants further evaluation [64].
Table 4 is adapted from Schwartz and Ackerman [61]. LQTS is diagnosed in the presence of an LQTS risk score ≥3.5 in the absence of a secondary cause for QT prolongation, and/or in the presence of an unequivocally pathogenic mutation in one of the LQTS genes, or in the presence of a corrected QT interval for heart rate using Bazett's formula (QTc) ≥500 ms in repeated 12-lead electrocardiograms (ECG) and in the absence of a secondary cause for QT prolongation. In addition, LQTS can be diagnosed in the presence of a QTc between 480 and 499 ms in repeated 12-lead ECGs in a patient with unexplained syncope, in the absence of a secondary cause for QT prolongation and in the absence of a pathogenic mutation.
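A hedged sketch of the diagnostic logic in this caption, expressed as code; the function and argument names are ours, while the clinical criteria are exactly those stated above.

```python
def lqts_diagnosed(risk_score: float, qtc_ms: float, repeated_ecgs: bool,
                   pathogenic_mutation: bool, unexplained_syncope: bool,
                   secondary_cause: bool) -> bool:
    """Apply the LQTS diagnostic criteria summarized in the Table 4 caption."""
    # An unequivocally pathogenic mutation in an LQTS gene is diagnostic on its own.
    if pathogenic_mutation:
        return True
    # The remaining criteria all require the absence of a secondary cause of QT prolongation.
    if secondary_cause:
        return False
    if risk_score >= 3.5:
        return True
    if repeated_ecgs and qtc_ms >= 500:
        return True
    # QTc 480-499 ms on repeated ECGs with unexplained syncope (and no pathogenic mutation).
    if repeated_ecgs and 480 <= qtc_ms <= 499 and unexplained_syncope:
        return True
    return False

# Illustrative call: QTc of 505 ms on repeated ECGs, no secondary cause.
print(lqts_diagnosed(2.0, 505.0, True, False, False, False))  # -> True
```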
Acquired SQTS
Similar to the long QTc syndrome, short QTc syndrome (SQTS) can either be acquired or congenital. In the 1990s it was postulated that short QT intervals in humans and some species of kangaroos may increase the risk of sudden cardiac death (SCD) [65,66]. Prior to 2000 when the short QT syndrome was first described, it was thought that shortening of the QTc interval was only seen secondary to a variety of metabolic, pharmacologic and pathologic conditions. These included hyperkalemia, hypercalcemia, acidosis and carnitine deficiency. Pharmacologic agents linked to QT shortening included digitalis, acetylcholine, ATP activators and catecholamines. Pathologic conditions that have been reported involve hypothermia, certain endocrine disorders, myocardial ischemia and post ictal states following epileptic seizures [62,65,66]. However, it is uncertain whether any of these predisposing factors routinely shortens the QT sufficiently to provoke any major clinical sequelae including syncope, seizures, malignant arrhythmias or SCD.
Congenital SQTS
In 1999, Gussak and coworkers first reported clinical cases of patients with shortened QTc intervals [65], and in 2000 Gussak and Brugada [65] described a distinct inherited channelopathy, termed SQTS, in subjects who had a propensity for atrial and ventricular arrhythmias, syncope and SCD. SQTS may be sporadic or autosomal dominant in inheritance and is extremely rare, with approximately 100 cases with a well-documented clinical syndrome reported worldwide. The mean age of presentation is around 30 years, and the majority of affected individuals manifest some symptomatology, the most common being palpitations, lightheadedness or syncope, and atrial fibrillation. Unfortunately, 28-40% of index cases may present with cardiac arrest. The prevalence of asymptomatic short QT values was reported in an 8-year study of 18,825 healthy young individuals from Seattle [67]. Twenty-six patients (0.1%) had a QTc <320 ms and 44 (0.2%) had a QTc <330 ms, with a higher prevalence in males, athletes and those of African-Caribbean origin.
There are at least 6 gene defects that have been confirmed with this syndrome, as illustrated in Table 5. The electrophysiologic basis for this disorder appears to be missense mutations or genetic polymorphisms in genes that encode either potassium or calcium channels [65,66,68,69]. As a consequence, ventricular repolarization may be shortened by either a gain in function of the outward potassium channel or a loss of function of the inward calcium channel. In either case, APD is reduced, as is the effective refractory period of cardiac tissue, resulting in a proclivity to both atrial and ventricular arrhythmias. Moreover, in response to the abbreviated repolarization time as a result of impaired calcium loading, echocardiographic findings of abnormally low global longitudinal strain and systolic dysfunction have recently been reported. The three predominant forms of SQTS involve gain of function mutations of the potassium channel [66], as detailed in Table 5.
Criteria for the Diagnosis of SQTS
Due to the paucity of specialized referral sites that are capable of identifying nucleotide polymorphisms and the small number of cases in current registries, the criteria for diagnosis are somewhat varied. As such, there is some controversy as to what constitutes a short QTc value, and screening based solely upon QTc results has been unrewarding. Population based studies and criteria from the Heart Rhythm Society consensus statement would suggest that a QTc interval ≤330 ms, independent of gender or correction formulae, is diagnostic even in the absence of any clinical history of cardiovascular events or confounding factors. Others have proposed that for females, a QTc ≤340 ms is also diagnostic without any history of prodromal symptoms. Lastly, a QTc value <360 ms in the presence of a documented missense mutation, a family history of SQTS, a family history of sudden death in persons younger than 40 years of age, or a history of atrial fibrillation, syncope or cardiac arrest is also considered diagnostic.
These guidelines are in keeping with those of Gollob et al., who have developed a point scoring scheme for SQTS predicated upon measurement of the QTc interval in conjunction with patient symptoms and a detailed family history [70]. As illustrated in Table 6, a score of 4 or greater confers a high probability for this diagnosis. To help establish the diagnosis, ECGs should be repeated on multiple occasions and typically when the heart rate is in the normal range, ideally below 80 bpm, as correction formulae may underestimate the prevalence of this abnormality at higher heart rates, leading to false negative diagnoses. Careful inspection of the ECG and associated rhythm strip may detect findings such as prominent U waves and peaked T waves, an absent ST segment, a prolonged Tpeak-Tend interval, or the inability to lengthen the QT interval as the heart rate slows. Similar to individuals with the long QTc syndrome, confounding factors should be excluded before conferring the diagnosis.
(Table 6, which includes items such as a mutation of undetermined significance in a culprit gene, is adapted from Gollob et al. [70]; a score of 4 or greater confers a high probability of confirming the diagnosis of short QT syndrome.)
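A sketch encoding the consensus-style QTc thresholds described two paragraphs above; the function and argument names are illustrative, and the Gollob point values themselves are in Table 6 and are not reproduced here.

```python
def sqts_suggested(qtc_ms: float, female: bool, pathogenic_mutation: bool,
                   family_history_sqts: bool, family_history_scd_under_40: bool,
                   af_syncope_or_arrest: bool) -> bool:
    """Return True when the short QT syndrome thresholds described in the text are met."""
    if qtc_ms <= 330:
        return True  # diagnostic irrespective of gender or clinical history
    if female and qtc_ms <= 340:
        return True  # proposed gender-specific threshold
    if qtc_ms < 360 and (pathogenic_mutation or family_history_sqts
                         or family_history_scd_under_40 or af_syncope_or_arrest):
        return True  # shorter-than-normal QTc plus a supporting modifier
    return False

# Illustrative call: QTc of 350 ms in a male with a family history of SQTS.
print(sqts_suggested(350.0, False, False, True, False, False))  # -> True
```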
The presence of a shortened QTc interval as an isolated abnormality does not augur a poor prognosis as affected individuals have lived well into adulthood without any noteworthy clinical sequelae. As such, the role of genetic screening becomes unclear as documenting a missense mutation may not predict an adverse outcome. Nonetheless, in those persons with suspected SQTS, referral for additional diagnostic and possible genetic testing would appear warranted including a battery of tests to exclude latent structural heart disease. In those, however, with a well-documented symptomatic SQTS, genetic and electrophysiologic evaluation is required as is screening of first-degree relatives for risk stratification.
The Evolution of Regulatory Guidance Regarding Ventricular Repolarization
ICH S7B [12] refers to the pre-clinical "Evaluation of the Potential for Delayed Ventricular Repolarization (QT Interval Prolongation) by Human Pharmaceuticals." The elements of cardiac safety testing detailed in this publication were formulated based upon the knowledge that all drugs that were withdrawn from the market due to QT prolongation and TdP were associated with prolongation of phase 3 of the cardiac action potential primarily due to hERG channel block. As such, the major foci of the ICH S7B strategy included an in vitro assessment of ionic currents targeting a compound's effect on I Kr along with an in vivo QT assay in conscious or anesthetized non-human primates. Additionally, evaluation of action potential pharmacodynamics and proarrhythmic effects was suggested in isolated cardiac preparations the results of which were to be integrated with the above assays into a composite risk assessment for QT prolongation. All studies were to be conducted using Good Laboratory Practice (GLP) standards to comply with regulatory submission.
Of interest is the ability of preclinical assays to predict clinical QT effects, as recently reviewed by Park et al. [71]. They examined hERG assays, APD and in vivo QTc effects obtained for 150 drugs in which ventricular repolarization had been evaluated in thorough QT trials [71]. They found that these assays showed a high false negative rate (poor sensitivity) but a low false positive rate (high specificity) at low concentrations with QTc as an endpoint. hERG and in vivo QTc data had better positive predictive values than the APD and were of moderate utility, although the authors cautioned that these results can vary depending on drug concentration and testing conditions. Somewhat similar findings were reported by Kleiman and colleagues, who computed the sensitivity and specificity of preclinical in vitro and in vivo studies to predict clinical QT prolongation [72]. Reduced sensitivity (30%-68%) across all assays was again noted, while APD demonstrated a slightly higher specificity of 93% compared to the other studies at 85%.

ICH E14 [12] was designed to assess the clinical impact of pharmaceutical agents on the QT/QTc interval. This document recommended that most new chemical entities (NCE) that have systemic bioavailability be subjected to a randomized, dedicated ECG trial involving normal volunteers, termed a thorough QT (TQT) study. The purpose of this study is to determine if the novel agent under investigation demonstrates a threshold effect on the QTc interval. TQT study designs can be crossover, parallel or adaptive in configuration, and recruitment of both male and female subjects, when appropriate, is encouraged. The conventional TQT study is typically comprised of 4 components: a therapeutic dose of the investigational compound, a supratherapeutic dose of the test compound, a placebo cohort and a positive control arm with a known QT prolonging agent, most often the antibiotic moxifloxacin, designed to confirm assay sensitivity. The parameters to be analyzed in these protocols include either a change from baseline of the heart rate corrected QT interval (∆QTc) or the maximum time-matched, placebo-adjusted change in heart rate corrected QT (∆∆QTc) at any time point during the study. The threshold level of regulatory concern was set at a mean ∆∆QTc effect of approximately 5 ms. This translates to a 10 ms upper bound of the one-sided 95% confidence interval (CI) around the largest time-matched mean effect of the baseline-adjusted difference between placebo and study drug. This boundary is independent of whether it is breached by either the therapeutic or supratherapeutic dose of the compound. A mean ∆∆QTc effect of less than 10 ms provides reasonable assurance that the compound under investigation does not adversely alter ventricular repolarization. In this case, routine collection of on-drug ECGs in later stage trials would be sufficient to monitor cardiac liability. In contrast, with pharmaceuticals that display a 10 ms (20 ms for oncology agents) or greater ∆∆QTc signal, expanded ECG surveillance in later stage development would almost always be required to characterize the QT effect of the agent. However, a positive signal should not automatically be interpreted as proarrhythmic, nor should this constitute the sole grounds for the compound to be halted in development.
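To make the central-tendency decision rule concrete, the sketch below computes baseline-adjusted, placebo-corrected QTc changes (∆∆QTc) at each time point and checks the largest time-matched mean effect against the 10 ms bound. It is only an illustration for a parallel design: the input arrays, the simple difference-of-means estimator, and the normal-approximation one-sided 95% CI are assumptions, not the prespecified statistical model of an actual TQT study.

```python
import numpy as np

def ddqtc_threshold_check(drug_qtc, placebo_qtc, baseline_drug, baseline_placebo,
                          bound_ms=10.0, z_one_sided_95=1.645):
    """Illustrative DDQTc screen for a parallel design.

    drug_qtc / placebo_qtc: subjects x timepoints arrays of on-treatment QTc (ms).
    baseline_drug / baseline_placebo: per-subject baseline QTc (ms).
    """
    # Baseline-adjusted change per subject and time point (DQTc)
    d_drug = np.asarray(drug_qtc, float) - np.asarray(baseline_drug, float)[:, None]
    d_plac = np.asarray(placebo_qtc, float) - np.asarray(baseline_placebo, float)[:, None]

    # Placebo-corrected mean effect per time point (DDQTc)
    mean_dd = d_drug.mean(axis=0) - d_plac.mean(axis=0)
    se_dd = np.sqrt(d_drug.var(axis=0, ddof=1) / d_drug.shape[0] +
                    d_plac.var(axis=0, ddof=1) / d_plac.shape[0])

    worst = int(np.argmax(mean_dd))                     # largest time-matched mean effect
    upper = mean_dd[worst] + z_one_sided_95 * se_dd[worst]
    return mean_dd[worst], upper, upper < bound_ms      # True -> below the 10 ms bound
```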
In analyzing the pharmacodynamic effect of the test compound on the ECG, an important consideration is that while a mean change in a cohort(s) may not reach the 10 ms threshold, there may well be individuals within the study who have abnormally high changes in QTc exceeding 10 ms from baseline values. These "concealed outliers" would likely impact drug development, including go/no-go decisions as well as the intensity of ECG surveillance in later stage protocols.
The TQT study is typically carried out late in stage II or early in stage III of drug development after pharmacokinetic and proof of concept information is available and is not limited to only NCE. It might also be undertaken for approved drugs when a different patient population is targeted, or a new dosing regimen or route of administration is being considered. In contrast, regulators have suggested that TQT studies are not routinely required for drug combinations when the individual components have previously been evaluated and did not manifest any QT liability. They also proposed that TQT trials are usually not indicated for large molecular weight biologic proteins or monoclonal antibodies which do not enter the cell and have minimal direct ion channel effects or agents which do not manifest any systemic bioavailability. Finally, during execution of these TQT studies, compliance with Good Clinical Practice (GCP) is necessary to ensure that data meets the regulatory standards for submission [43].
An additional topic that was recently addressed in a 2016 FDA document is the role of race and ethnicity in clinical trials [73,74]. Historically, TQT studies have focused on the QT liability or intrinsic risk of a compound independent of its effects on people of various races or ethnic backgrounds, and these subgroups are underrepresented in many trials. ICH E14 stated that ethnicity would likely not affect the results of a TQT trial. Although there is a scarcity of large-scale scientific studies, it is now recognized that different populations of individuals may have a genetic predisposition to being more susceptible and sensitive to a pharmaceutical's effects either by attaining higher exposures or altering the drug's disposition. This in turn may be due to heterogeneity in various alleles that code for different ionic channels or that impact metabolizing enzymes. To this point, one well-known example of differential racial sensitivity is that of the agent BiDil [75], a combination of a nitrate and hydralazine. Despite considerable controversy, it was approved by the FDA in 2005 as there was a statistically significant improvement in all-cause mortality and first hospitalizations in self-identified Black subjects versus the overall study population. In contrast to TQT trials, early phase studies typically encompass small cohorts of subjects and therefore are underpowered to provide clinically meaningful results regarding ethnicity, race, and pharmacogenetic susceptibility and sensitivity. Therefore, efforts should be expended to recruit subjects of varying backgrounds, especially as drug development proceeds into larger scale later phase studies. To this end, the FDA recommends that all Investigational New Drug (IND) applications, NDAs and Biologic License Applications (BLAs) report tabulated demographic data per their 2016 guidance document [74].
Current FDA Guidance for Assessing QT Liability
TQT studies are resource intensive and expensive to conduct averaging between 1-4 million dollars depending upon study design. As such, all stakeholders have been interested in developing a more cost effective and equally informative scheme to assess a compound's effect on the QT interval and its propensity to induce arrhythmias. To this end, there has been a paradigm shift towards a more comprehensive mechanistic approach to defining a compound's cardiac risk and potential to produce TdP rather than only profiling its effects on the QT interval. This strategy entails a more robust preclinical and clinical evaluation of new compounds integrated with concentration QT (cQT) analysis in early phase clinical trials.
Comprehensive In Vitro Proarrhythmia Assay (CiPA)
In 2013 an initiative was undertaken by industry, academicians and government regulators to develop a suite of preclinical and early clinical assays which would complement the ICH S7B guidance and provide an integrated assessment of arrhythmia risk. This initiative was known as the Comprehensive in vitro Proarrhythmia Assay (CiPA) [76]. In its most current iteration, it consists of the following 4 elements:
Preclinical
Evaluation of multiple cardiac ion channels as a tool to profile risk when clinical studies demonstrate a modest QTc signal (typically <20 ms) and there is evidence of potassium channel/hERG block. In addition to a hERG kinetic score, the most important channels to be evaluated include L-type calcium (L-type Ca2+), late sodium (late Na+), slow potassium (IKs), and occasionally other inward and outward voltage-gated cation channels which may affect ventricular repolarization.

Robust in silico computer modeling of a compound's likelihood to cause TdP based upon data derived from ion channel studies. Compounds are to be classified as low, moderate, and high risk relative to a training and validation set of 28 approved reference drugs with known torsadogenic risk. A TdP metric score known as qNet has been devised to quantify risk into three subsets at exposures that are a multiple of the maximal concentration (Cmax). This model targets the assessment of early afterdepolarizations, which are thought to be involved in the pathogenesis of TdP. It is not, however, designed to look at a compound's direct effects on the cardiac action potential.
Human induced pluripotent stem cell cardiomyocyte preparations have been formulated to ascertain whether there are any unanticipated effects of a compound on ventricular cardiac tissue not documented by other investigations. This assay may be useful to uncover an NCE's effects not identified by the in silico model, in situations where there is inadequate human exposure in early phase studies, or if there is discordant data from other preclinical assays.
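The in silico element above ultimately reduces, at the reporting stage, to mapping a scalar TdP metric onto three risk categories. The function below is only a schematic of that final thresholding step: the cutoff values are placeholders, since the actual CiPA cutoffs are calibrated against the 28 reference drugs and are not given in this text, and the convention that a larger qNet corresponds to lower torsadogenic risk is stated here as an assumption.

```python
def classify_tdp_risk(qnet, low_cutoff, high_cutoff):
    """Placeholder ordinal classifier for a qNet-style TdP metric.

    low_cutoff / high_cutoff are hypothetical values; in CiPA they are derived
    from the reference-drug training and validation sets. Assumes that a
    larger qNet value corresponds to lower torsadogenic risk.
    """
    if qnet >= high_cutoff:
        return "low risk"
    if qnet >= low_cutoff:
        return "intermediate risk"
    return "high risk"
```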
Clinical
Novel ECG biomarkers, particularly evaluation of the early phase of cardiac repolarization, known as the J-Tpeak interval, a mechanistic biomarker of multichannel ion block, which may mitigate cardiac risk when a modest (<20 ms) QT signal is observed in first-in-human studies. The J-T interval primarily corresponds to ventricular repolarization and is governed by the interplay of multiple voltage-gated cation currents. This interval, in turn, can be subdivided into the J-Tpeak and the Tpeak-Tend segments (see Figure 7). While IKr is the predominant outward ion channel influencing the length of ventricular repolarization and hence the overall J-T interval, inward currents mediated by calcium and late Na+ channel block are the principal factors responsible for the J-Tpeakc subinterval. It has been demonstrated that in compounds that prolong the QT interval due to single-channel IKr block, an associated change of <9 ms in the J-Tpeakc value of the test compound is an indicator of late sodium multichannel ion block [78]. This late sodium channel block has been postulated to attenuate the effects of IKr blockade and may therefore reduce the likelihood of ventricular arrhythmias.
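As a toy illustration of the criterion just described (QTc prolongation accompanied by a < 9 ms change in J-Tpeakc suggesting balancing inward-current block), the helper below flags that pattern. Only the 9 ms cutoff comes from the text; the function name, the 10 ms QTc-signal default, and the inputs are illustrative assumptions.

```python
def suggests_balanced_ion_block(delta_qtc_ms, delta_jtpeakc_ms,
                                qtc_signal_ms=10.0, jtpeak_cutoff_ms=9.0):
    """Return True when QTc is prolonged but J-Tpeakc barely moves, the
    pattern described for hERG block offset by late-Na+/Ca2+ current block."""
    return delta_qtc_ms >= qtc_signal_ms and delta_jtpeakc_ms < jtpeak_cutoff_ms
```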
The presence of multichannel ion block was first evaluated by Johanssen et al. [79], who published a paper in which they measured the QT, J-Tpeakc and Tpeak-Tend intervals for 4 drugs known to prolong the QT interval: ranolazine, quinidine, dofetilide and verapamil. They found that the pure hERG blocker dofetilide prolonged both the early and late components of the J-Tc interval and therefore had a higher risk of proarrhythmia. In contrast, the other three agents had a low risk of proarrhythmia and TdP due to multichannel ion block, as evidenced by no appreciable prolongation of the corrected J-Tpeakc interval. In a follow-up investigation to amplify their findings, these authors studied the ability to counteract the QT prolonging effects of the pure hERG blocker dofetilide by administering mexiletine and lidocaine, two drugs known to affect late Na+ current [80]. They showed that these late Na+ channel blocking agents shortened the QTc by 20 ms and that this shortening was exclusively due to shortening of the J-Tpeakc interval.
Pursuant to these observations, at a recent presentation by Vicente [81], updated commentary was provided about the utility of measuring the early phase (J-Tpeakc) and late phase (Tpeak-Tend) of the J-T interval as biomarkers that may modulate proarrhythmic concerns in the setting of a moderately positive QT signal and reduce the need for intensive ECG monitoring in late stage trials. To this end, open source code was released by the FDA using a vector magnitude function, and software vendors including AMPS [82], Mortara and Philips have developed technology which offers measurement of these intervals based upon different mathematical models [83]. Although a meaningful disparity between the different vendors' algorithms has not been evidenced, a number of critical elements in this approach have yet to be standardized, which may affect the accuracy and utility of this biomarker. These include which computer algorithm to use; how best to determine the J point origin and the peak of the T wave, especially when the T wave apex is not monophasic; in what lead(s) the J-T component intervals should be measured; and what is the optimal heart rate correction factor to utilize. It is unclear at this time if these metrics will be routinely required by regulators to assess QT liability or only be recommended to qualify risk whenever a modest QT signal is identified. Moreover, the decision as to which algorithm an ECG core laboratory should use and validate is problematic given that the FDA's posture is not to favor or endorse any specific methodology.
To date the FDA has recommended CiPA assays to 5 sponsors as part of their IND submissions although it is not a mandatory requirement that they be conducted or incorporated into drug protocols [84]. It therefore remains to be determined when CiPA information will be requested by government regulators, which elements of CiPA will be required, and who should be entrusted to provide these analyses. Furthermore, whether these assays should be pre-emptively undertaken by sponsors prior to IND submission and what would be the economic impact of adding these assays to drug development are both open questions for discussion. Hence, as additional CiPA assays are performed and the FDA databases this information, their precise role in a compound's development scheme will be better delineated.
In summary, the major purpose of the CiPA initiative appears to be to improve product labeling regarding proarrhythmic potential, enable further development of drugs that previously may have been prematurely terminated due to perceived cardiac risk, and influence the intensity of ECG monitoring in late phase trials [85]. It may also prove helpful when uninterpretable TQT data due to heart rate changes are present, when the test compound cannot be given to healthy volunteers such as in oncology patients, and when inadequate or low exposure margins are seen in phase I trials. Its implementation may further be beneficial for pharmaceuticals that manifest significant heart rate changes and when there is confounding QT data from preclinical and early clinical studies. Finally, it should be viewed as a complementary approach to the ICH S7B guidance as hERG assays and animal telemetry observations are still required to assess cardiovascular safety.
Concentration QT Modelling
The second major development in QT liability assessment involves the use of exposure response analysis (ER) or concentration QT modelling (cQT) [86,87]. This type of analysis was under investigation in 2003 and became an integral part of TQT trials in 2006 albeit as a secondary analysis tool relative to the primary central tendency by time point approach known as the Intersection Union Test (IUT) as detailed in ICH E14. In 2008 the FDA drafted a guidance document elucidating the role of cQT in drug development [88] and in 2015 it became a key element in the FDA's Q&A (R3) publication [12] that established it as a primary analysis tool to be applied predominantly in dose escalation studies as an alternative or "substitute" to conducting a large scale TQT study.
The emergence of cQT as a primary analysis modality is supported by a number of factors. Chief amongst these are the results of the Innovation and Quality Cardiac Safety Research Consortium (IQ-CSRC) SAD-like study [89], supporting the value of this approach in a small cohort of subjects, multiple computer simulation studies undertaken by stakeholders during the past 10 years, and a large body of knowledge garnered by conducting this analysis in hundreds of TQT trials. cQT has the distinct benefits of reducing resources and development timelines while enabling earlier go/no-go decisions which may obviate the need to undertake a large scale conventional TQT study. Additional advantages include facilitating extrapolation of a compound's cardiac risk at exposures that may not have been evaluated but are clinically important, obtaining information in challenging populations such as oncology patients, and clarifying ambiguous results that may have been generated when the IUT was undertaken. The correlation between the findings of cQT analysis and those of the IUT applied to traditional QT studies is high when the highest clinically relevant exposure is achieved, and regulators have reviewed and accepted cQT with multiple IND submissions. cQT modeling has now become an important part of the discussion with sponsors as they plan their drug development programs and study designs.
When applying cQT analysis, most commonly, a pre-specified linear mixed effects model is used to compute the mean and two-sided 90% CI of ∆∆QTc at clinically relevant exposures covering concentrations that are therapeutic as well as those that might be seen with hepatic or renal impairment or in the presence of CYP enzyme polymorphisms. When the upper bound of the two-sided 90% CI for the QTc effect of a drug treatment as determined by exposure-response analysis is <10 ms, one can infer that the compound under investigation is not high risk and further testing beyond routine ECG surveillance in later stage development is usually not required assuming that the highest clinically relevant exposure has been attained. In contrast, when the cQT regression line manifests a positive slope that crosses the 10 ms threshold of regulatory concern, the compound is deemed to be at higher risk for proarrhythmia and may justify additional preclinical testing per the CiPA paradigm or more extensive ECG monitoring in later phase studies. Finally, when employing the cQT model, it is essential to avoid extrapolating data at concentrations that were not clinically observed and to ensure that tests for hysteresis are applied.
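A minimal sketch of the exposure-response step described above follows, assuming a long-format table with one row per subject and time point containing plasma concentration and placebo-corrected, baseline-adjusted QTc change. The linear mixed model with only a random intercept per subject and the normal-approximation two-sided 90% CI are deliberate simplifications of the prespecified analyses actually submitted to regulators; column names and the c_high argument are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def cqt_upper_bound(df: pd.DataFrame, c_high: float) -> float:
    """df columns (assumed): 'subject', 'conc' (drug concentration), 'ddqtc' (ms).

    Fits ddqtc ~ conc with a random intercept per subject and returns the upper
    bound of a two-sided 90% CI for the predicted ddqtc at the high exposure.
    """
    fit = smf.mixedlm("ddqtc ~ conc", data=df, groups=df["subject"]).fit()

    beta = fit.fe_params.values                 # fixed effects: intercept, slope
    # Fixed effects occupy the leading rows/columns of the parameter covariance
    cov = np.asarray(fit.cov_params())[:2, :2]
    x = np.array([1.0, c_high])                 # design row at the high exposure

    pred = float(x @ beta)
    se = float(np.sqrt(x @ cov @ x))
    return pred + 1.645 * se                    # < 10 ms -> low regulatory concern
```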
cQT is typically performed in SAD studies as the highest drug concentrations are usually achieved in these protocols and the probability of reaching a targeted exposure level is greater than that which would be seen in multiple ascending dose (MAD) protocols. Incumbent in the SAD trials is that there are a sufficient number of subjects to provide adequate power to exclude a false negative result. This typically includes 3 or more cohorts of 6-9 subjects on active drug and at least 2 subjects per cohort receiving placebo. However, there are situations when cQT in SAD investigations may be challenging and the preferred strategy in these cases would be to incorporate cQT analysis in a MAD design. These scenarios include drugs which may be time released such that the model fails to capture a sufficient range of drug exposures, drugs which exhibit significant hysteresis and heart rate variability, and for drugs in which hERG blocking effects of the parent moiety and an associated active metabolite may not be fully manifest in a single dose escalation protocol. Moreover, in circumstances when the investigational agent has non-linear kinetics, or when there may be a long half-life, major metabolites or significant drug accumulation, the SAD study may not have enough subjects for meaningful analysis. In these cases, data from the SAD investigation may be combined with pharmacokinetic (PK) and pharmacodynamic data from a complementary MAD study assuming that clinical conduct of both trials is homogeneous and ECGs are rigorously collected, of high quality, and reviewed in a consistent and blinded manner so as to minimize data variability [43].
The decision to undertake cQT in early phase trials may be influenced by budgetary constraints although the expense associated with this strategy is significantly less than that of a dedicated TQT trial. In these situations, several cost effective alternatives may be attractive. One would be to only analyze the PK/QTc relationship in the highest dose cohorts while another approach would be to collect Holter recordings for each cohort, and store and save the information for future analysis, as most NCE will not proceed beyond phase II in development.
An important issue of regulatory concern is whether or not a positive control to document assay sensitivity is required for exposure response analysis to reduce the likelihood of false negative results. The original ICH E14 document stated that "the confidence in the ability of the study to detect QT/QTc prolongation can be greatly enhanced by the use of a concurrent positive control group (pharmacological or non-pharmacological) to establish assay sensitivity." Current guidance for cQT does not mandate the use of a pharmacologic positive control assuming the drug achieves an adequate exposure margin defined as twice the highest clinically relevant exposure which is generally higher than the supratherapeutic dose observed in conventional TQT trials. This exposure represents a worst-case clinical scenario accounting for both intrinsic and extrinsic factors that can affect drug metabolism. These include renal or hepatic impairment, drug-drug interactions, gender, age, food and metabolic enzymes that impact drug disposition. As such, the margin for patient safety and the likelihood of false negative responses are optimally addressed by analyzing data bracketing the Cmax, the highest clinical exposure, and at the timepoint when a multiple of the highest relevant clinical exposure occurs. In the event that a 2X exposure margin is not reached, then a positive control with moxifloxacin administered to 20-24 subjects has been proposed to establish assay sensitivity.
Alternatively, a non-pharmacologic approach to assess assay sensitivity known as "bias metric" [90], is currently undergoing examination and is most suitable in cases where the exposure margin falls just short of waiving the need for a positive control. This approach uses the slope of Bland Altman plots to address any potential bias between fully automated computer-generated QT measurements and those that are procured from the core laboratory algorithm used in the primary analysis. Darpo et al. published their results using this metric for 5 drugs with known QT prolonging effects that were also administered in the IQ-CSRC study [91]. When bias severity was greater than −20 ms over a range of QTc values of 100 ms or greater, then the ability to predict the QT effects of the five studied drugs failed. Contrariwise, when there was a QTc difference of <−10 ms bias severity, then the chance of a false negative finding was lowered to less than 5%. They concluded that a metric of bias severity "has to be included" in all reported QTc information obtained in early phase studies using exposure response analysis to minimize the risk of false negative results and ensure accuracy and consistency of fiducial measurements. To date, this recommendation has not been routinely fostered by government regulators who view its application on a case by case basis where it seems to have the greatest impact with drugs that demonstrate some degree of QT prolongation. Other non-pharmacologic approaches that have been proposed to establish assay sensitivity include the effects of standardized meals as advocated by Taubel [92], and the role of positional changes reported by Wheeler [93]. Neither of these have gained traction within industry and all non-pharmacologic approaches to determine assay sensitivity await further testing and validation before they become viable and accepted alternatives to moxifloxacin. From December 2015-November 2016, the QT-Interdisciplinary Review Team (QT-IRT) reviewed 25 proposals for cQT analysis submitted under the aegis of ICH E14 (Q&A R3) [94], 18 of which were early phase SAD/MAD study designs including some where data was pooled from the separate trials. The reviewers accepted 11 proposals while rejecting 14 of the submissions with the predominant reason for rejection being inadequate exposure margins or an inadequate modeling analysis plan. In a more recent update of 16 study designs of non-oncology compounds, the QT IRT accepted 14 as a substitute for conducting a TQT study while rejecting 2 [84]. Those that were rejected were once again primarily due to inadequate exposure margins. Other possible reasons for protocol modification or outright rejection have included failure to sample at Cmax, failure to sample for a sufficient time out to 24 h, failure to consider the impact of metabolites and inadequate justification for pooling of data from multiple studies. Moreover, when submitting a proposal involving cQT, there are other important elements to consider in the study design that might prompt rejection if omitted. These entail exploratory plots to test model assumptions, utilizing a linear mixed effects model or other alternative models such as Emax, and comprehensive treatment−placebo difference calculations. While not all protocols are mandated to be submitted for review to the FDA's cardio-renal division and QT-IRT, submission of all first in human drug proposals should be considered if there are concerns about study design, modelling or exposure levels. 
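The bias-metric idea described above can be sketched as follows: pair the fully automated QT measurements with the core-laboratory values on the same ECGs and characterize how the disagreement trends across the QTc range, Bland-Altman style. The function below is only a schematic of that idea, not Darpo et al.'s actual metric; the variable names and the simple linear fit are assumptions.

```python
import numpy as np

def qt_bias_profile(auto_qt, core_qt):
    """auto_qt, core_qt: paired QT/QTc measurements (ms) on the same ECGs.

    Returns the mean bias and the predicted bias at the low and high ends of
    the observed QTc range from a linear (Bland-Altman slope) fit.
    """
    auto_qt = np.asarray(auto_qt, dtype=float)
    core_qt = np.asarray(core_qt, dtype=float)

    diff = auto_qt - core_qt                 # per-ECG disagreement (bias)
    mean_val = (auto_qt + core_qt) / 2.0     # Bland-Altman x-axis

    slope, intercept = np.polyfit(mean_val, diff, 1)
    lo, hi = mean_val.min(), mean_val.max()
    return {
        "mean_bias_ms": float(diff.mean()),
        "slope_ms_per_ms": float(slope),
        "bias_at_low_qtc_ms": float(slope * lo + intercept),
        "bias_at_high_qtc_ms": float(slope * hi + intercept),
    }
```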
This review process typically encompasses 30-60 days following submission and should be incorporated by sponsors into their development timelines.
Commentary
The 10 ms threshold of regulatory concern, while having been successful in preventing any new drugs that are torsadogenic from coming to market since 2005, is not typically a value that would prompt intervention by clinicians, particularly if the calculated QTc after drug administration remains well within the normal range. Moreover, depending upon the specialty of practice, clinicians may not be aware that certain compounds have a risk of inducing rhythm disturbances and, more importantly, that drug combinations in the setting of underlying disease or electrolyte abnormalities may be synergistic in increasing the risk of an arrhythmic event. To add, even when there is documented and known QT liability for a drug, clinicians may not procure pre- and post-treatment ECGs to monitor the QTc interval. For example, a study by Choo et al. from the UK surveyed the prescribing practice of hospital-based cardiologists and general practitioners over 6 months involving drugs which carried a risk of prolonging the QTc interval [95]. A baseline prolonged QTc was recognized by only 14% of cardiologists and 6% of general practitioners. Of the 4133 patients in the study, 22 had QTc values >500 ms at baseline but only 2 of these were identified prior to drug dosing. Surprisingly, despite manifesting a prolonged QTc at baseline, a significant number of subjects (37.8%) were nonetheless prescribed QTc prolonging drugs and only 8% had a follow-up ECG at 48 h. Broszko and Stanciu surveyed a small group of psychiatrists from a university program and found that despite American Psychiatric Association (APA) recommendations [96], a large number of the physicians did not routinely screen for syncope, family history of sudden death, heart disease or congenital long QTc syndromes. Moreover, obtaining ECGs prior to initiating treatment was performed much less frequently in the outpatient cohorts compared to the inpatient population. This study may well be representative of other subspecialties, underscoring the need for all practitioners to be cognizant of a drug's potential for producing arrhythmias prior to initiating therapy coupled with careful cardiovascular screening. To aid in this endeavor, the website CredibleMeds is an important and user-friendly reference in the public domain as it maintains a comprehensive list of medications that may be linked to TdP and classifies them as "known risk, possible risk and conditional risk". It also has a category of drugs to be avoided in the setting of congenital long QTc syndromes and the drug lists are continually updated as new information becomes available.
Based upon the aforementioned information, it is evident that prescribers including cardiac specialists fall short in appropriate surveillance for QT liability and need to be more diligent. This would entail performing a careful history for cardiovascular disease and obtaining a baseline ECG if indicated. While it is incumbent that regulatory agencies focus on public safety including the challenge of weighing a compound's benefit versus risk during the approval process, more attention needs to be directed towards practitioners to increase their awareness about drugs with known QTc effects often embedded in drug insert black box warnings. To illustrate this need, in a survey of roughly 5 million prescriptions in the UK, 23% of patients were prescribed a drug that the prescriber did not recognize could prolong the QT interval [97]. Accordingly, the mission of protecting the public is a shared responsibility involving both government regulators and health care providers and additional safeguards should be instituted to ensure patient safety when drugs that may affect cardiac repolarization are administered.
As an aid in this endeavor, computer software programs that identify adverse drug responses and potential drug interactions based upon individual patient demographic and laboratory data need to be developed and universally adopted to optimally safeguard patients from QT liability. To this end, the Mayo Clinic and University of Indiana have independently implemented clinical data support systems (CDSS) in an effort to alert pharmacists and physicians to medications which may predispose to arrhythmias [98]. For example, at Indiana University Medical Center, 2400 electronic records from coronary care unit patients were interrogated and a QTc alert risk score was developed based upon a QTc >500 ms or a change from baseline of >60 ms. Using this approach, they demonstrated that the risk of exceeding either of these thresholds could be reduced for both cardiac and non-cardiac drugs that have been associated with QT prolongation and TdP. Unfortunately, however, in the majority of cases when alerts were generated by the CDSS systems, prescribers and pharmacists ignored the alerts thereby compromising the favorable impact of these systems on reducing the likelihood of serious ventricular arrhythmias.
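A CDSS rule of the kind described above (flag a QTc above 500 ms or an on-treatment increase of more than 60 ms from baseline) reduces to a very small check. The thresholds are taken from the text; the function itself is only an illustration of how such an alert could be encoded, not the Indiana or Mayo implementation.

```python
def qtc_alert(baseline_qtc_ms: float, current_qtc_ms: float) -> bool:
    """Return True when either alert threshold described for the CDSS example
    is exceeded: absolute QTc > 500 ms or a > 60 ms rise from baseline."""
    return current_qtc_ms > 500.0 or (current_qtc_ms - baseline_qtc_ms) > 60.0
```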
Despite these shortcomings, the further development of comprehensive CDSS that are integrated with laboratory and clinical proarrhythmic risk factors should be pursued in combination with continuing education for those entrusted with dispensing and prescribing medications. The development of wearable smart devices or handheld software applications that can monitor a patient's rhythm, assess the QT value, and transmit an alert would be useful for risk stratification and surveillance. Moreover, as genetic screening becomes more widespread and cost effective, this information could be integrated into these computer alert platforms thereby allowing persons with congenital QT syndromes to be identified sooner. This in turn would translate into heightened awareness of all parties involved and permit earlier prophylactic intervention so as to reduce patient morbidity and mortality. Future strategies to mitigate arrhythmia risk may include stereoselective and molecular engineering of new compounds to reduce undesirable arrhythmogenic effects, the development of agents that shorten the QT interval such as hERG activators that could be targeted to counteract the QT prolonging activity of drugs, and compounds that can shorten APD via blockade of the Ca 2+ /Na + exchanger. However, as cautioned by Malik [99], the science to support these approaches is yet to be substantiated and there is the possibility that QT shortening compounds may paradoxically create unwanted heterogeneity of repolarization and arrhythmogenicity in some cases.
Conclusions
It is evident from the foregoing discussion that the QTc interval, despite its modest positive predictive value for TdP and ventricular arrhythmias, continues to occupy a central focus of regulators as a surrogate marker for proarrhythmic risk during drug development. However, there has been an evolution in how stakeholders are approaching cardiac liability as embodied by the CiPA paradigm with its advanced preclinical assays, and the emergence of cQT as a primary analysis tool in dose escalation studies. Taken together, these paradigms offer a more mechanistic and cost-efficient approach to assess arrhythmia risk than that afforded by a conventional TQT trial.
While the QTcF correction formula remains the reporting standard for regulatory review, there is a lack of consensus as to what constitutes a normal value and it is recommended that protocol specific thresholds are chosen balancing the elements of subject safety against recruitment challenges. When QTcF does not adequately correct for heart rate effects, determination of an individual QTcI either by a full Day-1 acquisition of QT-RR pairs or following autonomic maneuvers such as postural change has been suggested. While the measurement of the QT interval can be problematic, the use of computer assisted algorithms to initially place fiducial markers followed by manual review and adjudication by experienced pharma QT expert readers would ensure that the precision and accuracy necessary to profile a compound's effects on ventricular repolarization are realized. Equally important is the recognition of both acquired and congenital LQTS and SQTS polymorphisms so as to permit appropriate triage of these individuals for diagnostic evaluation and testing. It is also critical to exclude them from enrollment in pharmaceutical studies that may aggravate an underlying predisposition to alter ventricular repolarization leading to serious ventricular arrhythmias and TdP. Finally, continued development and acceptance of user-friendly interfaces for health care professionals which allow real time access to drug information and clinical data are promising tools to ameliorate the risk of QT associated dysrhythmias.
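For reference, the sketch below applies the Fridericia correction (QTcF = QT / RR^(1/3), with RR in seconds) and derives an individual correction exponent from off-drug QT-RR pairs via a log-log fit. The log-log model is one common way to obtain a QTcI and is offered here as an assumption, not as the only accepted method; the example numbers are hypothetical.

```python
import numpy as np

def qtcf(qt_ms, rr_s):
    """Fridericia-corrected QT in ms (RR interval in seconds)."""
    return np.asarray(qt_ms, float) / np.cbrt(np.asarray(rr_s, float))

def individual_correction_exponent(qt_ms, rr_s):
    """Fit log(QT) = log(a) + b*log(RR) on drug-free QT-RR pairs; the slope b is
    then used as the subject-specific exponent: QTcI = QT / RR**b."""
    b, _log_a = np.polyfit(np.log(np.asarray(rr_s, float)),
                           np.log(np.asarray(qt_ms, float)), 1)
    return b

# Hypothetical example: a beat with RR = 1.2 s and QT = 420 ms gives
# QTcF = 420 / 1.2**(1/3) ~ 395 ms.
```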
Author Contributions: R.M.L. developed the concept and wrote the initial manuscript. S.P. contributed to a section of the discussion and critically reviewed the manuscript. I.A.J. provided assistance with graphical displays and critically reviewed the manuscript. | 2019-03-20T13:03:50.785Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "5940b1cfeab358e8cdfe0e05c508e92fe91fd714",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/20/6/1324/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5940b1cfeab358e8cdfe0e05c508e92fe91fd714",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270962326 | pes2o/s2orc | v3-fos-license | Plerixafor for pathogen‐agnostic treatment in murine thigh infection and zebrafish sepsis
Abstract Plerixafor is a CXCR4 antagonist approved in 2008 by the FDA for hematopoietic stem cell collection. Subsequently, plerixafor has shown promise as a potential pathogen-agnostic immunomodulator in a variety of preclinical animal models. Additionally, investigator-led studies demonstrated plerixafor prevents viral and bacterial infections in patients with WHIM syndrome, a rare immunodeficiency with aberrant CXCR4 signaling. Here, we investigated whether plerixafor could be repurposed to treat sepsis or severe wound infections, either alone or as an adjunct therapy. In a Pseudomonas aeruginosa lipopolysaccharide (LPS)-induced zebrafish sepsis model, plerixafor reduced sepsis mortality and morbidity assessed by tail edema. There was a U-shaped response curve with the greatest effect seen at 0.1 μM concentration. We used Acinetobacter baumannii infection in a neutropenic murine thigh infection model. Plerixafor did not show reduced bacterial growth at 24 h in the mouse thigh model, nor did it amplify the effects of rifampin antibiotic therapy in varying regimens. While plerixafor did not mitigate or treat bacterial wound infections in mice, it did reduce sepsis mortality in zebrafish. The observed mortality reduction in our LPS model of zebrafish was consistent with prior research demonstrating a mortality benefit in a murine model of sepsis. However, based on our results, plerixafor is unlikely to be successful as an adjunct therapy for wound infections. Further research is needed to better define the scope of plerixafor as a pathogen-agnostic therapy. Future directions may include the use of longer acting CXCR4 antagonists, biased CXCR4 signaling, and optimization of animal models.
WHAT IS THE CURRENT KNOWLEDGE ON THE TOPIC?
The role of CXCR4 has yet to be fully understood. Studies yielding paradoxical results posit the existence of noncanonical pathways. CXCR4 likely has many roles. It is present in multiple populations of stem cells and all leukocytes. It plays key roles in cellular trafficking. CXCR4 expression is increased in various etiologies of inflammation and malignancy. Recent clinical trials have demonstrated the benefit of CXCR4 antagonism in primary immunodeficiency. Ongoing studies are evaluating a plerixafor combination in severe COVID (ClinicalTrials.gov Identifier NCT04646603). Our study questions the potential application of CXCR4 antagonists.
INTRODUCTION
CXCR4 is a well-described chemokine receptor that upon binding with its ligand CXCL12 initiates multiple downstream effects, including retention of hematopoietic stem cells and neutrophils in the bone marrow [1,2]. Plerixafor is a CXCR4 antagonist approved in 2008 for hematopoietic stem cell collection in combination with granulocyte colony-stimulating factor (G-CSF) [3]. Plerixafor has also been shown to rapidly mobilize leukocytes to circulating blood in murine models [2,4-7]. In humans, plerixafor has shown promise in treating patients with Warts, hypogammaglobulinemia, infections, and myelokathexis (WHIM) syndrome. Patients with WHIM syndrome treated with plerixafor had higher sustained levels of peripheral leukocytes in addition to a decreased burden of bacterial and viral infections [8]. Similar results were observed using the oral CXCR4 antagonist mavorixafor, highlighting the reproducibility of the previous results with plerixafor and providing stronger evidence for the proof of concept of CXCR4 antagonists playing a role in infectious processes [9]. From a mechanistic perspective, there is plausibility as CXCR4 antagonism may lead to higher leukocyte counts readily available at the site of infection and decreased tissue injury via prevention of platelet-neutrophil complexes and neutrophil extracellular traps, among other mechanisms [10]. As therapeutic dosing of plerixafor is well tolerated in the general population [11], we sought to confirm and expand the therapeutic potential of plerixafor as a possible pathogen-agnostic therapeutic to prevent and treat infections. Specifically, we tested plerixafor in a piscine lipopolysaccharide (LPS) sepsis model and a multidrug-resistant (MDR) Acinetobacter baumannii (AB5075) murine thigh infection model.
Zebrafish studies
We evaluated increasing concentrations of plerixafor in a previously established zebrafish sepsis model [12]. Adult AB Danio rerio (zebrafish) outcrossed with stock from the Zebrafish International Resource Center were maintained at an AAALAC International accredited facility. At 5 days post-fertilization (dpf), LPS derived from Pseudomonas aeruginosa was added to zebrafish wells at a concentration of 60 μg/mL. At the same time, plerixafor was added to reach a well concentration of 10, 1, 0.1, or 0.01 μM. An LPS-positive control group (without plerixafor) and a negative control group (in embryo media without LPS) were present.
WHAT QUESTION DID THIS STUDY ADDRESS?
Can CXCR4 fill a clinical need aside from stem cell mobilization or selected cases of immunodeficiency?

WHAT DOES THIS STUDY ADD TO OUR KNOWLEDGE?
Our study suggests partial CXCR4 inhibition may be beneficial in bacterial sepsis. Biased CXCR4 signaling is briefly discussed and may explain paradoxical observations in focal lesions.
HOW MIGHT THIS CHANGE CLINICAL PHARMACOLOGY OR TRANSLATIONAL SCIENCE?
Our study highlights the potential of sustained CXCR4 antagonism in hyperinflammatory states. This may lead to the development of CXCR4 antagonists with longer half-lives. Additionally, our study encourages clinical development in sepsis.
Sixteen fish were in each group. The zebrafish were evaluated at 6 and 24 h for tail edema and mortality. Tail edema assessment was made visually by a trained technician. We performed a two-sided Fisher's exact test without correction for multiplicity.
Murine thigh infection studies
We performed multiple iterations of a murine neutropenic thigh infection model used extensively to test in vivo antibiotic efficacy for wound treatment [13]. A total of 10 ICR-CD1 mice (five males and five females) weighing 25-35 g were assigned to each study group. Neutropenia was induced with cyclophosphamide (150 mg/kg IP ×1) on day −3 and repeated on day −1 (100 mg/kg IP ×1). On day 0, animal thighs were inoculated with 10^6 colony forming units (CFUs) of the highly virulent and resistant strain of Acinetobacter baumannii (minimum inhibitory concentration of rifampin 2 μg/mL) [14]. On day 1, thigh biopsy was performed, with tissue weighed and plated for culture in triplicate. On day 2, culture growth was recorded, CFUs were determined by visual inspection, and the triplicate average was recorded. The sample weight was adjusted to yield CFUs per gram (CFU/g) of thigh tissue. Statistical analysis was performed using Log10-transformed one-way analysis of variance (ANOVA) followed by Dunnett's pairwise multicomparison analysis. Statistical analysis was performed with GraphPad Prism V 9.4.1.
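The statistical step described above can also be reproduced with SciPy (version 1.11 or later for Dunnett's test) on log10-transformed CFU/g values. The arrays below are placeholders standing in for the triplicate-averaged counts, so only the procedure, not the numbers, mirrors the study.

```python
import numpy as np
from scipy import stats

# Hypothetical log10(CFU/g) values per group (the study used 10 mice per group)
control = np.array([8.4, 8.1, 8.5, 8.3, 8.2])       # vehicle control
plerixafor = np.array([8.0, 8.2, 7.9, 8.1, 8.3])
rifampin = np.array([7.5, 7.7, 7.4, 7.6, 7.8])

# One-way ANOVA across all groups on the log-transformed data
f_stat, p_anova = stats.f_oneway(control, plerixafor, rifampin)

# Dunnett's many-to-one comparison of each treatment against the vehicle control
dunnett = stats.dunnett(plerixafor, rifampin, control=control)
print(p_anova, dunnett.pvalue)
```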
In the first iteration, there were 10 groups. The dosing schedule for the first iteration is seen in Table 1. Group 1 was untreated, receiving vehicle control only. Groups 2-10 received antibiotics, plerixafor, or both. Groups 5-10 were administered subtherapeutic (2.5 mg/kg IP BID; Groups 5-7) or therapeutic (10 mg/kg IP BID; Groups 8-10) rifampin. Plerixafor was administered subcutaneously at 5 mg/kg BID SC in one of 3 regimens: as prevention on days −3, −2, −1, and 0 (Groups 2, 6, and 8); prevention on days −1 and 0 (Group 3); or treatment on day 0 only (Groups 4, 7, and 10). To maximize cell mobilization, the plerixafor dose chosen corresponded with the highest prolonged dosing used in the mouse literature [15]. In the second iteration of testing, there were seven groups of ICR-CD1 mice. Instead of prophylactic administration, all plerixafor was given as treatment at the time of Acinetobacter baumannii inoculation. Group 1 was untreated, receiving vehicle control only. Groups 2-6 received solely plerixafor for treatment on day 0. We administered plerixafor at descending doses starting with Group 2 at 0.5 mg/kg, Group 3 at 0.1 mg/kg, Group 4 at 0.05 mg/kg, Group 5 at 0.01 mg/kg, and Group 6 at 0.005 mg/kg. Group 7 did not receive plerixafor, only a subtherapeutic (2.5 mg/kg) dose of rifampin.
Ethics statement
The research was conducted in an AAALACi accredited facility in compliance with the Animal Welfare Act and other federal statutes and regulations relating to animals and experiments involving animals, and adheres to principles stated in the Guide for the Care and Use of Laboratory Animals, NRC Publication, 2011 edition.
Zebrafish studies
Zebrafish results are depicted in Figure 1. Three fish had died by 6 h, and 8 fish (50%) by 24 h, in the LPS-only group. An additional 7 (45%) demonstrated tail edema. In the high-dose (10 μM) plerixafor group, 5 fish died in the first 6 h and 7 had died by 24 h. Three additional fish demonstrated tail edema. In the 0.1 and 0.01 μM groups, 1 and 3 fish had died by 6 h, respectively. By 24 h, 2 and 8 had died while 5 and 1 had tail edema. At 24 h, significantly fewer fish were affected (died or tail edema) in the 0.1 and 0.01 μM groups (15 vs. 7, p-value 0.0059 and 15 vs. 9, p-value 0.0373). No fish died in the LPS-free wells at any time.
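The 24-h comparisons above can be checked directly from the reported counts (16 fish per group; 15 affected in the LPS-only wells versus 7 and 9 affected at 0.1 and 0.01 μM) with a two-sided Fisher's exact test. This is an arithmetic check, not the authors' analysis script.

```python
from scipy.stats import fisher_exact

# 2x2 tables: rows = groups (LPS only, LPS + plerixafor), columns = [affected, unaffected]
_, p_0_1 = fisher_exact([[15, 1], [7, 9]])    # LPS only vs LPS + 0.1 uM plerixafor
_, p_0_01 = fisher_exact([[15, 1], [9, 7]])   # LPS only vs LPS + 0.01 uM plerixafor
print(round(p_0_1, 4), round(p_0_01, 4))      # ~0.0059 and ~0.0373, matching the values reported above
```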
Murine neutropenic thigh studies
Murine results are depicted in Figure 2a-c. In iteration 2 (Figure 2b), the group with plerixafor dosing of 0.5 mg/kg showed a nonsignificant decrease in mean log CFU/g of tissue (7.85 vs. 8.08 for the vehicle control). All other plerixafor treatment groups were associated with increased mean log CFU/g compared with control. The ANOVA test statistic gave p < 0.001, and Dunnett's comparison resulted in a p-value < 0.01 between the control and rifampin groups.
In iteration 3 (Figure 2c), plerixafor treatment groups of 0.25, 0.5, and 1 mg/kg showed a nonsignificant decrease in mean log CFU/g vs. the vehicle control (7.83, 8.38, 8.35 vs. 8.46 log CFU/g). However, all plerixafor treatment groups were associated with increased mean log CFU/g counts compared with the active 10 mg/kg rifampin control (7.64, 7.67, 8.34 vs. 7.54 log CFU/g). The one-way ANOVA test statistic showed p < 0.001. The rifampin 10 mg/kg group showed a p-value of 0.025, and both groups without cyclophosphamide conditioning showed p < 0.001. A non-a priori (post hoc) two-tailed t-test on the two groups without conditioning gave a p-value of 0.096.
DISCUSSION
Our study further investigates plerixafor as a pathogen-agnostic prophylactic and therapeutic treatment in a zebrafish model of sepsis and a murine thigh infection model. We found plerixafor was effective in reducing mortality and morbidity from LPS-induced sepsis in zebrafish at a concentration of 0.1 μM. This is not surprising as LPS has been shown to lead to activation of a cellular response via CXCR4 [16]. A limitation of the zebrafish model was the lack of repeat iterations. In our study, plerixafor did not reach statistical significance in mouse soft tissue infection caused by a multidrug-resistant and highly virulent strain of Acinetobacter baumannii. However, at a dose of 5 mg/kg, whether prophylactic or therapeutic, plerixafor was associated with a mean decrease in log CFU/g compared with untreated controls.
The 5 mg/kg initial dosing had been commonly used in mice and is approximately one-third the LD50. This high dose was used to maximize leukocyte mobilization. Previous murine infectious studies reporting improvement in survival used C57BL/6 mice without cyclophosphamide conditioning and had at least three additional differences in model design. Those models used longer observation periods to demonstrate benefit, evaluated pathogens that affected the host systemically (rather than locally), and in some cases used different dosing.
First, regarding the duration of observation, our data were obtained 24 h after inoculation and altogether showed our plerixafor regimen was likely not effective at that time. Many aspects of adaptive immunity take more than 24 h to initiate; therefore, our results are likely reflective of interaction with innate rather than adaptive immunity. Studies that reported a survival benefit in mice monitored the mice for several days. It is possible that adaptive immunity positively influenced the outcomes in those studies. However, our zebrafish model did show benefit and was also limited to 24 h, with a trend at 6 h of observation. Therefore, innate immunity also likely plays a significant role and justified the 24 h time frame for the mouse studies. Further support for the ability of plerixafor to act via the innate immune system in sepsis is that the majority of CXCR4 receptors are found on myeloid cells [4], and the age of the zebrafish (5 dpf) suggests their adaptive immunity was not well developed [17]. As mentioned previously, CXCR4 plays a role in platelet-neutrophil complexes and neutrophil extracellular traps (an innate function) [10]. Altogether this points to a potential beneficial impact on the resulting sepsis rather than a decrease in the infectious burden.
Second, considering localized vs. systemic infection, impaired homing may explain our results. We found a benefit in LPS-induced sepsis, which is consistent with results observed in murine models studying systemic infections. However, in localized infections, we did not observe a benefit with plerixafor. CXCR4 antagonism mobilizes neutrophils from the bone marrow, and CXCR4 signaling is needed for neutrophils to arrive at a site of injury [4]. Damage-associated molecular patterns (DAMPs) use CXCR4 signaling to recruit neutrophils [18,19]. Plerixafor is a CXCR4 antagonist but does activate some downstream effects through biased signaling. For example, plerixafor induces β-arrestin recruitment to CXCR4, allowing for receptor internalization and turnover. Alternatively, the antagonist mavorixafor blocks internalization of CXCR4 and increases its expression [20]. CXCR4 β-arrestin recruitment is a possible mechanism that results in impaired neutrophil homing. Corroborating this, human studies in focal infectious lesions suggested worse outcomes and delayed healing when treated with plerixafor (of note, only one dose was evaluated) [21]. Conversely, mavorixafor does not allow CXCR4 internalization and results in increased CXCR4 expression [20]. This may have contributed to the effects observed in human studies demonstrating increased leukocyte homing to local tumor sites with mavorixafor [22].

FIGURE 1: The number of zebrafish alive, alive with tail edema (indicative of vascular leak), or dead after 24 h of exposure to LPS. Note that deceased fish also demonstrated tail edema (i.e., 15 fish in the LPS group had tail edema). Each group had 16 fish and continuous exposure to the level of plerixafor indicated.
The lack of improvement with plerixafor in local infection may be specific to plerixafor, due to biased signaling and impaired homing.
A drug not allowing CXCR4 internalization may improve leukocyte homing. DAMPs such as HMGB-1 form a complex with CXCL12 that is more efficient at recruiting CXCR4+ cells than CXCL12 alone [18]. Altogether, an inhibitory concentration of a CXCR4 antagonist that mobilizes leukocytes from bone marrow and simultaneously allows for localization may be achievable and improve outcomes in localized infections.
Third, regarding dosing considerations, plerixafor has already been studied in a zebrafish LPS model with differing results reported [23]. Novoa et al. reported adding LPS to a maximum nontoxic dose of plerixafor (10 μM), which resulted in toxicity. This should not be surprising as a near-toxic dose of medication may exhibit synergistic deleterious effects when combined with another toxin (LPS in this case). Our results corroborate the findings of Novoa in that we also found 10 μM of plerixafor tended toward worsened outcomes. At 6 h, the 10 μM zebrafish group demonstrated more zebrafish mortality than the LPS control group. It is worth noting that FDA filings report the half-maximal inhibitory concentration (IC50) for plerixafor-induced ligand binding at 0.651 μM and chemotaxis at 0.015 μM [24]. Therefore, doses of 10 μM may induce complete receptor inhibition, whereas our positive result at 0.1 μM was likely due to partial inhibition. Impaired neutrophil chemotaxis may have compromised the host's ability to react to LPS. Conversely, presumably higher levels of CXCR4 signaling activated additional inflammatory mechanisms as this would simulate a physiologic state with a high burden of tissue injury.

FIGURE 2: (a) Using a murine neutropenic thigh model, groups of 10 ICR-CD1 mice were conditioned with cyclophosphamide. They were given plerixafor prophylactically or at the time of inoculation. All plerixafor-group mice had plerixafor dosed at 5 mg/kg SC BID. Three groups had either subtherapeutic or therapeutic rifampin. After 24 h, the thigh biopsy was performed and the tissue cultured. 24 h after culture, the samples were assessed for colony forming units (CFUs) per gram of thigh tissue. The logarithmic average is displayed. Error bars indicate standard deviation. Inlays (b) and (c) depict additional results for murine neutropenic thigh model groups as indicated.
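To make the partial- versus full-inhibition argument quantitative, one can estimate fractional target engagement from the reported IC50 values with a simple one-site competition model (inhibition ≈ C / (C + IC50)). The model and its assumptions (equilibrium binding, no cooperativity) are ours; only the IC50 numbers come from the cited FDA filing.

```python
def fractional_inhibition(conc_um: float, ic50_um: float) -> float:
    """One-site model: fraction of a CXCR4-mediated response inhibited at a
    given plerixafor concentration (both values in uM)."""
    return conc_um / (conc_um + ic50_um)

# Ligand binding (IC50 ~0.651 uM) vs chemotaxis (IC50 ~0.015 uM), per the FDA filing
for c in (0.01, 0.1, 10.0):
    print(c, round(fractional_inhibition(c, 0.651), 2), round(fractional_inhibition(c, 0.015), 2))
# Under this model, 0.1 uM inhibits ~13% of binding but ~87% of chemotaxis,
# while 10 uM is essentially complete inhibition of both.
```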
Aside from lower dosing, a significant difference between our zebrafish and murine models was continuous versus intermittent exposure. In the zebrafish model, plerixafor is dissolved in embryo media. In contrast, the half-life in mice is ~1 h. To the best of our knowledge, no pharmacokinetic and pharmacodynamic relationship has been established between plerixafor and neutrophil mobilization in ICR-CD1 mice. However, our twice-daily dosing likely caused relatively shorter CXCR4 inhibition (as the fish had continuous exposure). We will note that one mouse study showing a mortality benefit used a single injection [5] while another used continuous infusion [6]. Because CXCR4 signaling is induced by DAMPs, a sudden decrease in CXCR4 antagonism could have mimicked CXCR4 signaling during health recovery. Thereby, fluctuating or decreasing levels of CXCR4 antagonism may have had a stabilizing effect on myeloid cells, which may not be helpful in a local infection. Finally, zebrafish have two paralogs of CXCR4 [25] with unique distribution, whereas mice have a single gene. Altogether, if plerixafor potentially has a beneficial impact on humans, these are a few of the factors that need consideration.
CXCR4 antagonists with partial CXCR4 inhibition, longer half-lives, or prevention of CXCR4 internalization may result in different outcomes. Our data, combined with successful infectious murine studies already published, suggest that continuous low-dose exposure and/or partial CXCR4 inhibition may be primary considerations.
Further research could include confirmation of the benefit seen in LPS sepsis in another animal model.This would confirm the effect and facilitate pharmacokinetic and pharmacodynamic studies which are not easily performed in fish models.Pharmacokinetic and pharmacodynamic relationships should be established in a feasible model and with other CXCR4 antagonists as this would enable the optimization of downstream signaling and may determine the degree and duration of inhibition needed.
CONCLUSIONS
Our zebrafish model demonstrated decreased end-organ damage from LPS-induced sepsis at concentrations of 0.1 and 0.01 μM. Our murine thigh study models demonstrated nonsignificant trends toward decreased log CFU/g counts in select plerixafor groups compared with controls. However, these results were inconsistent, and in some groups, plerixafor was associated with increased bacterial growth compared with controls. Plerixafor is a promising pathogen-agnostic therapeutic for the treatment of generalized sepsis; however, there is currently not enough evidence to support plerixafor use as an adjunct to treat localized wound infections. Further research should investigate antagonists with partial CXCR4 inhibition, longer half-lives, as well as lower doses targeting lower receptor occupancy.
Animal protocol 23-ET-21 was approved by the Walter Reed Army Institute of Research Institutional Animal Care and Use Committee in accordance with National and Department of Defense guidelines. The research was conducted in compliance with the Animal Welfare Act and other federal statutes and regulations relating to laboratory animals.
[Table 1 caption: Visualization of dosing regimens for the first iteration of testing. Results are depicted in Figure 2a.]
[Table caption: Visualization of dosing regimens for the third iteration. Results are depicted in Figure 2c.] | 2024-07-05T06:17:18.334Z | 2024-07-01T00:00:00.000 | {
"year": 2024,
"sha1": "fcd77d92ab850a0584e60ac86456587d23da47a4",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "44447d92f41b0b722090407a895fa7fbe65593a3",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238478813 | pes2o/s2orc | v3-fos-license | Hybrid Particle Filter trained Neural Network for Prognosis of Lithium-ion Batteries
Prognostics and Health Management (PHM) plays a key role in Industry 4.0 revolution by providing smart predictive maintenance solutions. Early failure detection and prediction of remaining useful life (RUL) of critical industrial machines/components are the main challenges addressed by PHM methodologies. In literature, model-based and data-driven methods are widely used for RUL estimation. Model-based methods rely on empirical/phenomenological degradation models for RUL prediction using Bayesian formulations. In many cases, the lack of accurate physics-based models emphasizes the need to resort to machine learning based prognostic algorithms. However, data-driven methods require extensive machine failure data incorporating all possible operating conditions along with all possible failure modes pertaining to that particular machine / component, which are seldom available in their entirety. In this work, we propose a three-stage hybrid prognostic algorithm (HyA) combining model-based (Particle Filters-PF) and data-driven (Neural Networks-NN) methods in a unique way. The proposed method aims to overcome the need for accurate degradation modeling or extensive failure data sets. In the first stage, a feedforward neural network is used to formulate lithium-ion battery’s degradation trends and the corresponding NN model parameters are used to define the initial prior distribution of PF algorithm. In the second stage, the PF algorithm optimizes the model parameters and the posterior model parameter distributions are utilized to ‘warm-start’ the neural network used for prognosis and the third/final stages focuses on prognosis and RUL estimation using the trained NN model leveraging on the posterior distributions of the PF fine-tuned weights and biases. The proposed method is demonstrated on CALCE and NASA lithium-ion battery capacity degradation datasets. The efficacy of the proposed hybrid algorithm is evaluated using root mean square error (RMSE) values and alpha-lambda prognostic metrics. Also, the impact of the NN architecture on the prediction accuracy and computational load are analyzed.
I. INTRODUCTION
Electronic devices and systems are subjected to thermal, electrical and mechanical stresses on the field and hence, the reliability of these devices is of utmost concern. Prognostics and health management (PHM) for electronic systems aims to detect, isolate and predict the onset and source of system degradation as well as the time to system failure [1]. In general, prognostic algorithms are categorized as modelbased and data-driven methods. Model-based or physics-based methods (as they are often referred to interchangeably) use an accurate degradation model curated for the specific system/component under pre-defined environmental and operating conditions. The prediction accuracy of the degradation model gets compromised due to the variability in the environment/operating conditions or if an individual system/component tends to follow a significantly different degradation trend compared to the rest of the lot due to intrinsic or extrinsic factors that are not fully attributable. Commonly used model-based methods include Kalman filters (KF) [2], particle filters (PF) [3] and adaptations of the aforementioned methods [4] - [7].
Data-driven methods are alternate approaches to modelbased methods as they identify degradation trends in the available degradation (start to end failure) data and use it for prognosis. Typical data-driven methods include support vector machine (SVM) [8], relevance vector machine (RVM) [9], neural networks (NN) [10] - [12], Gaussian process regression (GPR) [13] - [15] etc. Though a powerful approach, data-driven methods elevate computational complexity as they require large amount of failure data incorporating all the possible failure modes and operating conditions.
With emerging technologies and advancement in manufacturing processes, new devices/components are being developed to cater to these emerging needs. The major challenges inhibiting reliability studies on such newly developed devices/systems are the lack of sufficient failure data and also, the lack of full-fledged physics-of-failure models. To address the generalization problem, hybrid approaches combining model-based and data-driven methods are being widely used in the recent past. Hybrid prognostic approaches have an upper edge on such new devices as they neither require an accurate degradation model nor a large amount of training data for the purpose of remaining useful life (RUL) estimation. In other words, they make best use of partial knowledge and sparse data available for the new device / component under prognostic investigation.
Hybrid/Fusion Prognostics has become a research hotspot in the recent past. Wang et al. [16] used RVM for sparse representation of the degradation data along with the use of an empirical degradation model for predicting the RUL of the rolling element bearings. Chang et al. [17] used RVM to determine the measurement noise in lithium-ion battery capacity degradation dataset and apply it for parameter estimation in a PF algorithm. Similarly, Song et al. [18] combined an autoregression (AR) model with PF for RUL estimation on spacecraft lithium-ion battery dataset from the NASA repository. Sun et al. [19] used the extreme learning machine (ELM) approach to construct a degradation model for battery degradation data and used it as the "measurement function" in PF state-space formulation algorithm for RUL estimation.
From the above-mentioned methods, it is evident that PF based approaches are promising for the purpose of RUL estimation. The reason being the ease of its applicability to highly non-linear systems along with non-Gaussian noise present in it. There have also been attempts to use a suitable surrogate data-driven model as the state transition function in PF algorithm over empirical/ phenomenological models. The surrogate model formulation can be done using statistical curve fitting methods such as auto regressive integrated moving average (ARIMA). However, ARIMA methods are more suited for linear trends and for short-time horizon predictions. On the other hand, artificial neural networks outperform other statistical approaches due to its versatility to map complex input-output relationships and its ability to identify hidden degradation patterns in the system failure data. One of the first attempts to combine PF and NN for the purpose of RUL estimation was proposed by Baraldi et al. [20] back in 2013. The authors developed an ensemble NN model which creates a large amount of training datasets. These data are then converted into analytical models by data mining techniques and substituted into a PF algorithm as the measurement and state transition functions. The authors succeeded in overcoming the need of using an empirical model in the PF algorithm though the computational complexity and load was enormous compared to conventional methods. Also, the accuracy of the proposed method primarily depended on the amount of training data simulated by the ensemble NN model.
Sbarufatti et al. [21] proposed a self-learning adaptive algorithm to improve the approach proposed in Ref. [20]. The authors proposed a hybrid prognostic algorithm where the radial bias function (RBF) neural network was used as the degradation model for prognosis of lithium-ion batteries. The kernel parameters of the neural network model were estimated through the PF algorithm. Further, the proposed damage model was projected in future to predict the battery capacity and the remaining useful life. The discharge curves of 5 batteries from NASA's repository were used in this work. The data with the longest curve corresponding to a pristine battery was used for training. Even though the prediction results of the proposed method were promising, the algorithm was tested on a dataset with a slow but progressive aging dynamics. This puts doubt on the adaptability of the algorithm on degradation data due to accelerated aging and unforeseen sudden changes in the degradation behavior.
Since the RBF networks are restricted to one hidden layer and linear output activation functions, the authors further extended their work to adopt a multilayer perceptron (MLP) NN model and tested the approach on NASA and CALCE datasets for estimating the RUL [22]. The authors used an MLP network with 5 hidden neurons to construct the degradation model. The NN degradation model was used in the PF algorithm for predicting capacity degradation. The authors tested the same method on fatigue crack growth data [23] as well. For crack growth data, the algorithm was trained with constant-amplitude degradation curve and tested on changing-amplitude degradation data. The prediction accuracy was poor in the early degradation stages and improved when more than 50% of degradation data was available for prognosis. Wu et al. [24] introduced a bat algorithm for resampling the particles in PF algorithm to improve the performance of MLP+PF hybrid approach. In all of the above-mentioned methods, NN model was used as the damage evolution model in the PF framework and the network parameters were solely dependent on the training dataset. No information from the test dataset was plugged into the algorithm and hence a large amount of data from the test data was required for the algorithm to identify the damage evolution trend.
In this work, we propose an intelligent and adaptive hybrid approach where PF based state estimation is used to warmstart a NN model. The informed stochastic parameter initialization helps to overcome the generalization issues faced by other hybrid approaches in literature. Secondly, the PF optimized NN model is trained with available information from the test dataset using a Levenberg Marquardt (LM) algorithm to find the optimal network parameter values which can be used for prediction of the future states and the RUL as well. The proposed approach was tested on NASA and CALCE lithium-ion batteries capacity degradation datasets and the RMSE values were used as the performance metrics. Our work moving forward is organized as follows. In Section II, the standard NN and PF approach for prognosis are explained and the prediction results using these conventional methods are presented in Section III. In Section IV, the proposed hybrid PF based NN approach is introduced and the prediction results for both NASA and CALCE dataset at different prediction starting points using the proposed hybrid framework are examined in Section V. Also, the impact of the network architecture is discussed as well. Finally, the conclusions of the study and possible scope for future work are summarized in Section VI.
A. FEEDFORWARD NEURAL NETWORKS (FFNN)
In this work, the neural network architecture chosen is a Multi-Layer Perceptron (MLP) with M number of hidden neurons. The MLP model adopted here represents the degradation trend of lithium-ion battery's discharge capacity with respect to charge/discharge cycles. The number of cycles is fed into the NN model as input. The input node is connected to M neurons in the hidden layer. Each neuron generates an output based on a sigmoidal activation function in the hidden layer as represented below.
h_i(k) = h( w_i^(1) k + b_i^(1) ),  i = 1, 2, …, M   (1)
where w_i^(1) and b_i^(1) are the weight and bias values corresponding to the input node and h(.) represents the hidden layer activation function. The charge/discharge cycle index is represented by k, and i = 1, 2, …, M represents the index of the hidden neurons, with M being the number of hidden neurons used in the network architecture. The weighted sum of all the hidden neuron outputs gives the predicted battery capacity.
A linear activation function is used at the output layer, and the overall output of the network can be represented as
g(k) = f( Σ_{i=1}^{M} [ w_i^(2) h_i(k) + b_i^(2) ] )   (2)
where w_i^(2) and b_i^(2) are the weight and bias values associated with the hidden layer and M is the number of hidden neurons in the NN. The network output g(.) gives the predicted battery capacity with respect to the cycle index k. Also, h(.) is the non-linear sigmoid activation function of the hidden layer and f(.) represents the linear output activation function. In a standard FFNN model, the network parameters are optimized for the training dataset by minimizing the mean squared error using the LM algorithm. The trained NN model is used for predicting the battery capacity values on the test dataset.
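To make the 1-M-1 architecture above concrete, the following minimal sketch (not the authors' code; the function and variable names mlp_capacity, pack, unpack, w1, b1, w2, and b2 are illustrative assumptions) evaluates the degradation model of Eqns. (1)-(2) in Python with a sigmoid hidden layer and a linear output:

```python
import numpy as np

def mlp_capacity(k, w1, b1, w2, b2):
    """Predicted capacity at cycle index k (scalar or 1-D array).

    w1, b1 : shape (M,) input-to-hidden weights and biases
    w2, b2 : shape (M,) hidden-to-output weights and biases
    """
    k = np.atleast_1d(np.asarray(k, dtype=float))
    # Hidden layer: sigmoid activation h(.)
    hidden = 1.0 / (1.0 + np.exp(-(np.outer(k, w1) + b1)))   # shape (len(k), M)
    # Output layer: linear activation f(.), weighted sum of hidden outputs plus biases
    return hidden @ w2 + b2.sum()

def pack(w1, b1, w2, b2):
    """Flatten all network parameters into one vector (later used as the PF state)."""
    return np.concatenate([w1, b1, w2, b2])

def unpack(theta, M):
    """Split a flat parameter vector back into the layer parameters."""
    return theta[:M], theta[M:2*M], theta[2*M:3*M], theta[3*M:4*M]
```

The pack/unpack helpers flatten the layer parameters into a single vector, which is convenient when the same parameters are later treated as the state vector of the particle filter.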
B. STANDARD PARTICLE FILTER
Particle filters are sequential Monte Carlo (SMC) methods and work on the concept of recursive Bayesian state estimation. In this work, the PF algorithm is used for estimating the NN model parameters recursively for a given set of observations/test data. Ideally, the PF algorithm employs empirical or physics-based models for state estimation, but in this work, we use the NN degradation model shown in Eqn. (2). Therefore, the state space representation for the system can be expressed as
x_k = x_{k-1} + ω_{k-1}   (3)
z_k = g(k; x_k) + ν_k   (4)
where x_k and x_{k-1} refer to the current and previous states, respectively, and ω_{k-1} is the process noise. The function g(.) is the NN degradation model shown in Eqn. (2), and ν_k is the measurement noise. The measured lithium-ion battery capacity is represented by z_k. The particle filtering algorithm consists of two stages → State Estimation and State Prediction. In the first stage, i.e., state estimation, the PF recursively estimates the posterior probability distribution of the NN parameters, x_k, given a set of test data z_{1:k}, where k is the charge/discharge cycle index. At the first time step, k = 1, N_s samples are generated based on the assumed initial prior distribution. For subsequent time steps, the posterior distribution of the previous time step (k-1) is used as the prior distribution for the current time step (k). Each sample is assigned a weight value s_k, and the current damage state is transmitted through the state transition function based on a likelihood function to deduce the next damage state. The likelihood function in Eqn. (5), p(z_k | x_k^j), quantifies the agreement between the test data z_k and the damage state x_k^j predicted from the j-th particle's model parameters, i.e., the vector of NN weights and biases. Here, the damage state is predicted from the model parameters through g(.) represented by Eqn. (2).
Subsequently, the posterior distribution can be expressed as
p(x_k | z_{1:k}) ≈ Σ_{j=1}^{N_s} s_k^j δ(x_k − x_k^j)   (6)
where N_s is the number of samples/particles, each sample is drawn from an initial prior distribution obtained based on the user's knowledge of the system, and δ(.) is the Dirac delta function. It is to be noted that the particle weights, s_k, used in Eqn. (6) are different from the NN weight values, w, used in Eqn. (2). The weight of each particle is updated through the likelihood as
s_k^j ∝ s_{k-1}^j p(z_k | x_k^j)
where p(z_k | x_k^j) is the likelihood of the observation z_k. The estimated network parameters are used to project the state transition equation till the end-of-life of the system to predict the future state.
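The following sketch (not the authors' implementation; the random-walk step size, the Gaussian likelihood, and the names particle_filter, process_std, and meas_std are illustrative assumptions) shows one way to realize the state-estimation stage described above with a bootstrap particle filter, reusing the mlp_capacity and unpack helpers from the earlier sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(capacities, theta0, M, n_particles=5000,
                    process_std=0.01, meas_std=0.02):
    """Assimilate the training capacities and return the posterior mean and
    standard deviation of the flattened NN parameter vector."""
    dim = theta0.size
    # Initial prior: particles scattered around the curve-fit parameters theta0
    particles = theta0 + process_std * rng.standard_normal((n_particles, dim))
    for k, z_k in enumerate(capacities, start=1):
        # Random-walk transition on the NN parameters (Eqn. (3))
        particles += process_std * rng.standard_normal((n_particles, dim))
        # Gaussian likelihood of the measured capacity under each particle (Eqn. (4))
        preds = np.array([mlp_capacity(k, *unpack(p, M))[0] for p in particles])
        log_w = -0.5 * ((z_k - preds) / meas_std) ** 2
        # Normalized particle weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Systematic resampling to avoid particle degeneracy
        positions = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n_particles - 1)
        particles = particles[idx]
    return particles.mean(axis=0), particles.std(axis=0)
```

Resampling at every cycle keeps the particle set concentrated on parameter vectors whose predicted capacity curve matches the measurements, which is the behavior the state-estimation stage of the proposed framework relies on.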
A. BATTERY DEGRADATION DATASETS
Two different sets of battery degradation data from different laboratory setups are used in this work to evaluate the performance of the proposed hybrid algorithm.
1) CALCE DATASET
Four LiCoO2 prismatic cells with a rated capacity of 1.1 Ah were subjected to degradation. These cells underwent a constant current/constant voltage charging protocol with a constant current rate of 0.5C until the voltage reached 4.2 V. The batteries were sustained at 4.2 V until the charging current dropped below 0.05 A. The failure threshold for these batteries was set to 0.88 Ah. All four batteries showed similar capacity degradation trends; hence, we chose CS-36 for training and CS-37 and CS-38 for testing the proposed hybrid algorithm (HyA). The capacity degradation curves versus the charge/discharge cycles of the three batteries considered in this work are shown in Fig. 1(a).
2) NASA DATASET
For the second dataset, we chose to use the degradation data of 18650 LiCoO2 batteries from the NASA repository. The rated capacity of these batteries is 2.1Ah and unlike the CALCE dataset, these batteries were cycled under random currents rather than constant discharge currents. The battery capacities were measured after every 1500 periods and the failure threshold was set to be 1Ah. The capacity degradation curve versus the charge/discharges cycles is shown in Fig.1(b). The labels of batteries used as training and test datasets from both CALCE and NASA repository are shown below in Table I. In this section, the prediction results of the standard FFNN model and standard PF approach are compared. The battery CS-36 of CALCE dataset and RW9 of NASA dataset were used to train the neural network model with 3 hidden neurons. The number of hidden neurons greatly affects the performance of the NN model. However, there are not any definitive methods available in literature for the optimal selection of hidden neurons. The selection methods proposed in literature were either specific to the system/device considered or was specific to the type of NN architecture used for their study. Also, the number of NN weight and bias parameters for 5-6 hidden neurons scales up exponentially which can also result in overfitting of the time series Threshold degradation patterns and thereby lead to convergence issues and multiple local optima solutions. Hence, we adopted a trial-and-error method based on Bayesian Information Criterion (BIC) to fix the number of hidden neurons suitable for our study [25,26].
The trained NN model was used to predict the capacity values of the test datasets, RW10 and CS-37. The prediction results are shown in Fig. 2(b) and 2(d), respectively. The trained NN model was executed for 50 repetitions, of which only 3 repetitions were able to trace the actual degradation trend for RW10. A particular repetition was considered successful if the prediction trace lay within the 2σ bounds. If the prediction trace was beyond the 2σ limits, it was considered an outlier and eliminated. The CALCE dataset has fewer inflections in the degradation trend compared to the NASA dataset. Despite that, only 5 out of 50 repetitions were successful. Similar prediction results were observed for CS-38 as well. The prediction success rate and accuracy can be improved by opting for a more complex NN architecture, such as increasing the number of hidden neurons or using a sigmoid output activation function, though both options come with additional computational cost. The takeaway message here is that even a simple NN architecture fails to predict the degradation trend with good accuracy even when it is trained with full run-to-failure data of one device/system.
To address the limitations of the FFNN, a PF algorithm with the NN degradation model as the measurement equation was analyzed. This approach is similar to the hybrid approaches used in [21] - [24], as described earlier in Section I. The curve fitting results for RW9 and CS-36 using the NN model were used as the initial parameter guess for the PF algorithm. The prediction traces for RW10 and CS-37 with 5000 particles are shown in Figs. 2(a) and 2(c), respectively. The degradation model with 3 hidden neurons is expected to be capable of capturing the non-linearity in the degradation trend. However, the PF algorithm is unable to track the actual degradation trend. The prognostic performance can be improved by using a more complex NN architecture or by using meta-heuristic algorithms for resampling the weighted particles in the PF algorithm. However, those approaches increase the computational load and complexity tremendously, and employing such techniques in real time would be challenging. This calls for an adaptive hybrid approach for RUL estimation wherein the network architecture is simple yet powerful enough to capture highly non-linear degradation trends.
IV. PROPOSED HYBRID PROGNOSTIC FRAMEWORK
The proposed hybrid particle filter trained neural network framework (HyA) is shown in Fig. 3. The proposed method can be split into three stages - (A) Degradation Model Formulation and Parameter Initialization, (B) Bayesian State Estimation, and (C) Neural Network Prognosis with Bayesian Posterior Weights and Biases.

To begin with, in Stage A, one battery's run-to-failure data from each dataset was chosen as the training dataset. In this case, CS-36 and RW9 were chosen as the training datasets. The curve fitting results for the NN model were obtained by varying the number of hidden neurons (2 to 6). The Bayesian information criterion (BIC) was evaluated and used as the deciding factor for network model selection. This is because the BIC is a robust metric for model selection that penalizes the use of a model with too many fitting parameters for any given data set. The NN architecture with the lowest BIC value was chosen for the purpose of this study. The BIC can be expressed as BIC = k ln(n) − 2 ln(L̂), where L̂ is the maximized value of the likelihood function of the NN model, k is the number of parameters to be estimated by the model, and n is the number of observations in the training dataset. The first term in the equation, k ln(n), is the penalty term that prevents the choice of an artificially complex model for any given data set. The curve fitting parameters of the selected NN model define the initial prior distribution of the NN weights and biases for subsequent recursive Bayesian updating.

The PF algorithm employed in this framework is solely used for the purpose of state estimation. In Stage B, the entire training dataset is fed into the PF algorithm as measurement data. The NN-degradation-model-based state-space formulation, shown in Eqn. (3) and Eqn. (4), is used to estimate the network parameters. The first two stages of the framework constitute the training phase.

In Stage C, an FFNN model with 3 hidden neurons was configured using the test dataset. As mentioned earlier, the choice of 3 hidden neurons was based on the BIC analysis. Batteries CS-37, CS-38 and RW10 were the test datasets used for state prediction and RUL estimation. The network parameters estimated by the PF algorithm in Stage B are used to configure the initial weights and biases of the FFNN model. This approach helps to warm start the neural network training for the test dataset. As warm starting the neural network helps to restrict the network parameters closer to the optimal values, the prediction accuracy is expected to be higher. The PF algorithm in the training phase is executed with 5000 particles, and the predicted network parameter values with 1σ limits of the posterior distribution were chosen. A uniform distribution was generated using the 1σ bounds, and 50 random samples were generated from the uniform distribution.
The FFNN model in Stage C was executed for 50 repetitions with 50 different initial configurations of weight/bias values. The FFNN model is then further trained on the available test data, using the PF-based posterior weights and biases obtained from the training data as the starting values of the NN for each of the 50 initial weight/bias configurations. The LM algorithm was used for training the network parameters. The trained model was used to predict the future battery capacity until the end-of-life and hence estimate the remaining useful life of the batteries.
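A minimal sketch of this Stage C warm start is given below (not the authors' code; the use of scipy's Levenberg-Marquardt solver and the names warm_start_rul, post_mean, post_std, and max_cycle are illustrative assumptions). It draws initial parameter vectors uniformly within the 1σ posterior bounds from Stage B, refines each on the observed portion of the test data, and extrapolates the refined model to the failure threshold to build an empirical RUL distribution, reusing the mlp_capacity and unpack helpers sketched earlier:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def warm_start_rul(test_capacity, post_mean, post_std, M, threshold,
                   n_repeats=50, max_cycle=2000):
    cycles = np.arange(1, test_capacity.size + 1, dtype=float)
    ruls = []
    for _ in range(n_repeats):
        # Draw a warm-start parameter vector from U(mean - sigma, mean + sigma)
        theta0 = rng.uniform(post_mean - post_std, post_mean + post_std)
        # Levenberg-Marquardt refinement on the observed test cycles
        fit = least_squares(
            lambda th: mlp_capacity(cycles, *unpack(th, M)) - test_capacity,
            theta0, method="lm")
        # Extrapolate until the predicted capacity first crosses the threshold
        future = np.arange(test_capacity.size + 1, max_cycle, dtype=float)
        pred = mlp_capacity(future, *unpack(fit.x, M))
        crossed = np.nonzero(pred <= threshold)[0]
        if crossed.size:                      # drop repetitions that never cross
            ruls.append(int(future[crossed[0]] - cycles[-1]))
    return np.array(ruls)                     # empirical RUL distribution
```

Repetitions whose predicted trace never reaches the failure threshold are simply discarded here, mirroring how traces that fail to cross the threshold are treated in the results below.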
A. RUL PREDICTION FOR NASA DATASET USING HyA
The prediction results for battery RW10 at three different prediction starting points, i.e., 30, 60 and 90 cycles, are shown in Figs. 4(a), 4(b) and 4(c), respectively. At 30 cycles, with just a few cycles of data available in the test data, the proposed HyA is able to capture the degradation trend including the inflection at the 120th cycle. Compared to the results obtained using the standard FFNN shown in Fig. 2(b), 43 out of the 50 repetitions successfully traced the degradation trend. As mentioned in earlier sections, a particular repetition was considered successful if the degradation trace lies within the 2σ bounds. At 60 cycles, the prediction accuracy was expected to improve with a greater amount of data available for prediction. However, the kink in the degradation data at the 60th cycle caused the HyA to predict an exponentially increasing trend for a few of the repetitions. Despite the glitch in the prediction results, the proposed HyA achieved a success rate of about 66%. At 90 cycles, with the availability of more test data along with the warm start settings, the HyA approach achieved a success rate of 88%. For the RUL distribution shown in Fig. 5, the repetitions wherein the predictions failed to be within the 2σ bounds were omitted and only the successful repetitions were used to construct the RUL distribution. The predicted RUL almost coincides with the true RUL at 30 cycles, but the large width of the RUL pdf indicates uncertainty in the predictions. The width of the RUL pdf is narrow at 60 and 90 cycles even though there is an error of 18 cycles between the predicted and true RUL values. In order to evaluate the effectiveness of the proposed hybrid approach, the root mean squared error (RMSE) value was adopted as the prognostic performance metric. The RMSE value can be expressed as
RMSE = sqrt( (1/n) Σ_{k=T+1}^{T+n} d_k² )
where k is the cycle index, n is the number of predictions, T is the prediction starting point, and d_k corresponds to the error between the predicted capacity and the actual capacity value at the k-th time instant, as shown below:
d_k = ẑ_k − z_k
Comparison of the RMSE values (mean value over 50 repetitions) between the standard PF algorithm, the FFNN and the proposed hybrid algorithm (HyA) is listed in Table III.
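As a small illustration of this metric (a sketch, not the authors' code; the function name prognostic_rmse is an assumption), the per-cycle errors after the prediction starting point are aggregated as follows:

```python
import numpy as np

def prognostic_rmse(predicted, actual, T):
    """RMSE over the prediction horizon starting after cycle index T (0-based arrays)."""
    d = np.asarray(predicted[T:], dtype=float) - np.asarray(actual[T:], dtype=float)
    return np.sqrt(np.mean(d ** 2))
```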
The RMSE values clearly show that the proposed hybrid algorithm performs better than the other two conventional methods. Despite the low success rate of 66% at 60 cycles due to the sharp inflection point, the RMSE value is better, clearly indicating the robustness of the proposed hybrid approach. For the CALCE dataset, battery CS-36 was used for training the network parameters. The failure threshold for these batteries was set at 0.88 Ah. The prediction results at the 120th, 185th and 250th cycles for both batteries CS-37 and CS-38 are shown in Fig. 6 and Fig. 7, respectively. Unlike the NASA dataset, the CALCE dataset has a simple exponentially decreasing trend. Hence, the prediction success rate is in the range of 80-86% for all three prediction starting points for both batteries. The probability density functions of the RUL for the CS-37 and CS-38 batteries are shown in Figs. 8(a) and (b), respectively.
For CS-37, the width of the RUL pdf for all three prediction starting points is about 240 cycles, indicating very good performance of the proposed approach. At 250 cycles, 42 out of the 50 predicted traces did not cross the failure threshold, as shown in Fig. 6(c). However, for the remaining 8 successful repetitions, the error between the predicted RUL and the true RUL was found to be less than 1%. On the other hand, the width of the RUL pdf for CS-38 at the 120th cycle spanned the entire lifetime of the battery. The reason for this is that the degradation curve for CS-38 has a lot of battery regeneration signatures compared to the other two batteries. Similar results were observed for the fourth battery dataset as well (CS-39), but for the sake of brevity, the results are not discussed here. With the availability of a greater amount of test data, the prediction accuracy improves, and the RUL pdf width at the 250th cycle is substantially narrower, with a minimum RUL error of about 50 cycles between the predicted and true values. Also, the RMSE values for both batteries at all three prediction starting points are listed in Table IV. The error values (highlighted) clearly indicate that the proposed hybrid algorithm (HyA) is efficient and accurate compared to the standard PF and NN approaches.
C. CHOICE OF NETWORK ARCHITECTURE
A good choice of network architecture is essential for enhancing the prediction capabilities of the NN model. An optimal number of hidden neurons is essential for improved mapping of the complex input/output relations. In this work, the choice of hidden neurons was made by a trial-and-error procedure using BIC values. A lower number of neurons would inhibit the NN model from capturing the nuances in the degradation trend; therefore, the analysis for a single hidden neuron was omitted as it would oversimplify the trend. The BIC results for different numbers of hidden neurons are shown in Table II. The model with the minimum BIC value was considered the best suited model for the purpose of this study. Choosing a very high number of hidden neurons may result in overfitting issues. The prediction results for 4, 5 and 6 hidden neurons are shown in Fig. 9(a), Fig. 9(b) and Fig. 9(c), respectively. It can be seen from Fig. 9(b) that the HyA tries to mimic the kink in the true data at 60 cycles and eventually distorts the prediction trend. Similar behavior was observed for 6 hidden neurons as well. The RMSE values as well as the computational times for predictions against the number of hidden neurons are listed in Table V. From Table V, it is evident that 3 hidden neurons (with the minimum BIC value) is the optimal count for handling the complexity in the battery degradation datasets considered in this study. Since 3 hidden neurons require optimizing 9 parameters versus 13 parameters for 4 hidden neurons, the computational load was found to be comparatively lower. Thus, the choice of 3 hidden neurons was eventually found to be a fair compromise between low error values and low computational time. Moreover, a higher neuronal count would also increase the risk of particle degeneracy or impoverishment in the PF.
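As a small illustration of the BIC-based selection described above (a sketch, not the authors' code; the Gaussian error model and the name bic_gaussian are assumptions), the criterion can be computed from the curve-fitting residuals of each candidate architecture, and the architecture with the minimum value retained:

```python
import numpy as np

def bic_gaussian(residuals, n_params):
    """BIC = k*ln(n) - 2*ln(L_hat), with L_hat evaluated under an i.i.d. Gaussian error model."""
    residuals = np.asarray(residuals, dtype=float)
    n = residuals.size
    sigma2 = max(np.mean(residuals ** 2), 1e-12)               # ML estimate of noise variance
    log_lik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)  # maximized log-likelihood
    return n_params * np.log(n) - 2.0 * log_lik

# Usage idea: fit the 1-M-1 MLP to the training curve for M = 2..6, collect the
# residuals and parameter counts, and keep the M giving the smallest BIC.
```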
D. COMPUTATIONAL TIME
One of the major challenges with developing hybrid approaches is to make sure that the computational time is not compromised for better prediction accuracy. Hence, we chose to analyze the execution time for the proposed algorithm along with PF and FFNN approaches. The standard PF approach with 5000 particles took about 8.12 seconds for execution and a standard FFNN model took about 48.36 seconds for executing 50 repetitions. For the proposed hybrid algorithm (HyA), the execution time was found to be 53.14 seconds for 50 repetitions. Therefore, it can be concluded that the proposed hybrid approach is not only effective in terms of prediction accuracy but also in terms of computational load. It is to be noted that the simulations were executed in a standard DELL ® desktop workstation (Model-Inspiron 14 -5459) with 16GB RAM and Intel Core i5 processor.
E. COMPARISON OF PREDICTION RESULTS WITH EXISTING PF+NN HYBRID PROGNOSTIC FRAMEWORK
There are a few hybrid prognostic methodologies proposed in the literature combining PF and NN. However, most of those research works were tested on datasets different from the battery degradation datasets considered in this work, such as battery voltage discharge curves, crack propagation datasets, etc. However, Ref. [24] shows prediction results for the CALCE battery CS-37 and the NASA RW11 battery. In Ref. [24], Wu et al. used the NN model equation as the state transition function in the PF framework, and subsequently the PF algorithm was used for RUL estimation. The authors performed curve fitting on each of the battery datasets considered in their study, and the corresponding curve fitting parameters were fed into the PF framework as the initial parameter guess values. The prediction results comparing Ref. [24] and our proposed HyA are shown in Fig. 10(a) and Fig. 10(b) for CS-37 and RW11, respectively. It is to be noted that the blue curve representing HyA in Fig. 10 is the mean value of predictions over 50 repetitions. The extracted RMSE values for the prediction results of Ref. [24] were found to be 0.3686 and 0.2698 for RW11 and CS-37, respectively. This is much higher than the RMSE values for the same datasets extracted using our framework, as shown in Table V below. It is evident from the results that our proposed HyA framework outperforms the prediction results of Wu et al. [24] despite their impractical assumption that the entire test dataset information is available a priori for curve fitting. It can thus be concluded that our proposed method is robust, more realistic and highly adaptable.

VI. CONCLUSIONS

In this work, a hybrid particle filter based neural network model is proposed for RUL prediction of lithium-ion batteries. A neural network model with one hidden layer containing 3 hidden neurons, a sigmoid activation function on the input side and a linear activation function on the output side was used as the damage evolution model in the particle filter framework. The novelty of the proposed method lies in using the PF algorithm for estimating the posterior distributions of the network weight and bias parameters corresponding to the training dataset and utilizing those parameters to warm start an MLP network model for the test dataset. The MLP model is further trained to optimize the network parameters for the test dataset. The trained MLP model is used to predict the future battery capacity values and the remaining useful life. The proposed method was tested on the NASA and CALCE datasets. The prediction results were compared with the standard PF and FFNN methods. It was evident that the proposed approach was versatile enough to use the same NN architecture for both NASA and CALCE even though their degradation patterns are vastly different. The versatility of the proposed method makes it an appropriate choice for reliability studies on newer systems. Also, the proposed method has very good prediction accuracy with very low RMSE values and is efficient in terms of computational time compared to the two conventional methods.
For future work, we intend to test the algorithm on LED luminosity degradation datasets, where there are three distinct degradation phases, as it would be challenging for any prognostic algorithm to model and predict such highly non-monotonic degradation phenomena [27]. Also, the impact of different resampling strategies used in the PF algorithm on RMSE values and computational time will be explored in the future. The proposed approach can also be modified to use physics-informed surrogate models instead of FFNN models, which would have the capability to encode the underlying physical laws in a given dataset.
[Figure 10 caption: Degradation comparison plots between HyA and Ref. [24] for (a) the CALCE (CS-37) dataset and (b) the NASA (RW11) dataset for LiCoO2 cell technology. The corresponding RMSE values for the two datasets using the two different hybrid prognostic frameworks are shown in Table V.] | 2021-10-09T13:13:21.594Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "02079e938b7b35354d7a885986bda2abfc0b0c2c",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09551933.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "02079e938b7b35354d7a885986bda2abfc0b0c2c",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
267780795 | pes2o/s2orc | v3-fos-license | Association between serum uric acid and bone mineral density in males from NHANES 2011–2020
Currently, the relationship between serum uric acid (SUA) and bone mineral density (BMD) in men remains controversial. This study aims to investigate the relationship between SUA and lumbar spine BMD in American men using data from the National Health and Nutrition Examination Survey (NHANES). A total of 6254 male subjects aged 12–80 years (mean age 35.52 ± 14.84 years) in the NHANES from 2011 to 2020 were analyzed. SUA was measured by DxC using the timed endpoint method, and lumbar spine BMD was measured by dual-energy X-ray absorptiometry (DXA). Multivariate linear regression models were used to explore the relationship between SUA and BMD by adjusting for age, race/Hispanic origin, drinking behavior, smoking behavior, physical activity, body mass index (BMI), poverty-to-income ratio (PIR), total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen. After correcting for the above confounders, it was found that SUA was positively associated with lumbar spine BMD in the range of SUA < 5 mg/dL (β = 0.006 95% CI 0.003–0.009, P < 0.001), and BMD of individuals in the highest quartile of SUA was 0.020 g/cm2 higher than those in the lowest quartile of SUA (β = 0.020 95% CI 0.008–0.032, P = 0.003). This study showed that SUA was positively correlated with lumbar spine BMD in American men within a certain range. This gives clinicians some insight into how to monitor SUA levels to predict BMD levels during adolescence when bone is urgently needed for growth and development and during old age when bone loss is rapid.
Osteoporosis, a systemic skeletal disease characterized by low bone mass and microarchitectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fractures 1 , is common and increases as the population ages 2 .It causes a large number of bone-related diseases and increases mortality and health care costs 3 .Moreover, lumbar fractures are the most typical osteoporotic fractures, and they are strongly associated with lower bone mineral density (BMD) in the lumbar spine 4 .Therefore, it is particularly valuable to investigate the factors associated with reduced BMD due to the increasing economic and social burden caused by osteoporosis.
Uric acid is the end product of purine metabolism 5 , and serum uric acid (SUA) disorders are major risk factors for gout 6 .Numerous studies have concluded that hyperuricemia is also a predictor of the development of hypertension, metabolic syndrome, type 2 diabetes, cardiovascular disease, and renal disease 5,[7][8][9] .However, an increasing amount of experimental and clinical evidence suggests that uric acid, as an antioxidant in humans, has an important role.A review provides current evidence on the antioxidant role of uric acid and suggests its potential therapeutic effects as a marker of oxidative stress and as an antioxidant 10 .Due to its antioxidant properties, SUA may enhance bone density by inhibiting osteoclast bone resorption and promoting osteoblast differentiation 11 .A meta-analysis of data on BMD, osteoporosis, and fractures in people with high and low SUA concentrations confirmed that there was a correlation between hyperuricemia and BMD, and uric acid played a protective role in disorders of bone metabolism 12 .
In conclusion, there is limited evidence on the relationship between SUA and BMD in men.Exploring the above relationship in different age groups and races can provide important information for individuals, clinicians, and health care providers to develop osteoporosis prevention and treatment strategies.Therefore, SUA and gout were selected as indicators, and a representative accredited sample from the National Health and Nutrition Examination Survey (NHANES) was used to assess the relationship between SUA and BMD in American men.
Study design and population
The NHANES is a representative survey of the national American population through a complex, multistage, and probability sampling design, which provides general health and nutritional status for the civilian and noninstitutional population of the United States.The data from the NHANES 2011-2012, 2013-2014, 2015-2016, and 2017-2020 cycles were combined in this study.The subjects were American men.The participants with missing lumbar spine BMD data (n = 11,440), missing SUA data (n = 1107), and diseases affecting BMD (n = 729) which included cancer patients (n = 233), thyroid disease (n = 163), rheumatoid arthritis (n = 158), and liver disease (n = 175) were excluded.Finally, 6,254 eligible subjects were enrolled in this study.The flow chart of participant selection is presented in Fig. 1.
Variables
SUA and gout were the exposure variables in this study. SUA was measured as a part of the routine serum biochemistry profile using the Beckman Coulter UniCel® DxC800 with the timed endpoint method. SUA was oxidized by uricase to produce allantoin and hydrogen peroxide. Catalyzed by peroxidase, the hydrogen peroxide reacted with 4-aminoantipyrine (4-AAP) and 3,5-dichloro-2-hydroxybenzene sulfonate (DCHBS) to produce a colored product. A system was used to monitor the change in absorbance at 520 nm at a fixed time interval, and the change in absorbance was directly proportional to the concentration of uric acid in the sample. Gout was confirmed by trained interviewers using a structured questionnaire collected at home through a computer-assisted personal interview (CAPI) system. The outcome variable was the lumbar spine BMD measured by dual-energy X-ray absorptiometry (DXA). The measurement site was the lumbar spine, and the scans were all performed with a Hologic QDR 4500A fan-beam densitometer (Hologic, Inc., Bedford, Massachusetts). All measured BMD values were collected and standardized by professionals.
In addition, the following covariates were included: age, sex, race/Hispanic origin, body mass index (BMI), poverty-to-income ratio (PIR), physical activity, blood urea nitrogen, total protein, total cholesterol, serum phosphorus, serum calcium, drinking behavior, and smoking behavior. The details on SUA, gout, and lumbar spine BMD measurements are available in the NHANES documentation.
Statistical analysis
For the included data, the software packages R and EmpowerStats were used for analysis, and P values < 0.05 were considered statistically significant. The data were expressed as mean ± standard error (SE) for continuous variables and as percentages for categorical variables. To compare the differences between groups, the weighted chi-square test and the weighted linear regression model were used for categorical and continuous variables, respectively. The participants were characterized by quartiles of SUA (Category 1: 0.4-4.9 mg/dL; Category 2: 5.0-5.7 mg/dL; Category 3: 5.8-6.6 mg/dL; and Category 4: > 6.7 mg/dL). A weighted multiple linear regression model was used to assess the association between SUA and lumbar spine BMD. Multiple regression analyses stratified by age and race were performed. Subsequently, smooth curve fitting and generalized additive models were used to analyze the nonlinear relationship between SUA and lumbar spine BMD. Finally, threshold effect and saturation effect analyses were used to calculate the inflection point of the relationship between SUA and lumbar spine BMD, and a segmented linear regression model was established on both sides of the inflection point.
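The analyses above were run in R and EmpowerStats; as a rough illustration only (not the authors' code), the sketch below shows the idea behind the two-piece threshold-effect fit, scanning candidate SUA inflection points and keeping the one that minimizes the residual sum of squares. Survey weights and covariate adjustment are omitted, and the names segmented_fit, sua, and bmd are assumptions:

```python
import numpy as np

def segmented_fit(sua, bmd, grid):
    """Fit BMD ~ piecewise-linear(SUA) with one breakpoint chosen from `grid`."""
    sua = np.asarray(sua, dtype=float)
    bmd = np.asarray(bmd, dtype=float)
    best = None
    for c in grid:
        # Design matrix: intercept, slope below c, and change in slope above c
        X = np.column_stack([np.ones_like(sua), sua, np.clip(sua - c, 0.0, None)])
        beta, *_ = np.linalg.lstsq(X, bmd, rcond=None)
        sse = np.sum((bmd - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    _, inflection, beta = best
    return inflection, beta   # beta = [intercept, slope below, added slope above]

# Usage idea: inflection, beta = segmented_fit(sua, bmd, grid=np.arange(3.0, 8.0, 0.1))
```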
Ethics statement
The NCHS Ethics Review Board granted approval for the conduct of NHANES, and written informed consent was obtained from all participants. We confirm that all methods were performed in accordance with the relevant guidelines and regulations.
Results
This study included a total of 6254 male participants who were aged between 12 and 80 years and classified according to the SUA quartiles (Category 1: 0.4-4.9mg/dL; Category 2: 5.0-5.7 mg/dL; Category 3: 5.8-6.6 mg/ dL; and Category 4: > 6.7 mg/dL).The inclusion and exclusion processes were shown in Table 1.Baseline characteristics differed significantly between the SUA quartiles except for total serum calcium, serum phosphorus, and protein.Compared with the other subgroups, participants in the highest quartile of SUA were more likely to be non-Hispanic White and non-Hispanic Black and had higher age, BMI, PIR, blood urea nitrogen, cholesterol, and lumbar spine BMD.In addition, individuals in the highest quartile of SUA might have better education, but they were more likely to drink and smoke.The correlation between SUA and lumbar spine BMD in American men was assessed by multiple regression analysis, and the results were shown in Table 2.In the uncorrected model, SUA was positively associated with lumbar spine BMD in American men (β = 0.014, 95% CI 0.011-0.017,P < 0.001).After correction for covariates, the correlation between SUA and lumbar spine BMD was also significant in Model 2 (β = 0.013 95% CI 0.010-0.017,P < 0.001) and Model 3 (β = 0.006 95% CI 0.003-0.009,P < 0.001).The smooth curve fitting of the relationship between SUA and lumbar spine BMD was shown in Fig. 2.After the subsequent conversion of SUA from a continuous variable to a categorical variable (quartile), individuals in the highest quartile had a higher BMD of 0.020 g/cm 2 than those in the lowest SUA quartile (β = 0.020 95% CI 0.008-0.032,P = 0.003).
In the subgroup analysis stratified by age and race, as shown in Table 2, it was found that the positive association between SUA and lumbar spine BMD remained significant in the 12-19 years age group (β = 0.024, 95% CI 0.018-0.029, P < 0.001) and in the > 50 years age group (β = 0.010, 95% CI 0.003-0.018, P = 0.004) but not in the 20-34 years age group (β = −0.002, 95% CI −0.008 to 0.004, P = 0.535) or the 35-50 years age group (β = −0.003, 95% CI −0.010 to 0.004, P = 0.415). The positive association between SUA and lumbar spine BMD was more pronounced in non-Hispanic White (β = 0.009, 95% CI 0.003-0.015, P = 0.003) but not in Mexican Americans (β = 0.005, 95% CI −0.002 to 0.012, P = 0.181), other Hispanic (β = 0.001, 95% CI −0.010 to 0.012, P = 0.879), non-Hispanic Black (β = 0.003, 95% CI −0.005 to 0.011, P = 0.412), and other races (including the multiracial population) (β = 0.000, 95% CI −0.008 to 0.008, P = 0.948). The smooth curve fitting and the generalized additive model used to characterize the nonlinear relationship between SUA and lumbar spine BMD are shown in Figs. 2 and 3A and B. Data analysis also revealed that in the > 50 years age group, the relationship between SUA and lumbar spine BMD was a positive U-shaped curve with an inflection point of 4.5 mg/dL determined by using a two-stage linear regression model; however, in the 12-19 years age group, the relationship between SUA and lumbar spine BMD was an inverted U-shaped curve with an inflection point of 5.1 mg/dL determined by using a two-stage linear regression model. This inverted U-shaped curve was also present in other Hispanic and non-Hispanic Black participants. By analyzing the NHANES data for the whole population (including women) from 2011 to 2016 (Table 3), this study found a correlation between the presence or absence of gout and the lumbar spine BMD, with individuals with gout having a BMD 0.038 g/cm2 higher than those without gout (β = 0.038, 95% CI 0.018-0.059, P < 0.001). In the analysis stratified by gender, age, and race, this positive correlation was more pronounced in men, the 20-34 years age group, the > 50 years age group, Mexican Americans, and other Hispanic populations. We also performed a multiple regression analysis of the association between SUA and lumbar spine BMD in adolescent males aged 12-19 years (Supplementary Table 1).
Discussion
In this cross-sectional study, data from a large multistage sample of the US population were evaluated to explore the relationship between SUA and BMD in American men, and a positive correlation between them was found. This association was significant in men in the 12-19 years age group (β = 0.024, 95% CI 0.018-0.029, P < 0.001), the ≥ 50 years age group (β = 0.010, 95% CI 0.003-0.077, P = 0.004), and non-Hispanic White (β = 0.009, 95% CI 0.003-0.015, P = 0.003). In addition, a positive association between gout and lumbar spine BMD was also found, with those with gout having a BMD 0.038 g/cm2 higher than those without gout (β = 0.038, 95% CI 0.018-0.059, P < 0.001).
Osteoporosis is a systemic disease characterized by a decrease in both bone mass per unit volume and bone strength, which predisposes affected bones to fractures.It is currently one of the leading causes of morbidity and mortality in the elderly worldwide 13 .Epidemiological investigations and laboratory studies in recent years have indicated that SUA may be involved in a variety of biological processes, associated with diseases such as gout, obesity, and chronic kidney disease, and also a protective factor for BMD 11,12,14 .Previous studies found a positive correlation between SUA and BMD, but most of them revealed that this correlation was mostly present in older men and adolescents, and was not seen in women, especially postmenopausal women [15][16][17][18] .A study using the early NHANES data (1996-2006) showed a positive association between SUA and lumbar spine BMD in older adults.The association was in an inverted U-shaped curve in Black 15 .A prospective cohort study of fracture cases from the United States included 1,680 men, and the analysis group contained 387 men who had non-spine fractures (73 hips) and 1,383 randomized samples.The analysis discovered that higher SUA levels were associated with a reduced risk of non-spine fractures, i.e., uric acid had a protective effect on bone density 19 .However, an analysis of data from the NHANES (1996-2006) by Li et al. found no association between SUA and BMD in American men (β = − 0.003 95% CI − 0.007 to 0.002), even after correction for relevant confounding factors 20 .In addition, a cross-sectional study from China also reported a positive association between SUA and BMD in postmenopausal women (n = 4256) but not in men (n = 943) 21 .They found no correlation between SUA and BMD in men, which is different from the results of this study.
[Table 2 caption: The association between SUA and lumbar spine BMD (g/cm2) in men. Model 1: no covariates were adjusted. Model 2: age and race/ethnicity were adjusted. Model 3: age, race/Hispanic origin, drinking behavior, smoking behavior, physical activity, BMI, PIR, total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen were adjusted. SUA: serum uric acid. PIR: poverty income ratio. BMI: body mass index.]
Several aspects can explain this difference. First, their studies were comparative studies from different SUA cohorts and might not have been subjected to standardized regression analysis and adjustment for confounding factors. Second, their studies performed subgroup analyses by different grouping methods, which may also account for the difference. For example, Li et al.20 divided the population into three groups by age, i.e., a < 40 years age group, a 40-59 years age group, and a ≥ 60 years age group, and they also divided the population into < 7 mg/dL and ≥ 7 mg/dL groups based on SUA levels. Third, their study had a limited number of participants, which may lead to unstable results. In conclusion, heterogeneity among these studies, such as differences in methodological design, included populations, stratified analysis methods, and controlled confounding variables, may explain the existence of controversy. This cross-sectional study demonstrated a positive association between SUA and BMD in American men. Several possible mechanisms were proposed to explain this association. First, individuals with high uric acid may live in better conditions and consume more nutrients than those with low uric acid 11,14. Previous studies have considered SUA as a nutritional marker that can represent the nutritional status of an individual. The main sources of uric acid are purine-rich foods such as meat, seafood, and purine-rich vegetables, all of which are also high in protein. In puberty, when bone mass is needed for growth and development, and in old age, when bone mass is lost, people with better living conditions and better nutritional intake may have higher BMD than those with low uric acid 22. A study from the United States has also suggested that nutritional supplements are key to preventing osteoporosis 10. Second, the protective effect of SUA on BMD may be related to its antioxidant capacity. SUA is a powerful endogenous antioxidant that clears peroxyl (RO2·), hydroxyl (·OH), and singlet oxygen radicals 10,23. Low SUA concentrations reduce the ability to resist oxidative stress, thereby promoting osteoclast differentiation and reducing osteoblast activity, resulting in enhanced bone resorption and bone loss 24. Another study has shown that uric acid plays an important role in the expansion of human bone marrow mesenchymal stem cells and promotes osteogenic differentiation 25. Third, lower SUA concentrations were associated with lower serum parathyroid hormone (PTH) levels and higher serum TRACP5b levels. This would further support and explain the idea that lower SUA concentrations enhance bone resorption, as reflected by an increase in serum TRACP5b levels, and this leads to greater loss of BMD and elevated serum calcium levels, causing a decrease in serum PTH levels as a result of negative feedback 26. Previous studies have also found that PTH, as a metabolic factor, may affect the clearance of SUA, leading to an increase in SUA levels, and is involved in the relationship between SUA and bone metabolism 27,28. In addition, people with high SUA levels have
more lean body mass without many changes in fat mass.Some studies suggest that there is a positive correlation between lean body mass and BMD 29 .Despite these speculations and findings, the exact mechanism underlying the positive association between SUA and BMD remains uncertain and requires further investigation.
It could be known from the results of smooth curve fitting (Fig. 2) and threshold effect analysis (Table 4) that when SUA levels were < 5 mg/dL, BMD increased with SUA and there was a positive correlation between SUA and BMD; but when SUA levels were ≥ 5 mg/dL, the positive correlation between SUA and BMD was no longer significant.Considering the influence of SUA on other diseases, this saturation effect suggests that raising the level of SUA moderately within its normal range may be beneficial to bone health and will not have adverse effects on other systems.Then, a stratified analysis based on race was conducted in this study, finding that the positive correlation between SUA and lumbar spine BMD was significant in non-Hispanic White but not in other races.The smooth curve fitting (Fig. 3) indicated that the relationship between SUA and BMD in non-Hispanic Black was an inverted U-shaped curve, which is the same as a previous study.Genetic factors, lifestyle differences, and other factors may provide possible explanations for ethnic differences in this relationship 15 .Figure 3A suggests that serum uric acid is negatively correlated with lumbar spine BMD in the 20-35 age group, where BMD is stable, whereas this relationship becomes reversed and unstable with saturation in the adolescent and elderly populations.Figure 3B suggests that in non-Hispanic blacks, serum uric acid and lumbar spine BMD show a clear inverted U-shaped curve, which correlates with the fact that blacks are more physically active and require more nutrition.
Larger research samples are needed to explain the particular relationship between SUA and BMD in the Black population. A stratified analysis based on age was also conducted, showing that the positive correlation between SUA and BMD occurred mainly in people aged 12-19 years and ≥ 50 years. Teenagers need to grow and develop, while elderly individuals experience rapid bone loss; hence, both groups are in a period of extreme need for bone mass. Many studies have found a positive correlation between SUA and BMD in male adolescents or elderly men, which is consistent with this research 15,16 . In the smooth curve fitting, the relationship between SUA and BMD in the male population > 50 years old followed a positive U-shape. The impact of age on this relationship implies that it is particularly important to adjust SUA levels in youth and old age. To the best of our knowledge, this is the first study using the NHANES data (2010-2020) to obtain a positive correlation between SUA and BMD in American men. This study has several advantages. First, it was based on a large sample of data from the NHANES. The sample in this study was a multilayer random one with highly reliable and standardized data that can be considered representative of the general population in the United States. Second, because cancer, liver disease, thyroid disease, and rheumatoid arthritis affect bone metabolism, patients with these diseases were excluded during the inclusion of the population. Also, many important confounding factors, including age, race, drinking behavior, smoking behavior, BMI, serum urea nitrogen, and total protein, were excluded or controlled. Furthermore, the findings of this study are more reliable because the population in this study was not affected by menopause, estrogen, or pregnancy. Finally, to the best of our knowledge, the present study is the first to analyze the NHANES (2011-2020) data to investigate the relationship between SUA and BMD in American men, with multiple regression analyses stratified by age and race.
Of course, there are also some shortcomings in this study. First, because it is a cross-sectional study, no causal relationship can be inferred, and more longitudinal studies are needed to verify this relationship. Second, this study only investigated the relationship between SUA and lumbar spine BMD and did not examine the relationship between SUA and femoral BMD due to the lack of data, resulting in an incomplete picture. It has been shown that the relationship between SUA and BMD differs between two skeletal sites, i.e., the femur and the lumbar spine, owing to the influence of mechanical factors 30 . Hence, more studies are needed in the future to explore the relationship between SUA and BMD at different skeletal sites in men. Finally, other potential confounders that were not adjusted in this study could still lead to bias.
Conclusions
This study showed a positive correlation between SUA and lumbar spine BMD in American men, but the correlation was no longer significant when SUA was ≥ 5 mg/dL. Confounding factors such as race and age may influence the positive correlation, and after the analysis stratified by age and race, the positive correlation between SUA and lumbar spine BMD was found to remain significant in the 12-19 years age group, the > 50 years age group, and non-Hispanic Whites. This gives clinicians some insight into how to monitor SUA levels to predict BMD levels during adolescence, when bone is urgently needed for growth and development, and during old age, when bone loss is rapid.
Figure 2 .
Figure 2. Relationship between serum uric acid and lumbar spine bone mineral density. (a) Each black dot represents a sample. (b) The solid line indicates a smooth curve fit between the variables. The blue band indicates the 95% confidence interval of the fit. Adjusted for age, race/Hispanic origin, drinking behavior, smoking behavior, physical activity, BMI, PIR, total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen.
Figure 3 .
Figure 3. (A) The association between serum uric acid and lumbar spine bone mineral density (stratified by age). Adjusted for race/Hispanic origin, drinking behavior, smoking behavior, physical activity, BMI, PIR, total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen. (B) The association between serum uric acid and lumbar spine bone mineral density (stratified by race/Hispanic origin). Adjusted for age, drinking behavior, smoking behavior, physical activity, BMI, PIR, total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen.
Table 1 .
Characteristics of the study population based on SUA quartiles. Mean ± SD for continuous variables: the P value was calculated by the weighted linear regression model. (%) for categorical variables: the P value was calculated by the weighted chi-square test. SUA: serum uric acid; PIR: poverty income ratio; BMD: bone mineral density; BMI: body mass index.
Table 3 .
The association between gout and lumbar spine BMD (g/cm 2 ) in men. Model 1: no covariates were adjusted. Model 2: age and race/ethnicity were adjusted. Model 3: age, race/Hispanic origin, drinking behavior, smoking behavior, physical activity, BMI, PIR, total protein, serum calcium, cholesterol, serum phosphorus, and blood urea nitrogen were adjusted. SUA: serum uric acid. PIR: poverty income ratio. BMI: body mass index.
Table 4 .
Analysis of threshold and saturation effects between SUA and lumbar spine BMD. | 2024-02-23T06:17:13.056Z | 2024-02-21T00:00:00.000 | {
"year": 2024,
"sha1": "0a7f86a576ec6b2138b70bee4fb5d316c3a82e7a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-024-52147-8.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "675e613c520a4da105396f9aeecc8d37145d045f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
5843587 | pes2o/s2orc | v3-fos-license | Establishment and characterization of a human intrahepatic cholangiocarcinoma cell line derived from an Italian patient
Biliary tract carcinoma is a rare malignancy with multiple causes, which underlie its different genetic and molecular profiles. Cancer cell lines are affordable models that reflect the characteristics of the tumor of origin and represent useful tools to identify molecular targets for treatment. Here, we established and characterized, from a biological, molecular, and genetic point of view, an Italian intrahepatic cholangiocarcinoma (ICC) cell line, MT-CHC01. MT-CHC01 cells were isolated from a tumor-derived xenograft. Immunophenotypic characterization was performed both at early passages and after stabilization. In vitro biological, genetic, and molecular features were also investigated. In vivo tumorigenicity was assessed in NOD/SCID mice. MT-CHC01 cells retain epithelial cell markers, EPCAM, CK7, and CK19, and some stemness and pluripotency markers, i.e., SOX2, Nanog, CD49f/integrin-α6, CD24, PDX1, FOXA2, and CD133. They grow as a monolayer, with a population doubling time of about 40 h, and show a low migration and invasion potential. In low attachment conditions, they are able to form spheres and to grow in an anchorage-independent manner. After subcutaneous injection, they retain in vivo tumorigenicity; the expression of biliary markers such as CA19-9 and CEA was maintained from the primary tumor. The karyotype is highly complex, with a hypotriploid to hypertriploid modal number (3n+/−) (52 to 77 chromosomes); a low level of HER2 gene amplification, TP53 deletion, and gain of AURKA were identified; the K-RAS G12D mutation was maintained from the primary tumor to MT-CHC01 cells. We established the first ICC cell line derived from an Italian patient. It will help both to study the biology of this tumor and to test drugs in vitro and in vivo. Electronic supplementary material The online version of this article (doi:10.1007/s13277-015-4215-3) contains supplementary material, which is available to authorized users.
Introduction
Biliary tract carcinoma (BTC) is a malignant neoplasm derived from cholangiocytes in different tracts of the biliary tree. Recent studies suggest that BTC could originate from a combination of cholangiocytes, peribiliary glands around the bile ducts, and hepatocyte progenitors or hepatocytes. It can be classified into intrahepatic cholangiocarcinoma (ICC), originating from the bile ducts within the liver, and extrahepatic cholangiocarcinoma (ECC), arising from the bile ducts outside the liver; ECC can be further classified as perihilar and distal BTC [1].
Incidence and mortality rates from BTC, including ICC (10 % of primary liver cancers in Western countries), are increasing worldwide [2][3][4][5]. Patients with unresectable disease (70-90 %) have a poor prognosis with a survival of less than 12 months following diagnosis. Conventional chemotherapy, gemcitabine alone or in association with platinum derivatives, and radiotherapy have not shown to be effective in improving long-term survival [6,7].
The lack of effective therapies against this tumor prompts to investigate its molecular pathogenesis. Literature data extensively described risk factor promoting BTC; in particular, the etiology is different according to geography and ethnicity: infestation of the liver flukes, Opisthorchis viverrini, and Clonorchis sinensis are common in Thailand, Vietnam and Laos [8]; hepatolithiasis in Asian countries; and primary sclerosing cholangitis (PSC) in the Western countries [9]. Other potential risk factors, which affect almost all the countries, include cirrhosis, hepatitis B (HBV), hepatitis C viral (HCV), and HIV infections, inflammatory bowel disease independent of PSC, alcohol, smoking, fatty liver disease, cholelithiasis, and choledocholithiasis [10][11][12]. Altogether, these factors contribute to the development of the BTC and they are on the basis of genetic (gene mutations, chromosomal aberrations) and molecular (transcriptional profiles, aberrant signaling pathways) variability of these tumors [2].
In this work, we describe a human Italian ICC cell line obtained from a patient-derived xenograft (PDX) [33]. This cell line could provide a new suitable model for preclinical studies of molecular pathogenesis or drug efficacy.
Establishment of ICC cell line from a PDX
The PDX was obtained from a tumor sample of a 60-year-old Italian woman who underwent surgical resection for ICC. Biological material was obtained from a patient who signed the informed consent, following institutional review board-approved protocols ("PROFILING Protocol, no. 001-IRCC-00 IIS-10" approved by Comitato Etico Interaziendale of A.O.U. San Luigi Gonzaga, Orbassano, Torino, Italy). This institutional study provides molecular genetic analysis, set up of primary cultures, and the creation of PDX from tumor biological samples (primary tumor, metastasis, tumor cells taken under paracentesis or thoracentesis procedures, and blood). The tumor was histopathologically classified as pT2b pN0, moderately differentiated (G2) intrahepatic bile duct carcinoma. The tumor sample was also evaluated for the presence of HBV or HCV markers, resulting negative. The primitive tumor was associated with chronic cholecystitis, but not with liver cirrhosis or chronic liver disease, primary sclerosing cholangitis, diabetes, obesity, alcohol consumption, or tobacco smoking.
For PDX establishment, non-obese diabetic (NOD)/Shi-severe combined immunodeficient (SCID) female mice (4-6 weeks old) (Charles River Laboratory) were maintained under sterile conditions in micro-isolator cages at the animal facilities of the IRCCS-Candiolo. All animal procedures were approved by the Institutional Ethical Committee for Animal Experimentation (Fondazione Piemontese per la Ricerca sul Cancro) and by the Italian Ministry of Health. Mice were subcutaneously (s.c.) grafted with a fragment of 4×4 mm of representative primary tumor. After 4 months, the primary tumor was successfully engrafted in mice at first generation and was named CHC001 PDX; after reaching a volume of about 200 mm 3 , tumor was explanted and re-implanted in new mice for a second generation. The stabilization was obtained in fourth generation. To obtain the cell line, tumor specimen derived from fourth generation of PDX, was enzymatically digested with collagenase (200 U/ml) (Sigma-Aldrich, St. Louis, MO, USA) for 3 h at 37°C. Collagenase was inactivated by two washes in fetal bovine serum (FBS) (all from Sigma-Aldrich, St. Louis, MO, USA). Single-cell suspension was obtained by filtering the supernatant through a 70-μm cell strainer (BD Biosciences, San Jose, CA). Cells were finally re-suspended in three different culture conditions: complete DMEM, RPMI, and Knockout/DMEM/F-12 media at a cell density of 300,000/mL in six-well tissue culture plates at 37°C in a humidified atmosphere of 95 % air and 5 % CO 2 . Media were replaced twice a week. When cells reached 70-80 % of confluence, they were propagated in the optimal culture condition (complete Knockout/DMEM/F-12 medium). The stable cell line was named MT-CHC01.
Cell lines
The
Mycoplasma detection
The presence of mycoplasma DNA was tested by the PCR kit VenorGeM (Minerva Biolabs) following the manufacturer's instructions. The primers are specific to the highly conserved 16S rRNA coding region in the mycoplasma genome. Detection requires 1-5 fg of mycoplasma DNA.
Population doubling time
The population doubling time at passage 25 was determined in hemacytometer chamber by staining with trypan-blue dye. Briefly, 1.4×10 5 cells were plated in 24-well plates in triplicate in optimal medium. Viable cells were counted at 24, 48, and 72 h after seeding. The average number of cells was calculated in three different experiments.
To calculate the population doubling time (DT), we used the following formula: DT=T ln2/ln(Xe/Xb), in which T is the time duration of culture, Xe is the cell number at the end of the incubation time and Xb is the cell number at the beginning of the incubation time.
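The doubling-time formula above translates directly into a one-line computation; the numbers in the example below are illustrative only, not the measured counts.

```python
# Population doubling time from the formula in the text: DT = T * ln 2 / ln(Xe / Xb).
import math

def doubling_time(t_hours: float, x_begin: float, x_end: float) -> float:
    """Return the population doubling time in hours for a culture grown for t_hours."""
    return t_hours * math.log(2) / math.log(x_end / x_begin)

# Illustrative values: 1.4e5 cells growing to ~4.8e5 cells over 72 h gives ~40 h.
print(round(doubling_time(72, 1.4e5, 4.8e5), 1))
```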
Anchorage-independent growth assay
Anchorage-independent growth was assessed by colony-formation assay in soft agar culture. Briefly, 100,000 cells were suspended in 0.5 ml of 0.3 % (w/v) soft agar layered over 0.3 % (w/v) base agar in 24-well plates in quadruplicate. Complete medium was added twice a week for the entire period of incubation (3 weeks).
In vitro motility
Motility was assessed by wound-healing assay and by 8.0-μm transwell chambers (Costar, Cambridge, MA, USA). For the wound-healing assay, cells were seeded in triplicate in six-well tissue culture plates and allowed to grow until 100 % confluence. The cell layer was gently "wounded." Cell migration toward the scraped area was observed in nine randomly selected microscopic fields for each time point (up to 72 h). Images were acquired with a Leica DM13000B Inverted Microscope (Leica). The gap distance was analyzed using ImageJ software; 0 % closure corresponds to the time of wounding (T0). Wound closure was calculated in three different experiments. The percentage of closure was calculated as 100−(T/T0)×100, where T represents the average gap distance at the different time points. Statistical analysis was performed using one-way ANOVA and a multiple comparison test (GraphPad software). The assay was performed in three different experiments.
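The closure formula can be checked with a small helper; this reading assumes T is the mean residual gap distance at a given time point and T0 the gap at the time of wounding, and the numbers below are illustrative only.

```python
# Wound closure percentage as described in the text: 100 - (T / T0) * 100,
# where T0 is the gap distance at wounding and T the gap at a later time point.
def wound_closure_percent(gap_t: float, gap_t0: float) -> float:
    return 100.0 - (gap_t / gap_t0) * 100.0

# Illustrative: a gap shrinking from 800 um to 520 um corresponds to 35% closure.
print(wound_closure_percent(520.0, 800.0))
```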
Motility was also performed by transwell chambers assay (1 cm 2 /well, BD Falcon). The upper and lower cultures were separated by an 8-μm pore size poly-vinyl-pyrrolidone-free polycarbonate filters (BD Falcon). The experiments were carried out in triplicates.
After the incubation period, filters were fixed with methanol and stained with 0.5 % crystal violet in 25 % methanol; cells on the upper surface of the filters were removed using cotton swabs. Cells invading the lower surface were counted in five random fields and expressed as number of invading cells per well.
Tumorigenicity in NOD/SCID Mice
MT-CHC01 cells at passage 35 were grown to 80 % confluence and trypsinized. For in vivo studies, NOD/Shi-SCID female mice (4-6 weeks old) of about 20-25 g/each (Charles River Laboratory) were maintained under sterile conditions in micro-isolator cages at the animal facilities of the IRCCS-Candiolo. All animal procedures were approved by the Institutional Ethical Committee for Animal Experimentation (Fondazione Piemontese per la Ricerca sul Cancro) and by the Italian Ministry of Health.
In three independent experiments, ten mice were subcutaneously (s.c.) injected into the right flank under anesthesia (mixture of isoflurane and nitrous oxide) with 3.0×10 6 MT-CHC01 cells in 50 % growth factor-reduced BD Matrigel (BD Biosciences, San Jose, CA). Tumor diameters were measured weekly after cell injection up to tumor engraftment until 35 days after injection.
Immunohistochemistry
The expression of CA19-9, alpha fetoprotein (AFP), and carcinoembryonic antigen (CEA) was evaluated. Tissues derived from primary tumor, PDX, MT-CHC01 cell line, and its xenograft were fixed in 10 % buffered formalin and embedded in paraffin. The sections (4 μm thick) were incubated in 0.01 mol/L citrate solution (pH 6) for 5 min three times at intervals of 20 min; then, the sections were soaked in 3 % H 2 O 2 to block endogenous peroxidase activity, followed by washing in PBS at pH 7.4. The sections were incubated with the primary antibodies for AFP, CEA, CA19-9 (all from DAKO Corporation) for 30 min. Rabbit anti-human polyclonal antibody was used for AFP and CEA, and mouse antihuman monoclonal antibody was used for CA19-9. Subsequently, sections were incubated with biotinylated antibody and peroxidase-labeled streptavidin. Staining was completed after incubation with the freshly prepared substrate-chromogen solution of 3,3′-diaminobenzidine (DAB). Finally, sections were counterstained with Meyer's hematoxylin.
Chromosome analysis
The established cell line was subjected to chromosomal analysis using G banding and Multi-Color FISH (M-FISH). Cells were trypsinized using trypsin/EDTA after colcemid treatment (10 μg/ml) for 1 to 3 h, and slides were prepared according to standard methods. Briefly, cells were incubated in 0.075 M KCl hypotonic solution for 20 min and fixed in methanol-glacial acetic acid (3:1). G banding was performed using 2xSSC at 68°C for 2 min and Wright's stain for 2 min. Metaphase images were captured using an Olympus BX61 microscope (Olympus Corporation, Tokyo, Japan) and analyzed with CytoVision software (Leica Biosystems, Newcastle Ltd, UK). An average banding resolution of 300 bands was achieved. Aberrations were described according to the International System for Human Cytogenetic Nomenclature, 2013 [34].
M-FISH was performed with the aim of identifying complex chromosomal rearrangements. The probe cocktail containing 24 differentially labeled chromosome-specific painting probes (24xCyte kit MetaSystems, Altlussheim, Germany) was denatured and hybridized to denatured tumor metaphase chromosomes according to the manufacturer's protocol for the Human Multicolor FISH kit (MetaSystems). Briefly, slides were incubated at 70°C in saline solution (2xSSC), denatured in NaOH, dehydrated in ethanol series, air-dried, covered with 10 μl of probe cocktail (denatured) and hybridized for 2 days at 37°C. The slides were then washed with post-hybridization buffers, dehydrated in ethanol series, and counter-stained with 10 μl of DAPI/antifade. The signal detection and analysis of subsequent metaphases used the Metafer system and Metasytems' ISIS software (software for spectral karyotypes).
Comparative genomic hybridization array
Genomic DNA from MT-CHC01 was extracted by Qiamp DNA mini Kit (Qiagen), and high-resolution oligonucleotide comparative genomic hybridization array (CGH) was performed following standard operating procedures from Agilent Technologies. One thousand nanograms of DNA were digested by a double enzymatic digestion (AluI + RsaI), fragmented, amplified, and purified. After the quantification with NanoDrop, 2 μg of genomic DNA of both tumor DNA and control DNA from Promega (Human Genomic DNA Female N 30742202/male N 30993901) were labeled with CY5-dCTPs and CY3-dCTP, respectively, and hybridized on a 2×105K glass array at 65°C for 40 h at 20 rpm. Slides were then washed and scanned on an Agilent 4000C dual laser scanner, and images were analyzed with Feature Extraction v10.5 software. Raw txt files were then loaded into Cytogenomics software for data processing and visualization.
Fish analysis
FISH analysis was performed using the following probes: ALK (2p23) Break Apart Rearrangement probe (Vysis, Downers Grove, IL, USA), dual color AURKA (20q13)/ CEN20 probe (Kreatech Diagnostics, Amsterdam, Netherlands), dual color EGFR (7p12)/CEP7 probe (Vysis), dual color HER2 (17q11.2)/CEP17 (DAKO, Glostrup, Denmark), dual color MET (7q31.2)/CEP7 probe (Vysis) and dual color TP53 (17p13.1)/CEP17 (Vysis) and Del(5q) Deletion probe (Cytocell, Cambridge, UK). FISH was carried out on interphase cells prepared according to standard techniques. Cells were incubated with the probe for 5 min at 75°C for codenaturation and placed in a humidified chamber at 37°C overnight for the hybridization step. After washing, chromatin was counterstained with DAPI II (Vysis). An average of 100 cells was analyzed using a Olympus BX61 microscope and CytoVision software. AURKA, EGFR, HER2, MET, and TP53 gene status was defined evaluating the ratio between the gene copy number (CN) and the relative centromere number (ratio CN/CEN) and the mean copy number of gene (ratio CN/nuclei). A gene amplification was defined when the ratio CN/ CEN was ≥2.0 and a gene/centromere gain when the mean copy number was ≥3.
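The calling rules stated above (amplification when the gene-to-centromere ratio is ≥ 2.0, gain when the mean gene copy number is ≥ 3) can be expressed as a small helper; the numeric examples simply echo the HER2- and AURKA-like values reported in the Results and are not part of the original analysis pipeline.

```python
# Sketch of the FISH gene-status rules described in the text.
def gene_status(mean_gene_cn: float, mean_cen_cn: float) -> str:
    """Classify a gene as amplified, gained, or neither from mean FISH counts."""
    if mean_gene_cn / mean_cen_cn >= 2.0:
        return "amplification"        # ratio CN/CEN >= 2.0
    if mean_gene_cn >= 3.0:
        return "gain"                 # mean gene copy number >= 3
    return "no amplification or gain"

print(gene_status(6.3, 3.0))  # ratio 2.1, HER2-like -> amplification
print(gene_status(5.3, 4.1))  # ratio ~1.3 but mean CN 5.3, AURKA-like -> gain
```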
Mutational analysis
Genomic DNA was extracted by using QIAamp DNA FFPE Mini kit (Qiagen, Milan, Italy) following the manufacturer's instructions. The kinase domain of EGFR coding sequence, from exons 18 to 21, was amplified by using primers and nested polymerase chain reaction (PCR) conditions previously described by Lynch et al. [35]. Exon 2 of K-RAS, exons 9 and 20 of PI3KCA, and exon 15 of B-RAF were amplified by PCR as previously described [36]. PCR products were then purified using Wizard ® SV Gel and PCR Clean-Up System (Promega, Italy), and sense and antisense sequences were obtained using forward and reverse internal primers respectively. Each exon was sequenced using the BigDye Terminator Cycle sequence following the PE Applied Biosystem strategy and Applied Biosystem ABI PRISM3100 DNA Sequencer (Applied Biosystem, Forster City, CA). Mutations were confirmed performing two independent rounds of PCR amplifications.
Generation and immunophenotyping of ICC cell line
For the cell isolation, tumor tissue derived from the PDX at the fourth generation was mechanically and enzymatically dissociated. Cells adhered to the dish 24 h after plating, but only 1 week later, they formed scattered colonies. After 1 month in the optimal cell culture conditions, Knockout/DMEM/F-12 medium in the presence of 10 % FBS and P/S, cells were detached for the first time. At the 25th passage, after about 5 months, a stable cell line was obtained and named MT-CHC01. It appears as a homogeneous culture of tumor cells with epithelial morphologic features ( Supplementary Fig. 1A and B) and is mycoplasma free (Supplementary Fig. 1C).
The expression of different stemness and pluripotency markers, already known to be typical of the human biliary tree [37], was evaluated (Table 1). Most of the same markers, analyzed at early passages (Fig. 2a), were not expressed; only CD133 and CD49f were at high levels, while Nanog was expressed in about 30 % of the population versus 96.3 % of established MT-CHC01 cells (Fig. 2b). This suggests that a phenotypic selection occurred in culture conditions.
Biological characterization of MT-CHC01 cell line
MT-CHC01 cells were able to grow in a monolayer in appropriate culture medium. After cell line stabilization, the population doubling time was determined. In optimal culture conditions, estimated as 1.4×10 5 cells/cm 2 , the population doubling time was about 40 h ( Supplementary Fig. 2).
Plating MT-CHC01 cells on soft agar medium for 3 weeks, they were able to grow in an anchorage-independent manner (Fig. 3a, b).
To confirm the stem cell status emerged from the expression analysis of several stemness and pluripotency markers, a sphereformation assay was performed. As shown in Fig. 3c, d, MT-CHC01 cells clearly formed spheres in low attachment and serum-free conditions.
The MT-CHC01 motility was investigated by using both wound healing and transwell chamber assays. Figure 4a-d shows that the wound in MT-CHC01 cells was significantly closed at 24, 48, and 72 h after the wound; the migration potential of MT-CHC01 is lower if compared to other commercial cell lines such as HuH28 (Fig. 4e), that closes the wound within 24 h as previously demonstrated [38]. This data was confirmed by transwell chamber assay (Fig. 4f-i), in which we observed that MT-CHC01 cells had a lower migration potential rate (Fig. 4f) compared with the WITT (Fig. 4g) and the HuH28 (Fig. 4h) cell lines.
Tumorigenicity in vivo
To verify if the established cell line retained the tumorigenicity in vivo, NOD/SCID mice were subcutaneously injected with 3.0×10 6 viable cells. In three independent experiments of ten mice each, MT-CHC01 cells were able to develop tumor in all animals. After 1-2 weeks, tumors reached a volume ranging from 245 to 500 mm 3 ; a cohort of mice (n=10) was monitored for other 4 weeks when tumors reached volumes ranging from 1500 to 2100 mm 3 (Supplementary Fig. 3); no evidence of local invasion nor distant metastasis was revealed. Hematoxylin/eosin staining and AFP, CA19-9, and CEA immunohistochemical analyses were also performed on primary and PDX tumors, on MT-CHC01 cells, and on MT-CHC01 xenograft, revealing the same tissue architecture and pathological phenotype of ICC [39] (Supplementary Fig. 4).
Mutational analysis
Mutational analysis of the kinase domain of EGFR coding sequence (exons 18 to 21), exon 2 of K-RAS, exons 9 and 20 of PI3KCA, and exon 15 of B-RAF was performed in primary tumor, and in MT-CHC01 cell line. As shown in Fig. 5, only the sequence of K-RAS exon 2 was mutated (G12D mutation) both in primary (Fig. 5b) and in MT-CHC01 cells (Fig. 5c).
FISH analysis on interphase nuclei
Interphase FISH was used to assess the status of ALK (2p23), AURKA (20q13), EGFR (7p12), HER2 (17q11.2), MET (7q31.2), and TP53 (17p13.1) genes with relation to the observed chromosomal aberrations. ALK, EGFR, and MET genes showed a trisomic pattern of signals reflecting the near-triploid chromosomal assessment. As shown in Fig. 6c, a low level of HER2 gene amplification (ratio CN/CEN, 2.1) was observed, consistent with 17q instability and its involvement in various rearrangements with multiple partner chromosomes. Moreover, a loss of TP53 was present if compared to chromosome 17 centromere (TP53 mean copy number 2.2 vs CEN17 mean copy number 3.1) as a consequence of 17p loss detected in the karyotype. We observed a gain of AURKA gene (mean copy number 5.3) and chromosome 20 centromere (mean copy number 4.1) not detectable in the karyotype, as only three copies of chromosome 20 were always present. Finally, to confirm the deletion on chromosome 5 detected by CGH array, we performed a FISH analysis with a dual color probe specific for 5p15.31 and 5q31.2 regions. As expected, a loss of 5q31.2 region was detected if compared to the copy number of 5p15.31 region (5p15.31 mean copy number 5 vs 5q31.2 mean copy number 4) (Fig. 6d).
Discussion
Biliary tract carcinoma is a complex and heterogeneous disease, characterized by different incidence, etiology, genetic, and molecular profiles. In Italy, BTC shows progressive increase in incidence mainly in intrahepatic cholangiocarcinoma [40].
The importance of identifying preclinical models able to closely reflect the characteristics of tumor of origin is crucial. To date, all models available have limits and consist of cell lines derived from oriental patients. None of these models derived from patient living in Europe and in particular in Italy.
In this work, we characterized a preclinical model represented by a cell line obtained from an Italian intrahepatic cholangiocarcinoma PDX.
The MT-CHC01 cell line was extensively characterized from biological, molecular, and genetic point of view.
The MT-CHC01 cells grew as an adherent monolayer with a population doubling time of approximately 40 h. The morphology was clearly epithelial, and this status was confirmed by immunophenotypic analysis, demonstrating that they expressed epithelial cell markers, EPCAM, CK7, and CK19. In soft agar medium, MT-CHC01 cells revealed the ability to grow in anchorage-independent manner as cellular aggregates; in low attachment and serum-free culture conditions in stem cell medium, MT-CHC01 originated spheroid structures. A peculiar characteristic of cancer cells is their ability to migrate; thus, we investigated the in vitro motility of the MT-CHC01 cells. As demonstrated by wound healing and transwell chamber assays, the MT-CHC01 cells showed a low migration potential compared to other BTC cell lines, reflecting the reduced metastatic potential of this tumor.
Immunophenotyping characterization demonstrated that MT-CHC01 cells expressed, at high levels, some of stemness and pluripotency markers such as SOX2, Nanog, CD49f, CD24, PDX1, FOXA2, and CD133. Their expression was increased during cell stabilization; at early passage of culture, the other tested markers, with the exception of CD133, CD49f, and Nanog, were scarcely represented. This could suggest that during cell propagation, a clone selection with putative cancer stem cell phenotype occurred, as demonstrated by Rowehl et al. in a colorectal cancer cell line model [41]. The ability to grow in suspension, to form spheres, to migrate, and to be tumorigenic if s.c. injected in mice recipient, further supported this hypothesis. Cancer stem cells represent a small population which sustain tumor growth and have the capabilities of self-renewal and multilineage differentiation; they also play an important role in carcinogenesis and in resistance to therapies (chemotherapy and target therapy) [42][43][44]. Our cell line, which has characteristics of putative stem cells, could be an useful preclinical model to identify specific molecular targets of this subpopulation and evaluate the effectiveness of combination therapies designed to eradicate cancer stem cells and cancer.
Further, genetic characterization was focused on the mutational status of the hotspots of EGFR (from exons 18 to 21) and its principal signal transducers (exon 2 of K-RAS, exons 9 and 20 of PI3KCA, exon 15 of B-RAF); only a mutation of K-RAS (G12D) was found. Other genomic analyses revealed a loss of TP53, a trisomic pattern of signals for the ALK, EGFR, and MET genes, a low level of HER2 gene amplification, and a gain of the AURKA gene.
In conclusion, we established a human intrahepatic cholangiocarcinoma cell line, derived from an Italian patient, which could represent a useful tool to better characterize this disease in relation to etiology and ethnicity and to develop new therapies for ICC patients.
(Figure legend fragment) ...MET, and TP53 genes (red signals) and relative centromeres (green signals). The white arrow shows the TP53 loss on metaphase. d. FISH analysis with a dual color probe specific for 5p15.31 and 5q31.2 regions.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2017-08-03T02:41:26.440Z | 2015-10-20T00:00:00.000 | {
"year": 2015,
"sha1": "b7fe2cf6bed3a7e6eab1d9c180075db613bdee1b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13277-015-4215-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b7fe2cf6bed3a7e6eab1d9c180075db613bdee1b",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252077995 | pes2o/s2orc | v3-fos-license | From a drought to HIV: An analysis of the effect of droughts on transactional sex and sexually transmitted infections in Malawi
Each year there are over 300 natural disasters globally with millions of victims that cost economic losses near USD$100 billion. In the context of climate change, an emerging literature linking extreme weather events to HIV infections suggests that efforts to control the HIV epidemic could be under threat. We used Demographic and Health Survey (DHS) data collected during the 2015–2016 harsh drought that affected several areas of Malawi to provide new evidence on the effect of an unanticipated economic shock on sexual behaviours of young women and men. We find that amongst women employed in agriculture, a six-months drought doubles their likelihood of engaging in transactional sex compared to women who were not affected by the drought and increases their likelihood of having a sexually transmitted infections (STI) by 48% in the past twelve months. Amongst men employed outside of agriculture, drought increases by 50% the likelihood of having a relationship with a woman engaged in transactional sex. These results suggest that women in agriculture experiencing economic shocks as a result of drought use transactional sex with unaffected men, i.e. men employed outside agriculture, as a coping mechanism, exposing themselves to the risk of contracting HIV. The effect was especially observed among non-educated women. A single drought in the last five years increases HIV prevalence in Malawi by around 15% amongst men and women. Overall, the results confirm that weather shocks are important drivers of risky sexual behaviours of young women relying on agriculture in Africa. Further research is needed to investigate the most adequate formal shock-coping strategies to be implemented in order to limit the negative consequences of natural disasters on HIV acquisition and transmission.
Introduction
While the lives of girls and women have dramatically improved over the past quarter century, progress toward gender equality has been limited in poorest countries. Girls and women who live in extreme poverty, in remote areas or who are marginalised continue to lag behind their male counterparts (World Bank, 2016). Globally, women are vulnerable to extreme poverty because they face a greater burden of unpaid work and have limited access to productive assets (Wanjala, 2021). Extreme poverty of women seems to interrelate strongly with infectious diseases, a fact that has motivated several studies to address the link between poverty and risky sexual behaviours through cash transfers targeting adolescent girls (Baird, 2012;Pettifor, 2016). Women are disproportionately affected by sexually transmitted infections (STIs) and HIV/AIDS is currently the leading cause of death worldwide for women aged 15-44 years. Young women aged 15-24 are three times more likely to be infected with HIV than their male counterparts and comprise 31% of all new HIV infections in Sub-Saharan Africa (UNAIDS, 2017). In addition, STIs not only affect women and their sexual partners, they are a special concern during pregnancy and pose important public health threats to unborn children since they are associated with adverse pregnancy outcomes that include spontaneous abortion, stillbirth, prematurity, low birth weight, and multiple sequelae in surviving neonates including HIV and STI (Adachi et al., 2018;Silver et al., 2014;Gomez et al., 2013;de Attayade et al., 2011). Hence reducing STIs and HIV in women will likely translate into large societal benefits that outweigh the costs of such interventions.
While almost 2 million persons become newly infected with HIV each year (UNAIDS, 2017), recent literature has developed an entire conceptual framework in order to better understand the structural drivers of the HIV epidemic among women (Gafos et al., 2020). There are a growing number of studies suggesting that transactional sexdefined as non-marital non-commercial sexual relationships motivated by the implicit assumption that sex will be exchanged for material support or other benefits -is associated with risk of HIV acquisition (Dunkle et al., 2004;Kilburn et al., 2018;Ranganathan et al., 2016). This increased risk can be explained by the partner age difference and the number of different partners of young women engaging in such behaviour (Ranganathan et al., 1999;Collinson et al., 2007). Further, when women are in a situation of monetary dependence with respect to a sexual partner, their bargaining power is reduced, which increases the risk of HIV infection (Ranganathan et al., 2017). Therefore, transactional and commercial sex, alongside with biological susceptibility, are key behavioural practices responsible for gender inequalities in HIV/AIDS (Stoebenau et al., 2016;UNAIDS, 2017). In a recent literature review, Cust et al. (2021) showed that economic shocks are likely to affect both the extensive margin (the number of people engaging in transactional and commercial sex), the intensive margin (the number of sex acts within a relationship) and the degree of riskiness of sexual intercourses. In fact, a number of economic studies have shown that women who engage in transactional and commercial sex may adopt risky sexual behaviours, like unprotected sex, in order to cope with negative income shocks (Burke et al., 2015;De Walque et al., 2014;Dupas & Robinson, 2012;Robinson & Yeh, 2011. This is explained by the fact that since men have a stronger preference for unprotected sex acts, there is a positive premium for unprotected sex acts (Arunachalam & Shah, 2013;Rao et al., 2003). As for women engaging in transactional sex, Ranganathan et al. (2017) highlighted that when they are relying on material and financial support of their partner, this power imbalance leads to a higher risk of HIV infection. In addition, women experiencing income shocks may be unable to afford treatment for STIs, making them more vulnerable to HIV since the presence of any STI increases both the risk of new infections among HIV-negative people and the risk of transmission from HIV-positive people (Galvin & Cohen, 2004). Economic shocks may also affect immune system directly through increased stress (Segerstrom & Miller, 2004), making them more vulnerable to STIs and HIV. Income volatility, as opposed to income level and poverty, has only been examined as a structural driver of HIV in recent years and only six published (Burke et al., 2015;Dinkelman et al., 2008;Dupas & Robinson, 2012;Robinson & Yeh, 2011Wilson, 2012) and two unpublished studies (De Walque et al., 2014;Tenikue & Tequame, 2018) have investigated the sexual response of women to negative economic shocks, and among these, three focused on behaviours of sex workers (Dupas & Robinson, 2012;Robinson & Yeh, 2011. Among these, the study of Burke et al. (2015) used data from 19 African countries and showed that rainfall shocks (droughts) increase HIV prevalence by 11%. The magnitude of this effect is meaningful, considering that droughts are common in Africa. 
Based on their model, authors estimate that drought-induced income shocks lead to a 17% increase in HIV prevalence over a 10-year period. If the size of the effect is taken at face value, it would mean that income volatility is one of the main drivers of HIV in Africa. However, despite the important policy implications of such results, there is still a weak understanding of the channels through which a drought affects HIV. In fact, while it is generally assumed that transactional sex is an important transmission channel, the authors could not test this given that information on transactional sex was only added in DHS survey after 2015. The present article seeks to make an important contribution to the literature. It contributes to a growing body of evidence that extreme weather events can increase HIV prevalence, by analysing the effects of drought on risky behaviours of men and women. Precisely, it explores whether a main transmission channel between drought and HIV is transactional sex, which has not yet been shown in the literature.
Theoretical framework
Following Burke et al. (2015), 1 we consider the relation between droughts and transactional sex and finally between droughts and HIV prevalence.
We first consider how this relationship operates for women. First, droughts can generate economic shocks for individuals whose primary income source is agriculture, by affecting agricultural outputs. Droughts may also generate economic shocks more widely through the channel of increasing food prices, but empirical evidence suggests that the effects of recent droughts in Malawi have been concentrated amongst individuals with agricultural incomes (Pauw et al., 2011;Loevinsohn, 2015;World Bank, 2016). We therefore follow Burke et al. (2015) in hypothesizing that the incomes of women working in agriculture are more sensitive to drought than women working outside agriculture. Second, we hypothesize that the supply curve for transactional sex shifts outwards in response to an economic shock because it allows women to smooth consumption by raising money quickly (Lo Piccalo et al., 2012). Finally, the risk of HIV is increasing due to potential increase in multiple concurrent sexual partners, disassortative sexual mixing and unprotected sex (Downs & De Vincenzi, 1996;Hertog, 2007;Mah & Halperin, 2010, Ranganathan et al., 1999. Women engaging in transactional sex are more likely to have multiple concurrent partners, some or all of whom provide them with material support (Okigbo et al., 2014;Ranganathan et al., 2017). The sexual partners of women engaging in transactional sex may also be more likely to be infected with HIV than average since they are older men, with greater material resources, or the clients of sex workers (see Kilburn et al., 2018). Power relations and the premium on condom-less sex make unprotected sex acts more likely within transactional relationships than regular relationships (MacPherson et al., 2012;Ranganathan et al., 2017).
We next consider how the relationship operates for men, in order to understand behaviours of the demand side of the market. It should be noted that men might also engage in transactional sex, albeit with a lower prevalence than women (Wamoyi et al., 1999). However, the data available in the Malawi DHS 2015-16 focuses only on men's consumption of transactional sex.
Again, we consider the sequential relation going from drought to economic shocks to transactional sex and finally to HIV prevalence. First, for men employed outside agriculture, the effect of drought on income will be smaller than the effect of drought for women employed in agriculture. It may also be that the effect of drought will be smaller for men than for women in the same type of employment, if men are better insured against shocks (Dercon & Krishnan, 2000;Gong et al., 2019). If this is the case, men working in agriculture could also increase their consumption of transactional sex in equilibrium. Second, a decrease in income will cause an inwards shift in the demand curve for transactional sex. The equilibrium outcome for consumption of transactional sex depends on the relative sizes of this inwards shift, and the outwards shift in the supply curve. Indeed, the increase in supply is likely to push prices down which in turn may maintain or raise the demand for transactional sex by men (even if their own income is reduced), this depending on the price elasticity of transactional sex. Finally, although transactional sex is primarily considered a risky behaviour for women, engaging in transactional sex may also increase men's risk of contracting HIV, given that they are more likely to have multiple concurrent partners or are less likely to use condoms in transactional relationships (Chopra et al., 2009;Maganja et al., 2007;Serwadda et al., 1992).
1 A formalised version of the theoretical framework based on the equations presented in Burke et al. (2015) can be found in the appendices.
The new equilibrium outcomes expected following a drought-related economic shock are as follows:
H1.
Women employed in agriculture increase their supply of transactional sex in areas affected by droughts.
H2.
Men employed outside agriculture increase their consumption of transactional sex in areas affected by droughts.
H3.
Rates of STI infections are higher amongst men and women in drought-affected areas.
The STI-related predictions on employment type are ambiguous. We expect women employed in agriculture and men employed outside agriculture to be more likely to be infected with a STI as a result of transactional sex, but these individuals will simultaneously expose their other present and future sexual partners who may be employed in both types of occupation.
Study setting
Malawi is a small country in Southern Africa, with a population of approximately 17.6 million people. In 2016, the HIV prevalence rate amongst adults aged 15-49 was 9.2% -slightly higher than the regional average of 7.0% (NSO, 2017). The practice of transactional sex has been well documented in Malawi; women may exchange sex for material support, employment, cash or to pay off debts (Loevinsohn, 2015;MacPherson et al., 2012;Swidler & Watkins, 2007). From 2014 to 2016, Malawi experienced major droughts due to unusually strong El Niño conditions. In 2015, Malawi experienced one of the worst flood in its history that was followed by a drought that started in October 2015 and ended in March 2016, coinciding with the timing of data collection of the DHS. The "state of national disaster" was declared in October 2015 and USD 149.36 millions were mobilized to provide financial assistance to affected districts.
As Table 1 shows, during the rainy seasons of November 2014-April 2015 and November 2015-April 2016, approximately 90% of the population experienced a drought of various intensities.
Individual data
DHS data collection took place between October 2015 and February 2016, in Malawi. DHS surveys administer interviews on a wide range of topics in population and health. In 2015, a question on transactional sex practices was introduced into the interviews in Malawi for the first time among a subsample of unmarried women aged 15 to 24. The coincidence of the timing of the drought and survey allows us to investigate the channel of transmission between drought and HIV, by comparing the transactional sex practices of individuals affected by drought with those who were not affected.
The 2015-16 DHS survey in Malawi collected data from interviews with a sample of 24,562 women and 7,478 men. To allow for construction of nationally representative data, the sample was stratified by the 28 districts of Malawi, and then by urban and rural areas, resulting in 56 strata. Within these, the survey clusters of households were randomly sampled based on population census enumeration areas, and then households within the sampled clusters were randomly selected for interview. All women aged 15-49 in the selected households were eligible for individual interviews. A random sample of one third of these households was also eligible for individual interviews of male residents aged 15-54; and collection of blood samples from men and women for voluntary HIV testing.
All women aged 15-24 who ever had sex, and were not married or living with a partner at the time of the survey, were asked the question "In the past twelve months have you had sex or been sexually involved with anyone because he gave you or told you he would give you gifts, cash, or anything else" (NSO, 2017). 1,863 women answered this question. All men aged 15-54 who ever had sex were asked if, in the last twelve months, they had "given any gifts or other goods in order to have sex or to become sexually involved with anyone", and "paid anyone in exchange for having sexual intercourse". The former question may be considered a conservative estimate of transactional sex, and we combined both outcomes in the empirical analysis.
Women and men aged 15-49 years who had worked in the past twelve months were asked to declare their main occupation over that period. 2 We use the answer provided to this question to determine whether each respondent works in or outside the agricultural sector. Data on HIV prevalence was collected by taking voluntary blood samples from all adults in the eligible third of households. In total, HIV testing was carried out on blood samples from 7,718 women and 6,593 men. By employing cluster-specific inverse-probability sampling weights, the estimated HIV prevalence rates are representative at the national level. Testing for HIV was carried out using the DHS anonymous linked protocol, which allows for the merger of HIV test results with the sociodemographic data collected in individual interviews after the removal of all information which could potentially identify an individual (NSO, 2017). Testing was carried out anonymously, and therefore individuals cannot be provided with their results, but all respondents, whether they volunteered to provide blood samples or not, were provided with educational materials and offered referrals for free voluntary counselling and testing. Fig. 1 summarizes the samples for which different information is available.
2 The occupation categories recorded are: professional/technical/managerial, clerical, sales and services, skilled manual, unskilled manual, domestic service, agriculture and other.
Notes to Table 1: Percentages are presented in this table. Estimates are weighted to be representative at the national level.
DHS surveys do not collect data on households' income. However, longitude and latitude information about the location of each cluster is recorded, making it possible to match responses to data on local weather conditions. Variation in weather is frequently used in economics literature as a proxy for variation in income at the local level, particularly in SSA where the majority of individuals depend on agriculture for their livelihood (Davis et al., 2010).
Weather data
We used data from the Global Precipitation Climate Centre monthly precipitation dataset to build the Standardised Precipitation Index (SPI) (Schneider et al., 2011), which is a measure of precipitation that expresses observed precipitation in terms of deviation from the long-term climatological average at a given location. It is therefore, by construction, orthogonal to confounding variables correlated with absolute rainfall levels. The data were available at a 0.5° resolution until 2016 and a 1.0° resolution from 2017 to 2018, and matched to the longitude and latitude of each DHS cluster. The SPI was obtained by fitting a gamma probability density function to long-term precipitation for a chosen time scale, which was then transformed to a normal distribution. The SPI was calculated over a 6-month time scale (November-April) during Malawi's annual rainy season, when main crop-growing activities take place (World Bank, 2016). This allows capturing precipitation of relevance to agricultural output. We use a reference period of almost 60 years when constructing the SPI (a minimum of 30 years is recommended by Guttman (1999)). The weather level data was constructed using the reference period 1951-2018, in order to capture recent climatic changes (Burke et al., 2015). Fig. 2 shows the distribution of the SPI across DHS clusters during the 2014-15 and 2015-16 seasons. Table 1 displays the duration in months of the moderate droughts during the 2015-2016 rainy season based on DHS information.
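As a rough illustration of the SPI construction just described, the sketch below fits a gamma distribution to historical November-April precipitation totals and maps each season's total to a standard-normal score. It is a simplified stand-in (it ignores, for example, the handling of zero-precipitation observations), and the function names, data, and parameters are hypothetical.

```python
# Simplified 6-month SPI: gamma fit to long-term Nov-Apr totals, then a
# probability-integral transform to the standard normal distribution.
import numpy as np
from scipy import stats

def spi_six_month(season_totals: np.ndarray) -> np.ndarray:
    """season_totals: one Nov-Apr precipitation total (mm) per year, 1951-2018."""
    shape, loc, scale = stats.gamma.fit(season_totals, floc=0)
    cdf = stats.gamma.cdf(season_totals, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # deviation from the long-term mean in SD units

# Synthetic example for one grid cell (illustrative only).
rng = np.random.default_rng(0)
totals = rng.gamma(shape=6.0, scale=120.0, size=68)
print(spi_six_month(totals)[-3:])  # SPI of the three most recent seasons
```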
Following McKee and Doesken (1995), we define a drought when precipitation falls to one standard deviation below the long-term average, and we consider it ends when it returns to a positive value. This enables us to compute the length of the drought in number of months. Every cluster where precipitation fell to this level during the six months period going from November to April is considered to have experienced a drought during the (agricultural) year. Through the properties of the normal distribution, an SPI of minus one is equivalent to an event that occurs with approximately 16% probability during the reference period. As such, our measure of drought is comparable with the definition used by other authors examining the effect of economic shocks on HIV and transactional sex of a crop-year rainfall realisation below the 15% quantile of the local rainfall distribution (Burke et al., 2015;Low et al., 2019). This threshold was selected as meaningful by Burke et al. (2015) because realizations below the 15th percentile appear to be the most harmful to maize yields which is the main staple food crop in Malawi (World Bank, 2016). In our robustness checks, we show that our results are unchanged when considering the effect of severe drought, which is defined as starting at -1.5 standard deviations from the mean (McKee & Doesken, 1995).
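The drought definition above (the SPI dipping to at least one standard deviation below the long-term mean, with the drought ending when the SPI turns positive again) can be turned into a month-counting rule. The helper below is a hedged sketch of that reading, with -1.5 as the severe-drought threshold; the example SPI values are invented.

```python
# Length of a drought, in months, within a Nov-Apr monthly SPI series.
def drought_months(monthly_spi, threshold=-1.0):
    """Count months from the first SPI <= threshold until the SPI turns positive."""
    months, in_drought = 0, False
    for spi in monthly_spi:
        if not in_drought and spi <= threshold:
            in_drought = True
        if in_drought:
            if spi > 0:
                break
            months += 1
    return months

season = [-0.4, -1.2, -1.6, -0.8, -0.3, -1.1]    # Nov..Apr, illustrative values
print(drought_months(season))                    # moderate drought of 5 months
print(drought_months(season, threshold=-1.5))    # severe drought of 4 months
```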
By construction, each cluster has an equal chance of experiencing drought in any given year. It is essential for the identification strategy to use a measure of a drought relative to local conditions, as opposed to an absolute measure. Using an absolute measure of drought would mean that clusters with lower or more variable rainfall would expect more shocks; these clusters could differ in other observable and unobservable ways that affect transactional sex and HIV. Using a relative measure also captures plausibly the rainfall realizations that constitute economic shocks. For example, farmers in areas with lower historical rainfall may have adapted to this by growing crops such as sorghum that yield less than maize, but are more resistant to drought.
In the context of climate change, there could be concerns that some parts of Malawi have recently begun to experience lower or more variable rainfall than others and that this trend is not reflected in the longterm mean. If recent climatic changes were correlated with transactional sex or HIV, this could introduce bias into the results. However, by virtue of its small size and limited geographic diversity, Malawi is unlikely to be unevenly affected by climate change. Fig. 3 shows that all regions of the country experienced a drought in the rainy seasons between 2011 and 2016. In addition, it shows that the duration of the drought also seems to vary over time, since we can see that in 2011 and 2013 the north of the country experienced a long drought while between 2014 and 2016, it was mostly the regions located in the South that experienced a longer drought. In addition, the Appendix, Table A1 shows that there is limited positive correlation between the recent droughts in Malawi, as well as frequent negative correlations. Together, this information builds confidence that drought in Malawi is a random event occurring in any part the country.
Effect of drought on transactional sex
In order to understand whether current drought affects current transactional sex practices (and to test hypotheses H1 and H2 from the theoretical framework section), we consider as the main variables of interest the drought ongoing during the survey period (i.e. occurring during the rainy season of November 2015 to April 2016) and the drought which overlapped with the twelve months prior to interview (i.e. occurring during the rainy season of November 2014 to April 2015). We estimate these relationships by regressing the transactional sex variable on current droughts as indicated in the following equation:
Y_ij = β0 + β1 S_j^t + β2 a_i + β3 r_j + ω + ε_1ij    (1)
where Y is a binary outcome equal to 1 if individual i in cluster j engaged in transactional sex in the past twelve months. S_j^t is a categorical variable equal to 1 if cluster j experienced a moderate drought lasting between one and five months, equal to 2 if the drought lasted six months, and equal to 0 if there was no drought experienced in year t. We also explore the timing of the relationships between the outcomes of interest and drought, by running two alternative regressions where S_j^t is a dummy variable taking value one if there was a six-months drought during the 2015-2016 and 2014-2015 rainy seasons respectively. a_i refers to the individual's age and r_j indicates whether the individual lives in a rural cluster or not. ω is a vector of interview-month fixed effects, and ε_1ij is a mean-zero error term. We estimate these equations separately for men and women working in and outside agriculture, in order to test the predictions that increases in transactional sex are concentrated in women in agriculture and men outside agriculture.
We estimate linear probability models allowing for correlation of the error term across individuals in the same weather grid (cluster). We use inverse-probability sampling weights to make the results representative at the population level.
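A hedged sketch of this estimation strategy is given below: a weighted linear probability model with interview-month dummies and standard errors clustered at the weather-grid (cluster) level, run on the subsample of women working in agriculture. The file and column names (transactional_sex, drought_cat, dhs_weight, cluster_id, and so on) are hypothetical, not the actual DHS variable names.

```python
# Weighted linear probability model with cluster-robust standard errors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("dhs_women_2015_16.csv")             # hypothetical analysis file
women_agri = df[df["works_in_agriculture"] == 1]      # subsample of interest

formula = "transactional_sex ~ C(drought_cat) + age + rural + C(interview_month)"
fit = smf.wls(formula, data=women_agri, weights=women_agri["dhs_weight"]).fit(
    cov_type="cluster", cov_kwds={"groups": women_agri["cluster_id"]}
)
print(fit.summary().tables[1])
```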
Effect of drought on STI and HIV status
In order to confirm whether current drought increases current risk of HIV through the channel of transactional sex, we also explore the effect of these droughts on whether an individual experienced an STI in the last twelve months using the same identification strategy as the one used for transactional sex. Indeed, treatable STIs are a proxy for current exposure to risk of contracting HIV and also increase biological susceptibility to infection (Galvin & Cohen, 2004).
Furthermore, the link between current behaviours and current HIV infection may be difficult to identify for the following reasons: (i) the probability of being infected with an STI or HIV is small for each unprotected sex act, (ii) there is a delay between risky behaviours occurring and their manifesting in statistically identifiable increases in HIV infections at the population level 3 and (iii) an HIV infection could have been acquired at any time in the past.
We therefore follow Burke et al. (2015) in exploring the impact of droughts up to ten years in the past on present health status, and thus regress HIV status on the number of six-month droughts in the past two, five and ten years, as in the following equation, where Y is a binary outcome equal to 1 if individual i in cluster j is HIV positive and S_j^n is a continuous variable indicating the number of six-month droughts that cluster j experienced in the last n years. As previously, we estimate these equations separately for men and women, and then for men and women by occupation type, in order to test the predictions presented in section 2.
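As with the previous specification, the display equation itself did not survive extraction. Assuming the controls mirror those of the first equation (an assumption on our part), a plausible reconstruction in our own notation is:

Y_{ij} = \gamma_0 + \gamma_1 S_j^n + \gamma_2 a_i + \gamma_3 r_j + \omega_m + \varepsilon_{2ij}, \qquad n \in \{2, 5, 10\}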
Again, we estimate linear probability models allowing for correlation of the error term across individuals in the same cluster. We use inverse-probability sampling weights to make the results representative at the population level. Table 2 presents descriptive statistics of our sample on sociodemographic characteristics, drought outcomes, sexual risk and infection, and alternative coping mechanisms, broken down by gender. On average, women with information on transactional sex were 19 years old and none of them was married. They lived in households of six persons and 15% were the household head. Seventy-seven percent lived in a rural area and 31% relied on agriculture. Eighty-six percent were living in an area affected by a moderate drought lasting six months in 2015-16. Given the large proportion of participants who were affected by a drought, we create a categorical variable capturing the presence and the length of the drought. In this specific sample of women, almost 8% were HIV positive, 2% had had an STI over the last year and 5% reported engaging in transactional sex. Table 2 also reports descriptive statistics for i) the sample of all women, regardless of whether they answered the transactional sex question, and ii) the sample of men. For the latter, we can note that almost 11% of men declared that they had given cash, gifts or other goods in exchange for sex in the last twelve months.
Effect of drought on transactional sex
Results presented in Table 3 show that women relying on agriculture in an area affected by a drought of 1-5 months were almost 5 percentage points more likely to engage in transactional sex, and those affected by a six-month drought were 6 percentage points more likely to engage in a transactional relationship, in comparison to women living in areas not affected by a drought. Given the share of individuals declaring such activity in this survey, suffering from a current six-month drought doubles the likelihood of engaging in transactional sex. No effect is detected for women whose main livelihood is not agriculture, whether they themselves or their household head are employed in another sector. We can note that similar results are found when considering a severe drought instead of a moderate drought (cf. Panel D). Women working outside agriculture are less likely to engage in transactional sex if exposed to a drought between November 2014 and April 2015 (cf. Panel C). We explore whether this result was caused by a switch in occupation in the places affected by a drought in the past, and we find no evidence of such a mechanism (see Table A4 (Panel B) in the Appendices). While our results might be biased in the presence of migration, for instance if women living in drought-stricken areas who have the opportunity to migrate in order to find alternative sources of income following a drought do migrate, we do not find any evidence of such a selection problem, as the share of women born in their current place of residence and working in the agriculture sector does not decrease with past extreme climate events (cf. Table A4 in the Appendices). (Table notes: estimates are weighted to be representative at the national level; standard errors are clustered; a moderate drought is defined by the SPI reaching more than one standard deviation below the long-term mean in the six-month period of interest.)
The results presented in Table 4 show that a drought increases transactional sex relationships for all men, but the effect is slightly stronger among men working outside agriculture. Precisely, we find that men living in areas affected by a six-month drought are 5 percentage points more likely to engage in a transactional relationship with a woman than men living in areas unaffected by the drought. It is interesting to note, however, that there is no effect of shorter droughts (1-5 months) on men's engagement in a transactional sex relationship. Similar results are obtained when considering severe droughts (cf. Panel D in Table 4) or when comparing individuals living in areas where a six-month moderate drought occurred (cf. Panel B in Table 4).
Exploring characteristics of women engaging in transactional sex
After showing the impact of droughts on transactional sex, we now turn to the exploration of the characteristics of women who decide to engage in transactional sex. To do so we consider two different subsamples: women living in areas (i) where there has been at least one month of drought and (ii) where there has been at least six months of drought. We run a logistic regression including a set of individual (level of education, agriculture occupation, current employment status) and household (wealth index, number of children below five, household size, electricity access, type of floor) characteristics (Gichane et al., 2020; Lépine & Strobl, 2013) and control for correlation of the error term across individuals living in the same cluster.
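A sketch of this analysis (illustrative only; the subsample, variable names and abridged covariate list below are hypothetical) could use statsmodels' logit with cluster-robust standard errors:

import statsmodels.formula.api as smf

# df_drought: subsample of women living in clusters with at least one month
# (or at least six months) of drought, assumed already constructed.
formula = (
    "transactional_sex ~ educ_years + works_agri + employed + wealth_index"
    " + n_children_u5 + hh_size + electricity + improved_floor"
)
logit_res = smf.logit(formula, data=df_drought).fit(
    cov_type="cluster", cov_kwds={"groups": df_drought["cluster_id"]}
)
print(logit_res.summary())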
From Table 5 we can note that the level of education decreases the likelihood of engaging in transactional sex. This result may reflect different labour opportunities and is in line with previous evidence (NSO and ICF, 2017) indicating that women and men with no primary or secondary education most often work in agriculture. Fig. 4 displays the probability of engaging in transactional sex depending on the number of years of education of women; the results were estimated using the regressions presented in Table 5. Results presented in Table 6 indicate that currently experiencing a drought significantly increases the likelihood of suffering from an STI symptom in the last twelve months, especially for women who work in the agriculture sector (Panel B). More precisely, the likelihood of suffering from STIs increases by 10 and 12 percentage points (i.e. a 43% and 44% increase) among women who work, and whose household head works, in the agriculture sector respectively. Such an impact of current droughts on STIs is observed among men whose household head works outside the agriculture sector (cf. Table 7). Table 8 presents the effects of past droughts on HIV prevalence, to take into account the cumulative risk of contracting HIV following repeated risky sexual behaviours. To do so we estimate the effect of droughts in the last two, five and ten years on HIV. It is interesting to note that an additional six-month drought in the past five years increases the HIV prevalence rate by 1.5 and 1.3 percentage points among women and men respectively. Such figures correspond to an increase of 14% and 18% of HIV prevalence respectively. This increase in HIV prevalence seems to be concentrated among individuals working outside the agricultural sector.
Discussion
The results presented in this paper confirm the role of transactional sex as an important transmission channel from a drought to HIV. We find that a drought doubles the likelihood of engaging in transactional sex amongst unmarried young women who work in agriculture but has no effect on the sexual behaviour of women working outside agriculture. We were able to identify that this increase in supply met an increased demand for transactional sex coming from men working both inside and outside the agricultural sector, although the increase was larger for men outside agriculture, probably because they were less economically affected by the drought than men relying on agriculture.
In addition, we showed that the increase in risky sexual practices translated into a greater chance of reporting STI symptoms in the past twelve months, especially for women in agriculture and men outside agriculture.
Our results have important policy implications. They highlight the additional vulnerability faced by women relying on agriculture, in particular women with low education and limited labour opportunities, and confirm the results of Chishimba and Wilson (2021), who stress the importance of access to education, assets and non-agricultural opportunities for mitigating extreme weather events and smoothing consumption. Not only do these women have harder living conditions and worse access to health care, both important drivers of HIV (Gafos et al., 2020), but they are also more affected by natural disasters. The results from this paper suggest that insuring women relying on agriculture against crop failure could be an effective way to prevent them from engaging in risky sexual practices.
This study suffers, however, from a number of limitations. Firstly, transactional sex was self-reported by male and female participants. It is thus possible that both groups under-declared it, given the sensitive nature of this behaviour. Secondly, while the transactional sex question was introduced for the first time in the 2015-2016 Malawi DHS, enabling us to study whether natural disasters may increase risky sexual behaviours, this question was not asked of all individuals interviewed but only of sexually active unmarried young women. We acknowledge that such selection of respondents may lead, on the one hand, to conservative estimates, as men may also engage in transactional sex as suppliers (Quaife et al., 2022), and on the other hand to an overestimation of the effect of drought on transactional sex given the characteristics of the women selected for the transactional sex section (Gichane et al., 2020). In addition, this design substantially reduces the size of our sample and prevents us from investigating whether married or older women also engage in such practices. Married women were asked instead about multiple partnerships. Only around 1% of married women acknowledged having several partners, pointing to the social desirability bias inherent in this type of behaviour. Furthermore, the data available prevent us from disentangling the "quantity" (number of acts) and "quality" (unprotected sex) of transactional sex, a distinction which is likely to have different impacts on the HIV epidemic. Thirdly, while Austin et al. (2021) highlighted the mechanism that goes from droughts to HIV burden through the food insecurity faced by women, it would have been interesting to investigate the heterogeneity of risky behaviours adopted by women depending on the type of crops they grow (cash crops versus self-consumed crops). Unfortunately, the DHS data at hand do not allow us to explore this dimension. Finally, the data available prevent us from investigating alternative or occasional activities individuals may have, in particular during the lean season.
Understanding the potential of shock-coping strategies to help women avoid engaging in risky sexual practices during droughts is a global priority. In recent years, sub-Saharan Africa has been experiencing droughts and other weather shocks with increasing frequency (Niang et al., 2014), and 74% of households in Africa report experiencing a weather shock in the last five years (Niles & Salerno, 2018). In addition to weather shocks, at least 60% of African households report large and sudden losses in income every year (Nikoloski et al., 2018), and most of the poorest are not protected by social safety programmes (World Bank, 2016).
The Malawi government implemented cash transfer programmes in reaction to the adverse climatic events faced by its population in 2014-2015 and 2015-2016 and is considering implementing a weather index insurance scheme in order to help individuals face the growing number of adverse climate events (Makaudze, 2018). DHS data indicate that, unfortunately, this policy had almost no impact, as it hardly reached 3% of the population (only 27 out of 807 interviewed households in our sample received the cash transfer in 2014-2015). While different shock-coping strategies (cash transfers, food transfers and insurance schemes) could be mobilized by governments, adverse weather events such as droughts affect whole communities, reducing the ability of insurers to offer insurance against these risks and limiting the ability of governments to protect people from these covariant shocks, given the high costs required to provide cash transfers to affected communities (in the 2015-2016 drought that hit Malawi, 90% of individuals suffered from the natural disaster).
In such a context, and given the concomitance of shocks (Lazzaroni & Wagner, 2016), individual precautionary saving (Jones & Gong, 2021) or health insurance coverage have more potential to protect people from shocks that can be insured and that are frequent, hence reducing the negative impact of other concomitant covariant economic shocks. Additional evidence is urgently required regarding their effectiveness in preventing HIV.
Conclusion
We used DHS data collected during the 2015-2016 harsh drought that affected several areas of Malawi to study the effect of an unanticipated economic shock on the sexual behaviours of young women and men. We find that amongst women employed in agriculture, the drought doubles the likelihood of engaging in transactional sex. Furthermore, each drought increases HIV prevalence by 15%. While transactional sex is concentrated among women working in the agriculture sector, the consequences of unprotected sex also spread to women not working in this sector, most likely via the behaviour of the men in their households. Overall, the results suggest that economic shocks are important drivers of risky sexual behaviours among women in Africa. Further research is needed to investigate the most adequate formal shock-coping strategies to be implemented in order to limit the negative consequences of natural disasters on HIV acquisition and transmission.
Author statement
Carole Treibich: Methodology, Formal analysis, Writing-Reviewing and Editing.
Financial disclosure statement
None.
Declaration of competing interest
None.
Data availability
Data will be made available on request.
Acknowledgements
We would like to thank participants of the iHEA Conference 2019, of the EuHEA Conference 2022 and of AMSE seminar for useful comments. This work benefited from the funding of UKRI Future Leaders Fellowship (grant number : MR/S031790/1). All authors declare that they have no conflicts of interest.
Formalised theoretical framework
Following Burke et al. (2015), we consider a relationship between drought-related economic shocks, transactional sex, and HIV prevalence in which HIV is local HIV prevalence, S is a measure of drought relative to local conditions, p is a measure of sexual risk and z is a measure of income.
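The equation that originally followed this sentence is missing from the extracted text. Given the three partial derivatives discussed below, a plausible form of the chain-rule decomposition, written in our own notation, is:

\frac{dHIV}{dS} = \frac{\partial HIV}{\partial p} \cdot \frac{\partial p}{\partial z} \cdot \frac{\partial z}{\partial S}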
We first consider how this relationship operates for women. ∂HIV/∂p represents how HIV infection rates change in response to sexual risk. The risk of HIV is increasing in factors including multiple concurrent sexual partners, disassortative sexual mixing and unprotected sex (Downs & De Vincenzi, 1996; Hertog, 2007; Mah & Halperin, 2010). Women engaging in transactional sex are more likely to have multiple concurrent partners, some or all of whom provide them with material support (Okigbo et al., 2014). The sexual partners of women engaging in transactional sex may also be more likely to be infected with HIV than average (older men, with greater material resources, or the clients of sex workers) (Kilburn et al., 2018). Power relations and the premium on condom-less sex make unprotected sex acts more likely within transactional relationships (MacPherson et al., 2012).
∂p/∂z represents how sexual risk changes in response to changes in income. We hypothesize that the supply curve for transactional sex shifts outwards in response to an economic shock because it allows women to smooth consumption by raising money quickly (LoPiccalo et al., 2012). ∂z/∂S represents how income changes in response to drought. Droughts can generate economic shocks for individuals whose primary income source is agriculture, by affecting production. Droughts may also generate economic shocks more widely through the channel of increasing food prices, but empirical evidence suggests that the effects of recent droughts in Malawi have been concentrated amongst individuals with agricultural incomes (World Bank, 2016; Loevinsohn, 2015; Pauw et al., 2011). We therefore follow Burke et al. (2015) in hypothesizing that the incomes of women working in agriculture are more sensitive to drought than those of women working outside agriculture.
We next consider how the relationship operates for men, in order to understand the demand side of the market. It should be noted that men might also engage in transactional sex on the supply side, albeit with a lower prevalence than women (Wamoyi et al., 2019). However, the data available in the Malawi DHS 2015-16 focus only on men's consumption of transactional sex. ∂HIV/∂p: Although transactional sex is primarily considered a risky behaviour for women, engaging in transactional sex may also increase men's risk of contracting HIV, particularly if they have multiple concurrent partners or are less likely to use condoms in transactional relationships (Chopra et al., 2009; Maganja et al., 2007; Serwadda et al., 1992). ∂p/∂z: A decrease in income will cause an inwards shift in the demand curve for transactional sex. The equilibrium outcome for consumption of transactional sex depends on the relative sizes of this inwards shift and the outwards shift in the supply curve. Indeed, the increase in supply is likely to push prices down, which in turn may maintain or raise the demand for transactional sex by men (even if their own income is reduced), depending on the price elasticity of transactional sex. ∂z/∂S: For men employed outside agriculture, the effect of drought on income will be smaller than the effect of drought for women employed in agriculture. It may also be that the effect of drought will be smaller for men than for women in the same type of employment, if men are better insured against shocks (Dercon & Krishnan, 2000). If this is the case, men working in agriculture could also increase their consumption of transactional sex in equilibrium.
Table A1
Pairwise correlations between droughts over the years. Notes: Coefficients of independent regressions at the cluster level are reported in this Table. Standard errors in parentheses. *p < 0.10, **p < 0.05, ***p < 0.01. | 2022-09-05T15:31:51.078Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "2683826056f1a7ee47a0d9a7a4f8201fd976ded5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ssmph.2022.101221",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fbb808169167cd7a5fa6f94e0be1de6efc475f8a",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9502160 | pes2o/s2orc | v3-fos-license | Sequential Changes of Plasma C-Reactive Protein, Erythrocyte Sedimentation Rate and White Blood Cell Count in Spine Surgery : Comparison between Lumbar Open Discectomy and Posterior Lumbar Interbody Fusion
Objective C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) are often used to evaluate postoperative infection. Abnormal values may be detected after surgery even in the absence of infection, because of muscle injury or transfusion, which hampers prompt perioperative management. The purpose of this study was to evaluate and compare perioperative CRP, ESR, and white blood cell (WBC) counts after spine surgery in cases proven to be free of infection. Methods Twenty patients undergoing lumbar open discectomy (LOD) and 20 patients undergoing posterior lumbar interbody fusion (PLIF) were enrolled in this study. Preoperative and postoperative prophylactic antibiotics were administered routinely for 7 days. Blood samples were obtained one day before surgery and on postoperative day (POD) 1, POD3, and POD7. Using repeated measures ANOVA, changes in effect measures over time and between groups over time were assessed. All data analysis was conducted using SAS v.9.1. Results Changes in CRP within treatment groups over time and between treatment groups over time were both statistically significant, F(3,120)=5.05, p=0.003 and F(1,39)=7.46, p=0.01, respectively. The most dramatic changes were decreases in the LOD group on POD3 and POD7. Changes in ESR within treatment groups over time and between treatment groups over time were also found to be statistically significant, F(3,120)=6.67, p=0.0003 and F(1,39)=3.99, p=0.01, respectively. Changes in WBC values were also statistically significant within groups over time, F(3,120)=40.52, p<0.001; however, no significant difference was found in between-group WBC levels over time, F(1,39)=0.02, p=0.89. Conclusion We found that a dramatic decrease of CRP was detected on POD3 and POD7 in the non-infected LOD group, and a dramatic increase of ESR on POD3 and POD7 in the non-infected PLIF group. We also suggest that CRP would be a more effective and sensitive parameter, especially in LOD rather than PLIF, for early detection of infectious complications. Awareness of the typical pattern of CRP, ESR, and WBC may help to evaluate the early postoperative course.
INTRODUCTION
The reported incidence of acute postoperative spinal infection ranges from less than 1% to 13%. The type and length of the procedure influence the infection rate: the infection rate of lumbar open discectomy without fusion is known to be 1-2%, of microdiscectomy 5%, and of instrumented posterior lumbar fusion 2-6%. CRP is an acute-phase protein produced to protect a living body from invading bacteria, but it is so sensitive a marker that it can be difficult to distinguish early infection from normal postoperative inflammation.
The purpose of this study was to assess the normal range and pattern of change in plasma CRP, ESR, and WBC according to the operative method and extent of spinal surgery. Inclusion criteria were age 18 years or older, a surgical indication for elective surgery (either single-level discectomy or PLIF), and a hospital length of stay of 7 days or more after the surgery. We excluded patients who had previous spinal surgery, transfusion during the surgery, anemia, a history of immunodeficiency, autoimmune disease such as rheumatoid arthritis, or any history of infection within 6 months of the surgery.
MATERIALS AND METHODS
All patients had blood drawn for CRP, ESR, and WBC count 1 day before surgery and on postoperative day (POD) 1, POD3, and POD7. The CRP level in serum was determined with the latex agglutination method. The normal CRP value in the authors' hospital is less than 0.3 mg/dL. ESR is a manual test in which the level of blood collected in a Westergren sedimentation rate tube is read by a technologist after a specific time period. The normal ESR range is less than 20 mm/h. The WBC counts were obtained with an automatic cell counter. Preoperative and postoperative prophylactic antibiotics were administered intravenously for 7 days.
Statistical analysis was performed using repeated measures ANOVA, and changes in effect measures over time and between groups over time were assessed. All data analysis was conducted using SAS v.9.1.
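The authors analysed the data with repeated measures ANOVA in SAS v.9.1; the original SAS code is not available. As a rough open-source illustration (not the authors' analysis), the within-patient correlation across the four time points can be handled with a linear mixed model in Python's statsmodels; the file and column names below are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per patient per time point, with hypothetical
# columns crp, time (pre, POD1, POD3, POD7), group (LOD or PLIF), patient_id.
df = pd.read_csv("perioperative_markers.csv")

# Random intercept per patient approximates the repeated-measures structure;
# the time-by-group interaction tests whether trajectories differ between groups.
mixed = smf.mixedlm("crp ~ C(time) * C(group)", data=df, groups=df["patient_id"])
mixed_res = mixed.fit()
print(mixed_res.summary())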
Changes of CRP, ESR, and WBC in the decompression and posterior lumbar interbody fusion with stand-alone cage group (Group B)
The mean preoperative CRP value was 0.67 mg/dL (0.5-1.6 mg/dL), which increased to 1.55 mg/dL (0.5-5.1 mg/dL) on the first postoperative day. On the third and seventh postoperative days, it decreased to 1.52 mg/dL (0.1-7.6 mg/dL) and 1.03 mg/dL (0.1-4.2 mg/dL), respectively. Compared with the preoperative CRP value, the changes were a 131.4% increase on the first postoperative day, a 126.8% increase on the third postoperative day and a 53.7% increase on the seventh postoperative day.
Overall differences in CRP, ESR, and WBC between the two groups
Mean ESR levels in both groups A and B decreased on the 1st postoperative day and then rose clearly above their preoperative levels on the 3rd postoperative day, showing a steady increase until the 7th postoperative day. However, the mean ESR changes in Group B showed a steeper incline than those in Group A, and the mean ESR value on the third postoperative day increased more rapidly in Group B than in Group A.
In both groups A and B, the mean WBC count rose rapidly on the 1st postoperative day and then decreased, normalizing after the 3rd postoperative day. The mean WBC in group A normalized faster than that in group B by the 7th postoperative day. Figs. 2 and 3 show the scatter patterns of CRP, ESR, and WBC.
Statistical analysis
Changes in CRP, within treatment groups over time and between treatment groups over time were both statistically significant F(3,120)=5.05, p=0.003 and F(1,39)=7.46, p=0.01, respectively. Most dramatic changes were decreases in the LOD group on POD3 and POD7.
Changes in ESR, within treatment groups over time and between treatment groups over time were also found to be statistically significant, F(3,120)=6.67, p=0.0003 and F(1,39)=3.99, p= 0.01, respectively. These differences appear to be in part due to dramatic increases in ESR in the PLIF group on POD3 and POD7. Changes in WBC values also were statistically significant within groups over time, F(3,120)=40.52, p<0.001, however, no significant difference was found in between groups WBC levels over time, F(1,39)=0.02, p=0.89.
DISCUSSION
CRP, ESR, and WBC have been widely used for estimating the status of infection. Several studies have reported the usefulness of these predictors and mentioned that the combination of normal CRP, ESR, and WBC levels reliably predicts the absence of infection after elective surgery 5,10,12,18) .
C-reactive protein was discovered and named for its reactivity to the C polysaccharide in the cell wall of S. pneumoniae; it is produced by the liver in response to inflammation, infection, malignancy, and tissue damage with relatively high sensitivity, response speed, and range compared to other acute-phase reactants 1) . The normal CRP concentration in healthy human serum is usually lower than 1 mg/dL, increasing slightly with aging. Higher levels are found in late pregnancy, mild inflammation, and viral infections (1-4 mg/dL); active inflammation and bacterial infection (4-20 mg/dL); and severe bacterial infections and burns (>20 mg/dL) 3) . CRP levels normally rise within 2 to 6 hours after surgery and then go down by the third day after surgery 2) .
The ESR has been the measure of the quantity of RBC that precipitate in a tube in a defined time and is based upon serum protein concentrations and RBC interactions with these proteins. Inflammation causes an increase in the ESR. The ESR increases in pregnancy or rheumatoid arthritis, and decreases in polycythemia, sickle cell anemia, hereditary spherocytosis, and congestive heart failure 2) . Multiple factors such as patient's age, gender, RBC morphology, hemoglobin concentration, and serum levels of immunoglobulin influence the ESR. The sample must be handled appropriately and processed within a few hours to assure test accuracy. While the ESR is not a diagnostic test, it can be used to monitor disease activity and treatment response and signal that inflammatory or infectious stress is present. For example, in rheumatoid arthritis, the ESR correlates well with disease activity; however normalization of ESR often lags behind successful treatment that causes resolution of the inflammatory state.
Immediately after surgery, lymphocytes decrease due to apoptosis but neutrophils increase causing overall rise in WBC count. Therefore, the change in the peripheral WBC count, especially neutrophil count, over time serves as a useful marker of postoperative progress 17) .
There have been several studies comparing CRP, ESR and WBC with regard to early detection of infection. CRP has been shown to be more reliable than ESR for monitoring the effectiveness of treatment for acute hematogenous osteomyelitis 19) , septic arthritis in children 15) , infected total joint arthroplasty 6) , and spine infection 7) . CRP changes more quickly than ESR and therefore may be a better reflection of current inflammation. Khan et al. 7) also mentioned that serial CRP level measurement may be helpful in monitoring the treatment of postoperative wound infections after spinal surgery. However, ESR does not appear to be as sensitive a marker of infection resolution.
To explain the prompt response of CRP, they assumed that CRP, unlike the ESR, is a fairly stable serum protein whose measurement is not time-sensitive and is not affected by other serum components. The magnitude of inflammation directly relates to the concentration of CRP 20) .
Numerous studies have described the normal kinetics of CRP as a rapid rise with a peak value on POD2 or POD3, followed by an initial sharp fall and then a gradual decrease with normalization by POD14 to POD21 [6][7][8]15) . Larsson et al. 10) , in a prospective study of 89 patients who underwent uncomplicated spinal surgery, showed that CRP rises within the first two postoperative days and then sharply declines. ESR shows a more irregular peak at POD5 and then gradually returns to normal after several weeks 5,7,10,18) . Comparing this with our analysis, in the LOD group CRP normalized by the third postoperative day, whereas in the PLIF group CRP tended to remain high with a steady decline. However, these studies were not designed around a single procedure. In the current study, we compared the changes of CRP, ESR, and WBC according to the procedure (LOD or PLIF). We could find little literature addressing changes of inflammatory markers according to the procedure; for that reason, we consider our current study valuable on this point.
The two different surgical procedures have varying inflammatory peak responses. In spinal surgery, maximum postoperative CRP levels depend on the length and type of surgery performed 9,10,14) . In general, the peak response is affected by the amount of iatrogenic tissue injury at surgery. Our study shows that the peak CRP level is lower in the LOD group than in the PLIF group.
Few studies have measured and reported the normal ranges of CRP, ESR, and WBC in uninfected cases according to the procedure. Understanding the determinants of the peak value is important because values substantially higher than expected may indicate an abnormal inflammatory response due to a complication. Deviations from the normal kinetics of ESR and CRP may indicate an infectious complication. For spine surgery, reported peak values vary across studies. Kock-Jensen et al. 8) reported a peak CRP of 28.5 mg/L after lumbar disc surgery. There have been several reports on the values of CRP, ESR, and WBC according to infection. Chung et al. 2) reported that the postoperative ESR in uncomplicated cases remained elevated 1 week after surgery, while the CRP peaked at day 2 and normalized by day 7 after spine surgery. Mun et al. 13) reported that, if there is a persistent elevation or a second rise around a week after surgery, a wound infection should be considered. Takahashi et al. 16) found that the post-operative CRP reached its peak on day 2 and remained abnormally elevated even after six weeks. Mok et al. 12) tried to derive the normal kinetics of CRP, opening the way for recognition of deviations that may indicate a complication. The kinetics seem to be conserved regardless of operation, magnitude, or region. A second rise or failure to decrease as expected has a high sensitivity, indicating that an infection will cause a positive result.
CRP and ESR reflect the degree of inflammatory and surgical injury; CRP has been shown to have higher sensitivity and specificity and to be more reliable than ESR for detecting postoperative infection 16) . In addition, some studies have asserted that CRP is a predictable and responsive serum parameter for postoperative monitoring of inflammatory responses in patients undergoing spine surgery 9,11) .
Knowledge of the postoperative peak value, either by routine measurement on POD2 or 3 or estimation of the peak value based on the analysis above, allows calculation of expected values at subsequent time points because the kinetics are defined. If a patient presents with findings concerning for infection days or even weeks after surgery, CRP can be measured with a simple blood draw at that time and compared with the value expected if the patient adhered to normal kinetics 12) .
In the current study, we excluded patients who had a transfusion during surgery, in order to minimize the influence of blood transfusion on CRP, WBC, and ESR changes. Enright et al. 4) , in a prospective study of CRP concentrations pre- and post-transfusion, analyzed serum CRP levels before and after transfusion using a fluorescence polarization immunoassay. They found that small rises in CRP concentrations occurred following 55.6% of transfusions; however, there was no statistically significant difference between pre- and post-transfusion CRP values, and these increases also failed to reach clinical significance. They added that a post-transfusion CRP of greater than 100 mg/L occurred on only one occasion and was more likely to be due to underlying infection 4) . However, ESR and WBC may be affected by transfusion. Because we excluded spinal surgery cases that required transfusion, our results may be more reliable than those of other reports of CRP, ESR, and WBC after spinal surgery.
A critical shortcoming is that we relied on average and median values, and the statistical analysis was limited by the small number of prospective study cases. However, we tried to enumerate all the measured values of this study in the table.
We presume that our prospective series may be a valuable comparison between surgical techniques free of the influence of transfusion.
We assume that normalization of CRP by the 3rd postoperative day after LOD provides reassurance that the surgery is not infected, whereas checking and worrying about non-normalization of CRP on the 7th postoperative day after PLIF with a stand-alone cage might be unnecessary.
For ESR, the only finding was that both procedures showed a steady increase until the 7th postoperative day, differing only in the steepness of the rise, which might mean that ESR is not helpful for monitoring infection after surgery.
For WBC, we presume that normalization of WBC by the 3rd postoperative day after LOD may indicate a non-infected state, and that normalization of WBC by the 7th postoperative day after PLIF with a stand-alone cage probably provides assurance of non-infection.
CONCLUSION
We tried to characterize the typical pattern of CRP, ESR, and WBC in non-infected postsurgical patients, since knowing this pattern may help to evaluate the early postoperative course. We found that, in the non-infected LOD group, a dramatic decrease of CRP was detected on POD3 and POD7, and in the non-infected PLIF group a dramatic increase of ESR was detected on POD3 and POD7. The ESR levels of both groups showed typical kinetics, not decreasing until POD7, and WBC levels remained nearly equal up to POD7. However, the CRP level decreased sharply at POD3 in the LOD group compared with the PLIF group. We therefore suggest that CRP would be a more effective and sensitive parameter than ESR or WBC for early detection of infectious complications, especially in LOD rather than PLIF.
"year": 2014,
"sha1": "0ca24ac17169a67f5335ec07cd10641adf3ce136",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3340/jkns.2014.56.3.218",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ca24ac17169a67f5335ec07cd10641adf3ce136",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
104154372 | pes2o/s2orc | v3-fos-license | Efficient and straightforward click synthesis of structurally related dendritic triazoles
A simple, rapid and efficient copper-catalyzed 1,3-dipolar cycloaddition reaction is described for the synthesis of a novel family of twelve structurally related triazolic dendrimers. The products were the result of the click reaction of three cores and four different azides in tetrahydrofuran applying homogeneous copper catalysis. The reaction intermediates and products were obtained in very good to excellent yields using straightforward and simple work-up procedures. This new family of compounds contains electroactive moieties such as carbazole and triphenylamine, which may turn them into excellent candidates for the development of optoelectronic organic materials.
Introduction
"The act of simplifying" is probably the idea which is the essence of the "Click chemistry" paradigmatic concept, introduced by Sharpless and co-workers more than a decade ago. 1 The main criterion that identied this click philosophy is the application of modular and orthogonal processes that allows to obtain in a simple and efficient manner the desired compound. In order to be consider a click-type reaction, the procedure should be wide in scope, high yielding, easy to purify, stereo-specic, simple to perform and can be done in benign and/or easily removable solvents. 2 This type of chemistry rapidly found a widespread application in several research areas such as dendrimers, drug discovery, biochemistry and chemical modications of surfaces and nanostructures. 3 Probably, materials and polymers sciences are areas in which this concept has found a broad implementation holding the promise to develop unprecedented functional materials for organic optoelectronic applications.
Among the click reactions that are implemented in material synthesis and functionalization, the copper-catalyzed azide-alkyne [3 + 2] cycloaddition (CuAAC) became the prevalent click procedure, often being referred to as the "click reaction".
In 2002, Meldal's and Sharpless' groups independently reported a copper(I)-catalyzed 1,3-dipolar cycloaddition of azides and terminal alkynes (CuAAC) that was conducted at low temperatures with high rates, efficiency and regiospecificity. 4 This significant event not only improved the understanding of the 1,3-dipolar cycloaddition studied by Huisgen in the 1960's, 5 but also generated an explosive growth of click chemistry as a valuable concept in materials science. The replacement of the original method by one that uses simple and robust chemistry, avoiding harsh conditions, rapidly attracted the interest of the community of scientists belonging to the field of macromolecular and material science. After the first implementation of the copper(I)-catalyzed reaction in dendrimer synthesis in 2004, 6 CuAAC has been widely used to design and prepare polymeric structures with macromolecular architectures which proved to be very difficult or almost impossible to obtain before. Dendrimer synthesis often requires excess reactants, high-cost catalysts and tiresome chromatographic separations with the generation of considerable waste. The possibility of high efficiency and easy purification without excess reactants makes CuAAC very attractive for the synthesis of dendrimers, and after 2010 several review articles described the application of CuAAC in dendrimer synthesis or functionalization. 7
Herein, we describe the design and synthesis of structurally related families of triazole derived dendrimers (Fig. 1). These families of compounds were efficiently synthesized in high yields, with no need for chromatographic separations. The new set of macromolecules with electroactive moieties such as triphenylamine and carbazole are promising targets for optoelectronic applications.
General considerations
Melting points were taken on a Leitz Wetzlar Microscope Heating Stage, Model 350 apparatus and are uncorrected. 1H and 13C NMR spectra were recorded on a Bruker Avance-300 spectrometer with Me4Si as the internal standard and dimethylsulfoxide-d6 or chloroform-d as solvents. Abbreviations: s = singlet, d = doublet, t = triplet, and m = multiplet expected but not resolved. Mass spectra of dendrons were recorded on a Shimadzu QP2010 Plus instrument, ion source temperature = 300 °C, and detector voltage = 70 kV. High resolution mass spectra of the dendrimers were analyzed by electrospray ionization (ESI) using a Bruker micrOTOF-Q II instrument, positive ion polarity, or by ultraviolet matrix-assisted laser desorption-ionization mass spectrometry (UV-MALDI MS) and by ultraviolet laser desorption-ionization mass spectrometry (UV-LDI MS) performed on the Bruker Ultraflex Daltonics TOF/TOF mass spectrometer. Mass spectra were acquired in linear positive and negative ion modes. Stock solutions of samples were prepared in chloroform. External mass calibration was made using β-cyclodextrin (MW 1134) with nHo as matrix in positive and negative ion mode. Sample solutions were spotted on a MTP 384 target plate polished steel from Bruker Daltonics (Leipzig, Germany). For UV-MALDI MS the matrix solution was prepared by dissolving GA (gentisic acid, 1 mg mL−1) in water, and dry droplet sample preparation was used according to Nonami et al. 13 loading successively 0.5 μL of matrix solution, analyte solution and matrix solution after drying each layer at normal atmosphere and room temperature. For UV-LDI MS experiments two portions of analyte solution (0.5 μL × 2) were loaded on the probe and dried successively (two dry layers). Desorption/ionization was obtained by using the frequency-tripled Nd:YAG laser (355 nm). The laser power was adjusted to obtain a high signal-to-noise ratio (S/N) while ensuring minimal fragmentation of the parent ions, and each mass spectrum was generated by averaging 100 laser pulses per spot. Spectra were obtained and analyzed with the programs FlexControl and FlexAnalysis, respectively. Reactions were monitored by TLC on 0.25 mm E. Merck Silica Gel Plates (60F254), using UV light (254 nm) and phosphomolybdic acid as developing agent. Flash column chromatographies using E. Merck silica gel 60H were performed by gradient elution with mixtures of n-hexane and increasing volumes of dichloromethane or ethyl acetate. Reactions were run under an argon atmosphere with freshly distilled anhydrous solvents, unless otherwise noted.
Yields refer to chromatographically and spectroscopically homogeneous materials, unless otherwise stated.
The general procedure for the synthesis of the triazolic dendrimers is described below; the synthetic protocols for the preparation of cores 4, 5 and 6 and dendrons 7, 8, 9 and 10 are fully described in the ESI. †
General procedure for the synthesis of the dendrimers 1a-d, 2a-d and 3a-d
Core (0.12 mmol) was dissolved in anhydrous tetrahydrofuran (4 mM) under an Ar atmosphere. Azide (3.6 equiv.), Cu(PPh3)3Br (21 mol%) and N,N-di-isopropylethylamine (3 equiv.) were added and the reaction was refluxed. Completion of the reaction was monitored by TLC until consumption of the core. The solvent was evaporated and the resulting crude product suspended in ethanol. The solid was filtered, washed with ethanol and suspended in methyl-tert-butyl ether. The solid was filtered again and dried, affording the pure dendrimer as a coloured solid. The numbering of each structure is arbitrary and is described in the ESI. †
Results and discussion
Three families of structurally related triazolic dendrimers were prepared by the combination of three different cores and four different aromatic azides; the structures of these compounds are depicted in Fig. 2.
With the exception of core 4 and dendron 7, the rest of the cores and dendrons contain electroactive moieties such as triphenylamine (cores 5 and 6, dendron 8) and carbazole (dendrons 9 and 10). The cores were synthesized from the corresponding aryl halide through a Sonogashira coupling reaction with trimethylsilylacetylene (TMSA) and further desilylation to give the free alkynes. The synthetic routes to the target cores are described in Scheme 1. The synthesis of core 4 and intermediate 15 has been previously reported by our laboratory. 14 There are two common steps in each synthesis: Sonogashira coupling with TMSA and basic desilylation. Core 4 was obtained after these two steps in excellent yields. The synthesis of core 6 was accomplished from commercial triphenylamine 12 by formylation under Vilsmeier-Haack conditions 15 in two steps, furnishing the tri-aldehyde derivative 13 in moderate yield. A Wadsworth-Horner-Emmons olefination reaction used to couple 13 with phosphonate 14 afforded tri-iodide 15 in good yield. 16 A further Sonogashira coupling reaction with TMSA followed by the desilylation procedure furnished core 6 in very good yield. For the synthesis of 5, triphenylamine 12 was treated with a mixture of KI : KIO3 in glacial acetic acid to give the tri-iodide derivative 16 in moderate yield. 17 After the Sonogashira and desilylation reaction sequence, core 5 was obtained in good yield. 18 Cores 4, 5 and 6 present three terminal alkyne functionalities arranged in a "star-shape" architecture, which will ultimately define the shape of the resulting dendrimers. The synthesis of azides 7-10 is described in Scheme 2. All azide derivatives were obtained in very good yields through a diazotization reaction to transform the amino group into the azide one. It is worth mentioning that the azides proved to be highly photosensitive; therefore all the azides were used as crude products without purification and handled protected from light. Products 7, 8, 9 and 10 were characterized by 1H and 13C NMR and FT-IR spectroscopy. The last technique was very useful to show the distinctive signal of the azide group between 2160-2120 cm−1. 19 Phenyl azide 7 was prepared according to a literature protocol 20 via diazotization of freshly distilled aniline 18 and further treatment with sodium azide, affording 7 in excellent yield. Triphenyl azide 8 was prepared by nitration of triphenylamine 12; 21 the mono-nitro derivative was reduced using hydrazine monohydrate and Pd/C to furnish amine 20, 22 and subsequent diazotization and treatment with sodium azide afforded the expected product 8. 21 The preparation of azides 9 and 10 was achieved from commercial carbazole 21a and 3,6-di-tert-butyl carbazole 21b, which was previously synthesized in our laboratory. 23 Ullmann coupling of 21a and 21b 24 with 4-nitroiodobenzene afforded 22a and 22b. The nitro derivatives were reduced with hydrazine, 22 yielding amines 23a and 23b, which were submitted to diazotization and further treatment with sodium azide to give the corresponding azides 9 and 10. 25 It is noteworthy that all the reaction steps were accomplished with excellent yields and only one column chromatography purification was needed, for the nitro derivatives 22a and 22b; all the other compounds were solids and purified by crystallization.
Once all the components needed for the click reaction were synthesized, we looked for a general coupling condition that could allow us to attach all the cores to the corresponding azides in a convergent manner, regardless of their chemical structures, to obtain triazolic dendrimers with a star-shaped architecture. In a typical procedure for a CuAAC reaction, the Cu(I) cation used as catalyst is generated in situ from a copper(II) salt and a reducing agent (most often sodium ascorbate). These systems are efficient but limited to substrates with high water tolerance. 26 Sometimes, in order to apply milder reaction conditions or increase the reaction applicability, the addition of ligands to protect Cu(I) centers is a common strategy. In our case, the use of sodium ascorbate and CuSO4 as the source of Cu(I), in different proportions, afforded mixtures of products and partial recovery of starting materials. We assumed that the incomplete reactions were a consequence of only partial solubility of the starting material in the aqueous solvent mixture. Hence, we looked for homogeneous conditions for the dipolar cycloadditions and we found that the use of Cu(PPh3)3Br and DiPEA as organic base in tetrahydrofuran allowed us to obtain the expected products in very good yields. Cu(PPh3)3Br is a crystalline solid, stable to air and ambient moisture, and it is prepared by the addition of triphenylphosphine to a hot methanolic solution of CuBr2. 27 This procedure avoids the need to reflux moisture-sensitive CuBr with triphenylphosphine. Furthermore, this complex is soluble in several organic solvents and can be stored under air for prolonged times without any visible decomposition. 28 In a general procedure, 0.10-0.12 mmol of core was dissolved in THF (3 to 4 mM), and azide (3.6 equivalents), 0.06 equivalents of Cu(I) catalyst and 3 equivalents of DiPEA were used to afford the expected products shown in Fig. 3.
The twelve triazolic dendrimers (Fig. 3) can be classified into three families according to the identity of the core. For dendrimers 1a-d the common core was benzene, for 2a-d triphenylamine and for 3a-d conjugated triphenylamine. All the products were obtained applying the same reaction conditions in high yields. The coloured solid dendrimers were purified by suspension of the solids in two different solvents or a mixture of both: ethanol and methyl-tert-butyl ether. This treatment allowed the removal of excess azide, copper salts and any other by-product formed in the reaction, and avoided the use of a column chromatography step to isolate the pure triazoles. All these compounds are very polar and therefore soluble in polar aprotic solvents such as dimethylsulfoxide and tetrahydrofuran, and slightly soluble in dichloromethane. The latter is better at dissolving the tert-butylated derivatives 1d, 2d and 3d. As far as we know, there are no literature references related to the preparation of structurally related families of aromatic triazoles. A few triazolic dendritic structures have been published, and they showed that this type of compound has a high potential in new material design. 29,30 Additionally, the purification procedure was simple and similar in each case, with no need for column chromatography, and afforded pure samples.
Conclusions
In this work we were able to synthesize twelve structurally related dendritic aromatic triazoles using four different aromatic azides as dendrons and varying the chemical identity of the core. A standard reaction procedure was developed and applied in all cases with very good results. The homogeneous CuAAC reaction afforded solid compounds, and their purification only required suspension in different washing solvents, avoiding any column chromatography technique. All the dendrimers present a starburst molecular shape and include triarylamines as electroactive moieties, which makes them excellent candidates for the development of novel organic materials. The structural changes introduced in every family of compounds were selected in order to establish the relationship between chemical structure and optoelectronic properties. The optoelectronic studies are being carried out and will be published in due course.
Conflicts of interest
The authors declare no competing financial interest. | 2019-04-09T13:06:24.563Z | 2017-10-06T00:00:00.000 | {
"year": 2017,
"sha1": "d273669e5c7c32c337fb51cb38e79580b584b009",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2017/ra/c7ra09558a",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8fee0d14f6304098a867508952e66c31a93ca7ff",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
11104766 | pes2o/s2orc | v3-fos-license | KIR2DL5: An Orphan Inhibitory Receptor Displaying Complex Patterns of Polymorphism and Expression
A recently developed anti-KIR2DL5 (CD158f) antibody has demonstrated KIR2DL5 expression on the surface of NK and T lymphocytes, making it the last functional KIR identified in the human genome. KIR2DL5 belongs to an ancestral lineage of KIR with Ig-like domains of the D0-D2 type, of which KIR2DL4, an HLA-G receptor, is the only other human member. Despite KIR2DL4 and KIR2DL5 being encoded by genes with similar domain usage, several KIR2DL5 functions resemble more closely those of KIR recognizing classical HLA class I molecules – surface-expressed KIR2DL5 inhibits NK cells through the SHP-2 phosphatase and displays a clonal distribution on NK and T lymphocytes. No activating homolog of KIR2DL5 has been described in any species. The genetics of KIR2DL5 is complicated by duplication of its gene in an ancestor of modern humans living ∼1.7 million years ago. Both KIR2DL5 paralogs have undergone allelic diversification; the centromeric gene is most often represented by alleles whose expression is silenced epigenetically through DNA methylation, thus providing a natural system to investigate the regulation of KIR transcription. The role of KIR2DL5 in immunity is not completely understood, in spite of different attempts to define its ligand. Here we revisit the most relevant characteristics of KIR2DL5, an NK-cell receptor possessing a unique combination of genetic, structural, and functional features.
INTRODUCTION
KIR2DL5 (CD158f) is the most recently described human KIR expressed on NK and T lymphocytes, for which no ligands have yet been identified. It belongs to an ancestral lineage of KIR with Ig-like domains of the D0-D2 type, whose only other member is KIR2DL4, an HLA-G receptor (Rajagopalan, 2010; Rajagopalan and Long, 2012). Although the KIR2DL5 and KIR2DL4 genes encode proteins with a similar domain organization, distinct structural features make several KIR2DL5 functions resemble more closely those of KIR recognizing classical HLA class I molecules (Table 1).
The 9.3-kbp KIR2DL5 gene was identified in 2000 by amplification of genomic DNA with oligonucleotide primers recognizing conserved KIR regions (Vilches et al., 2000b) and analysis of the first sequenced KIR haplotype (Wilson et al., 2000). Exon-walking and RACE strategies isolated the complete KIR2DL5 coding region, an open reading frame of 1128 bp encompassing eight exons organized similarly to those of KIR2DL4 - they both lack the fourth exon, coding for the D1 Ig-like domain in all other KIR, and encode cytoplasmic tails 20-39 amino acids longer than those of other human inhibitory KIR.
This structure is conserved in KIR2DL5 orthologs identified in common and pigmy chimpanzees, gorillas, and orangutans (Khakoo et al., 2000; Rajalingam et al., 2004; Guethlein et al., 2012). Genomic and complementary DNA clones isolated from other Old World primates resemble human KIR2DL5 in part of its sequence or in the domain organization, but true functional orthologs appear to be restricted to hominoids (Hershberger et al., 2001, 2005; Sambrook et al., 2005; Bimber et al., 2008; Abi-Rached et al., 2010; Palacios et al., 2011). No activating homolog of KIR2DL5 has been described in any species, human KIR2DS5 being homologous to HLA-C-specific KIR.
GENETIC ORGANIZATION: TWO KIR2DL5 GENES SUBJECTED TO EXTENSIVE COPY NUMBER VARIATION
KIR2DL5 is highly polymorphic, like other KIR, and it epitomizes the copy number variation that is a hallmark of the KIR complex. Non-mendelian inheritance and different relative locations of the two most common variants seen in Caucasoids demonstrated that KIR2DL5 alleles belong to two series encoded by different loci (Vilches et al., 2000a). These loci, designated officially KIR2DL5A and KIR2DL5B, are now often referred to with the suffixes T and C, for their location in the telomeric and the centromeric intervals of the KIR complex, respectively (Marsh et al., 2003; Pyo et al., 2010; Parham et al., 2012).
LINKAGE TO KIR2DS3S5 IN KIR-B HAPLOTYPES
Both the centromeric and the telomeric KIR2DL5 loci are followed by the paralogs of a duplicated KIR2DS3S5 gene, each of which encodes different alleles of the activating KIR 2DS3 and 2DS5, now considered allotypes of each other (Ordóñez et al., 2008; Hou et al., 2010; Pyo et al., 2010). Thus, the centromeric and the telomeric parts of many KIR-B haplotypes are marked by different KIR2DL5-KIR2DS3S5 clusters (Figure 1). The common centromeric sequence KIR2DL5B*002 is associated with KIR2DS3*001, whereas other KIR2DL5B alleles (see below) tend to associate in Black populations with several KIR2DS5 alleles (Hou et al., 2010). On the telomeric side, the predominant KIR2DL5A alleles, *001 and *005, are linked with KIR2DS5*002 and KIR2DS3*002, respectively. At its 5' end, KIR2DL5B is normally flanked by KIR2DL2, whereas KIR2DL5A is preceded by KIR3DS1 (Vilches et al., 2000a; Pyo et al., 2010).
DUPLICATION OF KIR2DL5 IS SPECIFIC TO HUMANS
The KIR2DL5 duplication has not been seen in other primates, and is possibly specific to humans. Pyo et al. (2010) estimated that an ancestral KIR2DL5-KIR2DS3S5 group duplicated ca. 1.7 million years ago, and proposed several models for subsequent diversification through point mutation and recombination. The duplication, seen in all races, is now fixed in our species. However, not every human carries two (or one) KIR2DL5-KIR2DS3S5 clusters, because each is subjected to presence/absence variation, with all A haplotypes and one centromeric B haplotype lacking these genes (Figure 1).
EXPANDED AND CONTRACTED KIR HAPLOTYPES GENERATED BY RECOMBINATION IN THE KIR2DL5-KIR2DS3S5 CLUSTER
On the other hand, the presence of two highly homologous sequence segments in two different parts of the KIR complex has facilitated subsequent asymmetric (i.e., non-allelic) homologous recombination resulting in contracted and expanded haplotypes (one of them with a third KIR2DL5 locus), often carrying fusion genes or alleles, as represented in Figure 1 (Gómez-Lozano et al., 2003; Martin et al., 2003; Ordóñez et al., 2008, 2011; Hou et al., 2012). In contracted haplotypes lacking the central framework KIR genes, assignment of KIR2DL5 and KIR2DS3S5 to the centromeric or the telomeric sides is somewhat arbitrary.
KIR2DL5 ALLELIC POLYMORPHISM
THE KIR2DL5 CODING REGION
KIR2DL5 is represented in the Immuno Polymorphism Database (v2.4.0) by 15 KIR2DL5A and 25 KIR2DL5B alleles (Robinson et al., 2010). Nineteen polymorphic sites have been found within the 1125-bp coding region, of which 11 are non-synonymous. Twelve nucleotide substitutions occurring in exons 3 and 5 create seven amino acid replacements in the extracellular Ig-like domains ( Table 2), which may reflect balancing selection having favored polymorphisms that could modulate avidity or specificity in the interaction of KIR2DL5 with unknown ligands. Du et al. (2008) pointed out, however, that many polymorphisms fall out of predicted ligand-interacting loops of the Ig-like domains. Of note, a single polymorphism in exon 1 distinguishes all KIR2DL5A from all KIR2DL5B alleles, whilst many substitutions are shared by alleles of both loci ( Table 2). An extensive exchange of genetic material between the centromeric and the telomeric KIR2DL5 loci has taken place during human evolution, as eloquently illustrated by two allele pairs (one from each locus) and a four-allele group (two from each locus) encoding identical mature polypeptides and differing only in their signal peptides. Among 65 additional polymorphisms occurring in KIR2DL5 introns (not shown), none alters its splicing sites.
POLYMORPHISM IN THE KIR2DL5 PROXIMAL PROMOTER REGION
The regulatory regions upstream of the KIR2DL5 genes are even more polymorphic - the first three known KIR2DL5 alleles are distinguished by 20-32 nucleotide substitutions in the 1.2-Kbp region immediately 5' of their start codon (1.6-2.5% variation). A neighbor-joining phylogenetic tree based on the nucleotide sequences of this region sorts KIR2DL5 alleles into three well-differentiated lineages. One of them includes all and only KIR2DL5A alleles; a second lineage comprises multiple KIR2DL5B alleles, of which 2DL5B*0020101 is the prototype; and the third cluster is formed by KIR2DL5B*003 and *00602 (Du et al., 2008). We will refer herein to these clusters as promoters of types I, II, and III. The origin of this divergence, of profound functional importance (alleles controlled by the type II promoter are not transcribed), has not been explained.
DISTRIBUTION OF KIR2DL5 ALLELES
KIR2DL5 is present in all human populations at frequencies ranging from 26 to 86%, but the distributions of the two paralogs and their allotypes are uneven. Whereas KIR2DL5A and KIR2DL5B predominate in Mongoloid and Black populations, respectively, they have similar frequencies in Caucasoids. Alleles KIR2DL5A*001, B*002, and A*005 are widely distributed, accompanied by B*006 in Blacks, who retain the highest KIR2DL5 diversity, and constitute the only human group in which KIR2DL5 alleles controlled by the third type of promoter are not rare (Vilches et al., 2000a; Gómez-Lozano et al., 2007; Du et al., 2008; Middleton et al., 2008; Mulrooney et al., 2008; Hou et al., 2010; González-Galarza et al., 2011; and our own unpublished results).
KIR2DL5 AND DISEASE
Data on the possible implication of KIR2DL5 copy number variation and polymorphism in susceptibility to disease are scarce. Complex polymorphism and strong linkage disequilibrium with neighboring KIR genes complicate evaluating the individual role of KIR2DL5 as a risk or a protective factor. A search of the PubMed database with the term "KIR2DL5" in June 2012 retrieved 16 citations describing significant deviations of the gene frequency in different diseases and clinical situations (not shown). Among them, only an association between ankylosing spondylitis and the presence of KIR2DL5 in the genome of Asian patients has been replicated (Díaz-Peña et al., 2008; Jiao et al., 2008, 2010).
KIR2DL5 GENOTYPING
KIR2DL5 polymorphism has been explored using PCR with sequence-specific primers (SSP) or oligonucleotide-probe hybridization, methods that reliably identify common alleles (Gómez-Lozano et al., 2007; González et al., 2008). Sequence-based typing (SBT) and mass spectrometry methods that enable studying the entire KIR2DL5 sequence have led to the identification of multiple new alleles (Houtchens et al., 2007; Du et al., 2008; Mulrooney et al., 2008; Hou et al., 2010). However, the existence of two KIR2DL5 loci poses extra difficulties for genotyping: firstly, because a person having the two loci on both chromosomes may have up to four different KIR2DL5 sequences; secondly, because the alleles of each locus share many single-nucleotide polymorphisms (SNPs).
Knowing the phase of KIR2DL5 SNPs is essential for locus/allele assignment, but this is hindered by the hundreds or thousands of base-pairs separating many individual polymorphisms (e.g., the only locus-specific SNP in exon 1 is ca. 3 Kbp apart from those in exon 5). The published methods can make tentative assignments of reasonable reliability on samples derived from populations in which the KIR2DL5 allele distribution has been previously investigated in depth, but none of them can assign unambiguously all possible KIR2DL5 genotypes. Separation of KIR2DL5 alleles by locus-specific long-range PCR, followed by probe hybridization or enzymatic sequencing, and long reads of individual DNA molecules by second-generation sequencing are promising strategies for accurate KIR2DL5 genotyping, which remains currently a challenge.
KIR2DL5 EXPRESSION
GENE TRANSCRIPTION
The fact that highly similar KIR2DL5 coding sequences are controlled by three structurally divergent forms of promoter has profound functional consequences, constituting a valuable natural experiment that provides major insight into the complex regulation of KIR transcription. Of the KIR2DL5 alleles whose transcription has been investigated, those controlled by type I or type III promoters feature variegated patterns of expression; whilst mRNA of alleles controlled by type II promoters is undetectable (Vilches et al., 2000a;Gómez-Lozano et al., 2007). No single exception to this rule has ever been described; furthermore, the KIR3DP1 pseudogene, also controlled by a type II promoter, is transcriptionally silent too, with a key exception: as the empirical rule predicted, KIR3DP1 * 004, which gained a type I promoter through recombination with KIR2DL5A, is an expressed allele (Vilches et al., 2000b;Gómez-Lozano et al., 2005).
Consistent with the epigenetic regulation of KIR genes (Santourlidis et al., 2002; Chan et al., 2003), lack of transcription of the silent KIR2DL5B*002 allele correlates with a hypermethylated status of CpG islands in its promoter. Furthermore, pharmacological DNA demethylation of cultured NK cells suffices for restoring KIR2DL5B*002 transcription, demonstrating that only an epigenetic mechanism prevents its expression. In agreement with this are studies of transiently transfected promoters controlling a reporter gene, an in vitro situation in which epigenetic regulation is not relevant. In this setting, the promoters of naturally silent KIR2DL5 alleles tend to show similar or higher activities than those of functional KIR alleles (Mulrooney et al., 2008).
Among the sequence patterns that distinguish the three types of KIR2DL5 promoter, only two linked SNPs, at nucleotides 97 and 84 upstream of the start codon, correlate completely with the expression pattern: GA is seen in transcribed alleles, and AG in silent ones (Table 3). Nucleotide −97G lies within a TGTGGT motif that provides a core binding site for the RUNX family of transcription factors (Vilches et al., 2000a). RUNX3 is recruited from nuclear extracts of NK cells by probes derived from KIR2DL5 alleles having an intact motif, but not by those carrying the −97G > A mutation. In support of an essential role for RUNX in KIR expression are the conservation of its binding motif in all human KIR with clonal transcription (Trompeter et al., 2005; van Bergen et al., 2005; Presnell et al., 2006), and the demonstration that two redundant RUNX binding sites, highly conserved in primates, are possibly essential for expression of KIR2DL4, a gene that is transcribed ubiquitously in NK cells (Presnell et al., 2012).
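The −97/−84 rule stated above is a simple decision rule; it is restated below as an illustrative snippet that classifies a promoter sequence by the two diagnostic positions and checks for an intact TGTGGT RUNX core site. The function and any sequence passed to it are hypothetical examples, not data from this study.

```python
import re

# Illustrative restatement of the -97/-84 rule: G at -97 with A at -84 ("GA") marks
# transcribed KIR2DL5 alleles, whereas A at -97 with G at -84 ("AG") marks silent ones.
# `promoter_seq` is the upstream sequence ending immediately before the ATG start codon.

def classify_kir2dl5_promoter(promoter_seq):
    nt_minus_97 = promoter_seq[-97]  # base 97 nucleotides upstream of the start codon
    nt_minus_84 = promoter_seq[-84]  # base 84 nucleotides upstream of the start codon
    pattern = nt_minus_97 + nt_minus_84
    status = {"GA": "predicted transcribed", "AG": "predicted silent"}.get(pattern, "unclassified")
    runx_core_present = re.search("TGTGGT", promoter_seq) is not None
    return {"-97/-84": pattern, "expression": status, "RUNX core (TGTGGT) present": runx_core_present}
```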
KIR transcription is controlled not only by a proximal promoter, but also by the complex interaction of additional regulatory elements (Cichocki et al., 2011). In brief, a distal, non-tissuespecific promoter element located ∼1.1 Kbp upstream of the KIR start codon has been suggested to induce histone modifications that facilitate subsequent function of the proximal promoter. The latter is actually bidirectional -reverse transcripts derived from it have been proposed to repress KIR expression and favor epigenetic silencing, whilst predominance of forward transcription would result in KIR expression. Finally, an additional reverse promoter element in intron 2 appears to function in early NK-cell progenitors. It has been suggested that the RUNX role might be to down-regulate the antisense promoter activity during NK-cell ontogeny, thus favoring a local open chromatin conformation at the bound KIR gene, and its subsequent expression in the mature cell (Davies et al., 2007;Cichocki et al., 2011). Consistent with this hypothesis is that the only reverse KIR transcripts detected in CD56 bright NK cells (possible precursors of KIR + CD56 dim cells) are those derived from genes with promoters lacking the RUNX binding site -KIR2DL5B * 002 and KIR3DP1 (Davies et al., 2007).
Other locus- and allele-specific polymorphisms of these regulatory elements influence KIR transcription and may also help us understand the mechanisms controlling KIR2DL5 expression. For instance, a Yin Yang-1 (YY1) binding site conserved in many proximal KIR promoters is mutated in KIR 2DL1, 2DS1/S3/S5, and all KIR2DL5 alleles, which correlates with enhanced reverse transcription (Davies et al., 2007; Li et al., 2008). This phenomenon may induce low forward activity of the KIR2DL5 promoter, which might be associated with the receptor being generally expressed at low levels on the surface of only small proportions of NK cells. Likewise, disruption of the Sp1 site in the promoter of the expressed allele KIR2DL5B*003 (−27C > T, Table 3) decreases its forward activity in vitro and has been proposed to reduce its expression on NK cells, which needs experimental confirmation.
Of possible interest, none of the transcriptionally silent KIR2DL5B alleles bear structural abnormalities in their reading frames (in contrast with other human KIR, no null KIR2DL5 alleles have yet been identified; Vilches et al., 2000a;Robinson et al., 2010). The fact that KIR2DL5B generally retains an intact structure could mean that inactivation of its expression is evolutionarily recent (the mutated RUNX site is not seen in other hominoids, personal communication of Libby Guethlein, Stanford University); or that the gene still serves an unknown function.
CELL SURFACE EXPRESSION
Generation of a specific monoclonal antibody (clone UP-R1) enabled us to characterize KIR2DL5 surface expression. Like most other KIR (and contrasting with the non-clonal expression of KIR2DL4), KIR2DL5 features a variegated pattern on the surface of CD56dim NK cells and on T lymphocytes from peripheral blood, in agreement with the clonal distribution seen by reverse transcription (RT) PCR in mRNAs isolated from NK- and T-cell clones (Vilches et al., 2000b; Estefanía et al., 2007). The proportion of NK cells expressing KIR2DL5 tends to be lower than 10% in most healthy individuals. That proportion is even lower in T lymphocytes, the vast majority of which belong to the CD8 subset. The receptor density on the surface, as assessed by the median fluorescence intensity (MFI) value in flow cytometry with mAb UP-R1, is also lower than for several other KIR in resting lymphocytes, but it increases, to a lesser extent, upon expansion in the presence of IL-2 and lymphoblastoid cell lines (our own observation). These features might be due to weak promoters controlling the transcription of functional KIR2DL5 alleles. Interestingly, higher numbers of KIR2DL5+ cells (20% of total NK lymphocytes) have been reported in a TAP-deficient woman. Furthermore, the phenotype of this patient also differed from that of most TAP-deficient individuals in that her resting NK cells retained cytotoxic capacity against allogeneic targets without pre-activation (Zimmer et al., 2009). The exact mechanisms determining this behavior remain to be ascertained.
Analysis of bulk and cloned NK cells by flow cytometry and RT-PCR reveals no coordinated expression of KIR2DL5 with other KIR, but rather an apparently random distribution (Vilches et al., 2000b; Estefanía et al., 2007). Importantly, a minority of NK cells expresses KIR2DL5 but no other inhibitory KIR, nor the inhibitory lectin-like receptor NKG2A. The existence of this subpopulation is consistent with a capacity of KIR2DL5 to license NK cells, but this has not been demonstrated functionally. Also lacking are studies on possible patterns of co-expression of KIR2DL5 and LILRB1, the third lineage of inhibitory MHC class I receptors expressed by human NK cells.
Allelic polymorphism is essential for understanding the different patterns of KIR2DL5 expression (Table 2). Transcriptionally silent alleles are, obviously, undetectable on the cell surface by definition. Furthermore, only allele KIR2DL5A*001 has been formally demonstrated to be expressed on the cell surface. In contrast, NK cells transcribing allele KIR2DL5A*005 are not stained by mAb UP-R1 in flow cytometry. Whether this is due to a lack of surface expression, to the UP-R1 epitope being altered in KIR2DL5A*005 by its D2-domain polymorphisms, or to a combination of both factors, has not yet been elucidated. Surface expression and recognition by UP-R1 of other transcribed KIR2DL5 alleles have, to the best of our knowledge, never been evaluated. Amongst other transcriptionally active KIR2DL5 alleles, A*012 and B*00602 code for mature polypeptides identical to A*001, therefore they are predictably surface-expressed and detected by UP-R1; whereas expression and UP-R1 recognition of KIR2DL5B*003, A*014, and A*15 (each bearing one amino acid replacement in the Ig-like domains in comparison with A*001) need to be tested empirically (Table 2).
KIR2DL5 FUNCTION
KIR2DL5 INHIBITS NK CELLS
KIR2DL5 is predominantly expressed on the cell surface as a glycosylated monomer of ∼60 kDa . Its cytoplasmic tail contains one canonical (VxYxxL) immunoreceptor tyrosine-based inhibitory motif (ITIM) separated by 24 amino acid residues from an atypical ITIM sequence (TxYxxL) similar to the immunoreceptor tyrosine-based switch motifs (ITSM) seen in 2B4, SLAM, and other receptors (Vilches et al., 2000b;Yusa et al., 2004). The latter motif, not seen in other human KIR, does not confer upon KIR2DL5 the capacity to recruit and signal through the SLAM-associated protein (SAP), but it is conserved in KIR2DL5 orthologs of other hominoids (Rajalingam et al., 2004;Yusa et al., 2004).
Since the KIR2DL5 ligand is unknown, its actual inhibitory character in physiological conditions has not been explored. Crosslinkage of naturally expressed KIR2DL5 inhibits NK-cell cytotoxicity against mAb-coated P815 target cells to an extent comparable to that seen with the "classical" KIR 3DL1 . This result is in agreement with that obtained previously using NK92 cells transduced with a chimera containing a KIR3DL1 ectodomain fused to the KIR2DL5 cytoplasmic tail; such chimera, however, displayed a lower capacity to inhibit NK92-target conjugation than full-length KIR3DL1 (Yusa et al., 2004). Based on the results obtained with a mutated KIR3DL1/2DL5 chimera, Yusa et al. (2004) proposed that the canonical KIR2DL5 ITIM, but not its ITSM-like motif, is essential for its inhibitory capacity in transduced NK92 cells.
Experiments performed independently on transduced NK92 cells and on NK cells expressing endogenous KIR2DL5 demonstrated that the phosphorylated receptor recruits the Src homology region 2-containing protein tyrosine phosphatase 2 (SHP-2) preferentially over SHP-1 in comparison with other KIR (Yusa et al., 2004;Estefanía et al., 2007). Furthermore, the inhibitory effect of the KIR2DL5 tail in transduced NK92 cells was prevented by a dominant-negative (DN) SHP-2, but only to a lesser extent by DN SHP-1 (Yusa et al., 2004). The importance of KIR2DL5 using predominantly a SHP-2-dependent pathway for its function has not been explored in depth.
KIR2DL5, AN ORPHAN RECEPTOR
Demonstration that KIR2DL5 is a surface-expressed glycoprotein capable of inhibiting cytotoxic lymphocytes suggested that this molecule participated in NK-cell mediated defense according to the missing-self model. Such a possibility was reinforced by identification of NK cells which express KIR2DL5 and lack all other detectable inhibitory KIR and NKG2A, and it implies existence of a cellular ligand, possibly expressed in physiological conditions. Enhanced KIR2DL5 expression and retention of NK-cell cytotoxicity in a TAP-deficient patient (Zimmer et al., 2009) suggest that she possibly expressed a ligand capable of licensing KIR2DL5 + NK cells.
As a first approach to investigate expression of a KIR2DL5 ligand, we made a fusion protein containing the KIR2DL5 ectodomains and the Fc of human IgG1. The fusion protein, along with positive and negative controls (KIR2DL1-, KIR2DL2-, and non-fused Fc constructs kindly donated by Dr. Eric Long), was produced in human embryonic kidney (HEK)-293T cells, and used in indirect flow cytometry experiments on multiple cell lines grown in vitro. In these experiments, we observed a dull staining of, essentially, every human cell line of hematopoietic origin. Such staining seemed independent of the cells HLA allotypes; furthermore, it was apparently not affected by lack of surface HLA expression in mutant cell lines (results not shown). However, the variably low signal-to-noise ratios with which the positive controls often stained cells expressing their known ligands, and the variable behavior of different batches of fusion proteins of known specificity indicated that the method did not attain sufficient sensitivity and consistency in our hands to allow screening for an unknown ligand in a series of heterogeneous cell types.
As an alternative approach of possibly higher sensitivity, we tried to apply the MHC-tetramer technology to build KIR2DL5 forms of higher avidity. The first codons of a KIR2DL5 cDNA were adapted by site-directed mutagenesis to the codon usage bias of Escherichia coli (Nakamura et al., 2000), for higher protein yield; and the construct encoding the Ig-like and stem regions of KIR2DL5 was subcloned into the pGMT7 plasmid (a kind gift of Dr. Veronique Braud), which provided an in-frame recognition sequence for the BirA biotinylase at the carboxy-terminal end of the construct. Upon IPTG induction, the recombinant KIR2DL5 protein was efficiently produced in strain BL21(DE3)pLysS. After purification from inclusion bodies, the KIR2DL5 ectodomain was solubilized in concentrated urea, refolded in an arginine/glutathione buffer, and biotinylated with BirA. However, the labeled KIR2DL5 protein could not be quantitatively recovered after molecular exclusion chromatography, apparently due to aggregation, even in the presence of mild detergents like CHAPS and octyl-β-D-glucopyranoside.
The ability to identify KIR2DL5 with a novel specific monoclonal antibody opened new possibilities for studying the outcome of the interaction of KIR2DL5-positive NK lymphocytes with potential target cells. For instance, we attempted to study differential degranulation (assessed by CD107a expression) of KIR2DL5-positive and -negative NK cells against different target cells. However, several hindrances made this approach unpractical, including: low levels of degranulation induced in freshly isolated NK cells by many targets, which made it difficult to evaluate any further reduction attributable to inhibition through KIR2DL5; the low proportions of NK cells expressing KIR2DL5 in most donors, which do not readily increase during in vitro NK-cell expansion in response to lymphoblastoid cell lines. These studies indicated that use of cells homogeneously expressing KIR2DL5, and of a positive readout (rather than inhibition of another signal) are more promising approaches for screening the interaction of KIR2DL5 with potential ligand molecules. Knowing such interactions is essential for understanding the role of KIR2DL5 in immunity, and its importance for human health. | 2016-06-17T02:29:33.074Z | 2012-09-17T00:00:00.000 | {
"year": 2012,
"sha1": "2a74c21a495b2e6dd110e81ddca1108e0b57df89",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2012.00289/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d1abf73d86be0700d72930eb9c20b2b72d89da5a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234112037 | pes2o/s2orc | v3-fos-license | Water hydrochemistry of Lake Boudaroua in the Moroccan Prerif west Mediterranean region
In order to assess the quality of the surface water of Boudaroua Lake, located in the Moroccan Pre-Rif, water quality parameters were used to evaluate the potential presence of toxicity in this ecosystem. To this end, sampling and hydrochemical analyses were carried out at five permanent stations around the lake during the study period (July 2019, October 2019, January 2020). The study was based on 11 parameters, namely turbidity (TUR), dissolved oxygen (O2), total hardness (DT), calcium (Ca2+), magnesium (Mg2+), sodium (Na+), potassium (K+), ammonium (NH4+), chloride (Cl-), sulfate (SO42-), and nitrate (NO3-). The results obtained for these physicochemical parameters were compared with the Moroccan standard (MS) for surface water and with the World Health Organization (WHO) guidelines. The results indicated that the values of the physicochemical parameters vary significantly with the seasons due to variation in precipitation. In addition, agricultural pollution resulting from the excessive use of fertilizers enters the lake through waterways; ammonium (NH4+) and dissolved oxygen (O2), with values reaching 1.09 mg/L and 12 mg/L respectively, remain above the (MS) and (WHO) standards, which could harm the ecosystem of the lake.
Introduction
Water plays a very important role for all living beings. However, rapid development in different sectors including agriculture, industry and urban activities, especially around lakes, has caused significant changes in the quality and quantity of these water resources [1]. Eutrophication caused by nutrient enrichment of lakes is considered one of the major environmental problems in the world [2]. Water quality analysis can be useful to prevent pollution of lakes and provide correct information to decision makers for the management of the aquatic environment [3]. Morocco, where the climate is semi-arid as in most Mediterranean countries, is also affected by climate change, which appears as irregular precipitation and longer dry seasons [4] and makes it difficult to renew water resources. There is a need to properly assess the spatial and temporal patterns of water quality in lake systems, which are influenced by various factors [5]. However, the waters of these systems face various pollution problems, especially small lakes such as Boudaroua Lake, which is considered the only natural park in Ouazzane city, where it is easily accessible to tourists whose anthropic activities harm the quality of the water.
The objective of this work is to determine the quality of the surface water in this lake by comparing the results with the Moroccan standards for surface water (MS) [6] and those of the World Health Organization (WHO) [7]. In addition, the level of eutrophication of Boudaroua Lake is studied.
Study area
Boudaroua Lake (34°47' N, 05°27' W) is located approximately 3 km from Ouazzane city (Figure 1). The lake was first constructed during the protectorate, around 1936, to supply the town with drinking water. Currently, it has become a natural park used for fishing.
The body of water is located at an altitude of 210 m and reaches an area of 13 ha at its maximum extension. The perimeter of the water body is 2 km; the water depth is variable and can reach up to 8 m [8].
Lake water comes mainly from springs and rainfall, in addition to waterways. Water loss occurs by evaporation during dry seasons and also through a downstream weir allowing the evacuation of excess water during floods.
Geologically, the Ouazzane region is connected to the Mediterranean Alpine system, which produced three series of tectonic movements, the first causing the formation of numerous Jurassic scales covered by marls [9].
During hot periods, temperatures vary between 19 °C and 32 °C, and in cold periods they vary between 6 °C and 14 °C. The average annual precipitation is 800 mm, with an irregular distribution.
Material and methods
In order to evaluate the water quality of the lake, monthly sampling at five sites was used. The samples were collected in 2 L pre-cleaned polyethylene bottles in June 2019, October 2019, and January 2020. Before sampling, each sample container was washed and rinsed thoroughly with lake water.
The physical parameters of the water, such as temperature and dissolved oxygen (DO), were measured in situ. Water samples were transported in a cooler at 4 °C to the laboratory (Agrilabs), based in Larache, and analyzed within an hour of sampling according to the techniques described by Rodier [10].
Dissolved oxygen
The dissolved oxygen content measures the concentration of dissolved oxygen in the water. Oxygen is the most important gas for all living things, especially aquatic organisms (respiration and photosynthesis), in addition to its role in the mineralization of biomass (Figure 2). The measurements carried out show that the oxygenation rate is high at all stations (> 7 mg/L), with a significant increase in the winter month; this increase is due to the decrease in water temperature and the greater mixing of the water by the wind. The variation of dissolved oxygen in the water samples taken was 5.8 to 10 mg/L (June 2019, summer), 6.6 to 9 mg/L (October 2019, autumn), and 9.5 to 12 mg/L (January 2020, winter), respectively.
Figure 2. Spatio-temporal evolution of the dissolved oxygen of Boudaroua Lake.
Turbidity
Turbidity reflects, on the one hand, the presence in the water of mineral or organic particles, living or detrital, and on the other hand the capacity to scatter or absorb incident light from the atmosphere (Figure 3). The measurements carried out show that the turbidity of the water body varies very little spatially (on the order of 0.54), whereas the seasonal variation is very important (on the order of 3.22). This variation is explained on the one hand by precipitation and runoff, which suspend interface sediments through homogenization of the water body during January, and on the other hand by the mineralization of organic matter after the death of aquatic vegetation [11].
Total hardness
Total hardness expresses the quantities of cations dissolved in water, in particular calcium and magnesium. Total hardness measurements for all the water samples are quite low. Generally, the water of Boudaroua Lake is very soft according to the standards proposed by Manivasakam [12] (Table 1) (Figure 4).
Ammonium
The variation of ammonium in the water samples taken was from 1.04 to 2.14 mg/L (October 2019, autumn) and from 0.32 to 1.14 mg/L (January 2020, winter), respectively (Figure 5). These very high contents (higher than the Moroccan standard of 0.5 mg/L) suggest that this element constitutes a pollution risk for the surface waters of Boudaroua Lake.
Figure 5. Spatio-temporal evolution of ammonium of Boudaroua Lake.
Figure 6. Spatio-temporal evolution of nitrate of the water of Boudaroua Lake.
The major cations
In the water of Boudaroua Lake, the order of the predominant cations was Na+ > Ca2+ > Mg2+ > K+, with sodium being the dominant cation. The distributions of cation concentrations in the water of Boudaroua Lake did not show significant fluctuations during the different periods, except for sodium; this variation reflects the climatic influence through dilution of the water during the winter season. The variation of sodium in the water samples taken was 270.6 to 290.65 mg/L (June 2019, summer), 211.97 to 227.72 mg/L (October 2019, autumn), and 227.35 to 231.45 mg/L (January 2020, winter), respectively (Figure 7). These values remain above the (WHO) limit value. The variation of potassium in the water samples taken was 2.82 to 6.85 mg/L (June 2019, summer), 3.41 to 4.54 mg/L (October 2019, autumn), and 4.09 to 4.59 mg/L (January 2020, winter), respectively. Potassium is lower than sodium due to its greater resistance to weathering and the formation of clay minerals (Figure 10).
Calcium comes essentially from rock, limestone, and industrial waste. The main sources of magnesium are the influx of wastewater and minerals generated by soil erosion [13].
Major anions
The predominant anion order was Cl- > NO3- > SO42-, chloride being the dominant anion. Generally associated with the sodium ion, its content in the samples is between 218.4 and 240 mg/L (June 2019, summer), 297.06 and 320.92 mg/L (October 2019, autumn), and 420.84 and 423.86 mg/L (January 2020, winter), respectively (Figure 11). The values remain higher than the considered limit value (250 mg/L, WHO). This increase can be explained, on the one hand, by a strong dilution of the water and, on the other hand, by contamination resulting from leaching of agricultural land around the lake with the first precipitation in autumn.
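A small helper such as the one sketched below can flag which of the reported concentrations exceed the reference limits quoted in this paper (0.5 mg/L for ammonium under the Moroccan standard and 250 mg/L for chloride under the WHO guideline). The sample values are the seasonal maxima reported above; the data structure is only an illustrative choice.

```python
# Illustrative comparison of measured concentrations (mg/L) against the limits quoted
# in the text: ammonium 0.5 mg/L (Moroccan standard) and chloride 250 mg/L (WHO).

LIMITS_MG_PER_L = {"ammonium": 0.5, "chloride": 250.0}

# Seasonal maxima taken from the results reported above.
measurements = {
    "ammonium": {"autumn": 2.14, "winter": 1.14},
    "chloride": {"summer": 240.0, "autumn": 320.92, "winter": 423.86},
}

for parameter, seasons in measurements.items():
    limit = LIMITS_MG_PER_L[parameter]
    for season, value in seasons.items():
        status = "exceeds" if value > limit else "is within"
        print(f"{parameter} ({season}): {value} mg/L {status} the {limit} mg/L limit")
```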
Bicarbonates result from the physicochemical equilibrium between rock, water, and carbon dioxide, following the general carbonate equilibrium CaCO3 + CO2 + H2O = Ca2+ + 2HCO3- [14] (Figure 12).
Conclusion
This work was carried out to determine the health status of Lake Boudaroua, based on the results of physicochemical analyses carried out at five permanent sites during a study period spanning three months (summer 2019, October 2019, January 2020), followed by the water quality index calculation.
Physicochemical analyses show that these waters are of the sodium-potassium chloride type, with a high dilution due to climatic variations from one season to another. Also, biodegradable and non-biodegradable substances can be brought in by agricultural and anthropogenic activities, which promotes the eutrophication of this body of water and consequently the deterioration of its quality. The results also showed that most parameters do not exceed the maximum values specified by the Moroccan standards (MS) and the World Health Organization (WHO), except for ammonium (NH4+) and dissolved oxygen (DO). Raising the awareness of the population and of the companies producing waste contributes to the conservation of water bodies.
For a better understanding of the hydrogeochemical functioning of Lake Boudaroua with respect to climatic and anthropic impacts, the study should be completed by seasonal monitoring of the physico-chemistry of the water throughout the water column down to the water-sediment interface.
Figure 14. Graphical representation of the water chemical composition of Lake Boudaroua according to the Piper diagram.
Figure 15. Graphical representation of the water chemical composition of Lake Boudaroua according to the Schoeller-Berkaloff diagram.
Table 1. Water quality vs hardness.
And do not forget the influence of the use of chemical fertilizers. It varies for the water samples taken from 53.53 to 54.89 mg/L (June 2019, summer), 40.01 to 44.74 mg/L (October 2019, autumn), and 42.21 to 44.36 mg/L (January 2020, winter), respectively. These values remain below the Moroccan standard (<250 mg/L). | 2021-05-11T00:05:20.369Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "46b6c2d6d6a3d08d40c4bf667d40673fd92b665b",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/10/e3sconf_icies2020_00078.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e72cfe475ef030d146df14c3e1fedb45ec7e4f9d",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
258247422 | pes2o/s2orc | v3-fos-license | Comparison of a Tobacco-Specific Carcinogen in Tobacco Cigarette, Electronic Cigarette, and Dual Users
Background 4-(Methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) is known as a lung carcinogen. The objective of this study was to investigate associations of urine NNAL concentrations and smoking status. Methods This was a cross-sectionally designed study based on data from the 2016–2018 Korean National Health and Nutrition Examination Survey. A total of 2,845 participants were classified into past-smoker, electronic cigarette (e-cigar) only, dual-user, and cigarette only smoker groups. All sampling and weight variables were stratified and analysis was conducted accounting for the complex sampling design. Analysis of covariance was used to compare the geometric mean of urine NNAL concentrations and log-transformed urine NNAL level among smoking status with weighted survey design. Post hoc paired comparisons with Bonferroni adjustment was performed according to smoking status. Results The estimated geometric mean concentrations of urine NNAL were 1.974 ± 0.091, 14.349 ± 5.218, 89.002 ± 11.444, and 117.597 ± 5.459 pg/mL in past-smoker, e-cigar only, dual-user, and cigarette only smoker groups, respectively. After fully adjusting, log-transformed urine NNAL level was significantly different among groups (P < 0.001). Compared with the past-smoker group, e-cigar only, dual-user, and cigarette only smoker groups showed significantly higher log-transformed urine NNAL concentrations in post hoc test (all P < 0.05). Conclusion E-cigar only, dual-user, and cigarette only smoker groups showed significantly higher geometric mean concentrations of urine NNAL than the past-smoker group. Conventional cigarette, dual users, and e-cigar users can potentially show harmful health effects from NNAL.
INTRODUCTION
Most tobacco cigarette smokers continue to smoke because of the addictive nature of nicotine. 1 The tobacco cigarette is a harmful nicotine delivery product that exposes smokers to more than 6,000 chemicals, many of which are hazardous to human health. 1 Complete cessation is necessary to reduce the health risks associated with smoking. However, questions remain about the potential risk of alternative tobacco products such as electronic cigarettes (e-cigars) among continuing smokers who are unable or unwilling to quit smoking. 2 In the United States, 84% of e-cigar users are current or past tobacco cigarette smokers. 3 Approximately 23% of smokers are "Dual Users" who use both tobacco and e-cigars. 4 Switching from tobacco cigarettes to e-cigars as a strategy for harm reduction is controversial. E-cigars are electronic nicotine delivery systems (ENDS) that contain many trace elements due to thermal degradation of nicotine derived from tobacco. 5 The United States Food and Drug Administration (FDA) recommends reporting harmful and potentially harmful constituent levels in e-cigar liquids and aerosols. 5 Although most public health experts agree that e-cigars are less harmful than tobacco cigarettes, the number of dual users who have switched to e-cigars and continue to use combustible tobacco cigarettes is not small. 6 Several studies 7-9 have shown that e-cigar aerosols typically contain fewer toxic substances in laboratory yields than tobacco cigarette smoke. However, laboratory results do not necessarily reflect individual smokers' actual exposure to toxic substances in clinical settings.
The chemical, 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), has been reported by the International Agency for Research on Cancer (IARC) as a group 1 human carcinogen. 10 NNK and its metabolite 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) are carcinogenic and contribute to the development of lung cancer in tobacco cigarette smokers. 11,12 NNAL is a reliable marker for detecting exposure to tobacco smoke and estimating the amount of NNK in the body. 13 -15 The purpose of this study was to understand the risk of tobacco cigarette, electronic cigarette, and dual users by comparing urine NNAL levels according to smoking status using data of the Korea National Health and Nutritional Survey (KNHANES).
Data source
The KNHANES is a representative cross-sectional survey conducted by the Korea Centers for Disease Control and Prevention (KCDC). Twenty-three households from each of 192 districts are sampled every year. A total of about 10,000 household members aged 1 year or older were surveyed. The KNHANES used a complex multi-step probability sample design consisting of stratification, clustering, and systematic sampling based on sex, age, and geographic area to achieve representativeness of the Korean population using household registries. The KNHANES contains a Health Interview Survey, a Health Examination Survey, a Health Behavior Survey, and a Nutrition Survey known to provide a variety of information about health status, health-related lifestyle information, socioeconomic status, and laboratory results. Trained interviewers conducted face-to-face interviews with the participants. Participants are allowed to refuse to participate in this survey. A written agreement was obtained from the participant upon consent. 16 The Disease Control and Prevention Agency of Korea provided detailed information about KNHANES in previous studies. 16,17 Of a total of 24,269 participants, individuals younger than 19 years of age (n = 4,883), individuals who did not respond to the smoking-related questionnaire (n = 836), individuals who responded that they had never smoked cigarettes in their lifetime (n = 11,263), individuals with missing urine NNAL concentration values (n = 4,439), and individuals with urine NNAL concentrations below the detection limit (n = 6) were excluded. Finally, 2,845 participants were included for analysis in this study (Fig. 1).
Participants were categorized into the following groups: past-smoker, e-cigar only, dual-user, and cigarette only smoker groups. Those who responded that they do not currently smoke and have not smoked e-cigars were classified as the past-smoker group. Those who currently smoke and have not smoked e-cigars were classified as the cigarette only smoker group. Those who currently smoke and have smoked an electronic cigarette in the past month were classified as the dual-user group. Those who answered that they were not currently smoking but had smoked e-cigars in the past month were classified as the e-cigar only group. Pack-years (PYRS) were calculated for each group by multiplying the amount of cigarette smoking (number of cigarette packs smoked per day) by the number of years of smoking. Subjects with urine NNAL concentrations greater than the median value (21.520105 pg/mL) were defined as the high NNAL group.
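The grouping and derived variables just described amount to simple deterministic rules; the sketch below restates them with hypothetical questionnaire field names, while the pack-years formula and the 21.520105 pg/mL median cutoff are taken from the text.

```python
# Illustrative restatement of the smoking-status grouping and derived variables above.
# `currently_smokes` and `used_ecig_past_month` are hypothetical boolean questionnaire fields.

NNAL_MEDIAN_PG_PER_ML = 21.520105  # median urine NNAL cutoff stated in the text

def smoking_group(currently_smokes, used_ecig_past_month):
    if currently_smokes and used_ecig_past_month:
        return "dual-user"
    if currently_smokes:
        return "cigarette only smoker"
    if used_ecig_past_month:
        return "e-cigar only"
    return "past-smoker"

def pack_years(packs_per_day, years_smoked):
    # Pack-years = number of packs smoked per day multiplied by the number of years of smoking.
    return packs_per_day * years_smoked

def is_high_nnal(urine_nnal_pg_per_ml):
    return urine_nnal_pg_per_ml > NNAL_MEDIAN_PG_PER_ML
```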
Anthropometric measurements
Well-trained staff followed standard procedures to obtain anthropometric measurements. Trained staff measured the participants' weight and height to the nearest 0.1 kg and 0.1 cm, respectively, while wearing light clothing but no shoes. Body mass index (BMI, kg/m 2 ) was calculated as weight (kg) divided by squared height (m 2 ). Blood pressure was obtained using a standard mercury sphygmomanometer with participants in a seated position and measured by a specialized investigator. Participants' blood samples were obtained from the antecubital vein in the morning after an overnight fast. Serum cholesterol and fasting plasma glucose (FPG) levels were measured enzymatically using a Hitachi Automatic Analyzer 7600-210 (Hitachi, Tokyo, Japan). Urine NNAL concentrations were measured by the HPLC-MS/MS method using an Agilent 1200 Series and Triple Quadrupole 5500 (AB Sciex, Framingham, MA, USA). Exclusion Individuals younger than years of age (n = , ) Individuals did not respond to the smoking-related questionnaire (n = ) Individuals respond that they had never smoked in lifetime cigarettes (n = , ) Individuals with missing urine NNAL concentration values (n = , ) Individuals with urine NNAL levels below the limit of detection (n = )
Definition of variables
Health-related lifestyle information was obtained from data collected using a self-reported questionnaire. Average monthly household income was calculated by dividing gross monthly household income by the square root of the family size. Sufficient physical activity was defined as engaging in moderate-intensity activity for ≥ 150 minutes per week or vigorous-intensity activity for ≥ 75 minutes per week. Drinking more than seven cups for men, or five or more for women, twice a week or more was classified as heavy alcohol intake. Occupational categories included: 1) manual workers: clerks, service and sales workers, skilled agricultural, forestry, and fishery workers, persons who operate or assemble crafts, equipment, and machines, and elementary workers; 2) office workers: managers, professionals, and administrators; and 3) others: unemployed persons, including housekeepers and students. Education was classified into less than 6 years, 6 to < 9 years, 9 to < 12 years and ≥ 12 years according to educational duration. Marital status was categorized into "married and not separated" and "single", which included not married, separated, divorced, and widowed individuals. Chronic diseases consisted of hypertension, diabetes mellitus, dyslipidemia, and cardiovascular and cerebrovascular disease, based on the self-reported questionnaire.
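Two of the derived variables above are simple formulas, restated below for clarity; the function and argument names are illustrative, while the square-root equivalence scale and the 150/75 minute activity thresholds come from the definitions in the text.

```python
import math

# Equivalized household income: gross monthly household income divided by the
# square root of the number of family members, as defined in the text.
def equivalized_income(gross_monthly_income, family_size):
    return gross_monthly_income / math.sqrt(family_size)

# Sufficient physical activity: >= 150 min/week of moderate-intensity activity
# or >= 75 min/week of vigorous-intensity activity.
def sufficient_physical_activity(moderate_min_per_week, vigorous_min_per_week):
    return moderate_min_per_week >= 150 or vigorous_min_per_week >= 75
```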
Statistical analysis
Continuous and categorical variables are expressed as mean ± standard error (SE) and percentages, respectively. All sampling and weight variables were stratified. Statistical analysis was performed using the survey procedures of SAS to account for the complex sample design and maintain national representativeness. Analysis of covariance (ANCOVA) was used to compare geometric mean concentrations of urine NNAL and log-transformed urine NNAL levels among smoking status groups with the weighted survey design (adjusted for age, sex, pack-years, BMI, cholesterol, FPG, systolic blood pressure [SBP], household income, marital status, education duration, occupation, sufficient physical activity, heavy alcohol-drinking, and chronic disease). Post hoc paired comparisons with Bonferroni adjustment were performed using log-transformed NNAL levels according to smoking status. All statistical analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA). Two-tailed P values less than 0.05 were considered statistically significant.
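To make the log-transformation step explicit, the sketch below computes an (optionally weighted) geometric mean as the exponential of the mean of the log-transformed concentrations. It deliberately ignores the complex survey design and covariate adjustment handled by the SAS survey procedures described above, so it is only a simplified illustration with made-up values.

```python
import numpy as np

def geometric_mean(values, weights=None):
    """Geometric mean as the exponential of the (weighted) mean of log-transformed values."""
    logs = np.log(np.asarray(values, dtype=float))
    if weights is None:
        return float(np.exp(logs.mean()))
    return float(np.exp(np.average(logs, weights=np.asarray(weights, dtype=float))))

# Toy example with made-up urine NNAL concentrations (pg/mL):
print(geometric_mean([1.2, 3.4, 250.0, 90.0]))
```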
Ethics statements
This study was approved by the Institutional Review Board of Chungbuk National University Hospital (CBNUH-2022-03-016). It was conducted following guidelines of the Declaration of Helsinki (1975). The Ethics Committee waived the requirement for informed consent because data were anonymized at all stages, including data clearing and statistical analysis. Table 1 shows the characteristics of the participants according to smoking status. Unweighted numbers of past-smoker, e-cigar only, dual-user, and cigarette only smoker groups were 1,438, 18, 134 and 1,255, respectively. These groups had mean ± standard deviation ages of 51.7 ± 0.5, 36.9 ± 2.0, 35.7 ± 1.1, and 44.0 ± 0.5 years, respectively. All variables except for cholesterol and physical activity showed significant differences between groups. Tables 2 and 3 shows estimated geometric mean concentrations of urine NNAL according to smoking status. Geometric mean concentrations of urine NNAL in past-smoker, e-cigar 4/9
NNAL and Smoking Status
only, dual-user, and cigarette only smoker groups were 1.974 ± 0.091, 14.349 ± 5.218, 89.002 ± 11.444, and 117.597 ± 5.459 pg/mL, respectively. Even after adjusting for various variables (age, sex, BMI, cholesterol, FPG, SBP, household income, marital status, education duration, occupation, sufficient physical activity, heavy alcohol-drinking, and chronic disease), logtransformed urine NNAL levels showed significant differences among groups (ANCOVA P value < 0.001). Compared with the past-smoker group, log-transformed urine concentrations of NNAL in e-cigar only, dual-user, and cigarette only smoker groups were significantly higher in post hoc test with Bonferroni adjustment (All P values < 0.05). 12.0 ± 1.2 16.8 ± 0.5 0.001 All data are presented as mean ± standard errors or unweighted number (%). SAS survey was adopted for statistical analysis to account for the complex sampling design and to maintain national representativeness. P values were determined by analysis of variance for continuous variables or χ 2 test for categorical variables with weighting of survey design. BMI = body mass index, FPG = fasting plasma glucose, SBP = systolic blood pressure, N/A = not applicable, PYRS = pack-years. a Household income: total monthly house income/square root of number of family members. b Marital status: married and not separated or single (not married, separated, divorced, or widowed). c Occupation: office workers (general managers, government administrators, professionals, and office workers), manual workers (clerks; service and sales workers; skilled agricultural, forestry, and fishery workers; persons who operate or assemble crafts, equipment, or machines; and elementary workers), and Others (unemployed persons, housekeepers, and students). d Sufficient physical activity: moderate-intensity activity ≥ 150 minutes /week or vigorous activity ≥ 75 minutes/week. e Heavy alcohol-drinking: more than 7 cups for men and 5 cups for women more than twice a week. f Chronic diseases: self-reported hypertension, diabetes, dyslipidemia, cardiovascular, or cerebrovascular disease.
DISCUSSION
This study showed that geometric mean urine concentrations of NNAL varied according to smoking status, using nationally representative data. The past-smoker group had lower geometric mean levels of urine NNAL than all other groups. The dual-user and cigarette only smoker groups showed significantly higher geometric mean levels of urine NNAL than the past-smoker and e-cigar only groups.
In Korea, the male tobacco smoking rate decreased from 71.7% in 1992 to 39.7% in 2016 and the female smoking rate decreased from 6.5% in 1992 to 3.3% in 2016. 18 Although the smoking rate is decreasing, crude incidence rate of lung cancer has steadily increased in both women and men. 19 In 2017, the crude lung cancer incidence rate was 52.7 per 100,000 population and age-standardized incidence rate of lung cancer was 27.5 per 100,000 population. 19 The lung cancer incidence rate increased with age in both men and women, especially in those over 65 years of age. 19 According to the Central Cancer Registry in Korea, lung cancer was the leading cause of cancer death in both men and women (crude rate: 53.5 per 100,000 in men and 19.0 per 100,000 in women) in 2019. 20 Carcinogens in tobacco smoke include polycyclic aromatic hydrocarbons (PAH), heterocyclic amines, tobacco-specific N-nitrosamines, aldehydes, cadmium, and many other organic compounds. 21 -23 Among these carcinogens in tobacco smoke, tobacco specific N-nitrosamines and PAH are known to be most closely related to the risk of lung cancer. 24 Tobacco-specific N-nitrosamines include N-nitrosodimethylamine (NDMA) and N-nitrosopyrrolidine as well as tobacco specific nitrosamines such as N′-nitrosonornicotine (NNN) and NNK. 25 NNK is present in all tobacco products and is considered as a strong lung carcinogen. 14 Because NNK is converted to urinary metabolite NNAL, NNAL is considered as a specific biomarker of tobacco exposure. 22,26,27 Detection of urine NNAL indicates that the individual has been exposed to certain lung carcinogens with a potential of developing lung cancer in the future. 22,26,27 After adjusting for several variables, e-cigar, dual-user, and cigarette only smokers had higher urine NNAL concentrations than past-smokers in this study. E-cigars are often considered a less harmful alternative to conventional cigarettes because e-cigar aerosols contain fewer toxic substances than conventional cigarettes. But, some studies 28 - 30 have reported that some trace metallic substances in e-cigar aerosols are higher than those in conventional cigarettes. The e-cigar device consists of a battery and an atomizer containing a chamber and head, which structure has the potential for some trace metallic components to be delivered into the aerosol. 31 The E-cigarette or Vaping Use-Associated Lung Injury (EVALI), which can be said to be a representative harmful case of e-cigars, was prevalent from August 2019 to 2020. According to the United States Centers for Disease Control and Prevention (CDC) report, 2,807 hospitalized cases and 68 deaths were reported to be related to EVALI during the period. 32 EVALI has been particularly problematic among individuals who use e-cigars that contain tetrahydrocannabinol (THC), the psychoactive component of cannabis. 32 Vitamin E acetate additives were detected in the bronchoalveolar lavage fluid in most EVALI patients. However, according to a 2019 report on harmful components suspected to be present in liquid e-cigars in Korea, 33 THC was not detected in the liquid e-cigars distributed in Korea; only very trace amounts of vitamin E acetate were detected. Additionally, according to a previous study in 2021 in Korea, 34 there may have been no EVALI patients in Korea during 2013-2019. During the period considered in our study (2016-2018), THC was not detected in any e-cigars investigated by the Ministry of Food and Drug Safety in Korea. 
The toxic effects of e-cigars are still controversial due to the lack of long-term clinical evidence on whether e-cigars are harmful. 28 More studies are needed to investigate the harmful effects of e-cigars on human health.
There are several potential limitations to the interpretation of the results of this study. First of all, it was difficult to establish a causal relationship in this study due to the cross-sectional design. Second, since the data in this study were based on self-reported questionnaires, the possibility of false reporting or recall bias could not be ruled out. Third, since KNHANES is a secondary data source, other factors that may affect the concentration of urine NNAL could not be considered. NNAL levels can also be affected by exposure to environmental tobacco smoke, and workers who harvest tobacco and produce tobacco products may also be exposed through the skin and lungs; these environmental factors could not be considered because of data limitations. Additionally, a previous study 35 reported that secondhand smoke may increase NNAL concentrations, but secondhand smoke was not considered in this study. Fourth, the operational definition of e-cigar users may be inaccurate. Fifth, because the number of female smokers was too small to be analyzed statistically, sex could not be investigated separately. Instead, we tried to adjust for the effect of gender by analyzing sex as a confounding variable. In addition, although e-cigars vary in shape, size, and type of device, it was not possible to distinguish between the various types of e-cigar devices. We could not further stratify for e-cigar types such as heat-not-burn (HNB) tobacco due to data limitations. Finally, since the number of e-cigar users was relatively small, there may be a possibility of selection bias.
Despite these limitations, our study used a population-based sample to generate estimates representative of the Korean population. In addition, this is the first study in Korea to investigate urinary NNAL concentration according to smoking status using nationally representative data. Measuring the actual concentration of NNAL is also a strength of this study.
In conclusion, estimated geometric mean urine NNAL concentrations differed significantly according to smoking status. The e-cigar only, dual-user, and cigarette only smoker groups showed significantly higher urine NNAL concentrations than the past-smoker group after adjusting for several variables, using nationally representative data. Conventional cigarette users, dual users, and e-cigar users can potentially suffer from the harmful health effects of NNAL. Thus, preventive management through NNAL monitoring might be needed for conventional cigarette smokers and e-cigar users. | 2023-04-21T15:11:52.390Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "487ff484eec4775f1223a9a77b20ec685ee0a5ea",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3346/jkms.2023.38.e140",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "93a5a2453e1fbca7328db242deae3f73ea329181",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
56095037 | pes2o/s2orc | v3-fos-license | Heating Through Phonon Excitation Implied by Collapse Models
We calculate the rate of heating through phonon excitation implied by the noise postulated in mass-proportional-coupled collapse models, for a general noise power spectrum. For white noise with reduction rate $\lambda$, the phonon heating rate reduces to the standard formula, but for non-white noise with power spectrum $\lambda(\omega)$, the rate $\lambda$ is replaced by $\lambda_{\rm eff}=\frac{2}{3 \pi^{3/2}} \int d^3w e^{-\vec w^2} \vec w^2 \lambda(\omega_L(\vec w/r_c))$, with $\omega_L(\vec q)$ the longitudinal acoustic phonon frequency as a function of wave number $\vec q$, and with $r_C$ the noise correlation length. Hence if the noise power spectrum is cut off below $\omega_L(|\vec q| \sim r_c^{-1})$, the heating rate is sharply reduced.
There is increasing interest in testing wave function collapse models [1], by searching for effects associated with the noise which drives wave function collapse when nonlinearly coupled in the Schrödinger equation. A recent cantilever experiment of Vinante et al. [2] has set noise bounds consistent with the enhanced noise strength [3] needed to make latent image formation a trigger for state vector collapse, and reports a possible noise signal. Various other suggested experiments [4] focus on noise-induced motions or heating of small masses or collections of oscillators, assuming a white noise spectrum. Since recent experiments on gamma ray emission from germanium [5] have shown that with the enhanced noise strength of [3], a white noise spectrum is experimentally ruled out, it becomes important to take the effects of a cutoff in the noise spectrum into account. In this paper we focus on noise-induced heating, motivated by the astute observation of Vinante [6] that since the noise wave number density is peaked near $|\vec q| \sim r_c^{-1}$, heating effects will be reduced if the noise spectrum cuts off below the longitudinal acoustic phonon frequency associated with the wave number peak. Our aim is to give a quantitative calculation of this effect; its application to possible experiments involving bulk heating effects will be given elsewhere [7]. Consider a system in initial state $i$ with energy $E_i = \omega_i$ at time $t = 0$, acted on by a perturbation $V$ which at time $t$ leads to a transition to a state $f$ with energy $E_f = \omega_f$. Working in the interaction picture, the transition amplitude $c_{fi}(t)$ is given by Eq. (1), with $\omega_{fi} = \omega_f - \omega_i$. For $V$ we take the noise coupling in the mass-proportional continuous spontaneous localization (CSL) model, where we have followed the notation used in [8]. Here $x_\ell$ are the coordinates of atoms of mass $m_\ell$, $g(\vec x)$ is a spatial correlation function, conventionally taken as a Gaussian, and the expectation of the non-white noise involves $\gamma(\omega) = \gamma(-\omega)$, which is related to the reduction rate parameter $\lambda(\omega)$. We wish now to calculate the expectation $E[E(t)]$ of the energy attained by the system at time $t$. Substituting Eqs. (1)-(5), carrying out the integrations, and using the intermediate formulas, we find in the large $t$ limit the formula of Eq. (8) for the energy gain rate. The next step is to evaluate the matrix element appearing in Eq. (8) by introducing phonon physics, following the exposition in the text of Callaway [10]. We consider first the simplest case of a monatomic lattice with all $m_\ell$ equal to $m_A$, independent of the index $\ell$, and write the atom coordinate as $x_\ell = R_\ell + u_\ell$, with $R_\ell$ the equilibrium lattice coordinate and $u_\ell$ the lattice displacement induced by the noise perturbation. We note that since the Gaussian in Eq. (8) restricts the magnitude of $\vec q$ to be less than of order $r_c^{-1}$, with $r_c \sim 10^{-5}$ cm, whereas the magnitude of the lattice displacement is much smaller than $10^{-8}$ cm, the exponent in $e^{i \vec q \cdot \vec u_\ell}$ is a very small quantity. So we can Taylor expand, $e^{i \vec q \cdot \vec u_\ell} \simeq 1 + i \vec q \cdot \vec u_\ell$; the leading term 1 does not contribute to energy-changing transitions, so we have reduced the matrix element in Eq. (8) to the simpler form of Eq. (12). The approximation leading to Eq. (12) is a phonon analog of the electric dipole approximation made in electromagnetic radiation rate calculations.
We now substitute the expression of [10] for the lattice displacement in terms of phonon creation and annihilation operators, where the sum on $j$ runs over the acoustic phonon polarization states, and where $\Omega$ and $N$ are respectively the lattice unit cell volume and the number of unit cells. Taking the initial state $i$ to be the zero phonon state, only the $a^\dagger_j$ term in Eq. (13) contributes, and we can evaluate the sum over lattice sites $\ell$ in Eq. (12) using the formula of [10]. Carrying out the $\vec k$ integration, noting that $\vec q \cdot \vec e^{(j)}(\vec q)$ selects the longitudinal phonon with frequency $\omega_L(\vec q)$, defining $\vec w = r_c \vec q$, writing $M = N m_A$ for the total system mass, and assembling all the pieces, we arrive at the answer of Eq. (15). In the white noise case, where $\lambda(\omega)$ is a constant $\lambda$, we can pull it outside the $\vec w$ integral and use the Gaussian normalization integral to get the standard formula of Eq. (17) [11]. When the noise spectrum has a cutoff below $\omega_L(\vec q)$ for $|\vec q| \sim r_c^{-1}$, the energy gain rate is sharply reduced.
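The white-noise reduction just described relies on an elementary Gaussian integral, stated here for completeness under the assumption that $\lambda_{\rm eff}$ carries the normalization quoted in the abstract:

$$\frac{2}{3\pi^{3/2}}\int d^3w\, e^{-\vec w^{\,2}}\,\vec w^{\,2} \;=\; \frac{2}{3\pi^{3/2}}\cdot 4\pi\int_0^\infty dw\, w^4 e^{-w^2} \;=\; \frac{2}{3\pi^{3/2}}\cdot \frac{3\pi^{3/2}}{2} \;=\; 1,$$

so that $\lambda_{\rm eff}$ indeed reduces to $\lambda$ when $\lambda(\omega)$ is constant.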
Although we have derived the result of Eq. (15) for the case of a monatomic lattice and a zero phonon initial state, the result is more general. For a multi-atom unit cell, the same answer holds, with $m_A$ the sum of masses in the unit cell, and with $\omega_L(\vec q)$ again the longitudinal acoustic phonon frequency. In the multi-atom case the formula of Eq. (15) neglects optical phonon contributions, but these are the "internal excitations" that are neglected in the derivation of the center-of-mass energy gain formula of Eq. (17). When the initial state is constructed from $n$-phonon states, as in a thermal state, the $a^\dagger$ term in Eq. (13) contributes a term proportional to $(n + 1)\omega_L$ to the energy gain, while the $a$ term in Eq. (13) contributes a corresponding term proportional to $-n\omega_L$; the sum of the two terms is proportional to $(n + 1 - n)\omega_L = \omega_L$, so $n$ drops out and the formula of Eq. (15) is recovered. This simplification could have been anticipated from our earlier analysis of the noise-induced energy gain by an oscillator [12], which showed that the rate of energy gain is a constant independent of the number of oscillator quanta that are present. I wish to thank Andrea Vinante for an email that stimulated this paper, and to thank Angelo Bassi for helpful conversations.
Added Note
Apart from updating Ref. [7], the preceding body of this paper is identical to the version posted on arXiv on Jan. 1, 2018. Andrea Vinante has called our attention to a paper by M. Bahrami [13] posted on Jan. 11, with an update on Jan. 14, in which a similar calculation is done. For a monatomic lattice, Bahrami's result and ours are in agreement. In his Jan. 14 posting, Bahrami gives a formula for the case of a multi-atom unit cell, which he notes disagrees with our statement that this gives the same result as the monatomic case. Bahrami's multi-atom formula is incorrect, as a result of his using the wrong normalization for the phonon polarization vectors, and does not reduce to the standard formula in the white noise case when λ(ω) is a constant λ. In this version of our paper, we have added an Appendix giving a brief derivation of the correct result in the multi-atom case.
Later Added Note
Bahrami agrees, and will revise his posting.
Appendix: Brief derivation of the formula for the multi-atom case
In the monatomic case, focusing only on the atomic mass factors and longitudinal phonon polarization vectors, Eqs. (12) and (13) give the factor of Eq. (18). After the $\simeq$ sign we have used the fact, noted after Eq. (10), that the correlation length $r_c$ allows only contributions from phonon wavelengths that are long on a lattice scale, corresponding to $\vec k \simeq 0$. In the multi-atom case, focusing only on acoustic phonons, the left-hand side of Eq. (18) is replaced by the corresponding multi-atom expression with $C$ a constant, and we see that the longitudinal polarization vectors are no longer unit normalized, as in the monatomic case. Instead, the normalization is given in Eq. (1.1.18a) of [10], which on substituting Eq. (20) implies, for small $\vec k$, an expression which when squared gives a factor $\sum_\kappa m_\kappa = m_{\rm cell}$, which is the total atomic mass in the unit cell. Thus the only change from the monatomic to the multi-atom case is the replacement of $m_A$ by $m_{\rm cell}$, and since $N m_{\rm cell} = M$, the total system mass, the monatomic formula of Eq. (15) is unchanged. Heuristically, the reason for this is that, as emphasized by Callaway, for $\vec k = 0$ acoustic phonons Eq. (20) implies that all "...particles in each unit cell move in parallel with equal amplitudes", and so behave as a single particle with mass $m_{\rm cell}$. | 2018-01-24T16:30:11.000Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "d4fed4df2f2a706ca9a9df348afa8703480986e4",
"oa_license": null,
"oa_url": "https://eprints.soton.ac.uk/421687/1/Bulk_heating_revision.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d4fed4df2f2a706ca9a9df348afa8703480986e4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
236925312 | pes2o/s2orc | v3-fos-license | Preparation of rice fermented food using root of Asparagus racemosus as herbal starter and assessment of its nutrient profile
The popularity of traditional fermented food products is based on their healthiness. The addition of a starter brings consistent, desirable, and predictable changes to the food, with improved nutritive, functional, and sensory qualities. The addition of a mixture of plant residues as a starter or source of microbes is an age-old practice for preparing traditional fermented foods and beverages, and most of the reported data on traditional foods are based on the analysis of the final product. The contribution of an individual starter component (plant residue) has not been experimentally substantiated for any traditional fermented food, but these data are essential for the formulation of an effective starter. In this study, Asparagus racemosus, whose root is used as a common starter ingredient for the preparation of rice fermented food in the Indian sub-continent, was used as a starter for the preparation of rice fermented food under laboratory scale, and its microbial and nutrient profile was evaluated. The fermented product was a good source of lactic acid bacteria, Bifidobacterium sp., yeast, etc. The food product was acidic and enriched with lactic acid and acetic acid, with a titratable acidity of 0.65%. The content of protein, fat, minerals, and vitamins (water-soluble) was considerably improved. Most notably, an oligosaccharide (G3-maltotriose), unsaturated fatty acids (ω3, ω6, ω7, and ω9), and a pool of essential and non-essential amino acids were enriched in the newly formulated food. Thus, the herbal starter-based rice fermented food would provide important macro- and micronutrients. It could also deliver large numbers of active microorganisms for the sustainability of health. Therefore, the selected plant part conferred its suitability as an effective starter for the preparation of healthier rice-based food products.
Introduction
Fermented foods and beverages have long been manufactured with or without starter cultures. The addition of a starter culture brings consistent, desirable, and predictable changes to the food, with improved nutritive value and sensory qualities [1]. Traditional methods of starter culture for fermented food preparation include backslopping, mixing in a small amount of aged ferment, using a special container, and adding specific natural products containing active microorganisms [2]. These traditional methods facilitated the preparation of individual varieties of fermented foods and beverages and are still practiced in small- to mid-scale production units, particularly in household-type product manufacturing [3]. In contrast, naturally fermented foods are prone to slow or failed fermentations, contamination, and inconsistent quality.
In some Asian, African, and East European countries, plant residues are used as a starter to prepare varieties of fermented foods and beverages [4][5][6][7][8]. Over the generations, this pioneering practice was followed by native people to prepare many foods to meet their nutritional needs. Recent scientific work has indicated that a group of wild microbes in plant parts (as endophytic organisms) could participate in the multi-stage and multi-species fermentation process. Ghosh et al. [9] mentioned the use of 3-7 plant parts (mostly tubers) to prepare haria, a very popular rice fermented mildly alcoholic beverage in Central, Eastern, and North-Eastern India. The initial stages (up to 2 days) of fermentation of haria are facilitated by the molds that saccharify the boiled rice and decompose it; thereby, an anaerobic condition is created that favors the growth of lactic acid bacteria and yeasts at the later stage of fermentation [10]. The beverage contains a mine of nutraceuticals, including phenolics, flavonoids, oligosaccharides, fatty acids, minerals, bioactive peptides, and many others [10][11][12]. They also believed that this beverage confers protection, particularly against several gastrointestinal disorders and skin, eye, hair, and heart diseases. A similar type of microbial dynamics in different phases of fermentation was also noted during the preparation of tarhana, a popular herbal-based wheat fermented food in Turkey and other neighboring countries [4]. This unique multi-stage fermentation and microbial dynamics cannot be achieved by adding old ferment or selective microbes. The tribal people, by their ancestral experience and mastery, selected a few plant parts as a starter and added a desirable amount for the preparation of many fermented foods/beverages. To this day, no study has been undertaken to elucidate the role of individual herb/plant residues during food fermentation.
Considering this perspective, the present study dealt with the preparation of a rice fermented food using the root (rhizome) of Asparagus racemosus (local name satamuli), a well-recognized ethnomedicinal plant [13] used extensively for the preparation of many rice-based traditional foods/beverages [9]. This study aimed to explore the potential of a single plant residue for preparing nutrient-rich functional food and, simultaneously, to scientifically support the age-old culture of using herbal residues as an adjunct for traditional food preparation. The microbial composition and the whole spectrum of the nutrient profile are reported to establish this fermented food as health beneficial.
Plant collection and optimization of fermented food preparation
The rhizome of Asparagus racemosus was collected from different locations in the lateritic forest of the Jangalmahal area of West Bengal, India. It was repeatedly washed with sterile water, dried under sunlight, and then ground using a mixer grinder. Whole grain rice (Oryza sativa) was procured from the local market and cooked in boiling water for about 45 min (until the rice grains became tender but without any residual water). The rice grains were then sun-dried to a moisture content of around 80%. After that, 100 g of semi-dried rice was kept in an Erlenmeyer flask and autoclaved. Fresh root dust (0.5%, w/w) was mixed with the sterilized boiled rice (100 g) and fermented at room temperature for 4 days. Parametric optimization of the fermentation conditions was studied following the OVAT (one variable at a time) approach for considerable scale-up of the process. The fermentation process was optimized by varying the amount of starter (0.1, 0.5, 1.0 and 2.0%, w/w), the incubation period (2nd to 5th day), and the starter collected from three distinct locations (Belpahari and two other sites in the Jangalmahal area). The fermented rice has been designated as the test sample (Ts). Similarly, a control sample (Cs) was prepared by autoclaving after the addition of the herbal starter.
Analysis of microbial community
The quantities of the prevalent groups of microbes in the food samples (direct samples) were enumerated based on colony-forming units (cfu). The dominant microbes in the fermented food were enumerated using group-specific selective media (HiMedia, India) [14,15]. Briefly, 1.0 g of the sample was mixed with 10 ml of phosphate-buffered saline (pH 7.2) and used as the stock for the microbial count. Total aerobic bacteria were quantified using Plate Count Agar (PCA) medium and incubated at 37 °C. The lactic acid bacteria (LAB), Bifidobacterium sp., and total anaerobic bacteria were cultured on Rogosa SL agar (supplemented with 0.132% acetic acid), Bifidobacterium agar supplemented with Bifidobacterium Selective Supplement, and reduced Wilkins-Chalgren agar medium, respectively, and the plates were incubated in a CO2 incubator (5% CO2) at 37 °C. Yeast and mold were enumerated using yeast and mold agar and Potato Dextrose Agar (PDA) media, respectively, and incubated at 30 °C. The mycelial and round convex colonies were recorded for the mold and yeast counts, respectively.
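A minimal sketch of the cfu bookkeeping implied by the plating scheme above (1 g of sample homogenized in 10 ml of buffer); the serial-dilution factor, plated volume, and colony count used in the example are illustrative assumptions, not values reported in the paper.

    import math

    def log10_cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                           sample_mass_g=1.0, diluent_volume_ml=10.0):
        # Colonies counted on one plate -> log10 cfu per gram of food sample.
        cfu_per_ml_homogenate = colonies * dilution_factor / plated_volume_ml
        cfu_per_g = cfu_per_ml_homogenate * diluent_volume_ml / sample_mass_g
        return math.log10(cfu_per_g)

    # Example: 66 colonies on a plate spread with 0.1 ml of a 10^2-fold
    # dilution of the homogenate gives about 5.82 log10 cfu/g, the order of
    # magnitude of the Bifidobacterium count reported in the Results.
    print(round(log10_cfu_per_gram(66, 1e2, 0.1), 2))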
pH and titratable acidity (TA)
The pH of the fermented product was determined by homogenizing it with sterile distilled water in a ratio of 1:10, followed by shaking for 5 min. The pH of the fermented substrate was then measured with a glass-probe digital pH meter (ELCO, India). Titratable acidity was determined by the standard titration procedure according to the method of AOAC [16]. Ten grams of sample was dissolved in 90 ml of carbon-dioxide-free distilled water and then titrated with 0.1 N NaOH using phenolphthalein (0.1% w/v in 95% ethanol) as an indicator. The titratable acidity was calculated as a percentage (%, w/v) of lactic acid content, according to the following formula:
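On the standard AOAC convention for expressing titratable acidity as lactic acid (stated here as an assumption, since the formula itself is not given in the text), the calculation is

$$\mathrm{TA}\ (\%\ \text{lactic acid, w/v}) \;=\; \frac{V_{\mathrm{NaOH}}\ (\mathrm{ml}) \times N_{\mathrm{NaOH}} \times 0.090\ (\mathrm{g/meq}) \times 100}{\text{sample weight (g)}},$$

where 0.090 g/meq is the milliequivalent weight of lactic acid. With 0.1 N NaOH and a 10 g sample, each millilitre of titre then corresponds to 0.09% acidity.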
Proximate analysis
Analysis of moisture, protein, fat, carbohydrate, and ash content of the fermented samples was carried out using the methods of AOAC [16].
Moisture content was analyzed by the weight difference method (Method No. 925.10). Ash content was analyzed by the combustion method (Method No. 930.05). Crude protein (N × 6.25) content was estimated following the Kjeldahl method (Method No. 978.04). Crude fat was determined in accordance with the Soxhlet extraction method (Method No. 930.09). Fibre content was analyzed by Method No. 962.09. Total carbohydrate content was calculated from the above parameters, and the total caloric value was evaluated using the "Atwater factor."
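A minimal sketch of the "carbohydrate by difference" and Atwater arithmetic implied above, assuming the common convention in which crude fiber is subtracted along with the other measured fractions; the 4/9/4 kcal/g factors are the standard Atwater values, and the moisture, ash, and fiber figures below are illustrative placeholders, not measurements from the paper.

    def carbohydrate_by_difference(moisture, protein, fat, ash, fiber):
        # All inputs and the result are g per 100 g of sample.
        return 100.0 - (moisture + protein + fat + ash + fiber)

    def atwater_energy(protein, fat, carbohydrate):
        # Gross energy in kcal per 100 g using the standard Atwater factors.
        return 4.0 * protein + 9.0 * fat + 4.0 * carbohydrate

    protein, fat = 10.25, 1.3               # values reported in the Results
    moisture, ash, fiber = 24.0, 1.5, 0.6   # assumed placeholder values
    carb = carbohydrate_by_difference(moisture, protein, fat, ash, fiber)
    print(round(carb, 2), round(atwater_energy(protein, fat, carb), 1))
    # -> 62.35 302.1  (g/100 g carbohydrate and kcal/100 g, respectively)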
Analysis of oligosaccharides by TLC and HPLC
The method described by Dreisewerd et al. [17] was employed with slight modification to detect oligosaccharides using thin-layer chromatography (TLC). A commercially available silica gel TLC plate (Merck, India) with a 0.2 mm layer thickness was used. The sample/standard (5 µl) was spotted onto the plate, and a solvent system (mobile phase) consisting of n-propanol/acetic acid/water in the ratio of 3:2:2 was run on it. After that, 1% orcinol (in 50% H2SO4, v/v) was sprayed, and the plate was kept at 110 °C for 15 min for color development. The quantities of malto-oligosaccharides in the food samples were also determined through HPLC as described by Ghosh et al. [10]. Food samples were diluted with 50 mM Tris-HCl (pH 8.8) at a ratio of 1:3 (w/v) and kept at 4 °C for 1 h, followed by centrifugation at 10,000 rpm for 20 min. The collected supernatant was filtered through a 0.22 µm pore-size filter and then used for HPLC analysis (Agilent Technology, 1200 Infinity series, USA). For this purpose, a carbohydrate-NH2 column was used. The mobile phase was acetonitrile and water at a ratio of 75:25 with a constant flow rate of 1 ml/min. The temperature of the column was maintained at 30 °C. Different concentrations of commercial malto-oligosaccharides (Sigma) and glucose were used as standards in both chromatographies.
Analysis of fatty acids
Control and test samples (5 g) were mixed intimately with 30 ml of hexane for 5 min and sonicated for 5 min at room temperature. The samples were then vortexed for a few seconds and centrifuged at 7000 rpm for 5 min. After that, the upper hexane layer was separated and kept in another tube, and the remaining part was again mixed with hexane and the same step was repeated. All collected supernatants were then evaporated under a stream of nitrogen. The dry product was dissolved in 1 ml of hexane and again dried under a stream of nitrogen. Using triethylamine and 2-bromo-2′-acetophenone solutions, free fatty acids were derivatized to fatty acid phenacyl esters [18]. The reaction was carried out by incubating the reaction mixture for 15-20 min at 85 °C, followed by the addition of acetone. The extracted fatty acids were separated using an RP-HPLC column (5 μm, 250 × 4.6 mm) thermostated at 35 °C [18]. The mobile phase was a combination of methanol and water at a ratio of 75:25. The flow rate was set to 1 ml/min, and the detection wavelength was set to 250 nm. Fatty acid standards (Sigma) were also run.
Analysis of amino acids by HPLC
The amino acid composition of the rice fermented product was analyzed following the method of Das et al. [19] with some modifications. At first, food samples were kept in a hydrolysis tube, mixed with 6 M HCl containing 0.1% phenol, and held at 110 °C for 24 h. After that, the residual acid was dried off in a vacuum oven. The samples were then resuspended in 100 mM HCl and filtered (0.22 µm syringe filter). Next, 100 μl of sample, 900 μl of borate buffer (1 M, pH 6.2), and 1 ml of fluorenylmethyloxycarbonyl chloride were mixed for derivatization. The mixture was kept for 2 min, and then 4 ml of n-pentane was added with vortexing for 4-5 min. The upper layer was separated and discarded. The lower layer was collected and filtered (0.22 µm syringe filter).
Amino acids were analyzed using an HPLC system (Agilent Technology, 1200 Infinity series) equipped with a Zorbax SB-C18 column (5 µm bead size; 4.0 × 250 mm). The injection volume was 20 μl, and the UV detector was set at 265 nm. The column oven temperature was maintained at 30 °C. The mobile phases used were (A) acetate buffer-acetonitrile (9:1) and (B) acetate buffer-acetonitrile (1:9), with a flow rate of 1.0 ml/min. Different commercially available amino acid standards (SRL, India) were prepared (1 mg/ml) using methanol as a solvent and analyzed in parallel to estimate the unknown concentrations [19].
Organic acid content
Water/salt-soluble extracts of the fermented samples were prepared by slightly modifying the method described by Hor et al. [20]. Briefly, 10 g of product was diluted with 30 ml of 50 mM Tris-HCl (pH 8.8), kept at 4 °C for 1 h, and centrifuged at 16,000g for 20 min. The supernatant containing the water/salt-soluble fraction was filtered through a 0.22 µm pore-size filter. The extracts were then analyzed by a High-Performance Liquid Chromatography (HPLC) system (Agilent Technology, 1200 Infinity series) equipped with a Zorbax SB-C18 column. The elution was carried out using 10 mM H2SO4 as the mobile phase with a flow rate of 0.5 ml/min at 60 °C.
Estimation of hydrosoluble vitamins
Hydrosoluble vitamins present in the fermented product were extracted following the method of Hor et al. [20]. The extracted water-soluble vitamins were then analyzed by a reverse-phase High-Performance Liquid Chromatography (RP-HPLC) system (Agilent Technology, 1200 Infinity series) equipped with a Zorbax SB-C18 column [20], and the mobile phase was 0.05 M KH2PO4 (pH 2.5) and acetonitrile [20]. The temperature was kept at 15 °C, with a constant flow rate of 1 ml/min. The effluent from the column was monitored by a UV detector at 250 nm.
Analysis of volatile compounds
Volatile compounds in the fermented food were extracted with dichloromethane and analyzed by gas chromatography (Agilent Technology) equipped with a manual injector and a flame ionization detector (FID) [20]. A capillary column HP 5 (30 m × 0.25 mm i.d., 0.25 µm film thickness) was used. The temperatures of the injector and detector were both set to 250 °C. The oven temperature was held at 50 °C for 5 min, then raised from 50 to 220 °C at 3 °C/min, and finally held at 250 °C for 10 min. Nitrogen was used as the carrier gas, and the split vent was set to 13 ml/min. Quantification of volatiles was made by comparing retention time indices with those of pure standard compounds using ChemStation software [20].
Free mineral content
The contents of free minerals in the water extracts of the fermented rice product were measured with an atomic absorption spectrophotometer (AAS) [Shimadzu Analytical (India) Pvt. Ltd]. Briefly, 5 g of fermented rice sample was dissolved in 25 ml of deionized distilled water and homogenized, followed by centrifugation at 12,000 rpm for 10 min. The clear water extract of the product (supernatant) was subjected to metal analysis following the standard protocol [21] with some modifications.
Statistical analysis
Each experiment was carried out in triplicate, and data are represented as mean ± SE. Significant differences were analyzed by one-way ANOVA, and t-tests were performed for all possible pairs using SigmaPlot 11.0 (USA) statistical software.
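A minimal sketch of the described analysis (one-way ANOVA followed by t-tests over all pairs of groups on triplicate measurements), written here with SciPy rather than the SigmaPlot 11.0 software actually used; the group names and numbers are dummy data for illustration only.

    from itertools import combinations
    from scipy import stats

    # Dummy triplicate measurements for three hypothetical groups.
    groups = {
        "Cs": [0.21, 0.23, 0.22],
        "Ts_day2": [0.35, 0.33, 0.36],
        "Ts_day4": [0.55, 0.57, 0.56],
    }

    # One-way ANOVA across all groups.
    f_stat, p_anova = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

    # t-tests in all possible pairs, as described in the Methods.
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        t_stat, p = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: t = {t_stat:.2f}, p = {p:.4f}")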
Microbial loads in the fermented rice
For standardization of the batch fermentation, a few parameters specific to traditional food preparation, reflecting the age-old practice, were selected and optimized: the amount of starter, the incubation period, and the location from which the herbal residues were collected. The sensory attributes, particularly the flavor and aroma (mild acid-alcoholic), of rice fermented beverages are good indicators of mature fermented products [2]; therefore, this property was selected to optimize the fermentation conditions. It was noted that 0.5% (w/w) herbal starter and the 4th day of fermentation at room temperature were most suitable for rice fermentation. The starter plant from different locations had no significant impact on the quality and consistency of the final food product. The herbal starter-instigated rice fermented food was enriched with a group of indicator microbes including total anaerobes (3.79 ± 0.20 log10 cfu/g), total aerobes (5.64 ± 0.16 log10 cfu/g), yeast (4.66 ± 0.15 log10 cfu/g), mold (4.01 ± 0.11 log10 cfu/g), LAB (3.89 ± 0.10 log10 cfu/g), and Bifidobacterium sp. (5.82 ± 0.17 log10 cfu/g) after the 4th day of fermentation at room temperature with 0.5% (w/w) herbal starter (Fig. 1: enumeration of dominant microbes on the 4th day of fermented rice; selective media were employed for cultivating the microbes). No notable textural or sensorial changes, or microbial growth, were observed in the control (Cs) sample. After the 4th day of fermentation, the numbers of anaerobic organisms such as yeast, total anaerobes, LAB, and Bifidobacterium sp. were surprisingly high; these populations expand owing to the establishment of anaerobic conditions that may be created by the molds decomposing the rice [10]. Tamang et al. [22] proposed that fermented foods contain both functional and non-functional microbes; the functional microbes participate in the breakdown of the substrate, facilitate the bioavailability of nutrients, produce a mine of bioactive substances that alter the sensory qualities, extend the shelf-life, and promote probiotic functions. Thus, fermented food not only provides nutrients to meet hunger but is also curative of many diseases as a natural medicine. The abundance of yeast, LAB, and Bifidobacterium sp., generally regarded as health-supporting microbes, makes the aforesaid product a probiotic food. The succession of these organisms from the plant phyllosphere [23] to the food was first established by this study. These microbes are wild, biologically active, and not altered by atmospheric agents [24]. Consumption of this rice fermented food, which contains large numbers of "live and active" microbes, may support and improve health through modulation of the gut microbiota.
pH and titratable acidity and organic acid content
Both titratable acidity (TA) and pH are measures of the acidity of a food. The pH and TA of the newly formulated rice fermented food were estimated to be 4.9 and 0.56%, respectively. The lowering of pH during fermentation enables the creation of anaerobiosis, which facilitates the growth of anaerobic microbes [10,12] and the induction of a group of enzymes that can disintegrate the food matrix and even dietary fibers [25]. In this connection, the contents of lactic acid and acetic acid were also measured (Table 1), indicating that the participating microbes supported heterolactic fermentation within the rice substrate. Much evidence supports that the lactic acid and other metabolites produced in food can inhibit the growth of intestinal pathogens and have immune-stimulatory, cholesterol-lowering, anti-ulcer, antitumor, and anti-allergic activities [10,11,25]. Besides, lactic acid has wide applications in the food, pharmaceutical, textile, and leather industries and as a chemical feedstock [26].
Proximate composition of rice fermented food
The proximate composition, including moisture, fat, protein, crude fiber, carbohydrate, and energy content, is essential information for dietitians and other health professionals to promote the food. The proximate composition of the rice fermented food was evaluated (Table 1). The contents of protein (10.25 g%) and fat (1.3 g%) were notably higher than in unfermented milled rice (protein 6.3-7.1 g%, fat 0.3-0.7 g%) [27]. The increase in protein may partly be attributed to the simultaneous enhancement of microbial biomass and loss of dry matter of the substrate during fermentation [28], while hydrolysis of starch-lipid complexes (lipids associated with starch granules) and synthesis of the phospholipids of the fungal cell membrane are possibly related to the enhancement of fat content in the fermented rice [29]. In contrast, carbohydrate content was lower (62.32 g%) in the fermented food than in the unfermented (77-89 g%) [27], and this is due to enzymatic decomposition. Evidence also supports that acidification (lactic acid) during fermentation leads to the transformation of rapidly digestible starch (RDS) into slowly digestible starch (SDS), thereby improving the glycemic index [20]. The crude fiber content was unaffected, but the total energy content was lower in fermented rice than in milled rice. The lowering of the energy content is closely related to the carbohydrate content. Thus, the microbial interplay makes the fermented rice more nutritious and healthy than the unfermented.
Analysis of vitamins, minerals, and volatile compounds
Vitamins such as vitamin B12, folic acid, riboflavin, thiamine, and vitamin C were present in considerable amounts in the rice fermented food (Table 1). The contents of the vitamins mentioned above in the fermented rice were substantially higher than in boiled rice [30] and more adequate with respect to the Recommended Daily Allowance (RDA) level for Indian people [31]. Among them, the contents of folic acid and vitamin C were notably improved in the fermented rice. Folic acid (vitamin B9) is involved in the DNA synthesis of growing cells and in the repair process. Pregnant women in developing countries are frequently deficient in folic acid; its RDA is 75-150 μg, and this deficiency can be easily compensated by consumption of the fermented rice (which contains 1.29 mg/g). Different strains of LAB and yeasts are natural producers of water-soluble vitamins, and most of the reported rice fermented foods are enriched with this type of vitamin [20,21]. The contents of free calcium, magnesium, iron, zinc, and manganese in the rice fermented food were analyzed and are presented in Table 1. Considering the data, this newly formulated herbal starter-based rice fermented food can be designated a calcium-magnesium-rich food. Fermentation of rice leads to dephytinization (by the microbial enzyme phytase) and improved bioaccessibility of minerals [29,32,33].
The aroma of a food principally depends upon its content of volatile organic compounds (VOCs) [20]. Different alcohol-based volatile compounds were analyzed in both foods, and it was found that only methanol, propan-2-ol, butan-1-ol, and fatty alcohols were present, in very small quantities, in the fermented food. Banik et al. [33] described that a yeast, Saccharomyces cerevisiae, isolated from a fermented rice beverage was able to synthesize 38 volatile compounds including 15 acids, 6 alcohols, 7 ketones, 1 aldehyde, 1 sterol, 3 esters, 1 alkane, and 4 others.
Carbohydrate fractions in the fermented food
After 4 days of fermentation, accumulation of only malto-oligosaccharides (G3/maltotriose) was detected in the fermented food (Ts), but no such oligosaccharide was found in the unfermented sample (Fig. 2). The fraction was also confirmed by HPLC analysis, and it amounted to 1.7 mg/g of the ferment. In the starchy environment, the participating microbes probably liberated a group of amylolytic or carbohydrate-active enzymes (CAZymes), which sequentially hydrolyzed the rice starch and produced simple sugars; the mono- and disaccharides were assimilated by the microbes, leaving behind the comparatively larger ones. Ghosh et al. [10] also noted maltooligosaccharides (G3 and G4) in the end products of rice fermented beverages. These maltooligosaccharides have multifaceted health benefits, particularly promoting the growth of the indigenous gut flora whose metabolic end products are short-chain fatty acids (SCFAs), inducing mucin production, stimulating gut-associated lymphoid tissues (GALTs), etc. [10,34].
Fatty acid composition
A comparative account of the fatty acid profiles of the fermented (Ts) and unfermented (Cs) samples is given in Table 2. (Fig. 2: analysis of maltooligomers (G1: glucose, G2: maltose, G3: maltotriose, G4: maltotetraose) using the TLC method, where sample wells were marked as standard, 'Cs' and test 'Ts' (A); HPLC chromatogram of standard maltooligomers (B); the concentration (1.7 mg/g extract) of maltooligosaccharide in the 'Ts' sample was calculated from the area of the HPLC chromatogram using the corresponding standards.) A group of fatty acids appeared anew, and the concentrations of many healthy fatty acids were also improved due to fermentation. Fatty acids such as octanoate, dodecanoate, pentadecanoate, and eicosenoate were absent in the Cs sample but appeared in the fermented product, where their concentrations increased many-fold. Interestingly, the concentrations of the health-beneficial unsaturated fatty acids linolenate (ω3), linoleate (ω6), palmitoleate (ω7), and eicosenoate (ω9) improved 3.2-, 2.1-, 2.3-, and 25-fold, respectively, during fermentation of the rice. dos Santos Oliveira et al. [29] mentioned that fermentation of whole rice bran leads to an enhancement of phospholipids with a reduction in saturated fatty acids (20%) and an enhancement of unsaturated fatty acids (5%), enriched in ω3 (1.3-fold), ω6 (1.18-fold) and PUFA (1.07-fold). A perusal of the literature also revealed that molds and yeasts can synthesize fatty acids, although this is encumbered by slow growth and strictly anaerobic culture conditions [12,35]. It has also been suggested that fatty acids of various chain lengths are important nutraceuticals that improve the glycemic and lipid profiles and promote the function of the brain and nervous system [36]. Besides, unsaturated fatty acids could intensify the production and secretion of intestinal alkaline phosphatase, which might suppress lipopolysaccharide (LPS; a component of the Gram-negative bacterial cell wall)-producing microbes such as Proteobacteria, thereby preventing metabolic endotoxemia and systemic inflammation, which are relevant to metabolic diseases such as obesity and diabetes [37]. Recently, Aryan et al. [38] mentioned that arachidonic acid, eicosapentaenoic acid, and docosahexaenoic acid are enigmatic chemicals that could prevent the multiplication of coronavirus. Our results demonstrate that the enrichment of different nutraceuticals in the rice was due to the interaction of active microbes and not to the phytochemicals from the plant residue (Cs).
Amino acid enrichment
The fermented rice was enriched with a pool of essential (leucine, histidine, lysine, methionine, phenylalanine, and valine) and non-essential (arginine, serine, aspartic acid, glutamic acid, glycine, alanine, tyrosine, and proline) amino acids. Their quantities improved surprisingly during fermentation of the rice with the herbal starter (Table 3). A significant portion of the rural population of the least developed and developing countries is at risk of quality protein inadequacy. Lysine is now being considered an indicator of protein quality. From this point of view, the studied fermented rice can be categorized as a quality food product with a balanced composition of essential and dispensable amino acids. Holzapfel [39] mentioned that LAB fermentation in cereal-based foods improves, concentrates, and enhances nutrients like minerals, vitamins, and essential amino acids. Similarly, yeast-enriched food also showed a positive nitrogen balance with a large number of free amino acids [28]. Our results are consistent with other traditional rice fermented beverages like apong, jou, judima, etc. [32].
Conclusion
The extended lifespan of the people of the Okinawa islands of Japan (islands known as the "land of the immortals") is primarily due to their traditional healthy food culture. Traditional fermented foods occupy a leading part of their dishes. Native people have traditional knowledge about the preparation process and the nutrient and medicinal values of traditional fermented foods. In food processing, plant parts (as substrate or adjunct) serve as the source of wild, bioactive, and healthy microbes, which facilitate the fermentation by liberating a group of enzymes and enrich the substrate with a variety of health-promoting micronutrients, phytochemicals, and other functional components. This experimental study showed that rice fermentation with the selected herbal starter (rhizome of Asparagus racemosus) enriched the rice with a mine of nutraceuticals such as vitamins, minerals, oligosaccharides, essential fatty acids, and a pool of amino acids. Thus, fermentation is a useful processing strategy to improve the nutritional quality of this low-cost and abundant substrate. The technological advancement of this experiment is that the fermentation was controlled, as a precise quantity of starter (a selected amount of herbal residues) was inoculated, and the processing was optimized towards a large-scale basis. However, experimental trials and validation of the health impacts are essential to develop this nutrient-rich rice fermented food into a functional one. | 2021-08-06T05:30:04.550Z | 2021-08-04T00:00:00.000 | {
"year": 2021,
"sha1": "52ca7bec2856e3434cc4a14c055f6e3c531e895c",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s43393-021-00046-8.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "52ca7bec2856e3434cc4a14c055f6e3c531e895c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
157840232 | pes2o/s2orc | v3-fos-license | Study on Village Cadres' Working Enthusiasm on Voluntary Provision of Village-level Public Goods: Take One Case, One Meeting System as an Example
Because of the collaborative dilemma in the One Case, One Meeting system, the villagers cannot agree on village-level public goods supply. After the financial rewards and subsidy innovation in 2008, many villages were observed to break through the collaborative dilemma. Village cadres have played a key role in the provision of village-level public goods in the One Case, One Meeting system. Which factors transform village cadres' behavior from "idleness" to "entrepreneurship"? Based on One Case, One Meeting cases in Fujian Province, and by means of quantitative analysis, the paper finds that government performance assessment, elections, and villagers' meetings have a significant impact on village cadres' work enthusiasm.
Background
China's rural public goods supply mechanism has undergone tremendous changes since 2000: the supply of basic public goods and services in rural areas became the responsibility of government finance, whereas the supply of village-level public goods such as small village irrigation, village roads and bridges, sanitation facilities, and public facilities used inclusively by villagers came under the One Case, One Meeting system, which means that the villagers discuss, decide on, and finance the village-level public goods, and the village cadres construct them.
Issues raised
Statistics from the Ministry of Finance showed that the proportion of villages that had built public projects through the "One Case, One Meeting" mechanism up to 2008 was less than 10% [1]. The government innovated the mechanism by providing financial awards to match investment from villagers, amounting to 50% of villagers' investment [2]. Good results were achieved by this innovation: data from the Ministry of Finance showed that by 2012 the proportion had increased from 14% in 2008 to 37.3% [1]. According to data from the Department of Finance of Fujian Province, the number of villages that received financial awards was 7,443, accounting for 51.56%, in the year 2011 alone.
More and more villages have built public goods since financial rewards and subsidies began to be provided for the construction. The motivation of villagers can be attributed to the financial rewards: most scholars attributed the reduction of villagers' cooperation costs and the improvement of villagers' participatory enthusiasm to the successful implementation of the financial awards, while neglecting or not giving due attention to the roles played by village cadres. In fact, village cadres devote huge effort to integrating the villagers' will, collecting funds, and constructing and maintaining the projects [3,4]. Some scholars have noticed village cadres' key role in the development of "One Case, One Meeting" projects, but what factors motivate them to put effort into the supply of rural public goods? The existing research offers no further discussion [5,6]. This essay takes village construction of public goods in Fujian Province as an example in order to identify the factors contributing to village cadres' working enthusiasm and what can be done to improve it. The main village cadres (including the village chief and the village secretary) are elected by villagers but employed and assessed by the town government, so their behavior is subject to dual constraints from the demands of both villagers and government. Oi and others noted that, as "dual representatives of the interests of the government and villagers," village cadres strike a balance between the will of the government, the requirements of villagers, and their own interests [7]. This paper holds that the financial awards encouraged villagers to participate actively in the "One Case, One Meeting" program and also required village cadres to contribute actively to public welfare projects; the government motivated village cadres by adjusting their performance evaluation indicators, putting more weight on village public construction. That is to say, the dual pressures from both villagers and government have a substantial influence on village cadres' behavior. L. J. Li and others noted that their behavior is affected by wages and by the way the position is obtained [8]; L. X. Zhang noted that the gender of village cadres and the village scale also affect their behavior [9]. So this paper argues that the town government influences village cadres' behavior through performance appraisal, while villagers influence village cadres' behavior through villagers' democratic institutions.
3 Variables and the model
Variables description
The dependent variable is the estimated ratio of the time village cadres spend on "One Case, One Meeting" issues to their total working time. Based on data for this ratio in 2012, the author assumed that the higher the ratio, the better. The main independent variables are as follows: (1) Influences from government. Firstly, the proportion of the score allocated to public welfare projects in the performance appraisal system: village cadres will invest more time in village-level public goods when the town government lays more emphasis on this issue. We assumed that more emphasis on the construction of village-level public goods and higher wages prompt village cadres to put more time into public issues.
(2) Influences from villagers. Firstly, because village cadres are elected by villagers, they respond positively to villagers' requirements. Secondly, the number of villagers' meetings. We assumed that more meetings prompt village cadres to put more time into public issues.
Control variables are as follows: (1) Village characteristic variables: the number of villager groups and the distance from the village to the town.
(2) The village cadre's personal characteristics. Village cadres' age, education level, and gender will also affect the ratio. We assign "1" for male and "0" for female. Female cadres may want to spend more time on family compared with males, so gender may have an influence on time spent on public issues.
(3) How much village migrant workers are paid also has an impact on the ratio. Being a village cadre means giving up the wage that could be obtained as a migrant worker. So we assumed that the more village migrant workers earn, the lower the ratio.
The model
The model tests the compound impact of government and villagers on the dependent variable. Variable Tratio refers to the percentage of time village cadres spend on rural public goods, a continuous variable. Variable score refers to the proportion of the score allocated to public goods projects in the performance appraisal system, also a continuous variable. Both variables are measured as percentages. Variable wage denotes the village cadre's position wage. Variable meeting refers to the number of villager representative meetings in a year. Variable title is a zero-one binary variable, with 0 coded for village secretary and 1 coded for village director. The controls denote the control variables as follows: villager groups, village cadres' age (measured in years), village cadres' education (measured in years), gender (coded as a zero-one binary variable, with 1 coded for males), migrant wage (village migrants' annual income, measured in ten thousand yuan), and distance (distance from the village to the town).
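From the variable list, the implied OLS specification is presumably of the form

$$\mathrm{Tratio}_i \;=\; \beta_0 + \beta_1\,\mathrm{score}_i + \beta_2\,\mathrm{wage}_i + \beta_3\,\mathrm{meeting}_i + \beta_4\,\mathrm{title}_i + \boldsymbol{\gamma}'\,\mathbf{controls}_i + \varepsilon_i,$$

where $\mathbf{controls}_i$ stacks the village and personal characteristics listed above and $\varepsilon_i$ is the error term; the coefficient ordering and notation are assumptions introduced here for readability, not taken from the original.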
Empirical results and analysis
Based on the statistical features of the main variables, the data were processed with Stata 12 statistical software using the OLS estimation method in a single regression. The regression results in Table 1 show that the model fits reasonably well, with a goodness of fit of 0.3272, and the model is significant at the 1% level. Specific analysis of the results is as follows. From the results listed in Table 1, we can conclude that: (1) Influence from government. Firstly, the regression shows that the performance appraisal influences village cadres' working time spent on public welfare: if the government puts more importance on village-level public goods in the performance appraisal system, it can significantly raise village cadres' working time ratio in the "One Case, One Meeting", significant at the 1% statistical level. Secondly, the influence of salary on village cadres' working time ratio is positive but not significant, which means the role of salary in motivating village cadres is not as important as imagined. Many village cadres surveyed claimed that they pay more attention to the right to allocate collective resources, the connections brought to their business, and the corresponding fame, status, and social security.
(2) Influence from villagers. Firstly, the number of villagers' meetings has a positive impact on the time spent on "One Case, One Meeting" by village cadres and is significant at the 1% statistical level. As a formal way to express the will of the villagers, the meeting plays an important role in coordinating the interests of different villagers and strengthening villagers' cooperation, thereby exerting pressure on village cadres to work enthusiastically. Secondly, the election procedure contributes to enthusiasm for work, and it affects the village director's enthusiasm more positively than the village secretary's, significant at the 5% statistical level.
(3) Influence from control factors. Firstly, village migrant workers' wages have a negative influence on village cadres' working time, but significant only at the 10% statistical level. Perhaps because village cadres are deprived of other working opportunities, the more village migrant workers earn, the higher the opportunity cost to village cadres, and the less time they put into "One Case, One Meeting". Secondly, the number of villager groups has a positive effect on village cadres' working time, significant at the 10% statistical level.
Conclusions and policy recommendations
Empirical results show that village cadres are "double agents of government and villagers", whose time spent on One Case, One Meeting issues is significantly affected by the requirements of both government and villagers. The impact of wages is not significant; perhaps the non-wage returns brought by the position are valued more than a specific salary.
Based on these conclusions, several recommendations are raised to motivate village cadres to spend more time on the One Case, One Meeting system.
Firstly, more importance should be laid on rural public goods supply in village cadres' performance appraisal system: improve the village cadres' performance evaluation mechanism to make "village public welfare projects" account for a prominent share; determine village cadres' pay with reference to migrant workers' wages, making sure it is equal to or moderately above migrant workers' wages, and correlate the basic wage with village scale; and, while combining wage and non-wage motivations, note that hidden motivation, intrinsic motivation, and prestige incentives also play a due role in motivating village cadres.
Secondly, fully employ the role of the villagers' meeting and make sure meetings are held regularly to supervise village cadres' behavior; in addition, perfect the direct election mechanism so that the election of village cadres reflects villagers' demands.
Last, strengthen cooperation between government and villagers; the common requirements of the two parties can prompt the "double agent" to devote more energy to One Case, One Meeting issues.
2 Theoretical analysis of influential factors of village cadres' behavior
From January 2013 to March 2013, the author conducted a questionnaire survey of village cadres from 203 villages in a prefecture-level city in Fujian. Survey data show that only 9.6% of the villages had carried out village-level public construction before 2008, whereas from 2009 to March 2013 the ratio was 70.86%, with 256 public projects constructed in total. Village cadres played a key role in the process, with 58.58% and 89.45% of the projects being initiated and implemented by the village committee, respectively. | 2018-12-18T06:46:28.505Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "9742dbde6d94450f35eac093f9ab60313b0e549a",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/14/matecconf_gcmm2017_05054.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9742dbde6d94450f35eac093f9ab60313b0e549a",
"s2fieldsofstudy": [
"Sociology",
"Economics"
],
"extfieldsofstudy": [
"Political Science"
]
} |
246482132 | pes2o/s2orc | v3-fos-license | Effect of a Diet Supplemented with Sphingomyelin and Probiotics on Colon Cancer Development in Mice
Previous studies have reported that dietary sphingomyelin could inhibit early stages of colon cancer. Lactic acid–producing bacteria have also been associated with an amelioration of cancer symptoms. However, little is known about the potential beneficial effects of the combined administration of both sphingomyelin and lactic acid–producing bacteria. This article analyzes the effect of a diet supplemented with a combination of the probiotics Lacticaseibacillus casei and Bifidobacterium bifidum (10^8 CFU/ml) and sphingomyelin (0.05%) on mice with 1,2-dimethylhydrazine (DMH)-induced colon cancer. Thirty-six BALB/c mice were divided into 3 groups: one healthy group (group C) and two groups with DMH-induced cancer, one fed a standard diet (group D) and the other fed a diet supplemented with sphingomyelin and probiotics (DS). The number of aberrant crypt foci, a marker of colon cancer development, was lower in the DS group. The dietary supplementation with the synbiotic reversed the cancer-induced impairment of galactose uptake in enterocyte brush–border–membrane vesicles. These results confirm the beneficial effects of the synbiotic on the intestinal physiology of colon cancer mice and contribute to the understanding of the possible mechanisms involved.
Introduction
Colorectal cancer (CRC) is the third most commonly diagnosed cancer, representing an important health issue worldwide. Despite strong hereditary components, about 80% of cases of colorectal cancer are sporadic and develop slowly over more than 10 years. Epidemiologic studies have shown that environmental factors, especially diet, are important risk factors for CRC [1,2]. Mounting evidence supports the view that the colonic microbiota is involved in the etiology of colon cancer. Different factors, such as probiotics and different dietary bioactive compounds, can modulate the gut microbiota and its metabolism [3]. In this context, several investigations have focused on the beneficial effects of probiotics and bioactive compounds and their possible role in the prevention of colon cancer [4,5]. Most probiotics are members of the genus Bifidobacterium and several genera of the Lactobacillus group, but Saccharomyces and Enterococcus have also been studied. Experimental animal and human studies have shown that probiotics may reduce intestinal permeability and participate in the regulation of several intestinal functions [6][7][8]. Concretely, animal studies have demonstrated that certain species of lactic acid-producing bacteria, such as Lacticaseibacillus casei [9] and Bifidobacterium bifidum [10], could prevent colon cancer and other diseases linked to the gastrointestinal tract.
On the other hand, sphingolipids are structural and functional bioactive lipids found in eggs, milk, meat, fish, and soybeans that can act as chemo-protective agents regulating cell growth, differentiation, and death [11,12]. Sphingomyelin (SM) is the most abundant sphingolipid in plasma lipoproteins and contains predominantly a phosphocholine as head group [13]. The hydrolysis of SM by alkaline sphingomyelinase (alk-SMase) generates other bioactive molecules, such as ceramide and sphingosine, that play key roles in the maintenance of intestinal mucosal integrity and the inhibition of colon tumorigenesis [12,14]. Ceramide has been proposed as an important intracellular messenger in different signalling pathways implicated in the regulation of cellular proliferation, differentiation, and apoptosis [14][15][16]. Early studies have indicated that a therapy based on probiotics mixture containing several strains of acid lactic bacteria increased alk-SMase levels in mice with inflammatory gut disease [17] and cancer [18]. Furthermore, the combination of probiotic and other dietary compounds may exert additive effects in the improvement of colon carcinogenesis as compared to its separate administration [2,19]. However, the combination of SM and probiotics has not yet been analyzed although it could be hypothesized that dietary supplementation with probiotics together with SM could help to maintain the membrane integrity of the enterocytes, as well as alk-SMase activity in colon cancer models. Therefore, in this study, we have analyzed, for the first time, the beneficial effects of the combined administration of both lactic acid-producing bacteria (L. casei and B. bifidum) and SM in a mouse model of 1,2-dimethylhydrazine (DMH)induced colon cancer.
Bacterial Strains and Growth Condition
The L. casei CECT 475T strain was isolated from kefir manufactured in the Lactology Laboratory of the Universidad Pública de Navarra, whereas the B. bifidum strain was obtained from the Spanish Collection of Cultures (CECT) with the number CECT 870. Both strains were used in this study as probiotics. Both bacteria were grown in autoclaved skim milk (Difco™; BD, Detroit, MI, USA), at 37 °C for 24 h under aerobic conditions for L. casei and at 39 °C for 72 h under microaerophilic conditions for B. bifidum in the presence of 5% CO2. The cell pellets were resuspended in 10% Difco™ skim milk at a final concentration of 10^8 CFU/ml for each strain.
Chemicals
All chemicals were purchased from Sigma Chemicals (St. Louis, MO, USA) unless otherwise noted. All reagents were of analytical grade.
Animals, Diets, and Experimental Design
Thirty-six male BALB/c mice, 28 days old and weighing about 20 g, were obtained from the colony of Charles River Laboratory Animals (Barcelona, Spain). The mice were housed in cages (four animals per cage) and kept in a well-ventilated, thermostatically controlled room (22 ± 2 °C and 55 ± 5% relative humidity) with a 12 h light/dark photoperiod.
The animals, after an acclimatization period of 4 days, were weighed and randomly assigned to 3 homogeneous groups (n = 12, each) and fed a standard diet (AIN-93G, Research Diets Inc., New Brunswick, NJ, USA). Control group (C) animals were injected subcutaneously with ethylenediaminetetraacetic acid (EDTA) 1 mM, whereas the mice of the DMH group (D) and DMH + supplemented diet group (DS) were injected subcutaneously with 1,2-dimethylhydrazine (DMH) dissolved in EDTA 1 mM (30 mg DMH/ kg body weight, twice per week for 3 weeks) to induce the pre-neoplastic lesions (ACF).
Fresh diets were prepared weekly with purified ingredients. All diets were isonitrogenous and isoenergetic and were stored at 4 °C until served. One week after the last DMH injection, the animals in the DS group were fed the standard diet supplemented with 0.05% sphingomyelin, and the skim milk was supplemented with probiotic bacteria at 10^8 CFU/ml. Food and drink were available ad libitum. The body weight was recorded weekly.
At the end of the 66th day of the experimental period, the animals were anesthetized with CO 2 and sacrificed by decapitation. Trunk blood was collected for the measurement of serum biochemical parameters, and different organs (spleen, liver, colon, jejunum, and cecum) were extracted and weighed. The jejunum was carefully removed, flushed out with ice-cold saline, frozen in liquid nitrogen and stored at −80 °C until processed for the isolation of the brush-border-membrane vesicles (BBMV) for the measurement of the intestinal absorption of D-galactose.
The Animal Research Ethics Committee of the "Universidad Pública de Navarra" reviewed and approved (reference PI:07/06) the animal care protocol and the killing method to ensure compliances with the guidelines of the Canadian Council on Animal Care [20].
Histological Analyses
Colon sections of 6 animals per group were flushed with icecold saline, opened longitudinally, fixed flat in 10% formalin for 24 h, dehydrated, and stained in 0.1% methylene blue for 15 min. The colon pieces were placed mucosa-side up on glass slides and evaluated for the presence of aberrant crypts foci by light microscopy (40 × or 100 ×). The aberrant foci were identified following the counting criteria described by Paulsen et al. [21]. Before being frozen, a portion of the distal colon (1 cm length) was taken for histological examination. Samples of distal colon tissue were immediately fixed in 4% formalin solution for 24 h. They were then dehydrated, embedded in paraffin, sliced into 5-µm sections and processed for hematoxylin-eosin staining.
Serum Biochemical Assays
All serologic parameters ( Table 2) were quantified in an automatic chemistry analyzer Cobas-Mira (Roche Diagnostic System, Basel, Switzerland) following the manufacturer's procedures.
Preparation of Mouse Intestinal BBMV
BBMV were obtained according to the method described by Shirazi-Beechey et al. [22]. Briefly, the BBMV were prepared from a portion of small intestine extracted from each animal of the three different experimental groups.
The mucosa was then resuspended in buffer containing 100 mM mannitol and 2 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) adjusted to pH 7.1 with 1 M Tris-HCl buffer. The suspension was homogenized with a Potter-Elvehjem (Braun, Melsungen, Germany) at 3000 rpm at 4 °C for 1 min. Next, MgCl2 was added to the homogenate to a final concentration of 10 mM, and the mixture was maintained on ice with continuous ice-cold shaking for 20 min. After that, the mixture was centrifuged at 2000 g for 15 min, and the supernatant was collected and centrifuged at 27,000 g for 30 min. This supernatant was discarded and the pellet resuspended in a buffer containing 100 mM mannitol, 0.1 mM MgSO4 and 2 mM HEPES adjusted to pH 7.4 with Tris. After a second precipitation with MgCl2, the mixture was finally centrifuged at 27,000 g for 30 min. The pellet was then resuspended in a buffer containing 300 mM mannitol, 0.1 mM MgSO4 and 10 mM HEPES adjusted to pH 7.4 with Tris. The BBMV of ten mice were pooled, assayed for protein content by using the Bradford diagnostic kit (Bio-Rad Laboratories, Barcelona, Spain), diluted to 10 mg BBMV protein/ml, aliquoted and frozen in liquid nitrogen. The final BBMV preparation was fivefold enriched for sucrase specific activity compared with the initial homogenate.
Sugar Uptake by BBMV
Sugar uptake by the BBMV was measured using a slightly modified version of the rapid filtration technique developed by Hopfer [23]. Three replications were performed for each experimental group. D-galactose (0.1 mM) uptake was determined in the presence of a Na + gradient at pH 7.4 and 37 °C. BBMV were incubated in a medium containing 0.1 mM D-galactose, 100 mM NaSCN, 100 mM mannitol, 0.1 mM MgSO 4 , 10 mM HEPES, and traces of D-[1-14 C] galactose (0.037 MBq/ml; Amersham Radiochemical Centre, UK). At the different incubation times, uptake was halted by adding ice-cold stop solution (150 mM KSCN, 0.25 mM phloridzin and 10 mM HEPES) at pH 7.4 for the galactose uptake determination.
The suspension was poured immediately onto a cellulose nitrate filter (0.45 μm, 25 mm diameter; Sartorius, Edgewood, NY, USA) and the filter was then washed twice in ice-cold stop solution and dissolved in HiSafe 3 scintillation liquid for the final measurement of radioactivity using a β-counter (1450 MicroBeta® TriLux; Wallac, Turku, Finland).
Statistical Analysis
Descriptive and inferential statistics were used according to procedures described by Anderson [24]. The normality of the sample distribution of each continuous variable was tested with the Kolmogorov-Smirnov test. To identify significant differences among the groups (body and organ weights, and biochemical parameters), statistical analysis was performed by one-way ANOVA followed by Dunnett's test (D group as reference). Kruskal-Wallis followed by Mann-Whitney U test (D group as reference) were used to compare Western blot data and the ACF number of the three experimental groups. Differences were considered statistically significant when 2-tailed p < 0.05. Statistical calculations were performed with the statistical software package SPSS version 21.0 for Windows (SPSS, Chicago, IL, USA).
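For illustration, the statistical workflow described above can be reproduced in a few lines of analysis code. The snippet below is a minimal sketch in Python (SciPy) using hypothetical body-weight and ACF values for the C, D and DS groups; it is not the authors' original SPSS analysis, and pairwise t-tests are used only as a stand-in for Dunnett's test.

```python
# Minimal sketch of the statistical workflow (hypothetical data, not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical body-weight values (g) for the three groups.
c = rng.normal(27, 2, 12)    # control (C)
d = rng.normal(24, 2, 12)    # DMH (D)
ds = rng.normal(26, 2, 12)   # DMH + synbiotic (DS)

# 1) Normality of each group (Kolmogorov-Smirnov against a standard normal after standardising).
for name, x in [("C", c), ("D", d), ("DS", ds)]:
    ks = stats.kstest((x - x.mean()) / x.std(ddof=1), "norm")
    print(name, "KS p =", round(ks.pvalue, 3))

# 2) Parametric route: one-way ANOVA, then pairwise comparisons against the D reference group
#    (plain t-tests shown here as a stand-in for Dunnett's test).
print("ANOVA p =", stats.f_oneway(c, d, ds).pvalue)
for name, x in [("C", c), ("DS", ds)]:
    print("D vs", name, "t-test p =", stats.ttest_ind(d, x).pvalue)

# 3) Non-parametric route (e.g. ACF counts): Kruskal-Wallis, then Mann-Whitney U against D.
acf_c, acf_d, acf_ds = rng.poisson(0.2, 6), rng.poisson(12, 6), rng.poisson(6, 6)
print("Kruskal-Wallis p =", stats.kruskal(acf_c, acf_d, acf_ds).pvalue)
for name, x in [("C", acf_c), ("DS", acf_ds)]:
    print("D vs", name, "Mann-Whitney p =", stats.mannwhitneyu(acf_d, x).pvalue)
```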
Animal Growth: Body and Organ Weights
The second week after cancer induction, body weight was significantly lower in the animals of the D and DS groups when compared with the C group (Fig. 1). The body weight of the animals of the DS group was significantly higher than those of the D group from day 45 of the experimental period, indicating that the dietary supplementation improves the growth rate of the animals.
Regarding the weights of the different tissues (Table 1), colon, and liver were the only organs that showed statistical differences when comparing the three experimental groups. Colon weight was higher in the DS group when compared with C (p = 0.002) and D (p = 0.015) groups, whereas liver weight was lower in D (p = 0.001) and DS (p = 0.007) groups when compared with C group.
Serum Biochemical Measurements
Serum glucose, urea, total cholesterol and high-density lipoprotein cholesterol levels tended to be higher in the D and DS groups, but without statistical differences (Table 2). Serum aspartate aminotransferase levels were lower in the control and DS groups than in the D group, although the difference was statistically significant only for the control group. Likewise, alanine aminotransferase levels were lower in the control and DS groups than in the D group, indicating that the induction of pre-neoplastic lesions in the colon increased this enzymatic activity and that the administration of the synbiotic combination reduced the increase, although control levels were not reached. This altered enzymatic activity confirms the DMH-induced liver damage in the treated animals. The supplementation with SM and probiotics did not prevent the liver weight loss observed in the DMH-treated mice, but it reverted the increase in the serum alanine aminotransferase level (p ≤ 0.05).
Histological Analysis
These ACF are considered indicators of pre-neoplastic damage. All the ACF found were located in the distal colon. Interestingly, dietary supplementation with sphingomyelin plus L. casei and B. bifidum resulted in a significant reduction in the number of ACF when compared with the D group (Fig. 2). Colon sections of control mice displayed normal crypt foci and a colonic architecture with no signs of apparent abnormality. However, large areas with dense lymphocytic hyperplasia, compatible with pre-neoplastic lesions, were observed in the D group. Interestingly, this high lymphocyte infiltration was reduced by the administration of the synbiotic combination (Fig. 3).
Sugar Uptake in BBMV
Intestinal function has also been analyzed in the present work by measuring D-galactose uptake in BBMV. In this context, sugar uptake was stimulated by CRC development. The presence of the synbiotic in the diet inhibited D-galactose uptake as compared to DMH group (Fig. 4). This effect was observed only when the nutrients entered the enterocyte by an active transport mechanism that was mediated by a transporter located in the brush-border-membrane (short assay times and a Na + gradient). For longer incubation times (10 min and 60 min, respectively), the sugar entered the vesicles by a process of diffusion that was not altered by the synbiotic.
Discussion
Previous reports have demonstrated that several probiotics and bioactive compounds have a positive effect on the immune system, improving the immune response by unspecific and specific mechanisms [25]. In the context of colorectal cancer, the protective effect of dietary supplementation with different probiotics and synbiotics has been studied in animal models [26].
The present work has analyzed the effects of the combination of two probiotics (L. casei, B. bifidum) and a sphingolipid (sphingomyelin) on the physiopathology and intestinal function of mice with DMH-induced colon cancer. As there are no animal models that develop colon cancer spontaneously, DMH has been used to induce pre-cancerous lesions that are histologically similar to those observed in humans and has been widely used in the literature [27,28]. Regarding body weight, the results obtained are in agreement with other studies that did not find differences in body weight gain after supplementation with SM [11,15] or probiotics [6] and in line with studies by other authors in which DMH has been used as carcinogen [29]. Although sphingolipids have relevant biological activities, they are not considered as essential nutrients for an optimal growth rate of the animals [30,31].
As previously mentioned, colon and liver were the only organs affected by the pre-cancerous lesion induction or the synbiotic treatment. DMH-treated animals (D) showed a tendency toward a higher colon weight than the untreated control group. This effect could be due to the initial stage of the disease, which occurs with the presence of colon mucosa polyps that would increase the organ weight. However, the colon weight of the animals fed the supplemented diet (DS group) was significantly higher than that of the DMH group (D), suggesting that the consumed bacteria might attach to the colonic mucosa and thereby increase the colon weight. In this sense, other authors have demonstrated that some strains of the Lactobacillus group are able to colonize the colon and adhere to the intestinal epithelium [32]. On the other hand, the liver was also affected by CRC development. Liver weight was lower in the animals with DMH-induced cancer than in the control group, suggesting that liver damage is related to colon cancer development and therefore could be implicated in the body weight loss observed in these animals. This liver damage was confirmed by the altered levels of serum alanine aminotransferase and aspartate aminotransferase observed in the D and DS groups. In this context, a previous study found lower liver weight and decreased hepatic lipid accumulation in mice fed a diet supplemented with 0.2-0.4% sphingolipids [33]. Furthermore, other authors have shown that treatment with probiotics like Lactobacilli and Bifidobacteria could improve liver function [34].
Moreover, the dietary supplementation decreased the number of ACF as compared to the D group, as found in [37]. The inhibition of tumor formation due to dietary sphingomyelin has been attributed to a normalization of cell proliferation and of the rate of apoptosis, but not to the induction of differentiation [12]. Finally, it seems that diet supplementation with the two bacteria and sphingomyelin inhibits the mechanisms involved in the sugar absorption stimulation observed in DMH animals, restoring the sugar uptake levels obtained in control mice. These findings are in line with previous studies that demonstrated that probiotics [6] and sphingomyelin [11] are able to decrease glucose transport in intestinal epithelial cell models. These results suggest that the administration of the synbiotic could help to prevent colon cancer development in humans, since the stimulation of sugar uptake increases the amount of intracellular glucose available for metabolic conversion, thereby promoting enhanced cell proliferation and cancer development [38].
In summary, the dietary supplementation with the synbiotic preparation reduced the number of histological intestinal lesions, suggesting that this combination has beneficial effects in DMH-treated mice with pre-neoplastic lesions. Moreover, this supplementation reversed the impaired intestinal sugar uptake observed in animals with DMH-induced pre-cancerous lesions, suggesting that it could be a good complementary therapy for the prevention of colon cancer in humans although more investigations are needed. | 2022-02-03T14:55:57.413Z | 2022-02-02T00:00:00.000 | {
"year": 2022,
"sha1": "5d4a10e8e1fa2f01b2dd86eabaa5fbbff49eceb8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12602-022-09916-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "805886580557d497b53c812bf5a74531c0a61220",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55092072 | pes2o/s2orc | v3-fos-license | Emotional Intelligence Dimensions, Job Satisfaction and Primary School Teachers
In this study, the researcher tried to identify the relationship of five dimensions of emotional intelligence (Self-Awareness, Managing Emotions, Emotional Maturity, Empathy, and Social Competency & Social Skills) with job satisfaction. A total of 400 primary school teachers were randomly selected from 150 primary schools of district Meerut. A self-prepared Emotional Intelligence Scale (EIS) and the Teachers' Job Satisfaction Scale (TJSS) developed by Dr. J.P. Srivastava and Dr. S.P. Gupta were used. The data were analyzed with the help of the SPSS-17 programme. The results of the study indicate that emotional intelligence has a significant positive relationship with job satisfaction. Among the five dimensions, only managing emotions and emotional maturity play a major role in the prediction of job satisfaction, rather than self-awareness, empathy, and social competency & social skills; overall, emotional intelligence is a good predictor of job satisfaction for primary school teachers.
INTRODUCTION
Nowadays, the teaching profession has become extremely challenging. In this age of computers and the internet, information is just one click away from the students. This situation forces teachers into a very hectic and busy schedule, due to which teachers are feeling stressed out, unhappy and dissatisfied. Besides teaching, primary teachers are forced to undertake many tasks which are not intended for them. Students' performance is usually related to teachers' ability to teach and function effectively. Intellectual intelligence (IQ) does not guarantee well-being. In fact, without emotional intelligence (EI), a person can have the best training, an analytical mind and an endless supply of ideas but will not make a great leader (Goleman, 2001).
Job satisfaction refers to a collection of attitudes which workers have about their jobs. Berry (1997) defined job satisfaction as "an individual's reaction to job experience". Job satisfaction describes how content an individual is with his or her job. There are a variety of factors that can influence a person's level of job satisfaction; some of these factors include the level of pay and benefits, the perceived fairness of the promotion system within a company, the quality of the working conditions, leadership and social relationships, and the job itself (the variety of tasks involved, the interest and challenge the job generates, and the clarity of the job description requirements). The happier people are with their job, the more satisfied they are said to be. Various steps are taken by organizations to improve and enhance job satisfaction; these include job rotation, job enlargement and job enrichment. Other factors that influence job satisfaction include the management style and culture, employee involvement and autonomous work groups. Numerous authors have theorized that emotional intelligence contributes to people's capacity to work effectively in teams and manage work stress (Caruso and Salovey, 2004).
Job satisfaction is a positive attitude an individual has toward his or her job (Furnham, 1997; Mitchell and Kalb, 1982; Churchill et al., 1974). The working environment (Moriarty et al., 2001) and headmaster relationships with teachers (Menon and Christou, 2002) are significant sources of job satisfaction for teachers. Other factors that are just as important contributors are relations with colleagues and students, opportunities to participate in decision making, work conditions, school culture, responsibility, communication, feedback from others and the nature of the work itself (Scott and Dinham, 2003; Chaplin, 1995). Looking at all the factors mentioned, it would seem that all the dimensions of EI would fit in very nicely with some of those factors. These days, researchers list numerous factors as affecting the job satisfaction of employees, including payment, the nature of the work, promotion, leadership and supervision, relations with coworkers, job safety, organizational structure, physical conditions of the job, personality factors, personal characteristics, and equality. Emotional intelligence can also influence job satisfaction, since the happier people are with their job the more satisfied they are said to be, which entails the expression of emotion. It is a fact that it takes more than traditional cognitive intelligence to be successful at work. It also takes emotional intelligence: the ability to restrain negative feelings such as anger and self-doubt, and rather focus on positive ones such as confidence, to be successful at work. Bar-On (1997) in his study found a positive relationship between a combination of the dimensions of EI, interpersonal relationship, intrapersonal relationship, self-adaptability, stress management and overall feeling with job satisfaction. A negative relationship between EI and burnout and a positive relationship between EI and job satisfaction were found by Platsidou (2010). Teachers with high EI are likely to experience greater job satisfaction (Wong et al., 2010). EI was significantly and positively related to job satisfaction and organizational commitment (Guleryuz et al., 2008). Kafetsios and Zampetakis (2008) indicated that positive and negative affect substantially mediate the relationship between EI and job satisfaction, with positive affect exerting a stronger influence.
Umadevi M.R. ( 2009) in her study "Relationship between emotional intelligence, achievement motivation and academic achievement", she examined the relationship between emotional intelligence, achievement motivation and academic achievement of primary school student teachers.Emotional intelligence scale and achievement motivation test was administered on 200 D.Ed.students, and the data obtained was subjected to descriptive, correlation and differential analysis.The objectives of the study were: to find out the relationship between emotional intelligence, and academic achievement of student teachers, to find out the relationship between achievement motivation and academic achievement of student teachers, and to compare the emotional intelligence and achievement motivation of student teachers with respect to sex and arts and science groups.It was found that there is a positive relationship between emotional intelligence and academic achievement of primary school student teachers, there is a positive relationship between achievement motivation and academic achievement of primary school student teachers, male and female, student teachers, arts and science student teachers do not differ in emotional intelligence, and male and female student teachers, arts and science student teachers do not differ in achievement motivation.Afolabi et al. (2010) in their study examines the influence of emotional intelligence and gender on job performance and job satisfaction among Nigeria Police Officers.It employs a 2x2 factorial design as well as multiple regressions with emotional intelligence and gender as the independent variables.One hundred and nineteen police officers were randomly selected from Esan Area Command.The results show that Police Officers who are of high emotional intelligence are more satisfied and perform better than Police Officers who are of low emotional intelligence.Also, respondents who have male or female roles with high emotional intelligence perform better and more satisfied with their job than respondents who have male or female roles with low emotional intelligence.
In a study on 215 physical education teachers Mousavi, S.H et al. (2012) found that there is a significant positive relationship between emotional intelligence and job satisfaction (r=0.349) and between the components of social skills, empathy, and motivation and job satisfaction at level.Further, the results of stepwise regression showed that among the five components of emotional intelligence, social skills (0.442), empathy (0.302), and motivation (0.235) were predictors of teacher"s job satisfaction.The calculated Fisher"s z revealed that the difference between the correlation between the teachers with diploma and those with MSc is significant at 0.05 level.It seems that job satisfaction of teachers can be increased by training and improving their emotional intelligence along with providing facilities and satisfying their needs.
Di Fabio et al. (2012), on Italian university students looked at the relationship between career indecision and personality traits, career decision-making self-efficacy, perceived social support and selfreported emotional intelligence according to the Bar-On model (1997).The results showed that emotional intelligence explained a percentage of the incremental variance with respect to both personality traits and career decision-making self-efficacy and perceived social support in relation to both career indecision and indecisiveness.While career indecision was better explained by emotional intelligence, indecisiveness was better explained by personality.The study could thus investigate in depth the two constructs and highlight their convergences as well as their divergences.
In a study, Syed Sofian Syed Salim et al. (2012) found a significant positive relationship between EI and job satisfaction and no effect of gender on the relationship between the two variables. On the contrary, a study by Donaldson-Feilder and Bond (2004) on 290 workers in the UK suggests that neither EI nor acceptance is associated with job satisfaction. Many findings suggest that emotionally intelligent persons are better performers than their counterparts, but most of these associations are based on self-report measures of emotional intelligence.
Kumar, A. (2016), in a study of the relationship between emotional intelligence and job satisfaction of secondary school teachers, concluded that emotional intelligence and job satisfaction are significantly and positively correlated with each other. Gender plays an important role in emotional intelligence: females have a significantly higher level of emotional intelligence in comparison to males, and the secondary school teachers, both male and female, who lived in urban areas have a significantly higher level of emotional intelligence in comparison to teachers who worked in rural areas.
RESEARCH QUESTIONS:
The following questions have been formulated for this study: 1. Is there any relationship between emotional intelligence and job satisfaction? 2. To what extent do the dimensions of emotional intelligence affect job satisfaction among primary school teachers?
To answer the above questions, the following objectives for this study have been formulated: 1. To study the relationship between emotional intelligence and job satisfaction. 2. To study the influence of the emotional intelligence dimensions on job satisfaction of primary school teachers.
HYPOTHESES:-
The following hypotheses were formulated: 1. Ho1: There is no significant correlation between emotional intelligence and job satisfaction of primary school teachers. 2. Ho2: There is no significant contribution of the five dimensions of emotional intelligence to job satisfaction.
METHODOLOGY:-
The normative survey method was used in this study; 400 primary school teachers were randomly selected from 150 primary schools of district Meerut. To collect the data, an Emotional Intelligence Scale (EIS) was developed by the researcher. The EIS includes five dimensions of EI, viz. Self-Awareness, Managing Emotions, Emotional Maturity, Empathy, and Social Competency & Social Skills. Each dimension contains 16 items, so a total of 80 items were included in the scale. The test-retest reliability and validity of the test were 0.89 and 0.626, respectively. To measure the job satisfaction of the teachers, the Teachers' Job Satisfaction Scale (TJSS) developed by Dr. J.P. Srivastava and Dr. S.P. Gupta was used. The data were analyzed with the help of the SPSS-17 programme.
ANALYSIS, RESULTS AND DISCUSSION
The analysis of data was done hypothesis wise through SPSS-17 for the formulation of results and interpretation with discussion.
Analysis-1:
To address objective 1, hypothesis Ho1, "There is no significant correlation between emotional intelligence and job satisfaction of primary school teachers", was analysed. The coefficient of correlation between emotional intelligence and job satisfaction was calculated using the Pearson correlation. The calculated value is shown below in Table 1.
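As a minimal illustration of this step, the snippet below computes a Pearson correlation and its two-tailed p value with SciPy; the score vectors are hypothetical stand-ins for the EIS and TJSS totals, not the study data.

```python
# Hypothetical illustration of the correlation step (not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ei = rng.normal(240, 25, 400)                         # stand-in EIS total scores (n = 400)
js = 0.6 * (ei - ei.mean()) + rng.normal(0, 20, 400)  # stand-in TJSS scores correlated with EI

r, p = stats.pearsonr(ei, js)
print(f"r = {r:.3f}, two-tailed p = {p:.4f}")         # significant at the 0.01 level if p < 0.01
```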
RESULT-1:
The obtained coefficient of correlation "r" is 0.573, which is significant at the 0.01 level of significance. This means that there is a moderately positive and significant correlation between emotional intelligence and job satisfaction of primary school teachers. So, hypothesis Ho1 is rejected.
DISCUSSION-1:
The result of the study shows that there is a positive and significant relationship between emotional intelligence and job satisfaction among primary school teachers. This means that the higher the level of emotional intelligence, the higher the job satisfaction, and vice versa. This finding supports previous studies by Bar-On (1997), Guleryuz et al. (2008), Kafetsios and Zampetakis (2008), Platsidou (2010) and Syed et al. (2012). Emotional intelligence is said to affect one's ability to succeed in coping with environmental demands and pressures, clearly an important set of behaviors to harness under stressful work conditions (Bar-On, 1997). It is clear from this study that the primary school teachers experience job satisfaction despite the challenges and demands of their jobs. According to Goleman (1998a, 1998b), individuals who have high EI are able to know and feel their own and others' emotions. These individuals are able to make better, accurate and rational decisions, produce more realistic assessments and have high confidence in themselves. These abilities are no doubt very important in the teaching profession, whereby teachers are constantly expected to make decisions based on their understanding of their students' behaviors, emotions and cognitions. In addition, the ability to understand and appreciate the emotions of others in an organization is also an important aspect of EI in order to create harmony within an organization. In this context, one could say that this ability would help teachers create harmony within the school.
ANALYSIS-2:
The objective 2 for the hypothesis Ho2 "There is no significant contribution of five dimensions of Emotional Intelligence on job satisfaction" have been verified through multiple regression analyses to ascertain the relationship and contribution of the five dimensions of Emotional Intelligence viz self awareness, emotional management, emotional maturity, empathy and social competency & social skills.The result of the analysis is shown in Table -2
RESULT-2:
The results showed that the multiple correlation coefficient of the five dimensions of emotional intelligence with job satisfaction (R = 0.584) is moderately positive and significant. This finding showed that teachers who were generally at a high or average level of emotional intelligence were normally satisfied with their job. The regression analysis of the five dimensions of emotional intelligence with job satisfaction indicates that the derived regression equation is significant (F = 30.40; p < 0.01). The beta weight for each dimension showed that emotional management (t = 4.01, p < 0.01) and emotional maturity (t = 3.14, p < 0.01) have significant predictive power for job satisfaction, while the other three dimensions showed insignificant predictive power. The joint influence of all five dimensions of emotional intelligence is represented by R^2 (0.341), which means that the dimensions of emotional intelligence together account for about 34% of the variance in job satisfaction. Table 2 shows that emotional management contributed 32% and emotional maturity 17% of the total beta weight. It is clear that only emotional management and emotional maturity have predictive power for the job satisfaction of primary school teachers.
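For readers unfamiliar with how R^2, F and the per-dimension t values arise, the sketch below fits an ordinary least squares model with five predictors using statsmodels; the predictor and outcome scores are hypothetical, not the study data.

```python
# Hypothetical sketch of the multiple-regression step with statsmodels (not the study data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
# Stand-in scores for the five EI dimensions (columns): self-awareness, managing emotions,
# emotional maturity, empathy, social competency & social skills.
X = rng.normal(48, 8, size=(n, 5))
beta_true = np.array([0.05, 0.45, 0.30, 0.05, 0.05])   # made-up effect sizes
y = X @ beta_true + rng.normal(0, 8, n)                # stand-in job-satisfaction scores

model = sm.OLS(y, sm.add_constant(X)).fit()
print("R^2 =", round(model.rsquared, 3))               # joint influence of the five dimensions
print("F =", round(model.fvalue, 2), "p =", model.f_pvalue)
print("coefficients:", np.round(model.params[1:], 3))  # one per dimension
print("t values:", np.round(model.tvalues[1:], 2))     # which dimensions predict significantly
```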
DISCUSSION-2
The results also showed that the five components, viz. self-awareness, managing emotions, maturity, empathy and social competency & social skills, are positively correlated with job satisfaction. These results are supported by Goleman (1998a), who stated that the ability to adapt to the mood of other individuals, or to empathize, is also one of the characteristics of a person with high emotional intelligence. Adaptability and empathy make it easier for a teacher to interact well and effectively with both teachers and students. This indicates that individuals with high emotional intelligence also have good social skills. They can easily adapt to the working environment and find satisfaction in their work. Individuals with high emotional intelligence will create a good, harmonious and conducive environment, which will in turn give them satisfaction in the careers that they pursue (Cherniss, 2001).
CONCLUSION
The findings of the present study indicate that emotional intelligence is important in terms of its relationship with job satisfaction. It also influences human behavior as a whole, so emotional intelligence plays a vital role not only in the field of education but also in all fields of life. Although there are many factors other than self-awareness, managing emotions, maturity, empathy and social competency & social skills that affect job satisfaction, the role of these factors cannot be neglected. The ability of teachers to manage their own emotions as well as the emotions of others, together with a high level of emotional maturity, significantly predicts their job satisfaction. A teacher, besides teaching work, also has to deal with daily
Table 2. A summary of Multiple Regression Analysis of five Dimensions of Emotional Intelligence and job satisfaction | 2018-12-11T17:12:54.147Z | 2016-10-27T00:00:00.000 | {
"year": 2016,
"sha1": "f2406cd6517b5ee4a0c7e067eed7e349f0f53b86",
"oa_license": "CCBYNC",
"oa_url": "https://research-advances.org/index.php/IJEMS/article/download/541/539",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f2406cd6517b5ee4a0c7e067eed7e349f0f53b86",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
119426339 | pes2o/s2orc | v3-fos-license | Jet modification with medium recoil in quark-gluon plasma
Jet energy transported to the quark-gluon plasma during jet-medium interaction excites the QGP medium and creates energetic thermal partons -- recoil particles or recoils. Modification of the jet structure in heavy ion collisions is studied using \textsc{martini}, in which recoil simulation is enabled. In large systems such as central Pb-Pb collisions, the recoil effect is expected to be critical due to strong jet-medium interaction. We show that the results for the jet mass function and the jet shape function are improved when the recoil particles are included in the reconstructed jets. We conclude that the energy carried by the recoil particles should be regarded as a part of the reconstructed jets and is necessary in studying jet modification in heavy ion collisions.
Introduction
In heavy ion collisions, jets and the thermal medium continuously interact with each other; the jets are quenched by the medium while inducing medium excitation along the jet path. The energy lost by jets, present in the form of medium response or recoil, gets dissipated by the flow of the medium, but some of the energy may still remain in the jet cone. When defining jets using full jet reconstruction techniques, one should take into account the whole energy-momentum within a given jet cone, excluding only the contributions originating from the thermal medium.
Using martini [1], in which recoil simulation is newly implemented, we present the effect of recoils on jet structure observables, i.e., the jet invariant mass and the jet shape function. We find that the jet mass is reduced by jet quenching, while recoils substantially enhance the jet mass, especially for higher energy jets. We also show that recoils are crucial in describing the jet shape function at the peripheral side of the jet cone, giving rise to an increasing trend of the ratio between Pb-Pb and pp collisions up to ∆r = 1.
MARTINI
martini is a Monte Carlo event generator for jet evolution in high-energy heavy ion collisions [1]. It performs in-medium parton shower according to the AMY formalism for the radiative energy loss rates [2,3] combined with collisional processes [4]. The energy loss rates depend on the properties of QGP medium, which is provided by hydrodynamics simulations. The initial vacuum shower and hadronization of the evolved partons are accomplished by pythia 8.
The time-evolving energy distribution P_a(p, t) for a given particle a can be expressed in terms of a set of coupled rate equations, which take the following schematic gain-loss form [5]:

dP_a(p, t)/dt = Σ_{b,c} ∫ dk [ P_b(p + k, t) dΓ^b_{ac}(p + k, k)/dk dt − P_a(p, t) dΓ^a_{bc}(p, k)/dk dt ],

where dΓ^a_{bc}(p, k)/dk dt is the transition rate for a process in which a parton a of energy p emits a parton c of energy k and becomes a parton b.
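As a purely illustrative aside, the gain-loss structure of such a rate equation can be sketched numerically. In the toy sketch below, only the emitting parton is tracked, a single species is used, and the splitting kernel is a made-up placeholder rather than the AMY rate.

```python
# Toy discretisation of a single-species gain-loss rate equation (placeholder kernel, not the AMY rate).
import numpy as np

p = np.arange(1.0, 101.0)            # parton energy grid (GeV), 1 GeV bins
P = np.exp(-p / 20.0)                # initial energy distribution P(p, t = 0)
P /= P.sum()

def gamma(p_parent, k):
    """Placeholder emission rate dGamma/dk dt for emitting energy k; soft-enhanced toy kernel."""
    return 0.02 / k

dt, n_steps = 0.1, 50                # toy time step and number of steps
for _ in range(n_steps):
    gain = np.zeros_like(P)
    loss = np.zeros_like(P)
    for i, pi in enumerate(p):
        # Gain: a parton of energy pi + k emits k and arrives at energy pi.
        for k in np.arange(1.0, p[-1] - pi + 1.0):
            gain[i] += P[int(pi + k - 1.0)] * gamma(pi + k, k)
        # Loss: the parton of energy pi itself emits k and leaves this bin.
        for k in np.arange(1.0, pi):
            loss[i] += P[i] * gamma(pi, k)
    P = P + dt * (gain - loss)

print("mean energy after evolution:", (p * P).sum() / P.sum())
```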
The AMY formalism describes the energy loss of hard jets in heavy ion collisions as parton bremsstrahlung in the evolving QGP medium. The effective kinetic theory described in [3] assumes that quarks and gluons in the medium are well-defined (hard) quasi-particles with typical momenta of order T and thermal masses of order gT. Under this assumption, the radiation rate can be calculated by means of integral equations [2]. Radiation is strictly collinear at the splitting vertex, while collisional processes involve space-like momentum transfer that induces jet momentum broadening.
The radiative energy loss mechanism is improved by implementing the effects of finite formation time [6] and running coupling [7]. The formation time of the radiation process increases with p/p_T^2, and the hard parton and the emitted parton are coherent within that time. This interference effect suppresses the radiation rate at early times after the original radiation. For the renormalization scale of the running coupling constant α_s(µ), we use the root mean square of the momentum transfer p_⊥^2 between the two particles, parameterized in terms of q̂, the averaged momentum transfer squared per scattering, and p, the energy of the mother parton. A recoil parton is generated by adding the momentum transfer to the momentum of a thermal parton sampled from the medium. If the summed momentum, satisfying the on-shell condition, is greater than a certain kinematic cut p_cut, it is promoted to a recoil parton and can further participate in jet-medium interactions. In this work, we set p_cut to 4T, where T is the temperature of the thermal medium. Any momentum contributions softer than p_cut should be treated as sources of medium response, which is left for future work.
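The recoil-promotion step just described can be sketched as follows; the thermal sampling, the momentum transfer and the temperature value are placeholders chosen only for illustration, not the actual martini implementation.

```python
# Schematic sketch of the recoil-promotion step (all numerical values are placeholders).
import numpy as np

rng = np.random.default_rng(3)

def sample_thermal_parton(T):
    """Sample a massless thermal parton momentum vector; a Boltzmann-like toy, not the full thermal sampling."""
    p = rng.gamma(shape=3.0, scale=T)                  # |p| drawn with a ~ p^2 exp(-p/T) toy weight
    cos_t, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
    sin_t = np.sqrt(1 - cos_t**2)
    return p * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

def maybe_promote_recoil(q_transfer, T):
    """Add the momentum transfer to a sampled thermal parton and keep it only above the kinematic cut."""
    p_cut = 4.0 * T                                    # cut used in this work: p_cut = 4T
    p_recoil = sample_thermal_parton(T) + q_transfer
    e_recoil = np.linalg.norm(p_recoil)                # massless on-shell energy
    return p_recoil if e_recoil > p_cut else None      # below the cut: treat as medium response instead

T = 0.3                                                # GeV, local medium temperature (placeholder)
q = np.array([1.2, 0.0, 0.4])                          # GeV, elastic momentum transfer from the jet parton
print("promoted recoil momentum:", maybe_promote_recoil(q, T))
```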
The thermal background is produced by music [8], which allows full 3+1 dimensional hydrodynamics calculations. Thermal fluctuation in the transverse plane is initialized by the IP-Glasma model [9]. As shown in Fig. 1 (c), the martini calculation of the nuclear modification factor R_AA is consistent with the data, especially in the high p_T region. This indicates that martini is valid for describing leading-order jet fragmentation.

[Figure caption: The averaged jet mass as a function of jet p_T, compared to the ALICE measurement [10].]
Jet mass
Unlike leading hadron observables, such as R AA , jet structure observables take into account the distribution of energy-momentum inside a reconstructed jet cone. As a part of jet constituents, recoils contribute to the total jet energy and the jet energy distribution with respect to the jet axis, therefore they play an essential role in the modification of jet structures.
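To make the two observables concrete, the sketch below shows how the jet invariant mass and the jet shape function ρ(∆r) can be built from a list of constituent four-momenta; the constituents here are hypothetical numbers, and recoil particles would simply be appended to the same list.

```python
# Sketch of the jet mass and jet shape function from constituent four-momenta (hypothetical values).
import numpy as np

# Each constituent: (E, px, py, pz) in GeV; recoil particles would be appended to this list.
constituents = np.array([
    [60.0, 55.0, 20.0,  8.0],
    [25.0, 22.0, 10.0,  3.0],
    [ 5.0,  3.0,  3.5,  1.0],
])

E, px, py, pz = constituents.sum(axis=0)
jet_pt = np.hypot(px, py)
jet_mass = np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))  # invariant mass of the summed four-momentum
jet_phi, jet_eta = np.arctan2(py, px), np.arcsinh(pz / jet_pt)
print("jet pT =", round(jet_pt, 2), " jet mass =", round(jet_mass, 2))

# Jet shape rho(Delta r): fraction of the jet pT carried at angular distance Delta r from the jet axis,
# binned in annuli of width 0.05 and normalised so that sum(rho) * 0.05 = 1.
dr_bin, rho = 0.05, np.zeros(20)
for _, x, y, z in constituents:
    pt = np.hypot(x, y)
    phi, eta = np.arctan2(y, x), np.arcsinh(z / pt)
    dphi = np.arctan2(np.sin(phi - jet_phi), np.cos(phi - jet_phi))  # wrap to (-pi, pi]
    dr = np.hypot(dphi, eta - jet_eta)
    rho[min(int(dr / dr_bin), len(rho) - 1)] += pt
rho /= rho.sum() * dr_bin
print("rho in the first annuli:", np.round(rho[:4], 3))
```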
Jet mass can be changed by jet quenching and broadening in the thermal medium. Fig. 2 (a) shows the normalized jet mass distribution function for 3 different jet p_T windows in central Pb-Pb collisions at 2.76 TeV. The jet mass distributions for all jet p_T ranges are shifted toward higher jet mass in the presence of recoils. As shown in the right panel, the measurements for pp and Pb-Pb collisions are fairly coincident. Two effects, jet quenching induced by the QGP and the creation of recoils, have competing influences, and the recoils help restore the jet mass reduced by jet quenching. The effect of increasing jet mass can be clearly seen in the averaged jet mass plot shown in Fig. 2 (b). The contribution of recoils to the averaged jet mass is greater for higher jet p_T intervals and becomes more important for describing the data measured by the ALICE experiment [10].

Jet shape function

Fig. 3 (a) shows the ratio of the jet shape function in central Pb-Pb collisions at 2.76 TeV to that in pp, compared to the CMS measurement [11]. The martini calculation excluding recoils results in narrower jets with a monotonic ∆r dependence of the ratio at large angles due to jet quenching. Meanwhile, recoils greatly affect the jet shape function at larger angles and give rise to an increase in the ratio of the functions. We obtain good agreement between the martini simulation including recoils and the experimental measurement.
In Fig. 3 (b), the same plot is shown, but with ∆r extended to 1. The CMS measurement for leading jets in the 0-30% centrality bin [12] is also shown for a rough comparison. Without recoils, jets are consistently quenched in a broad area outside of the jet cone, while the influence of recoils gets bigger at larger ∆r. We find a rising trend of the ratio with a reasonable slope. This indicates that our simulations with recoils give an adequate description of the jet shape function of quenched jets in the hydrodynamic medium.
Conclusion
In these proceedings, we present the importance of recoils in studying jet modification in heavy ion collisions using martini, in which the recoil simulation is implemented. With this prescription, we successfully reproduced the jet mass function and the jet shape function by including recoils. We found that the recoils contribute substantially to the jet mass function for higher p_T jets and to the ratio of the jet shape functions, especially at the peripheral side of the jets. Our results indicate that jet energy transferred to the thermal medium must be counted as a part of the jet substructure. This study demonstrates that recoils are necessary when defining jets quenched by the QGP in heavy ion collisions. | 2019-01-16T16:40:13.000Z | 2018-07-17T00:00:00.000 | {
"year": 2018,
"sha1": "9c2cbabdd6e68ff363dd1bda6300ce5c730ee7d7",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.nuclphysa.2018.10.057",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "34ede76a86df78645c156ee356d1035ca5a91727",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259259863 | pes2o/s2orc | v3-fos-license | Genetic Diversity of Lecanosticta acicola in Pinus Ecosystems in Northern Spain
Lecanosticta acicola is one of the most damaging species affecting Pinus radiata plantations in Spain. Favourable climatic conditions and unknown endogenous factors of the pathogen and host have led to a situation of high incidence and severity of the disease in these ecosystems. With the main aim of understanding the factors intrinsic to this pathogenic species, a study of the population structure in newly established plantations with respect to older plantations was implemented. The genetic diversity, population structure and ability of the pathogen to spread were determined in Northern Spain (Basque Country), where two thirds of the total Pinus radiata plantations of Spain are located. Among a total of 153 Lecanosticta acicola isolates analysed, two lineages were present: the southern lineage, which was prevalent, and the northern lineage, which was scarce. A total of 22 multilocus genotypes were detected, with a balanced composition of both mating types and evidence for sexual reproduction. In addition to the changing environmental conditions enhancing disease expression, the complexity and diversity of the pathogen will make it difficult to control and to maintain the wood production system fundamentally based on this forest species.
Introduction
Needle blights are currently the most serious fungal needle diseases affecting pine species worldwide. Among the main causal agents, Lecanosticta acicola (Thümen) H. Sydow, Dothistroma pini Hulbary and Dothistroma septosporum (G. Doroguine) M. Morelet are of particular concern due to the impact they have on Pinus ecosystems in the Basque Country, Spain [1]. The symptoms caused by these fungi are quite similar and, therefore, difficult to differentiate, especially when present on the same tree [1]. Severe defoliation is caused by these pathogens that results in significant growth loss when more than 25% of the needles are damaged [2,3]. Lecanosticta acicola is considered a regulated non-quarantine pest in the EU since 2019 according to the Commission implementing regulation (EU) 2021/2285 [4].
The disease caused by Lecanosticta acicola, brown spot needle blight (BSNB), was well known in Spanish Pinus radiata D. Don plantations for decades [5,6]. Until recently, BSNB had only minor impacts on native and exotic forest trees in the north of Spain. This disease was found mainly in valley bottoms, in plantations with high tree density and areas with high humidity. In the past seven years, abnormal climatic conditions favourable to the disease and potentially unknown endogenous factors made the usual silvicultural measures inefficient in mitigating its impact and progress. This pathogen species spread widely,
Sampling
This study focused on Pinus ecosystems located in the Basque Country, in the Spanish Atlantic climate region. In the Basque Country, conifers cover an area of 1504.59 km^2 (20.8%) of the 7234 km^2 total, of which 1094 km^2 (15.1% of the total surface) correspond to P. radiata (Figure 1). Field observations and sampling were conducted from spring to late autumn in 2018 and 2020.
Three sample collections were differentiated depending on the origin of the plant material. Sample collection 1 (named BC_1) was obtained from 118 different plantations of the Basque Country, representing the infected zones of the Pinus radiata provenance No. 6 ( Figure 1). Sample collection 2 (named AR_2) was obtained from 35 needle samples of Pinus species (P. brutia, P. elliottii, P. nigra, P. pinaster, P. pinea, P. ponderosa, P. sylvestris and P. taeda) produced in a French nursery and planted in 2011 in the arboretum AR20 located in Laukiz (Bizkaia) under the European project REINFFORCE (https://reinfforce.iefc.net/ es/arboreta/ar20/ accessed on 3 April 2023). AR20 arboretum was established in an abandoned nursery that produced and distributed reproductive material centralising the supply of P. radiata to the region under study. Furthermore, this place showed serious needle blight damage. Since this nursery received and grew local (provenance No. 6) and imported seeds (United State, Chile, New Zealand, France, etc.), it was considered to be a potential source of pathogen diversity.
Sample collection 3 (named AR_1) was obtained from 33 symptomatic needle samples from newly established P. radiata seedlings in this arboretum. This 2-year-old material (411 seedlings) was established in this location as pathogen trap plants, at a distance of 1.5 m to infected trees from AR_2, to determine the capacity of natural inoculation.
Needle samples with visible symptoms of BSNB were collected randomly from infected trees at the three sample sites (one sample consisted of needles from eight to ten trees at each sampled location) and transported in a cooler box to the laboratory. The majority of samples were collected from P. radiata at sample site BC_1, since this was the most prevalent tree species in the studied area.
Pathogen Isolation
Pathogen Identification
Fungal tissue was scraped from the surface of 2-week-old cultures with a sterile scalpel blade. The mycelium was homogenised using a Qiagen Tissuelyser II with sterile metal beads (Ø 2.5 mm). DNA was extracted from 100 mg of lysed fungal tissue with the Plant DNA Mini Kit (Analytik Jena AG, Jena, Germany), following the manufacturer's instructions.
The integrity of the DNA in terms of quality and quantity was verified using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). DNA working stock solutions of 20 ng/µL were made for polymerase chain reaction (PCR) amplifications. All DNA was stored at −20 • C until further use.
The identity of each isolate was confirmed by species-specific conventional PCR targeting the elongation factor region [16]. The identification of the isolates was further supported by PCR amplification and sequencing of the internal transcribed spacer (ITS) region, and the translation elongation factor 1-α (TEF1) using the primers ITS1 and ITS4 [17], and EF1-728F [18] and EF1-986R, respectively, as described in van der Nest et al. [19]. PCR reactions for each region contained 20 ng DNA, 2.5 µL 10× PCR reaction buffer, 2.5 mM MgCl 2 , 400 nM of each primer, 200 µM of each dNTP and 1 U IBIAN-Taq DNA polymerase (IBIAN Technologies, Zaragoza, Spain). The reaction conditions included an initial denaturation step at 94 • C for 10 min, 35 cycles at 94 • C for 30 s, a 45 s annealing step at 56 • C for the ITS region, 52 • C for the TEF1 region, 72 • C for 60 s and a final 10 min extension at 72 • C [11] PCR products were purified using the NucleoSpin Gel and PCR Clean-up kit (Macherey-Nagel, Düren, Germany) and sequenced by Eurofins (Genomics, Konstanz, Germany). Sequencing data were edited using Finch TV software version 1.4.0 (https://finchtv.software.informer.com/1.4/, accessed on 17 January 2023) and aligned with MEGA X software version 10.0.4 (https://www.megasoftware.net/, accessed on 17 January 2023). BLAST searches for the fungal taxa were conducted on the NCBI database (National Center for Biotechnology Information NCBI, Bethesda, MD, USA) and the consensus sequences deposited in GenBank. ITS and TEF1 haplotypes were determined with TCS 1.21 software [20] and nucleotide diversity (Pi) within the Basque Country population was calculated using DnaSP v6 [21].
Phylogenetic trees were inferred using maximum parsimony (MP) and maximum likelihood (ML) analysis in MEGA X. Alignment gaps were set as additional characters with equal value and confidence levels were calculated from 1000 bootstrap replicates. The MP tree was obtained using the Tree-Bisection-Reconnection (TBR) heuristic search option and for the construction of the ML tree, the Hasegawa-Kishino-Yano nucleotide substitution model was used.
The mating type ratio in each population was calculated. This ratio was expected to be 1:1 for randomly mating populations. A chi-square goodness of fit test for a 1:1 ratio and associated p value were estimated to evaluate departure from the null hypothesis (ratio proportion 1:1).
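As an illustration of this test, the snippet below runs a chi-square goodness-of-fit test for a 1:1 ratio with SciPy; the mating-type counts are hypothetical, not the counts obtained in this study.

```python
# Chi-square goodness-of-fit test for a 1:1 mating-type ratio (hypothetical counts).
from scipy.stats import chisquare

mat1, mat2 = 13, 9                        # hypothetical MAT1-1 and MAT1-2 counts among unique genotypes
chi2, p = chisquare([mat1, mat2])         # expected frequencies default to equal, i.e. a 1:1 ratio
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p > 0.05: no significant departure from the 1:1 expectation
```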
Simple Sequence Repeat (SSR) Loci Amplification and Data Analysis
Ten microsatellite markers MD1, MD2, MD4, MD5, MD6, MD7, MD9, MD10, MD11 and MD12 designed for L. acicola were used to amplify the respective regions in the genome [22]. The PCR reaction mixture and reaction conditions with fluorescently labelled primers were carried out as described by Janoušek et al. [11,22]. For fragment analysis, PCR products were pooled into two panels and 1 µL of these multiplexed PCR products was separated on an ABI Prism 3130 Genetic Analyser (Applied Biosystems, Foster City, CA, USA). The mobility of the SSR products was compared to those of the internal size standard, LIZ-500(-250) and allele sizes were estimated by GeneMapper 4.0 computer software (Applied Biosystems, Foster City, CA, USA). A reference sample was run on every gel to ensure reproducibility.
For each population defined by tree origin, the total number of alleles at each SSR locus was estimated. A multilocus genotype (MLG) was constructed for each isolate by combining data for each of the 10 SSR alleles obtained. The expected multilocus genotype (eMLG) was calculated based on rarefaction using the R package poppr V.2.3.0 [23,24]. Genotypic diversity was conducted for the non-clone-corrected dataset and clone-corrected dataset, in this last case with only one isolate of each MLG considered. Shannon-Wiener index of MLG diversity (H) [25], Stoddart and Taylor's diversity index (G) [26] and evenness index E5 [27] were calculated using the same R package.
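For illustration, the three genotypic diversity indices can be computed directly from the number of isolates per MLG; the sketch below uses hypothetical counts (the original analysis was performed with the R package poppr), and the E5 formula follows the standard definition used in that package.

```python
# Genotypic diversity indices from MLG counts (hypothetical counts, not the study data).
import numpy as np

counts = np.array([40, 25, 20, 10, 5, 3])     # hypothetical isolates per multilocus genotype
p = counts / counts.sum()

H = -np.sum(p * np.log(p))                    # Shannon-Wiener index of MLG diversity
G = 1.0 / np.sum(p**2)                        # Stoddart and Taylor's index (inverse Simpson)
E5 = (G - 1.0) / (np.exp(H) - 1.0)            # evenness: how evenly genotypes are distributed
print(f"H = {H:.3f}, G = {G:.3f}, E5 = {E5:.3f}")
```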
The standardised index of association (rbarD) as an estimate of linkage disequilibrium was calculated to investigate the mode of reproduction [24,28]. The expectation of rbarD for a randomly mating population was zero, and significant deviation from this value would suggest clonal reproduction. Significance was tested based on 1000 permutations and conducted in the R package poppr using the clone-corrected data [24].
The standardised measure of genetic differentiation, G'st, described by Hedrick [29] was calculated to estimate subdivision among populations. This index ranged from 0 to 1, independent of the extent of population genetic variation and locus mutation rates [29]. Pairwise GST values within the clone-corrected data were calculated using the R packages strata G V.1.0.5 [30] and mmod V.1.3.3 [31].
Hedrick's standardised GST was estimated to assess population structure among these populations [29]. Statistical significance was calculated based on 1000 permutations. Hierarchical analysis of molecular variance (AMOVA) was performed to evaluate the extent of population differentiation and structure among populations, host species groups, and within these groups [32].
Discriminant analysis of principal components (DAPC) was performed to infer clusters of populations without considering previous tree origin criteria [33]. DAPC was conducted with the R package adegenet V. 2.0.1 [34] using the Bayesian information criterion (BIC) to infer the optimal number of groups. Important advantages of DAPC are that it maximises variation between the groups, minimises the within-group genetic variability and does not require assumptions regarding evolutionary models [33].
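The study performed DAPC with adegenet in R; the sketch below is only a rough Python analogue of the same idea (dimension reduction, group inference over candidate numbers of clusters, then discriminant analysis) on random placeholder data, and its cluster-selection score is a crude stand-in for the BIC used by adegenet.

```python
# Rough Python analogue of DAPC (the study used adegenet's dapc() in R).
# Random placeholder data; the selection score below is only a crude
# BIC-like penalty, not the exact criterion implemented in adegenet.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(33, 14)).astype(float)  # 33 isolates x 14 allele columns

pcs = PCA(n_components=10).fit_transform(X)           # retain principal components
scores = {}
for k in range(2, 6):                                 # candidate numbers of clusters
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pcs)
    scores[k] = km.inertia_ + k * np.log(len(X))      # crude BIC-like penalty
best_k = min(scores, key=scores.get)

groups = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit_predict(pcs)
lda = LinearDiscriminantAnalysis().fit(pcs, groups)   # discriminant step of DAPC
print("clusters:", best_k, "discriminant variance:", lda.explained_variance_ratio_)
```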
To assess the relationships among MLGs, minimum spanning networks (MSNs) were constructed. Bruvo's genetic distance matrix and MSNs were generated using the R package poppr V.2.3.0 [23,24]. The genetic distance described by Bruvo et al. [35] takes the SSR repeat number into account, with a distance of 0.1 equivalent to one mutational step (one repeat).
Isolation and Population Description
A total of 153 isolates were obtained. Taking into account the initial sampling strategy, 97 isolates were obtained from sampling site 1 (BC_1), 28 isolates were obtained from six Pinus species from the arboretum AR20 (AR_2) and 28 isolates were obtained from sampling site 3 (AR_1) from seedlings of P. radiata planted in late spring of 2020 in this arboretum. These seedlings were obtained from a biosafety P2 greenhouse and the absence of the disease was confirmed by morphological and molecular methods before their establishment in the arboretum.
Pathogen Identification
All 153 isolates were confirmed as L. acicola by species-specific conventional PCR targeting the elongation factor region. When analysing the ITS and TEF1 sequences, only one ITS haplotype was represented by all the isolates, and it was 100% identical (420 aligned nucleotides) to L. acicola ex-type KC012999; USA; CMW45427 [36]. Three TEF1 haplotypes (442 aligned nucleotides) ( Figure 2) were distinguished in the Basque Country population with a nucleotide diversity of Pi = 0.00021. Haplotype MZ065328 and haplotype MZ065330 differed from MZ065332 in a single base pair and in two base pairs, respectively. Representative isolates per haplotype were included in the phylogenetic analyses ( Figure 2) and deposited into GenBank. These were MZ065328 (representing two isolates: DFA1c06 and DFA5d06), MZ065330 (representing two isolates: h6a25 and h16c25) and MZ065332 (representing 149 isolates). The topologies of the ML and MP phylogenies were similar (Figure 2), where isolates representing the haplotype of MZ065330 were clustered into the northern lineage of L. acicola and were identical to the ex-type KC013002 [36]. Isolates of haplotype MZ065328 and MZ065332 were clustered into the southern lineage of L. acicola. The haplotype of MZ065332 was 100% identical to KJ938451 (south USA) [11], whereas those of MZ065328 showed a distinctive single base polymorphism.
SSR Loci Data Analysis
All primer pairs amplified the SSR loci in the L. acicola Spanish population. Three loci (MD6, MD10, MD11) were monomorphic across all 153 isolates and were, therefore, removed from the analysis (Minor Allele Frequency < 0.01). The Spanish population exhibited a total of 22 MLGs. A clone-correction of the dataset was implemented to remove the bias of resampled MLGs in the analysis, resulting in a total of 33 representative isolates (Table 1). Table 1 abbreviations: H, Shannon-Wiener index of MLG diversity [25]; G, Stoddart and Taylor's index of MLG diversity [26]; E5, evenness [27,37,38]; rbarD, the standardised index of association [28], with the p-value based on the rbarD index.
The number of MLGs identified for each sampling site was 18 MLGs for BC_1 and 8 and 7 MLGs for AR_1 and AR_2, respectively. This difference is related to the sampling size and the number of isolates obtained (N = 97 for BC_1, and N = 28 for AR_1 and AR_2) (Table 1). A more appropriate estimate for richness comparison is the eMLG value, which is an approximation of the number of genotypes that would be expected after correction of the unbalanced sample size based on rarefaction. Thus, genotypic richness was lower in AR_1 and AR_2 compared with BC_1 after sample size correction (Figure 3). The BC_1 population showed the highest genotypic diversity (G = 18), followed by AR_1 (G = 8) and AR_2 (G = 7) (Table 1). The Shannon-Wiener diversity index (H) for the BC_1 population was higher (2.89) than the index for the arboretum populations (AR_1 and AR_2). The values of evenness (E5) were the same for the three established populations.
The BC_1 and AR_1 populations showed significant deviation in the rbarD value from the null hypothesis of recombination, not supporting sexual reproduction (rbarD = 0.1798 and rbarD = 0.5678, respectively, with p = 0.001 in both cases). On the other hand, AR_2 showed evidence for sexual recombination (rbarD = −0.0656, p = 0.796).
An analysis of molecular variance on the clone-corrected dataset revealed no statistically significant variation among populations (p > 0.05; variation within samples p = 0.20; variation between samples p = 0.24; variation between locations p = 0.672). There was no structure in the populations. In BC_1, 7 of the 22 MLGs identified were also present in the populations defined by the AR location (AR_1 and AR_2); these last populations also showed exclusive haplotypes, three in the case of AR_1 and one in the case of AR_2 (Figure 4). The discriminant analysis of principal components (DAPC) chart also showed the lack of population structure between isolates based on location (Figure 5).
Mating Identification
Mating type idiomorphs were successfully identified for 151 of the 153 isolates (Table 2). A chi-square test indicated no significant difference between the mating type ratios observed in the three populations (at the 0.05 significance level). Both mating types were found in more or less equal proportions, except in the AR_1 population, in which Mat-2 was more frequent (Mat-1:Mat-2 = 10:18) (Table 2).
Discussion
In this study, an intensive sampling of pines in the Basque Country was implemented, and of the nine species described in this genus [8], only L. acicola was detected. Only L. acicola has been reported in Europe within the genus Lecanosticta [39], and it was the only species known to cause BSNB until 2022, when L. pharomachri was detected in plantations in Colombia causing a severe outbreak of the disease [40].
Lecanosticta acicola is currently, together with Diplodia sapinea (Fr.) Fuckel 1870 [41], by far the most damaging and abundant fungal pathogen present in Pinus radiata stands in the Spanish provenance region No. 6. Reports of L. acicola expansion in the Northern Hemisphere have increased in the last 15 years, not only geographically, but also in the number of host species and in the range of climatic conditions in which this pathogen is detected [8,39,42]. This emerging disease has escalated in incidence and severity in the last decade, affecting the sustainability of Pinus radiata ecosystems. In the Basque Country, the damage caused by this pathogen accelerated a change in the forest model due to the logging of a thousand hectares a year and the mistrust concerning P. radiata sustainability under these circumstances [1].
In the studied area, three TEF1 haplotypes were identified; one clustered into the L. acicola northern lineage, identical to the ex-type KC013002 [36], and two clustered into the L. acicola southern lineage. The northern haplotype was represented by two isolates, obtained from seven-year-old P. sylvestris and P. nigra located in Irisasi (Gipuzkoa). The two haplotypes in the southern lineage included 151 isolates, of which 149 were isolated from P. radiata, P. ponderosa, P. nigra, P. sylvestris and P. brutia. Two isolates showed a unique base-pair mutation at bp site 101; these were obtained from a P. radiata stand located in Amurrio (Araba). In Europe, the southern lineage of L. acicola is found in Spain and France and the northern lineage in central and northern Europe [8]. Two isolates out of 153 belonged to the northern lineage; however, there are no previous records in Spain of the southern and northern lineages coexisting in the same geographical area. Previously, this phenomenon had only been reported in France [11,39].
The predominance of the southern lineage in the studied area might be due to the northern lineage being a relatively recent introduction. The trees from which the isolates were obtained were produced in two French nurseries located in the Alps of Upper Provence (France) during 2011 and 2012, and in a nursery located in Guémené (France) in 2013, and planted in one of the arboreta established under the European project REINFFORCE. Despite the fact that the material was subjected to phytosanitary controls prior to introduction, it is possible that the pathogen was not detected. The predominance of the southern lineage might also be related to differences in life history traits. For example, southern isolates were reported to be more virulent to Pinus spp. than northern ones, with the exception of P. sylvestris [43]. Spore germination at 32 °C was successful for the southern isolates, whereas it failed for the northern strains [44]; these differences might reflect a lack of adaptation of the northern lineage to higher temperatures.
Genetic diversity in the BC_1 population obtained from different plantations of the Basque Country was higher compared with the diversity in the arboretum AR20 (location of populations AR_1 and AR_2). This arboretum was established in a nursery that centralised the supply of P. radiata seedlings in the region under study. This nursery was created to produce and distribute reproductive material to the forest sector. Furthermore, this place showed serious needle blight damage. Since this nursery received and grew local and imported seeds in the past, it was considered to be a potential source of pathogen diversity and dispersal through the seedlings to the entire region. Anthropogenic movement of infected plant material and seedlings is considered the main source of long-distance dispersal of L. acicola [8]. The 28 isolates from population AR_2, with eight haplotypes from pine species established twelve years ago, and those from AR_1, with seven haplotypes obtained from newly established seedlings in the arboretum, support the hypothesis of a high and fast colonisation capacity of different hosts, but mainly P. radiata, which shows a high susceptibility to the disease in the region [1].
Indications of sexual recombination in the sampled region were supported by the fact that both mating types were identified in more or less equal proportions in the populations of L. acicola, by the high levels of observed genetic diversity, and by many of the isolates with the same multilocus haplotypes having different mating types in the same populations. Direct evidence for the sexual state of the pathogen already exists at a location 0.53 km away from the arboretum [12], and it may also be present in these areas. Population structure analysis showed no evidence of population subdivision. It is likely that there is only one panmictic population present throughout all locations. However, a more intensive sampling of these areas may reveal new hypotheses about population structure, as was observed in other fungal species in the region [41].
The seeds and plants used in Northern Spain come from distributors in the United States, France, Denmark, New Zealand, Chile, etc., which makes it difficult to generate hypotheses about the potential origin of the pathogen's introduction [45]. Nevertheless, considering that the main pine species in the studied region is Pinus radiata and that the pathogen is absent in Chile and New Zealand, either the United States and/or France could be the main candidates for the pathogen introduction into Spain. Previous population analysis established that the origin of the northern and southern lineages present in Europe were from North America [11,46,47]. The Basque Country was the first location in Europe where the presence of L. acicola was confirmed, and where North America was potentially considered the source of the infected host plants [39]. The presence of the isolates from the northern lineage in our area could be a relatively newer introduction from other European countries caused by the northern lineage spreading within Europe through separate introductions, and thus defining characteristic populations [46,47].
The knowledge of the origin, diversity and genetic structure of pathogen populations at a global and local scale can have a remarkable impact on landscape-level planning models and other decision support systems that enable forest managers to generate optimal disease management strategies. The high levels of genetic diversity of the pathogen would complicate the implementation of successful control measures and breeding programs, and could enhance the capacity of the pathogen to adapt to stressful conditions. In this context, preventative measures should be directed at reducing the movement of plants among countries and regions to avoid the introduction of new genetic sources of diversity into existing populations. This is even more important now in Spain, seeing that a possible recent introduction of a northern lineage was discovered in this study and that adaptation of isolates in each lineage to local climatic conditions could contribute to the success of the pathogen [8]. Some areas are so devastated by the disease that the use of highly susceptible pine species might need to be restricted in plantations to help reduce inoculum pressure and to prevent these species from becoming reservoirs of the pathogen. | 2023-06-28T06:17:26.154Z | 2023-06-01T00:00:00.000 | {
"year": 2023,
"sha1": "033a7413fc84a00014473f699985be875ac8e883",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b99bec8edec3e94cd7a5291e4643a9e2b5591ab1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
157148027 | pes2o/s2orc | v3-fos-license | Relative Valuation Model Analysis of Idx
There are various valuation models that can be used by investors in order to predict stock value. This paper focuses on the Relative Valuation Model, which is a popular model among investors as it is easier to use than other models. The aim of this paper is to compare the prediction accuracy of the various ratios in the model. The ratios examined in this paper are Price to Earnings (PE), Price to Book Value (PBV), Price to Cash Flow (PCF), and Price to Sales (PS). Using the LQ45 listed stocks during the period 2006 – 2010, it is found that, overall, PBV appears to be the best ratio for predicting LQ45 stocks in the Indonesian equity market. However, mixed results are found in the yearly analysis, in which PE, PBV, and PCF result in the lowest prediction errors in different years. In the sector industry analysis, both PE and PBV are the best predictors in three sectors each. The descriptive analysis is supported by the hypothesis testing, which shows that the accuracy of the examined ratios differs. The implication of the research result is that investors should take time and sectors into account before choosing a single ratio as a predictor.
In terms of financial asset investment, people can invest either in the money market or the capital market. Both markets offer different investment instruments with different rates of return and time horizons. The money market offers instruments such as saving deposits, term deposits and SBI, each offering a different rate of return. Based on data released by the Bank of Indonesia in 2010, saving deposits in Indonesia offer a rate of return of around 3% for one year, three-month term deposits offer around 7%, six-month term deposits around 8% and one-year term deposits 9.5%. Table 2 shows that in the past few years the capital market in Indonesia has shown a significant rise in its performance, especially in the stock market. Based on the author's calculation, the Jakarta Composite Index recorded an average return of 23.2% over the five years from 2006 until 2010. In the period of research, from 2006 to 2010, the JCI almost always recorded a return above 25%, except for 2008. In 2008, the return was negative because of the impact of the financial crisis, but in the next year the JCI recorded an increase in return of 61%.
Table 2. JCI Rate of Return
Another instrument in the capital market is bonds, which offer another level of return with different tenors. Based on the Indonesia Bond Price Agency, one-year corporate bonds in Indonesia offer around 5.7% per annum, three-year bonds around 6.2%, and the highest is for 15-year bonds, which offer a coupon rate of around 8.2%. Table 3 shows the yields of Indonesia's corporate bonds. Nowadays, stocks are one of the popular investment media, offered across different industries with different types and different price levels. To enter the stock market, an investor should have a good investment strategy. In choosing stocks, the stock price will usually be the first thing to consider, evaluating whether the stock is cheap or expensive compared to the fundamentals of the company.
As a consideration in developing an investment strategy, investors will generally examine the fundamentals of a stock. To find out the fundamental value of stocks, several models of stock price valuation can be used to appraise it. The models used in stock price valuation should be reliable and apply the same criteria consistently time after time. One reason for using models in valuation is that models never vary and are never moody, in contrast to human forecasters, who naturally tend to react emotionally and inconsistently (O'Shaughnessy, 1998); that is why models of stock price valuation can be used objectively.
There are many models to value stocks, from simple ones to complex ones. Damodaran (2006) explains several valuation models, such as Asset-Based Valuation, Discounted Cash Flow Models, Relative Valuation and Contingent Claim Models. Each model has different assumptions and factors; therefore, the final value derived from each model can be very different. The most common model for predicting a stock price is Relative Valuation: almost 85% of equity research reports in the US are based upon a multiple and comparables (Damodaran, 2006). This research focuses on several models of the Relative Valuation approach. The relative valuation methods examined in this study are the Price to Earnings Ratio (P/E), Price to Book Value Ratio (P/BV), Price to Cash Flow (P/CF), Price to Sales (P/S) and Changes in Earnings per Share (% EPS). These methods are the most popular methods used by investors in various markets in the world (O'Shaughnessy, 1998). These models produce forecasted share prices, whose accuracies in predicting the stock price over a short period are compared. By comparing the prediction errors, the best Relative Valuation model can be determined. The result of this research provides practical information with regard to the best model for predicting stock prices on the IDX.
The purpose of this study is to investigate the ability of average P/E, PBV, P/CF, P/Sales, EPS changes to predict future stock price in Indonesian equity markets.
Valuation
Decision making in investing usually needs a valuation. Various kinds of valuation can be used, whether simple or complex. Usually a simple valuation uses a limited amount of information, beginning with multiple analysis that uses a small number of financial statement accounts such as sales, earnings or book value. Simple methods run the risk of ignoring relevant information. In contrast, a complex fundamental analysis identifies all relevant information and extracts the implications of that information for valuing the firm.
Berk (2000) define the valuation principle as follow, "The value of a commodity or an asset to the firm or its investors is determined by its competitive market price. The benefit and costs of a decision should be evaluated using those market prices. When the value of the benefits exceeds the value cost, the decision will increase the market value of the firm." According to Damodaran (2006) valuation approaches are used in investment because there is perception that the market are inefficient and the price in the market is incorrect. It means that in investor's perception the market price of stock could be undervalued or overvalued and it will be corrected to the fair value. In an efficient market, the market price is the best estimation of stock value.
Valuation Approaches
Some approaches to valuation usually mentioned in the investment literature are the Discounted cash-flow (DCF) method, the Relative valuation method (multiple method), and the Contingent claims valuation method. Figure 1 shows the valuation approaches as classified by Damodaran. The purpose of the asset-based valuation approach is to assess whether the assets owned by a company are worthwhile for its business. To calculate the expected future cash flows, the discounted cash flow approach is used, relating the value of an asset to the present value of the expected cash flows that can be generated from that asset. The basis of discounted cash flow is that every asset has an intrinsic value that can be estimated, based upon its characteristics in terms of cash flows, growth and risk. In conducting a discounted cash flow valuation we need to estimate the life of the asset, the cash flows during the life of the asset, and the discount rate to apply to these cash flows to get the present value (Damodaran, 2006).
Figure 1. Valuation Models
The most commonly used valuation approach is relative valuation. This approach estimates the value of an asset by comparing a comparable variable, such as earnings, cash flows, book value or sales, with that of other companies. The last approach in the classification is contingent claim valuation, which uses option pricing models to measure the value of an asset.
Relative Valuation
The objective of relative valuation is to value assets based on how similar assets are currently priced in the market. Put simply, firms in the same business as the firm being valued are called comparables. Damodaran (2006) stated that in relative valuation the value of an asset is derived from the pricing of comparable assets or businesses, standardised using a common variable. Relative valuation is more likely to reflect market perceptions than other valuation approaches; it is shaped by price volatility, which depends on the market situation. Damodaran (2006) argued that it is important that the price reflects these perceptions when the investment objectives are to sell a security at that price today (as in the case of an IPO) or to invest in "momentum"-based strategies (Damodaran, 2006). Another description of the relative valuation model is provided by Brown (2009): "Relative valuation implicitly contends that it is possible to determine the value of an economic entity (i.e., the market, an industry, or a company) by comparing it to similar entities on the basis of several ratios that compare its stock price to relevant variables that affect a stock's value." Meanwhile, Penman (2007) stated that a relative or multiple is only the ratio of the stock price to a particular variable in the financial statements. The most common ratios are derived from important numbers in the statements, such as earnings, book values, sales and cash flow, hence the price-earnings ratio (P/E), the price to book ratio (P/BV), the price to sales ratio (P/S) and the price to cash flow from operations ratio (P/CF).
The summary of explanation above is that relative valuation can be applied for similar firms in the market by comparing some variables such as the earnings firms generate, to the book value or replacement value of the firms themselves, to the revenues that firms generate or to measures that are specific to firms in a sector. There are some advantages of relative valuation: 1. Relative valuation is much more likely to reflect market perceptions and moods than discounted cash flow valuation. 2. With relative valuation, there will be a proportion of securities that are undervalued and overvalued. 3. Relative valuation is more tailored to investor needs. 4. Relative valuation generally requires less information than other valuation method.
Some disadvantages of relative valuation: 1. Relative valuation is built on the assumption that markets are correct in the aggregate, but make mistakes on individual securities. To the degree that markets can be over or under valued in the aggregate, relative valuation will fail 2. Relative valuation may require less information in the way its just use a variable to value.
Relative valuation is easiest to use when there are a large number of assets comparable to the one being valued, these assets are priced in a market, and there exists some common variable that can be used to standardize the price. This approach tends to work best for investors who have relatively short time horizons, are judged based upon a relative benchmark, can take actions that can take advantage of the relative mispricing; for instance, a hedge fund can buy the undervalued and sell the overvalued assets.
Price to Earnings Ratio
The most common valuation multiple is the price earnings ratio. The P/E ratio is equal to the share price divided by earnings per share. This indicates that when buying stocks, investors are in essence buying the rights to the firm's future earnings, so an investor should be willing to pay more for a stock with higher current earnings. The value of a stock can be estimated by multiplying its current earnings per share by the average P/E ratio of comparable firms. The P/E ratio compares the value of expected future earnings to current earnings; it is based on expected earnings that have not yet been recognized.
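As a concrete illustration of this estimate, the short sketch below applies an industry-average P/E to a firm's EPS; all numbers are hypothetical and are not taken from the study's data.

```python
# Relative valuation with the P/E multiple (hypothetical figures).
comparable_pe = [12.0, 15.5, 18.0, 14.2]           # trailing P/E of comparable firms
avg_pe = sum(comparable_pe) / len(comparable_pe)    # industry-average multiple
eps = 250.0                                         # target firm's earnings per share
predicted_price = avg_pe * eps                      # value implied by the comparables
print(f"average P/E = {avg_pe:.2f}, predicted price = {predicted_price:,.0f}")
```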
O'Shaughnessy (1998) argued that buying stocks with low P/E is the one true faith. Investors who buy stocks with low PE think they're getting a bargain. Generally, investors believe that when a stock's PE ratio is high, investors have unrealistic expectations for the earnings growth of the stock. Damodaran (2006) explained that one of the more intuitive ways to think of the value of any asset is as a multiple of the earnings that assets generate. When buying a stock, it is common to look at the price paid as a multiple of the earnings per share generated by the company. This price/earnings ratio can be estimated using current earnings per share, which is called a trailing PE, or an expected earnings per share in the next year, called a forward PE.
There are two types of P/E. Firstly, the Trailing P/E that is calculation of firm's P/E using earnings over the prior 12 months. Secondly, the Forward P/E that is firm's P/E ratio calculated using predicted earnings over the coming 12 months.
A higher P/E ratio means that investors are paying more for each unit of net income, so the stock is more expensive compared to one with a lower P/E ratio. The P/E ratio also shows current investor demand for a company share.
The most common approach to estimating the PE ratio for a firm is to choose a group of comparable firms calculate the average PE ratio and subjectively adjust the average for differences between the firm being valued and the comparable firms.
Penman (2007) mentioned the biggest advantage of the P/E ratio is that it is easy to use and understand. Even those who are not educated in finance can understand it, and although it is only a very basic tool and method of evaluating the worth of the shares of a company, it can be used to make quick decisions.
Although the P/E ratio has some disadvantages such as, P/E is largely due to the subjective in nature, and that no standard in what price earnings ratio investor can sell at. The other disadvantage is inflation. At times of high inflation, the currency of the specific country where the share observation takes place. The problem with this is that if investor exchanges the earnings of the company to a foreign currency, say from USD to IDR, it can devaluate the earnings of the company, which in turn, given the formula, increases P/E. The interpretation of the P/E concerning a company is also dilemmatic. Some investors might consider a specific P/E ratio too high, whereas others might consider the same ratio to be too low, even if it is compared with the industry P/E average.
There are some important issues that have to be considered when using P/E ratio; (1) it is the reflection of the market's optimism concerning a firm's growth prospects, (2) it is a much better indicator of a stock's value than the market price alone, (3) in general, it is difficult to say whether a particular P/E is high or low, and (4) P/E ratios are generally lower during times of high inflation.
Price to Book Value Ratio
Firms usually trade at a price that differs from book value. The book value represents shareholders' investment in the firm. Book value is also assets minus liabilities that are equal to net assets. But, actually book value typically does not measure the value of the shareholders' investment. The value of the shareholders' and the value of net assets are based on how much the investment is expected to earn in the future. Accordingly, the intrinsic P/BV ratio is determined by the expected return on the book value.
P/BV ratio fits with the idea that shareholders buy earnings. Price, in the numerator of the P/BV ratio is based on the expected future earnings that investors buying. So, the higher the expected earnings relative to book value, the higher P/BV ratio. The P/BV ratio prices expected return on book value, but it does not price a return that is equal to the required return on book value.
O'Shaugnessy (1998) called P/BV as a better gauge of value. Essentially, investors who buy stocks with low PBV believe they are getting stocks at a price close to their liquidating value, and that they will be rewarded for not paying high prices for assets.
The relationship between price and book value of a firm has always attracted attention for investor. If the stock sold below the book value, it has been generally considered as a good candidate for undervalued portfolios, while those selling higher than book value are considered as overvalued. Damodaran (2006) stated that "The market value of the equity in a firm reflects the market's expectation of the firm's earning power and cash flows. The book value of equity is the difference between the book value of assets and the book value of liabilities." Penman (2007) described P/BV as follow, "The P/BV ratio is a financial ratio used to compare a company's book value to its current market price. Book value is an accounting term denoting the portion of the company held by the shareholders; in other words, the company's total tangible assets less its total liabilities." The calculation of P/BV can be performed in two ways, but the result should be the same each way. In the first way, the company's market capitalization can be divided by the company's total book value from its balance sheet. The second way, using per-share values, is to divide the company's current share price by the book value per share (book value divided by the number of outstanding shares).
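The two calculation routes described above give the same number, as the small hypothetical check below shows.

```python
# P/BV computed both ways described above (hypothetical figures).
market_cap = 5.0e12      # total market capitalisation
book_value = 2.0e12      # total book value of equity
shares_out = 1.0e10      # shares outstanding

pbv_from_totals = market_cap / book_value
pbv_per_share = (market_cap / shares_out) / (book_value / shares_out)
assert abs(pbv_from_totals - pbv_per_share) < 1e-12
print(f"P/BV = {pbv_from_totals:.2f}")   # 2.50 either way
```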
LeRiche (2009) stated that "as with most ratios, it varies a fair amount by industry. Industries that require more infrastructure capital will usually trade at P/B ratios. P/B ratios also commonly used to compare banks, because most assets and liabilities of banks are constantly valued at market values." A higher P/B ratio implies that investors expect company to make more value from a given of assets. P/B ratios do not, however, directly provide any information on the ability of the firm to generate profits or cash for shareholders. There are several reasons why Price to Book value ratio useful in investment analysis: first, book value provides a relatively stable, intuitive measure of value that can be compared to the market price. Second, book value give a reasonably consistent accounting standard across firms, so this ratio can be compared across similar firms. Third, even firms with negative earnings, which is cannot be measured using PE ratio, can be valued using this ratio.
One of the advantages of the price/book ratio is that it is easy to calculate it and understand the meaning behind the numbers. As a result the knowledge of such a ratio will greatly facilitate the comparison between stocks target, especially those that are part of old-line industries. Price/book ratios give an understanding of how the market values the assets as compared to the earnings it makes. What is more, price/book ratio is applicable throughout the world.
On the other hand, price/book value fails to reflect intangible assets such as intellectual assets, which represent the basis of the functions of high-tech companies. As a result, the balance sheets of such companies fail to reflect the intellectual assets of such companies. In turn, this leads to low book values and artificially high price/book ratios.
Price to Cash Flow Ratio
Since concern about window dressing has risen, the popularity of the relative price to cash flow valuation ratio has grown. Cash flow values are generally less prone to manipulation than other variables. Brown (2009) stated that cash flow values are important in fundamental valuation, and they are critical when doing credit analysis, where "cash is king." The price/cash flow ratio is a ratio used to compare a company's market value to its cash flow. It is calculated by dividing the company's market cap by the company's operating cash flow or, equivalently, by dividing the stock price by the operating cash flow per share. In theory, the lower a stock's price/cash flow ratio is, the better value that stock is.
The accounting rules sometimes cause certain types of businesses or industries to understate or overstate the true profits, causing the price to cash flow ratio to work better for valuation purposes than its counterpart, the price to earnings ratio. For some company, the price to cash flow ratio would provide a better idea of the amount of money available to management for further research and development, marketing support, debt reductions, dividends, share repurchases, and more. (Penman, 2007) Turk (2006) mentioned cash flow per share ratio is another important ratio in determining company's earnings. While this ratio does not relate to a company's earnings in the strictest sense, it does indeed give a picture of how much cash is flowing through the company during the course of a given year. If the business is operating properly, high percentage of this cash will comprise earnings. Some financial analysts give the cash flow per share ratio more bearing than earnings per share, because the earnings per share figure can be subject to manipulation, whether inadvertent or fraudulent. On the other hand, it is almost impossible to fraudulently manipulate cash as it is a highly physical asset, and very easy to verify.
Price to cash flow represents another measurement of the company's earnings power, or how much it can generate for its shareholders. However, price/cash flow has its disadvantages. One of them is that it is not easy to calculate it and it is also difficult to define what a cash flow is.
Price to Sales Ratio
A revenue multiple measures the value of the equity or a business relative to the revenue that it generates. There are a number of reasons for using revenue multiples in investment analysis: unlike earnings and book value ratios, which can become negative for some firms, revenue multiples are available even for the most troubled firms and for very young or new firms, and revenue multiples are relatively difficult to manipulate. O'Shaughnessy (1998) called the price to sales ratio the King of the Value Factors.
The Price to sales ratio has been suggested as useful by Martin Leibowitz (1997), a widely admired stock and bond portfolio manager. These advocates consider this ratio meaningful and useful for two reasons. First, they believe that strong and consistent sales growth is a requirement for a growth company. Second, given all the data in the balance sheet and income statement, sales information is subject to less manipulation than any other data item. (Brown, 2009) Price-to-sales ratio, P/S ratio, or PSR, is a valuation metric for stocks. It is calculated by dividing the company's market cap by the company's revenue in the most recent year; or, equivalently, divide the per-share stock price by the per-share revenue.
Both earnings and book value are accounting measures and are determined by accounting rules and principles. An alternative approach, which is far less affected by accounting choices, is to use the ratio of the value of an asset to the revenues it generates. For equity investors, this ratio is the price/sales ratio (PS), where the market value per share is divided by the revenues generated per share. The advantage of using revenue multiples, however, is that it becomes far easier to compare firms in different markets, with different accounting systems at work, than it is to compare earnings or book value multiples.
Some advantages of Price to Sales ratio are: since the sales revenue is always positive, the price to sales ratio is meaning full even firm is in distress, sales revenue is not as easy to manipulate as EPS and book value, which are significantly affected by accounting conversation, P/S ratios is not as volatile as P/E multiple, For start-up companies P/S is an appropriate measure whereas the P/E may be misleading. It is also a valuable tool for cyclical or mature industries On the other hand, some disadvantages of Price to Sales ratio are: if firms made significant amount of sales on credit, that will inflate their Price to Sales ratio, but that does not necessarily indicate operating profits as measured by earnings and cash flow, furthermore, analyst should be careful about the revenue recognition practices that can still distort sales forecast thus P/S ratio, P/S ratios can capture the sales part, but it cannot capture the differences in cost structure across companies.
Previous Studies
Relative valuation is widely used in many countries around the world. Some research has been conducted on using relative models to predict stock prices. The price earnings ratio, price to book value ratio, price to sales ratio and price to cash flow ratio are the most commonly used objects of research. Park and Lee (2003) conducted a research that tested the relevance of relative valuation models in the Japanese stock market. Omran (2009) conducted research on the valuation multiples of Islamic financial institutions in the UAE. Sehgal and Pandey (2010) conducted a research on equity valuation using price multiples in Indian companies. Minjina (2009) conducted a research on relative performance using multiples on the Bucharest stock exchange.
The results of these studies differ. Park and Lee found that PBV gives rise to the least prediction errors in the Japanese stock market, while Omran (2009) found the PE ratio best for Islamic financial institutions in the UAE; Sehgal and Pandey also found that PE is the best approach for equity valuation in the Indian context, and Minjina's results indicate that the best accuracy is obtained by the P/CF multiple.
Based on Park and Lee (2003), PBV gives rise to least prediction errors in Japan Stock market. By comparing zero-net portfolios' returns, they found that P/S ratio is related to the best investment strategy in the period, but PER is a reasonably good multiple in the bear market period. Omran (2009) observed that the price-to-earnings ratio provides the best price forecast (and hence the minimum pricing errors) for most of the sample sectors. Further, price to book value seems to be the second-best financial ratio for equity valuation. P/E also performs best vis à vis other multiples on an aggregate market basis. Minjina (2009) results also show that the P/S multiple has a lower accuracy than those of the valuation methods compared to which it is statistically different.
Stock Selection
The data used in this research are stocks listed on the Indonesia Stock Exchange that had been included in the LQ45 index during 2006 – 2010. The constituents of the LQ45 index change frequently, and this research uses the stocks that were ever listed in the LQ45 within the period. As a result, the total number of companies used in this research is 86. These companies are classified into the nine sectors defined on the Indonesia Stock Exchange. The number of companies for each sector is listed in Table 4 below.
Table 4. Number of companies per sector
Banking: 12
Basic Industry: 9
Consumer: 5
Infrastructure: 11
Plantation: 7
Mining: 13
Other Industry: 4
Property: 13
Trade, Service: 12
Total: 86
Relative Valuation Methods
This research conducts calculations for the relative valuation models: price to earnings ratio, price to book value, price to cash flow, and price to sales ratio. Each model is calculated individually and the companies are classified by industry sector. The formulas of the models used in this research are:
1. Price to Earnings Ratio: P/E = price per share / earnings per share (EPS)
2. Price to Book Value: P/BV = price per share / book value per share
3. Price to Cash Flow: P/CF = price per share / cash flow per share
4. Price to Sales: P/S = price per share / sales per share
The four models can be calculated only if the denominator (EPS, book value per share, sales per share or cash flow per share) is a positive number, so any company with a negative value of a denominator in a given year is removed from the sample.
To get the average industry ratio, the data are classified into the nine sectors listed in Table 4, and this industry average is used to calculate the predicted stock price by multiplying it by the denominator of each ratio: EPS, book value per share, sales per share or cash flow per share. After obtaining the predicted price, it is compared to the actual price at the end of the year. The differences between the predicted and actual prices are the prediction error values. In order to rank the accuracy of the predicting models, the prediction error values are made absolute. The model with the lowest absolute prediction error is the best model for predicting the stock price.
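The sketch below illustrates this procedure for the P/E multiple only, on hypothetical figures; the same steps apply to P/BV, P/CF and P/S, and the error is expressed relative to the actual price purely for readability (the study reports average absolute prediction errors).

```python
# Sketch of the prediction-error calculation for the P/E multiple
# (hypothetical prices and EPS; the study repeats this for P/BV, P/CF and P/S
# and applies the industry average of one year to the following year).
import pandas as pd

df = pd.DataFrame({
    "sector": ["Banking", "Banking", "Banking", "Mining", "Mining"],
    "price":  [1500.0, 2200.0, 800.0, 3100.0, 900.0],   # actual year-end prices
    "eps":    [120.0, 180.0, 50.0, 250.0, 60.0],        # earnings per share
})
df["pe"] = df["price"] / df["eps"]

industry_pe = df.groupby("sector")["pe"].transform("mean")  # industry-average multiple
df["predicted"] = industry_pe * df["eps"]                    # predicted price
df["abs_error"] = (df["predicted"] - df["price"]).abs() / df["price"]
print(df[["sector", "pe", "predicted", "abs_error"]])
print("mean absolute error:", df["abs_error"].mean().round(3))
```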
Hypothesis
There are many methods of stock price valuation. The better method is the one that makes better predictions, i.e. that is expected to have small errors when its result is compared with the actual price. This research focuses on the relative valuation method and uses four models to predict the stock price. The objective of this research is to compare the level of error of each relative valuation model. The research problem induces the following hypothesis: H0: all models have the same level of error in predicting stock price. The analysis proceeds as follows: calculate the value of P/E, P/BV, P/CF and P/S of the companies and calculate the average of each model by industry; use the average value to predict the stock price in the next year by multiplying it by the denominator of each model; calculate the error of the models by comparing the stock price prediction with the actual price, the difference between the prediction and the actual price being the model error; make the predicted error values absolute, so that all numbers appear as positive; calculate the average absolute predicted error value to find the best model in predicting stock price, i.e. the one with the smallest error; test the average predicted error values with a statistical method to determine whether the averages differ from one another, the statistical method in this research being one-way ANOVA; and use the result of the statistical test to answer the hypothesis of this research.
This research investigates the best model for predicting the stock price in aggregate over the five years from 2006 until 2010, split by year and split by industry. The statistical test is applied only to the prediction error values aggregated over the five years.
As a result, this research shows which relative valuation model among the PE, P/BV, P/S and P/CF ratios can be used to predict stock prices in the Indonesian stock market.
Data Description
All data in this research such as, stock price, EPS, book value, sales, cash flow and outstanding shares are obtained from Bloomberg terminal as reported in financial report that published in Indonesian Stock Exchange. The number of each model such as price earnings ratio, price to book value, price to sales and price to cash flow are calculated using the formula as stated in chapter III.
There are 86 companies included in this research (the list of companies is in the appendix). The four models can be calculated only if the denominator — EPS, book value per share, sales per share or cash flow per share — is a positive number, so companies with a negative value of a denominator in a given year were removed from the sample. Overall, over the five years the number of firm-years in this research is 286. To obtain the average industry ratio, the data were classified into the nine sectors listed on the Indonesia Stock Exchange (see Table 6). This industry average was used to calculate the predicted stock price, and after obtaining the predicted price, it was compared to the actual price at the end of the year. The difference between the predicted and actual price is the prediction error value.
The difference value can be positive or negative. Positive means that the predicted price is higher than the actual price, while negative means that the predicted price is lower than the actual price. In order to rank the accuracy of the predicting models, the prediction error values are made absolute. The model with the lowest absolute prediction error is the best model for predicting the stock price. Table 6 shows the results of the average predicted error for the non-absolute and absolute 5-year errors and the absolute yearly errors. Table 6. The average of prediction errors
Prediction Errors
The table above shows that for the non-absolute error values, the price to sales ratio has the highest positive error, followed by price to cash flow and price to earnings, and the smallest error belongs to the price to book value ratio. The best prediction model is the model that shows the smallest error, meaning that the predicted price has a small difference from, or is close to, the actual price. From the table above, the error of the PBV ratio shows the smallest value compared to the others. It indicates that over the past five years, from 2006 until 2010, PBV was the best model for predicting stock prices in terms of non-absolute error values.
The averages of the absolute errors show somewhat different values for the 5-year predicted errors. In absolute terms, the ranking of the ratios is the same as for the non-absolute error values, but with different amounts. In absolute terms, PBV still has the lowest error among the ratios. Based on the calculation, from 2006 until 2010 PBV is the best model for predicting the stock price, second is the PE ratio, followed by PCF and the PS ratio.
PBV appearing as the best model means that in picking a stock, investors pay attention to the book value of the company. Book value is important: it is the company's net assets. Book value can describe the market value of assets compared to the earnings the company makes; in other words, investors expect a certain amount of profit to be generated from a given amount of assets. O'Shaughnessy called P/BV a better gauge of value. Essentially, investors who buy stocks with low PBV believe they are getting stocks at a price close to their liquidating value, and that they will be rewarded for not paying high prices for assets.
The predicted error value for the PE ratio is only slightly different from that of PBV. Besides PBV, investors also pay attention to the PE ratio of a company. Earnings are important for both the company and the investor. For the company, earnings are used to run the business; for the investor, earnings are important because, by buying the stock, they expect to receive dividends. The PE ratio is the most common measurement of how cheap or expensive a stock is relative to other stocks, and O'Shaughnessy named the PE ratio a separator of the winners and losers.
Over the whole five-year period, PBV shows the lowest error in predicting the stock price. However, based on the calculations for each year, the model that gives the lowest error is not the same every year. As seen in Table 6, PE and PBV show almost similar amounts of error, and both have smaller errors than PS and PCF. Throughout the period of research, price to sales always shows the biggest error among the ratios, perhaps because sales do not have a direct effect on the stock price. Sales is the top line of the income statement; after the company's expenses are subtracted, it ends up as the earnings of the company, which more logically have an impact on the stock price.
In 2008, all of the errors increased sharply. In that year, the financial crisis hit the global economy. Based on the author's calculation, the rate of return of the Indonesian stock market fell by around 68%. This caused the models to make large errors in predicting the stock prices, because almost all of the stock prices on the IDX fell.
In 2009, PCF shows the smallest error, although the number is only slightly different from that of PBV. This was the recovery period from the crisis. Cash is important for both the company and the investor: cash can be used immediately to cover the company's obligations, and a company with a good cash flow tends to be safer than others. Perhaps that is why investors were most likely to pay attention to the cash flow of a company when buying its stock.
Predicted Sectoral Error
The 86 companies are drawn from the nine sectors classified on the stock exchange: banking, basic industry, consumer, infrastructure, plantation, mining, other industry, property, and trade and services. The table below shows which model performs well in predicting the stock price for the different industries. Table 6. 5-Years Absolute Sectoral Predicted Error. Table 6 shows that the best predicting model is not the same for the different industries. Based on this research, the PE ratio made the least error in the basic industry, consumer and plantation sectors, PBV performed well in the infrastructure, property and trade and services sectors, PCF in other industry, and the PS ratio in the mining sector and banking. Based on Table 6, the best model for predicting the stock price differs for each sector. The section below elaborates on the consistency of each model in predicting the stock price per industry in each year. Table 7. Predicted Error of Banking Sector
Banking
In this research, the four models do not perform consistently in predicting the stock price; each model appears as the best predictor at some point in the period of research. This research shows that, on average, price to sales (or revenue) is the best at predicting the price, but the PBV model appears twice as the best model, in 2007 and 2009.
Basic Industry Table 8. Predicted Error of Basic Industry Sector
For the basic industry sector, the PE ratio consistently appears as the best model for predicting the stock price; it appears as such every year in the period of this research. Compared to the other models, the average error of the PE ratio is less than 1 while the others' are not, and the gap between the PE error and the others is quite wide, which means that the PE ratio is the best model to use for basic industry. For this industry, investors evidently regard earnings as the most important thing to consider when deciding to buy a stock. As stated earlier, Damodaran explained that one of the more intuitive ways to think of the value of any asset is as a multiple of the earnings the asset generates, which means that when buying a stock, it is common to look at the price paid as a multiple of the earnings per share generated by the company. Table 9. Predicted Error of Consumer Sector
Consumer
Like the basic industry sector, the consumer sector also shows that the PE ratio is the best at predicting the stock price. The PS ratio and PCF each appear once as the best within the period, but their errors are only slightly different from those of PE. The companies included in this sector are companies that sell consumer goods, such as Unilever, Indofood, Gudang Garam, and Kalbe Farma. These companies have a consistent record of paying dividends, which means that earnings are important for investors who want to buy the stocks of these companies. O'Shaughnessy mentioned that for many on Wall Street, buying stocks with low P/E is the one true faith.
Investors who buy stocks with low PE think they are getting a bargain; by buying the stocks, investors become stakeholders of the company and can also obtain another advantage, namely dividend payments. In this industry, PCF shows the least error on average, but within the research period PBV and PCF appear equally often as the best model in predicting the stock price. In this research, the companies included in this sector are ASII, GJTL, DOID and ADMG. In 2009, GJTL, DOID and ADMG had negative earnings, so the error was computed only for ASII, and all models show exactly the same error. Companies in this sector rely heavily on fixed assets in running their business, for example Astra International: one of ASII's subsidiaries, United Tractors, holds many fixed assets because its business is providing heavy equipment. Companies in this sector also need factories and machinery, which are classified as fixed assets; the spending to acquire and maintain these assets is called capital expenditure, and this capex depends on the company's cash flow.
Property Table 14. Predicted Error of Property Sector
In the property sector, PBV appears as the best model for predicting the stock price on average. The characteristics of this sector are almost similar to those of the infrastructure sector, where PBV also appears as the best model. For this sector, the most important thing is the land bank; the business is how the company manages the land bank to make a profit. The land bank is part of the assets, which directly affects the book value of the company. If the errors are split by year, the PE ratio appears as the best model more often than PBV, so it seems investors also pay attention to earnings before buying a stock from this sector. Table 15. Predicted Error of Other Industry Sector
Trade and Services
As in the property and infrastructure sectors, in this sector PBV also appears as the best model for predicting the stock price. On average, PBV makes the least error, and within the period it appears most often among the models. The PS ratio never performs well in predicting the stock price in this sector. This sector consists of various types of companies, such as service companies, retail companies, and investment companies. The characteristics of these companies are almost similar to those of the banking and infrastructure sectors, which have PBV as the best model in predicting the stock price.
Hypothesis Testing for Absolute Predicted Error
After the absolute predicted errors have been computed, they are tested statistically to identify the most accurate model in predicting the stock price. The author uses a one-way ANOVA approach, whose purpose is to determine whether more than two population means are equal.
In this research, the means of four populations are tested (PE, PBV, PS and PCF). The table below presents the ANOVA result. The statistical test shows that PBV is the best model for predicting the stock price. Overall, PBV is the best model; however, each industry has its own best model that suits the characteristics of its companies.
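The sketch below illustrates the kind of one-way ANOVA described above, comparing the mean absolute prediction errors of the four valuation multiples. It is not the authors' code, and the error arrays are invented placeholders rather than the study's data.

```python
# One-way ANOVA on absolute prediction errors of four valuation multiples.
from scipy import stats
import numpy as np

# Absolute predicted error per stock-year for each multiple (hypothetical values).
errors = {
    "PE":  np.array([0.42, 0.35, 0.58, 0.71, 0.25]),
    "PBV": np.array([0.31, 0.44, 0.29, 0.52, 0.38]),
    "PS":  np.array([0.95, 1.10, 0.87, 1.25, 0.99]),
    "PCF": np.array([0.60, 0.72, 0.55, 0.81, 0.66]),
}

# H0: the mean absolute prediction errors of PE, PBV, PS and PCF are equal.
f_stat, p_value = stats.f_oneway(*errors.values())
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: at least one valuation model has a different mean error.")
```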
Conclusion
The research findings can be concluded as follows. Firstly, over the aggregate observed period, PBV appears as the best model for predicting LQ45 stock prices in the Indonesian equity market. However, the yearly analysis shows that time is an important factor to consider in nominating the best predictor, since the best predictor differs from year to year (PE for 2006 and 2010, PBV for 2007 and 2008, and PCF for 2009). Another factor to consider is the sector (industry). The results show that each industry has its own best predictor: PE and PBV are the best predictors for three sectors, PS is the best predictor for two sectors, and PCF is the best predictor for one sector. Since mixed results are found in identifying the best predictor, the hypothesis testing concludes that the prediction errors generated by the ratios are different.
Managerial Implication
The result implies that time and sector are two important factors that investors should consider when using the Relative Valuation Model to value LQ45 stocks, and that they should examine the company in order to avoid buying a stock that is overvalued. Although PBV appears to be the best predictor overall, it is suggested that investors also use the ratio that appears as the best predictor in the relevant sector. | 2019-05-19T13:03:22.765Z | 2013-11-30T00:00:00.000 | {
"year": 2013,
"sha1": "6109fc3eaaa948da98b5c970d761ac7db790074a",
"oa_license": "CCBY",
"oa_url": "https://journal.binus.ac.id/index.php/JAFA/article/download/837/783",
"oa_status": "HYBRID",
"pdf_src": "Neliti",
"pdf_hash": "3468c8cc7ad2c6e3f6e574d3f84c6f0062b4bcf3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
258108876 | pes2o/s2orc | v3-fos-license | Predicting risk of metastases and recurrence in soft-tissue sarcomas via Radiomics and Formal Methods
Abstract Objective Soft-tissue sarcomas (STSs) of the extremities are a group of malignancies arising from mesenchymal cells that may develop distant metastases or local recurrence. In this article, we propose a novel methodology aimed at predicting metastasis and recurrence risk in patients with these malignancies by evaluating magnetic resonance radiomic features that are formally verified through formal logic models. Materials and Methods This is a retrospective study based on a public dataset of T2-weighted fat-saturated or short tau inversion recovery MRI scans, with patients having "metastases/local recurrence" (group B) or "no metastases/no local recurrence" (group A) as clinical outcomes. Once radiomic features are extracted, they are included in formal models, on which the logic property written by a radiologist and his computer-scientist co-workers is automatically verified. Results Evaluating the efficacy of Formal Methods in predicting distant metastases/local recurrence in STSs (group A vs group B), our methodology showed a sensitivity and specificity of 0.81 and 0.67, respectively; this suggests that radiomics and formal verification may be useful in predicting future metastasis or local recurrence development in soft tissue sarcoma. Discussion The authors discuss the literature in order to position Formal Methods as a valid alternative to other Artificial Intelligence techniques. Conclusions An innovative, noninvasive and rigorous methodology can be significant in predicting local recurrence and metastasis development in STSs. Future work can assess the approach in multicentric studies to extract objective disease information, enriching the connection between quantitative radiomic analysis and radiological clinical evidence.
Lay Summary
Soft-tissue sarcomas of the extremities are a group of rare malignancies that may develop distant metastases, mainly in the lungs, or local recurrence. The 3-year survival rate for patients with metastases is low, and lower still for those who are not candidates for surgery; for these reasons, earlier identification of patients at high risk of developing distant metastasis could potentially allow implementing more effective therapies. The radiomic analysis was performed using a public database of short tau inversion recovery and T2-weighted fat-saturated images. In addition, instead of using Artificial Intelligence, this paper introduces the possibility of: • exploiting mathematical methods together with Radiomics to generate a second virtual opinion useful to radiologists and their co-workers when facing rare diseases; • localizing the most important slices as visual feedback for medical specialists; • addressing the limits of Artificial Intelligence techniques in the medical field.
The mathematical methods are called Formal Methods: they allow building a rigorous model of a patient from the values of its radiomic features. The results in this article are greater than 0.70: this suggests that radiomics and formal verification may be useful in predicting future metastasis or local recurrence development in soft tissue sarcoma.
BACKGROUND AND SIGNIFICANCE
Soft-tissue sarcomas (STSs) are a group of malignancies which include a wide number of subtypes, all arising from the mesenchymal cells. More than 50 different categories are reported, as stated by the World Health Organization.
STSs are rare tumors and represent about 1% of all cancers. 1 Despite their low incidence, these malignancies are worrisome; in fact, about 25% of STSs develop distant metastases, the main factor leading to death, and the metastatic percentage can reach about 50% for high-grade STSs. [2][3][4] The main site of metastatization is the lungs, which account for about 80% of lesions. 5 The prognosis of patients who develop metastases is generally poor: the 3-year survival rate is lower than 50% for those undergoing surgical metastasectomy and lower than 20% for those who are not candidates for surgery. Thus, an imaging method that potentially enables the prediction of metastasis occurrence in this set of patients might be of high benefit. The median survival time after the diagnosis of distant metastasis is approximately 11.6 months. 2 Identifying patients who are at high risk of developing distant metastasis at an early stage could potentially allow implementing more effective therapies. 6,7 Analysis of tumor heterogeneity on pathological samples obtained from biopsies may be challenging, and the information may depend on which part of the tumor is sampled. 8 Attempts to solve this issue and to obtain better information from STSs have been made with so-called "Radiomics", a field of imaging research which involves the analysis and extraction of large amounts of data from medical images, using advanced quantitative characteristics and specific image-processing algorithms. 9,10 With radiomics it may be possible to avoid biopsy and to identify the patients at greater risk of metastases or local recurrence. In addition, radiomics during follow-up can support clinicians in predicting the behavior of tumors.
Promising results have recently been reported for the use of radiomics in musculoskeletal oncology, mainly aimed at investigating conventional statistical methods and machine learning algorithms for musculoskeletal sarcomas, 11 discriminating benign from malignant spine tumors, 12 and predicting patients' prognosis and treatment response. 7,13-15 Nevertheless, a common problem of radiomics studies is the considerable number of imaging features evaluated by the software 9 and the poor understandability of the radiomic features in the clinical context. The first problem can have negative consequences for the accuracy of prediction models because some of these features may be redundant, useless, or highly correlated with each other.
Nowadays, various fields, including scientific research, data analysis, and medical diagnosis, utilize artificial intelligence (AI). Machine learning (ML) 16 represents the most commonly exploited AI technique in the medical field, as it enables the analysis of data, detection of patterns, and derivation of conclusions even without explicit input. However, this technique often encounters problems related to the complexity of AI models, which are frequently used as "black boxes" because the processing from input to output cannot be comprehended. 17 For these reasons, the current study applies Formal Methods in the radiomics pipeline: these are based on mathematical logic and reasoning 18,19 and allow modelling the radiomic data of a patient and verifying whether it satisfies the properties belonging to a disease state. Formal Methods are a group of logical and mathematical procedures used to confirm and demonstrate the accuracy and correctness of a computer system or software application.
Compared to AI techniques, Formal Methods make it possible to: • work with a reduced dataset of patients and/or images when computing the model; • produce an explainable and understandable model; • easily replicate the process, because a small number of parameters is required for the entire pipeline.
OBJECTIVE
The purpose of this study was to provide a Formal Method to predict distant metastases and local recurrence in STSs of the extremities. The proposed technique is noninvasive and does not require biopsy; indeed, radiomic features were computed from nonenhanced magnetic resonance images (MRI). To the best of our knowledge, Formal Methods for STSs have never been analyzed in the literature. MRI protocols were not uniform among the patients. From the whole MRI dataset, we selected only T2-weighted fat-saturated (T2FS) or short tau inversion recovery (STIR) MRI scans; patients were labelled "metastases/local recurrence" (group B) or "no metastases/no local recurrence" (group A) as clinical outcomes; 4 patients with upper limb soft tissue sarcoma were excluded. In terms of texture, T2FS and STIR images are considered to be similar and were therefore grouped together under one category. 7,21 Following these criteria, we included in our analysis a total of 47 patients out of an overall number of 51 patients. Two different MRI exams (from 2 different patients) were used for modelling the property that is automatically verified by the mathematical technique.
Segmentation and feature analysis
Segmentations for the exams were obtained from the above-mentioned public database; for this study, segmentations included visible edema. Every segmentation was visually evaluated by a radiologist with 7 years' experience and modified if necessary. The 3D Slicer software (version 4.13) was used for this step. 22 All the radiomic features were computed using Pyradiomics 3.0.1 (https://pyradiomics.readthedocs.io), a library for radiomic feature extraction from medical imaging. 23 The hyperparameters for feature extraction 25 were set as listed in Table 1; the remaining parameters were kept at their defaults. For each exam, features were extracted from each image separately.
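As a hedged illustration of this extraction step, the sketch below uses the Pyradiomics library named in the text. The settings and file names are assumptions standing in for the hyperparameters of Table 1 (not reproduced here) and for one exam with its segmentation; they are not the study's actual configuration.

```python
# Sketch: extract radiomic features from one image/mask pair with Pyradiomics.
from radiomics import featureextractor

settings = {
    "binWidth": 25,                      # assumed value, for illustration only
    "resampledPixelSpacing": [1, 1, 1],  # assumed value
    "interpolator": "sitkBSpline",
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableAllFeatures()

# Hypothetical file names for one exam and its segmentation (e.g. from 3D Slicer).
image_path, mask_path = "STS_001_T2FS.nrrd", "STS_001_seg.nrrd"
features = extractor.execute(image_path, mask_path)
for name, value in features.items():
    if not name.startswith("diagnostics_"):   # skip extraction metadata
        print(name, value)
```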
The features used for the Formal Method classifier were chosen manually, in order to describe the distribution of voxel intensities and the shape characteristics, with the support of Correlation Attribute Evaluation (Weka software version 3.8.5). 26,27 Figure 1 shows an example of the radiomic pipeline: 102 features were extracted from the segmentation of a left thigh pleomorphic sarcoma, and finally 2 first-order features and 3 Shape 2D features were selected. The next step was the discretization of the extracted features, to simplify their translation into formal models. The authors divided each feature into 3 intervals, low, basal, and up, using equal-width partitioning. The discretized features were transformed into a formal model according to the Calculus of Communicating Systems (CCS), a process calculus introduced by Robin Milner. 28,29 More detailed information about this process is reported in the following section.
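The following is a minimal sketch of equal-width discretization into three levels, mapped to the b1of3/b2of3/b3of3 labels used later for the formal models. The feature values are hypothetical; it is not the authors' implementation.

```python
# Equal-width discretization of a radiomic feature into three levels.
import numpy as np

def discretize_equal_width(values, n_bins=3):
    """Assign each value to one of n_bins equal-width intervals (low/basal/up)."""
    lo, hi = float(np.min(values)), float(np.max(values))
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize against the inner edges yields 0..n_bins-1; shift to 1..n_bins.
    bins = np.clip(np.digitize(values, edges[1:-1], right=True) + 1, 1, n_bins)
    return [f"b{b}of{n_bins}" for b in bins]

sphericity_all_patients = np.array([0.41, 0.55, 0.62, 0.48, 0.71])  # made-up values
print(discretize_equal_width(sphericity_all_patients))
# -> ['b1of3', 'b2of3', 'b3of3', 'b1of3', 'b3of3']
```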
Formal Methods
Formal Methods 18,19,30,31 are techniques derived from Computer Science to verify the correctness of critical computer systems. Thanks to their mathematical theory, they allow building a formal and rigorous representation of a system. This methodology is widely used in Cybersecurity, [32][33][34] Bioinformatics, 35 and Computer Science to verify the safety of complex system behaviors where there is the possibility of economic losses or deaths (such as automated air traffic management or banking transactions).
In the current study, instead of a critical computer system, the authors considered the state of health of the patient: the methodology aims to verify the presence or absence of a disease. After extracting the radiomic features, their numerical values were discretized into 3 levels; according to the keywords used in this methodology, a low level is represented by b1of3, a medium level by b2of3, and the highest level by b3of3. In order to create the formal model, the radiomic features (Sphericity, Kurtosis, Skewness, Elongation, and Mesh Surface, as illustrated in the example below) were combined using the combination operator represented by the "." symbol, as shown in Figure 2.
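The short sketch below suggests how one patient's discretized features could be chained with the "." combination operator into a CCS-style model, in the spirit of Figure 2. The feature levels and the exact naming of the actions are invented for illustration; they are not taken from the study's models.

```python
# Compose a patient's discretized features into a CCS-style process description.
patient_levels = {
    "sphericity":  "b3of3",   # hypothetical levels for one patient
    "kurtosis":    "b1of3",
    "skewness":    "b2of3",
    "elongation":  "b3of3",
    "meshsurface": "b3of3",
}

def to_ccs_model(levels: dict) -> str:
    """Concatenate feature actions with the '.' combination operator."""
    actions = [f"{feature}{level}" for feature, level in levels.items()]
    return ".".join(actions) + ".nil"   # 'nil' terminates the process in CCS

print(to_ccs_model(patient_levels))
# sphericityb3of3.kurtosisb1of3.skewnessb2of3.elongationb3of3.meshsurfaceb3of3.nil
```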
The entire sequence of clinical or radiomics features is represented by a "model" (one model is generated for each subject or patient). The pattern of a disease is represented through a "property" or "formula"; the property is computed by analyzing the common pattern of the disease with the aid of mathematical logic algorithms.
For instance, a pattern can be composed of 2 consecutive values a and b, followed, even if not immediately, by another 2 consecutive values c and d. Such a statement can conveniently be expressed as a temporal logic formula. This representation is called a "property" and, in the case of an MRI exam, each row represents the discriminant radiomic feature values that correspond to the presence of a certain disease mark. Figure 3 gives a practical example of a property with a Shape feature called "Sphericity" and a First-order feature called "Kurtosis". Their numerical values are described through the discretization into 3 levels and different operators: • < and > are used to specify which action is performed; • _ is used to perform an 'OR' combination of the values; • min X is a function which defines when the rule is satisfied; 36 • tt and ff indicate the termination condition of the property.
As radiomics studies on a certain disease accumulate, researchers can state, for example, that a low level of Kurtosis is indicative of a high risk of metastases or local recurrence. Normally, the property is verified on an MRI exam independently of the number of slices it contains. The agent responsible for verifying the presence or absence of the property is called the "Model checker": 18 it is a powerful automatic reasoning technique that allows a rigorous verification of the property on the model of the patient. Practically, this agent takes as input the model of the patient and the property describing the disease status, and verifies whether the model satisfies the property; finally, the agent concludes with a simple result: the output is true if the model satisfies the property, otherwise it is false. If the output is true, the patient is affected by the disease status that the property describes (eg, the risk of metastases). If the result is false, researchers know the patient has a negligible risk of metastases, and the methodology also returns a counterexample explaining why the model is considered false, which increases the understandability of the technique. In addition to the previous advantages, the use of Formal Methods can also help to localize the site of the sarcoma. Being a mathematical method, it makes it possible to verify in which parts of the model the property is satisfied. In radiological terms, this functionality turns into a localization of the disease in the radiological exam. An example of the whole workflow, including radiomics and Formal Methods, is described in Figure 4.
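The toy sketch below conveys the idea of the model-checking step: the required level for each feature is checked against a patient model, and a counterexample is returned when the check fails. It is a simplified stand-in for the temporal-logic model checker used in the study, not its implementation, and the allowed levels are assumed for illustration.

```python
# Toy "model checker": verify a property (allowed feature levels) on a patient model.
def check_property(model: dict, prop: dict):
    """Return (True, []) if every feature matches an allowed level,
    otherwise (False, list_of_violations) as a counterexample."""
    violations = []
    for feature, allowed_levels in prop.items():
        level = model.get(feature)
        if level not in allowed_levels:
            violations.append(f"{feature}={level}, expected one of {sorted(allowed_levels)}")
    return (len(violations) == 0, violations)

# Allowed levels per feature (assumed, loosely in the spirit of Tables 3 and 4).
disease_property = {
    "sphericity":  {"b3of3"},            # high
    "kurtosis":    {"b1of3"},            # low
    "elongation":  {"b2of3", "b3of3"},   # medium or high ('OR' via the _ operator)
    "meshsurface": {"b3of3"},            # high
}

patient_model = {"sphericity": "b3of3", "kurtosis": "b1of3",
                 "elongation": "b2of3", "meshsurface": "b3of3"}
satisfied, counterexample = check_property(patient_model, disease_property)
print("property satisfied:", satisfied, counterexample)
```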
Clinical data
Our study population included 47 patients (Table 2 and Table "DBInformation" provided in the Supplementary Materials section). MRI protocols were not uniform and only T2FS or STIR sequences were selected. Two exams were acquired in the sagittal plane, 4 in the coronal plane, and the remaining 41 in the axial plane. More details regarding the MRI acquisition protocols are reported in Table "MRIAcquisition" provided as Supplementary Materials.
In the public dataset, no individual patient had both STIR and T2FS sequences available; therefore, we selected the only available fluid-sensitive sequence. 7,21
Segmentation and feature analysis
After visual assessment, 43 segmentations were retained and 4 segmentations were manually changed, in order to better delineate the profile of the lesion.
From the segmentations, a total of 102 radiomics features were extracted from the T2FS or STIR images. For formal modelling, we considered the 5 selected features (Sphericity, Kurtosis, Skewness, Elongation, and Mesh Surface).
Formal verification and statistical analysis
After generating the CCS models from all 47 patients, the disease property was generated by a radiologist and 2 computer science researchers, looking at 2 different exams, and it is shown in Table 3. Please note: in Formal Methods there is no division of training and testing, only the creation of the models, properties and their verification on the models. Formal Methods do not learn from any patterns or behaviors.
The following metrics were considered for evaluating the performance of the property in predicting the development or not of metastases/local recurrence (group B vs group A): specificity, sensitivity (also called recall), accuracy, positive predictive value and negative predictive value. Intercorrelation among the selected features was calculated with the Spearman correlation coefficient. Furthermore, the clinical utility indexes (CUI) were calculated to take into account both occurrence and discrimination. 37 The value of the CUI ranges from 0 to 1.
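A minimal sketch of these evaluation metrics is given below, using a hypothetical confusion matrix rather than the study's counts. The CUI formulas (CUI+ = sensitivity × PPV, CUI− = specificity × NPV) follow the usual definition of the clinical utility index and are an assumption here, as the paper does not spell them out.

```python
# Classification metrics, clinical utility indexes and feature intercorrelation.
from scipy.stats import spearmanr

tp, fn, tn, fp = 21, 5, 14, 7          # made-up counts, not the study's results

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
ppv         = tp / (tp + fp)            # positive predictive value
npv         = tn / (tn + fn)            # negative predictive value
cui_positive = sensitivity * ppv        # assumed CUI+ definition
cui_negative = specificity * npv        # assumed CUI- definition
print(sensitivity, specificity, accuracy, ppv, npv, cui_positive, cui_negative)

# Spearman intercorrelation between two selected features (hypothetical values).
elongation = [0.45, 0.62, 0.71, 0.38, 0.55]
sphericity = [0.41, 0.58, 0.69, 0.35, 0.57]
rho, p = spearmanr(elongation, sphericity)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```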
Explainability
Combining Radiomics and Formal Methods makes it possible to obtain a "second virtual opinion" for radiologists and clinicians. In addition, this method can also localize the slices where the property is satisfied, giving visual feedback, as shown in Figure 6. This can be very useful when facing difficult radiological exams where the main tumoral component is not clearly visible.
For example, in Figure 6 the localization method is used on patient STS_038 (from group B), and Table 4 shows which feature values are aligned with the formula.
DISCUSSION
This study provides a formal method to predict the development of distant metastases and local recurrence in STSs using MRI. The identified property obtained an accuracy of 0.74, a positive CUI of 0.606 and a negative CUI of 0.491. In the following section, we illustrate the choices in the materials and methods, the limitations, and recent articles regarding the same topic.
Table 3. Temporal logic properties used for the diagnosis of patients affected by sarcoma with local recurrence
For this work, 3 planes were considered: sagittal, coronal, and axial; the axial plane was the most frequent acquisition plane. We decided to include all exams, even if acquired in different planes, because current routine MRI protocols for STSs include all 3 orthogonal planes. Regarding segmentations, it was preferred to include edema inside the segmentations, because STS cells are also present histologically in the tissues beyond the tumor. 38 In addition, the ability to analyze tumor cells beyond the gross tumor volume has relevant implications, such as in treatment.
As regards Kurtosis and Skewness, according to references 40-43, positive skewness and higher kurtosis can reflect increased heterogeneity and poorer prognosis in several tumours; indeed, researchers in reference 44 showed that the presence of at least 2 out of 3 characteristics (heterogeneity, necrosis, and peritumoral enhancement) was a predictor of overall survival and metastasis-free survival in STSs. Regarding Elongation and Sphericity, researchers in reference 45 included both features in their Radiomics-T2FS model, aimed at differentiating low-grade from high-grade STSs.
The principal limitations of the proposed study are as follows.
The first limitation is the relatively small sample size. However, differently from other AI techniques, the proposed approach does not require many exams because there is not a training step.
Regarding the second limitation, Elongation and Sphericity have a Spearman correlation coefficient of 0.86. It means redundant data can be present in the proposed property; anyway, we decided to retain them both on the basis of the above-mentioned article. 45 The third limitation is the inclusion of patients with development of metastases (23 patients) and local recurrence (3 patients) in the same group B, considering these patients as a unique group.
The fourth limitation is due to differences in histology. According to references 46 and 47, the risk of distant metastasis in STSs ranges from 20% to almost 100% based on grading and histological type. Histology effect has not been investigated in this study and it could be explored in further studies.
The fifth limitation is the heterogeneity of MRI protocols and future works will verify the stability of the proposed approach through various image acquisition protocols.
Regarding the state-of-the-art, researchers in reference 48 trained a radiomics score for metastatic relapse-free survival in 35 patients with myxoid/round cell liposarcomas. The model, combining the radiomics score and relevant radiological features, achieved an AUC of 0.925.
Researchers in reference 49 found 5 contrast enhancement MRI radiomics models for predicting metastatic relapse-free survival, using 50 patients having high-grade sarcomas; the model with the highest integrative AUC obtained a value of 0.87.
Authors in reference 50 built radiomics-based models to predict metastatic relapse at 2 years with a training data-set of 50 patients. On the testing cohort (20 patients), the best supervised model obtained an accuracy of 0.75.
In reference 51, the authors constructed and validated a radiomics method for prediction of distant metastasis in STSs. They used a training dataset with 54 sarcomas and a testing dataset with 23 sarcomas. The highest AUC and accuracy obtained were 0.902 and 0.913, respectively.
The above-mentioned articles 48,49,51 considered only one histotype or the same MRI scanner. Differently, the proposed study included different histologies and different MRI scanners (further details regarding histological types and MRI scanner protocols are provided in "DBInformation" and "MRIAcquisition"-Supplementary Materials section).
Researchers in reference 7 used the same public data-set as the current study. They combined texture features from FDG-PET and MRI imaging to assess lung metastasis risk in STSs in 51 patients; the model achieved an AUC, sensitivity and specificity of 0.984, 0.955, and 0.926, respectively.
Authors in reference 52 built a model for the prediction of lung metastases in STSs, by optimizing MR and PET image acquisition protocols. From the same public data-set of our study, the researchers selected 30 patients for their research and the model obtained an AUC of 0.89.
However, both the previous articles 7,52 did not validate their results on independent datasets. Table 4. Comparison between the values described in the property and those found in the radiological examination
Property values: Sphericity = High, Kurtosis = Low, Skewness = -, Elongation = Medium/High, Mesh Surface = High.
The above literature confirms the novelty of the proposed Formal Method approach, which can be considered a valid alternative to other AI techniques. Moreover, even if the present results are slightly lower than the state of the art, the proposed method is highly available, since it is based only on routine MRI protocols.
The aim of AI is to develop computer systems capable of reasoning and contributing to various fields, such as interpreting natural language, 53 perceiving sensory information, and learning new information. These systems should emulate human intellect, performing tasks and improving their ability to perform them over time.
The most commonly used AI approaches are ML and deep learning (DL), which are often used interchangeably. ML 16 is a broad category of approaches and algorithms that analyze data and draw conclusions. DL, on the other hand, is a subset of ML that uses artificial neural networks to analyze extremely complex data. Creating and utilizing a ML system can often pose challenges and difficulties. The first issue may be the lack of data, as ML requires a large amount of training data to be effective; but finding relevant and high-quality data to train the computer model can be difficult. Furthermore, overfitting and underfitting can arise due to noise or unrepresentative data in the training dataset, leading to ML models that are unable to accurately generalize predictions or classifications to new data. Even with a perfect dataset, interpreting the results of an ML system can sometimes be challenging because the model may be difficult to explain. 17 Formal Methods can be utilized to analyze and evaluate the correctness and accuracy of computer systems by means of mathematical modeling of system behavior and the verification of specified properties. Formal Verification, Theorem Proving, Program Synthesis, and Model Checking are examples of Formal Methods techniques used to demonstrate a computer system's correctness with respect to specific requirements. 18 By enabling errors in the software to be detected and resolved during development, formal approaches can enhance the dependability and security of computer systems, minimizing the probability of catastrophic errors or system failures.
In conclusion, despite the limitations, the current study suggests that Formal Methods can provide beneficial assistance for personalized medicine. In fact: • the availability of only small datasets does not affect the robustness of the model and therefore the reliability of the results; • the construction through rigorous mathematical methods makes it possible to understand how the property is produced and what it means (avoiding the risk of a "black box" approach); • the entire process is supervised by radiologists and AI experts.
CONCLUSIONS
The proposed approach, based on Formal Methods, can be an alternative tool to predict the risk of local recurrence and metastases in STSs. If the data are confirmed by further validation, this technique may assist physicians in choosing the appropriate treatment for STSs and potentially improve patient survival. Future work could develop these mathematical methods further to extrapolate the objective characteristics of the disease independently of the MRI scanner.
SUPPLEMENTARY MATERIAL
Supplementary material is available at JAMIA Open online. | 2023-04-14T05:22:39.817Z | 2023-04-06T00:00:00.000 | {
"year": 2023,
"sha1": "814bab1888502b1247132ae3fdea56b3e1339a86",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jamiaopen/article-pdf/6/2/ooad025/49871610/ooad025.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "814bab1888502b1247132ae3fdea56b3e1339a86",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7316590 | pes2o/s2orc | v3-fos-license | Specific quorum sensing-disrupting activity (AQSI) of thiophenones and their therapeutic potential
Disease caused by antibiotic resistant pathogens is becoming a serious problem, both in human and veterinary medicine. The inhibition of quorum sensing, bacterial cell-to-cell communication, is a promising alternative strategy to control disease. In this study, we determined the quorum sensing-disrupting activity of 20 thiophenones towards the quorum sensing model bacterium V. harveyi. In order to exclude false positives, we propose a new parameter (AQSI) to describe specific quorum sensing activity. AQSI is defined as the ratio between inhibition of quorum sensing-regulated activity in a reporter strain and inhibition of the same activity when it is independent of quorum sensing. Calculation of AQSI allowed to exclude five false positives, whereas the six most active thiophenones (TF203, TF307, TF319, TF339, TF342 and TF403) inhibited quorum sensing at 0.25 μM, with AQSI higher than 10. Further, we determined the protective effect and toxicity of the thiophenones in a highly controlled gnotobiotic model system with brine shrimp larvae. There was a strong positive correlation between the specific quorum sensing-disrupting activity of the thiophenones and the protection of brine shrimp larvae against pathogenic V. harveyi. Four of the most active quorum sensing-disrupting thiophenones (TF 203, TF319, TF339 and TF342) were considered to be promising since they have a therapeutic potential of at least 10.
To date, several QS inhibitors have been described or claimed 6,7 . Brominated furanones are the most intensively studied QS inhibitors, and these compounds have been reported to disrupt QS in various Gram-negative bacteria 6,13 . These compounds inhibit QS in V. harveyi by decreasing the DNA-binding activity of the quorum sensing master regulator LuxR 14 . Unfortunately, these brominated furanones are toxic to higher organisms 15 , which means that they will not be safe for practical applications. More recently, brominated thiophenones, sulphur analogues of the brominated furanones with the same mode of action, have been synthesized, and these compounds were found to be more active than the corresponding furanones [16][17][18] . One of these compounds, (Z)-4-((5-(bromomethylene)-2-oxo-2,5-dihydrothiophen-3-yl)methoxy)-4-oxobutanoic acid (TF310, Fig. 1) has been reported to show the highest therapeutic index of all QS-disrupting compounds tested in our V. harveyi -brine shrimp model thus far, with complete protection against the pathogen at 2.5 μ M and severe toxicity only being observed at 250 μ M 18 . Based on these promising results, in the present study, we aimed at determining quorum sensing-disrupting activity, protective effect and toxicity of 20 thiophenones (Fig. 1). Furthermore, we propose a new parameter to describe specific quorum sensing-inhibitory activity, A QSI , defined as the ratio between inhibition of quorum sensing-regulated activity and inhibition of the same activity when independent of quorum sensing. Most claims with respect to quorum sensing inhibitors are based on experiments with quorum sensing signal molecule reporter strains. We recently argued that these experiments are prone to bias due to other effects compounds may have on reporter strains, and therefore, that good control experiments are required in order to exclude false positives 7 . The use of the proposed parameter A QSI is a straightforward and elegant way to exclude false positives by taking into account (potential) bias related to the use of quorum sensing reporter strains.
Results
Impact of the thiophenones on quorum sensing-regulated bioluminescence of V. harveyi. In a first experiment, we determined the impact of the thiophenones on QS-controlled bioluminescence of V. harveyi. Wild type V. harveyi was grown to high cell density in order to activate QS-controlled bioluminescence, after which the thiophenones were added at 0.25, 1, 5 and 10 μ M, respectively. Bioluminescence was measured 1 h after the addition of the thiophenones and our results revealed that most of the compounds were able to block bioluminescence in wild type V. harveyi in a concentration-dependent way. Fifteen of the 20 compounds (TF103, TF113, TF116, TF125, TF203, TF307, TF312, TF319, TF332, TF339, TF341, TF342, TF346, TF347 and TF403) were found to inhibit bioluminescence at a concentration of 0.25 μ M and higher, while TF123 and TF301 significantly reduced the bioluminescence from 5 μ M onwards. Additionally, TF203 could completely inhibit the QS-regulated bioluminescence at 5 μ M, and TF301, TF332 and TF341 completely blocked the bioluminescence at 10 μ M. Finally, TF345, TF404 and TF405 showed no effect on the bioluminescence even at the highest concentration tested (Fig. 2). The compounds had no effect on the growth of V. harveyi at the concentrations used (Supplementary Figure 1).
We previously argued that the identification of QS inhibitors using QS molecule reporter strains is prone to bias due to other effects compounds may have on the reporter strains and that therefore, adequate control experiments are required 19 . In order to verify that the effect observed in the bioluminescence tests with wild type V. harveyi was not due to toxicity, we investigated the effect of the thiophenones on bioluminescence of strain JAF548 pAKlux1, in which bioluminescence is independent of the QS system 18 . At 0.25 μ M, none of the compounds, except for TF103, TF113 and TF116, blocked bioluminescence of JAF548 pAKlux1 (Fig. 3), suggesting that the luminescence inhibitory effect of the three compounds was not due to a specific inhibition of QS at this concentration. Moreover, TF123, TF125, TF301, TF312, TF332 and TF403 inhibited the bioluminescence of JAF548 pAKlux1 at 1 μ M or higher, while TF203, TF341 and TF346 blocked the bioluminescence from 5 μ M onwards. TF307, TF319, TF339, TF345, TF347, TF404 and TF405 showed no effect on the bioluminescence of JAF548 pAKlux1 at the highest concentration tested in this study (Fig. 3), indicating that they have low toxicity.
For the compounds that showed significant inhibition of quorum sensing-regulated bioluminescence, we further calculated the specific quorum sensing-inhibitory activity A QSI as the ratio between the percentage inhibition of quorum sensing-regulated bioluminescence (in wild type V. harveyi) and the percentage inhibition of quorum sensing-independent bioluminescence (in JAF548 pAKlux1). The specific quorum sensing-inhibitory activity of compounds TF103, TF113, TF116, TF123 and TF301 was smaller than 2 at any of the concentrations tested (Table 1), and these compounds were therefore considered as false positives. Six compounds showed a high specific quorum sensing activity (> 10) for at least one of the concentrations tested: TF203, TF307, TF319, TF339, TF342 and TF403. Impact of the thiophenones on the virulence of V. harveyi in the gnotobiotic brine shrimp model. Previous work in our lab showed that the virulence of V. harveyi in the gnotobiotic brine shrimp model system is regulated by QS 11 . Therefore, we further investigated whether the thiophenones could protect brine shrimp larvae from the pathogen in in vivo challenge tests. Fifteen compounds (TF113, TF125, TF203, TF307, TF312, TF319 TF332, TF339, TF341, TF342, TF346, TF347, TF403, TF404 and TF405) significantly increased the survival of brine shrimp larvae challenged to V. harveyi at 0.25 μ M (Fig. 4). Fourteen of these significantly increased the survival of challenged brine shrimp at 1 μ M (TF113 induced high mortality at this concentration), and 11 of them plus TF301 significantly increased the survival of challenged brine shrimp at 5 μ M (TF332, TF341 and TF403 seemed to be toxic). At 10 μ M, only 6 compounds were able to increase the survival of challenged brine shrimp larvae (TF125, TF307, TF346, TF347, TF404 and TF405), whereas for most compounds, high mortality was observed. Seven compounds offered a complete protection to the brine shrimp larvae (no significant difference in survival when compared to unchallenged larvae): TF125 ( Subsequently, in order to evaluate the relation between the inhibition of quorum sensing and the protection of brine shrimp larvae by thiophenones at different concentrations, we determined the correlation between A QSI and survival of brine shrimp larvae challenged with V. harveyi. The correlation is significant at all concentrations tested: Toxicity of the thiophenones to axenic brine shrimp larvae. We determined toxicity to axenic brine shrimp larvae at 0.25, 1, 5 and 10 μ M. At 0.25 μ M, TF103 and TF113 showed appreciable toxicity (causing > 25% mortality). At 1 μ M, 4 compounds induced appreciable mortality, including TF103, TF113, TF116 and TF123. At 5 μ M, TF332, TF341 and TF403 also started to induce appreciable mortality, and at 10 μ M, only 7 compounds showed no toxic effect (TF125, TF307, TF345, TF346, TF347, TF404 and TF405), whereas all other compounds except for TF116 and TF319 induced (almost) complete mortality (Fig. 5).
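A short sketch of the A_QSI calculation described above is given below: the ratio between the percentage inhibition of QS-regulated bioluminescence (wild-type V. harveyi) and the percentage inhibition of QS-independent bioluminescence (JAF548 pAKlux1). The luminescence readings, expressed as percent of the untreated control, are hypothetical.

```python
# A_QSI = %inhibition of QS-regulated luminescence / %inhibition of QS-independent luminescence.
def percent_inhibition(treated_percent_of_control: float) -> float:
    return 100.0 - treated_percent_of_control

def aqsi(wildtype_percent: float, jaf548_percent: float) -> float:
    qs_dependent   = percent_inhibition(wildtype_percent)             # wild type
    qs_independent = max(percent_inhibition(jaf548_percent), 1e-6)    # avoid division by zero
    return qs_dependent / qs_independent

# Example: a thiophenone reducing wild-type luminescence to 20% of control
# while leaving JAF548 pAKlux1 at 95% of control.
print(f"A_QSI = {aqsi(20.0, 95.0):.1f}")   # -> 16.0, i.e. a specific QS inhibitor (A_QSI > 10)
```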
In addition, we also calculated correlations between inhibition of bioluminescence in JAF548 pAKlux1 and survival of axenic brine shrimp larvae treated with thiophenones. The results showed that the correlation is significant at all the concentrations tested in this study. Therapeutic potential of the thiophenones. In order to determine the therapeutic potential of each of the compounds, we calculated the ratio between the lowest concentration at which they increased the survival of challenged brine shrimp larvae to more than 75% and the lowest concentration at which they caused more than 25% mortality in axenic brine shrimp. Nine thiophenones showed a good therapeutic potential (ratio of at least 10): TF203, TF319, TF339, TF341, TF342, TF346, TF347, TF404 and TF405 (Table 1).
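The sketch below works through this therapeutic-potential ratio for one hypothetical compound. The ratio is taken here as the lowest toxic concentration divided by the lowest protective concentration, so that larger values indicate a wider safety margin (an assumption consistent with the "ratio of at least 10" criterion above); the dose-response values are invented for illustration.

```python
# Therapeutic potential = lowest concentration causing >25% mortality in axenic larvae
#                         / lowest concentration raising survival of challenged larvae above 75%.
def lowest_conc(dose_response: dict, predicate) -> float:
    """Lowest tested concentration whose response satisfies the predicate."""
    hits = [c for c, value in sorted(dose_response.items()) if predicate(value)]
    return hits[0] if hits else float("inf")

survival_challenged = {0.25: 80, 1: 85, 5: 90, 10: 40}   # % survival vs V. harveyi (hypothetical)
mortality_axenic    = {0.25: 5, 1: 10, 5: 20, 10: 60}    # % mortality, no pathogen (hypothetical)

protective_conc = lowest_conc(survival_challenged, lambda s: s > 75)   # 0.25 uM
toxic_conc      = lowest_conc(mortality_axenic,    lambda m: m > 25)   # 10 uM
therapeutic_potential = toxic_conc / protective_conc
print(f"Therapeutic potential = {therapeutic_potential:.0f}")          # -> 40
```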
Discussion
Due to the rise of antibiotic resistance, a significant research effort currently is devoted to the development of novel methods to control bacterial disease, and quorum sensing inhibition is one of the promising alternatives that are explored. Many compounds have been claimed to be able to inhibit quorum sensing in various pathogens 6 . Brominated furanones are one of the most intensively studied classes of quorum sensing inhibitors with a relatively well-defined mode of action 14,20 . However, these compounds are toxic to higher organisms, which hampers their application to control disease. Brominated thiophenones, the sulphur analogues of brominated furanones, were recently reported to be more effective and less toxic 18 . Given the promising results obtained before with a brominated thiophenones, in this study, we investigated the quorum sensing-inhibitory activity and therapeutic potential of 20 synthetic thiophenones in a highly controlled model system. Seventeen of the thiophenones included in this study were found to significantly decrease bioluminescence in wild type V. harveyi at one of the concentrations tested, and 12 of them decreased luminescence at the lowest concentration (0.25 μ M). One of the factors that have resulted in a boost of the quorum sensing research is the development of signal molecule reporter strains, which demonstrate a certain phenotype in response to quorum sensing molecules. An important limitation to the use of such reporter strains is that the quorum sensing-regulated phenotypes are often co-dependent on other factors and/or depend on the metabolic activity of the cells, leading to false positives 7 . In order to solve this problem, we propose the use of the specific quorum sensing-inhibitory activity A QSI , defined as the ratio between the inhibition of a quorum sensing-regulated phenotype (bioluminescence in this study) and the inhibition of the same phenotype in the same bacterium, but independent of quorum sensing, as a new parameter. We calculated the specific quorum sensing-inhibitory activity A QSI for the thiophenones that showed significant inhibition of quorum sensing-regulated bioluminescence in wild type V. harveyi, and this allowed us to identify 5 compounds (TF103, TF113, TF116, TF123 and TF301) as false positives. On the other hand, six thiophenones showed a high specific quorum sensing activity (A QSI > 10) for at least one of the concentrations tested: TF203, TF307, TF319, TF339, TF342 and TF403.
It has been proposed that the 5-bromomethylene side-chain of quorum sensing-inhibiting thiophenones enables them to bind to nucleophilic amino acid residues in LuxR, the quorum sensing master regulator in V. harveyi, thereby decreasing the ability of LuxR to activate quorum sensing target genes 18,21 . Most of the thiophenones that inhibited quorum sensing in V. harveyi (with A QSI > 2) also possess the bromomethylene side-chain, except for TF203, TF319 and TF342. These compounds, however, contain a 5-side chain that can have the same function as the bromomethylene moiety (i.e. binding to nucleophilic amino acid residues). Some of the compounds with a 5-bromomethylene side chain (or 5-chloromethylene or 5-iodomethylene, which have the same activity) showed no or a low specific quorum sensing inhibitory activity (i.e. TF103, TF113 and TF301). This is probably due to a low specificity of these compounds and the toxic activity that results from this (they did inhibit quorum sensing-regulated bioluminescence, but also inhibited quorum sensing-independent bioluminescence). Remarkably, the two compounds with a benzothiophenone core (TF404 and TF405) did not inhibit quorum sensing-regulated bioluminescence (despite having a bromomethylene side chain). Although the reason for this is not yet clear, we hypothesise that it might be caused by either a decreased reactivity of the bromomethylene moiety due to the presence of the benzene ring (making it less susceptible to nucleophilic attack), or steric hindrance due to the rigid benzene moiety attached to the thiophenone ring.
We further determined the therapeutic potential of the compounds in a highly controlled model system with brine shrimp larvae, and found that 9 thiophenones (TF203, TF319, TF339, TF341, TF342, TF346, TF347, TF404 and TF405) showed a good therapeutic potential. Four of these (TF203, TF319, TF339 and TF342) also showed a Luminescence measurements were performed 1 h after the addition of the thiophenones. Bioluminescence in the control treatment was set at 100% and the other treatments were normalized accordingly. The error bars represent the standard deviation of three replicates. Asterisks indicate significant differences when compared to untreated V. harveyi JAF548 pAKlux1 (independent samples T-test; *P < 0.05; **P < 0.01; ***P < 0.001).
Four of the thiophenones with good therapeutic potential (TF203, TF319, TF339 and TF342) also showed a high specific quorum sensing-inhibitory activity. Another of the compounds with good therapeutic potential (TF341) showed toxicity to V. harveyi, which might explain the protective effect in the brine shrimp assay. The thiophenones increased the survival of challenged brine shrimp larvae at concentrations similar to those needed to block quorum sensing-regulated bioluminescence in vitro. Furthermore, our results also revealed a significant positive correlation between the specific quorum sensing inhibitory activity and the protection of brine shrimp larvae, suggesting that the quorum sensing-inhibitory activity largely determines the protective effect of these compounds. Remarkably, two of the compounds with good therapeutic potential (TF404 and TF405) showed no quorum sensing-inhibitory activity at all, nor did they show any toxicity to V. harveyi. This suggests that these compounds interfere with virulence gene expression through a mechanism that is distinct from the three-channel quorum sensing system. Finally, we found that there was a strong and positive correlation between toxicity of the thiophenones towards brine shrimp larvae and toxicity towards V. harveyi (as revealed by the inhibition of quorum sensing-independent bioluminescence), suggesting that toxicity to V. harveyi could be used as a good indicator for toxicity to higher organisms. Most of the thiophenones started to cause severe mortality in brine shrimp larvae at a concentration of 10 μM. However, low toxicity was observed for TF125, TF307, TF345, TF346, TF347, TF404 and TF405 at this concentration. This might be attributed to the length and position of the side-chain, since it has been reported that the side-chain could significantly affect the toxicity of thiophenones by interfering with their binding to essential proteins 18,22 . We indeed found that the compounds with the largest side chains at the 3-position of the thiophenone ring (e.g. TF339 and TF341) showed the lowest toxicity. In addition to this, the two compounds with a benzothiophenone core (TF404 and TF405) also showed low toxicity.
In conclusion, in this study, we determined the quorum sensing-inhibiting activities of 20 new synthetic thiophenones towards the quorum sensing model bacterium V. harveyi. We proposed the new parameter A QSI to determine specific quorum sensing inhibitory activity based on experiments with a quorum sensing reporter strain. We used this parameter to analyse data obtained with bioluminescence of V. harveyi as reporter phenotype, but it can easily be applied to any other signal molecule reporter strain. The use of A QSI allowed us to exclude 5 false positives out of the 17 compounds that were able to inhibit quorum sensing-regulated bioluminescence in V. harveyi. We identified 6 thiophenones that were able to inhibit quorum sensing at submicromolar levels. Further, we determined the protective effect and toxicity of the thiophenones in a highly controlled gnotobiotic model system with brine shrimp larvae. There was a strong positive correlation between the specific quorum sensing-disrupting activity of the thiophenones and the protection of brine shrimp larvae against pathogenic V. harveyi, and 6 quorum sensing-disrupting thiophenones were considered to be highly promising for controlling bacterial disease.
Methods
Thiophenones. The structures of the thiophenones used in this study are shown in Fig. 1. The compounds were synthesised as described before 17,23 . All the thiophenones were dissolved in pure ethanol at 5 mM and stored at −20 °C.
Bacterial strains and growth conditions. V. harveyi wild type strain ATCC BAA-1116 (recently reclassified as V. campbellii 24 ) and mutant strain JAF548 pAKlux1 18 were used in this study. In the latter strain, bioluminescence is independent of QS, and thus this strain is used as a control in order to verify that inhibition of luminescence in V. harveyi is specifically caused by QS inhibition. Both strains were cultured in Luria-Bertani medium containing 35 g/L of sodium chloride (LB35) at 28 °C under constant agitation (100 min−1). Cell densities were measured spectrophotometrically at 600 nm.
Bioluminescence assays. V. harveyi wild type and mutant strains were cultured overnight and diluted to an OD600 of 0.
Survival of the unchallenged larvae was set at 100% and the other treatments were normalized accordingly. Asterisks indicate significant differences when compared to untreated brine shrimp larvae (independent samples T-test; *P < 0.05; **P < 0.01; ***P < 0.001).
for 1 h. Sterile cysts and larvae were obtained by decapsulation according to Marques et al. (2004). Briefly, 660 μ l of NaOH (32%) and 10 ml of NaOCl (50%) were added to the hydrated cyst suspension to facilitate decapsulation. The process was stopped after 2 min by adding 14 ml of Na 2 S 2 O 3 (10 g L −1 ). Filtered (0.22 μ m) aeration was provided during the reaction. The decapsulated cysts were washed with filtered (passed through 0.22-μ m membrane filter) and autoclaved (moist heat at 121 °C for 15 min) artificial seawater (containing 35 g l −1 of instant ocean synthetic sea salt, Aquarium Systems, Sarrebourg, France). The cysts were resuspended in a 50-ml tube containing 30 ml of filtered, autoclaved seawater and hatched for 28 h on a rotor (4 min −1 ) at 28 °C with constant illumination (c. 2000 lux). The axenity of cysts was verified by inoculating one ml of culture water into 9 ml of marine broth and incubating at 28 °C for 24 h. After 28 h of hatching, batches of 30 larvae were counted and transferred to fresh, sterile 50-ml tubes containing 30 ml of filtered and autoclaved seawater. Finally, the tubes were returned to the rotor and kept at 28 °C. All manipulations were performed in a laminar flow to maintain sterility of the cysts and larvae.
Brine shrimp challenge tests. The impact of the thiophenones on the virulence of V. harveyi was determined in a standardized challenge test with gnotobiotic brine shrimp larvae as described by Defoirdt et al. (2005) with some modifications. A suspension of autoclaved LVS3 bacteria 25 in filtered and autoclaved seawater was added as feed at the start of the challenge test at 10^7 cells ml^-1, and V. harveyi was added at 10^5 CFU ml^-1. The thiophenones were added directly into the brine shrimp rearing water at different concentrations. Brine shrimp cultures to which only autoclaved LVS3 bacteria were added as feed were used as controls. The survival of the larvae was counted 48 h after the addition of the pathogens. Each treatment was carried out in triplicate and each experiment was repeated twice to verify the reproducibility. In each test, the sterility of the control treatments was checked at the end of the challenge by inoculating 1 ml of rearing water into 9 ml of marine broth and incubating the mixture for 2 days at 28 °C.
Statistics. Data analysis was carried out using the SPSS statistical software (version 21). Unless stated otherwise, all data were analysed using independent samples t-tests. | 2018-04-03T02:09:17.082Z | 2015-12-09T00:00:00.000 | {
"year": 2015,
"sha1": "ec43bbda68bf8f87d89d189123db95487a186710",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep18033.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "b163c8a654c8aa5b0dca9c5648b7478f03ac072c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247458879 | pes2o/s2orc | v3-fos-license | Interactive Learning System for Learning Calculus
Background IT tools have brought a new perspective to collaborative learning in which students do not just sit in a chair and absorb lecture content but instead participate in creating and sharing knowledge. However, existing augmented reality applications for learning calculus are limited in promoting human collaboration in learning. Purpose This research develops an interactive application for learning calculus that promotes human-system interaction via augmented reality (AR) and human-human interaction through chat functions. The study examines the effect of both interactivities on the learning experience and how that learning experience affects learning performance. Methods The research adopted a quasi-experimental study design and pre-/post-test data analysis to evaluate the effect of the interactivities on the learning experience and, consequently, the effect of the learning experience on learning performance. The subjects were exposed to the developed application for learning the calculus chapter "Solid of Revolution" in a controlled environment. The study validated its research framework through partial least squares path modelling and tested three hypotheses via pre- and post-test evaluation. Conclusions The results show that both interactivities affect the learning experience positively; human-human interactivity has a higher impact than human-system interactivity. It was also found that learning performance, as part of the learning experience, increased from pre-test to post-test.
Introduction
In traditional classroom environments, there exists a problem with student participation. 1 It has also been stated that the traditional method is inefficient, as it is mostly spoon-feeding, and students' analytical processes are absent due to a lack of peer interaction. [2][6][7][8][9][10] Calculus is one of the core subjects in computer science studies. Malaysian students pursuing diplomas and degrees in Information Technology take calculus subjects that include differentiation and integration. [13] In understanding the revolution of solids around an axis, visualization of the solid in question is very important. Spatially related 3D images cannot be drawn on a 2D board or projected over a computer. This hinders students in visualizing the solid and results in difficulties in conceptualizing the concept. After consulting with the subject teacher on what students struggle with most in learning calculus, it was suggested that students face difficulties in understanding the solid of revolution through the traditional classroom method, as it requires 3D visualization. [8][19] Under the theory that learners construct their own knowledge, Lev Vygotsky theorized that learning occurs through personalization and socialization. 20 It is therefore crucial to provide a function for learners to interact with one another. Unfortunately, the majority of AR learning applications focus only on providing content visualization, resulting in applications that are not holistic because they do not promote both types of interaction. 14,15,21,22 As such, an interactive AR application that facilitates both human-human and human-system interactivities and assists in conceptualizing and understanding math better is needed.
Interactive technology has made learning more personalized and kept students engaged through various applications. 23 This paper also suggested that interactive technology can make learning more active and intensive for students, who can communicate easily with each other and with the teacher. 23 Recently, the implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers, with the emphasis mainly on 3D visualization. Many have implemented AR in geometry for school students, 14,19,24 while others have implemented it for calculus. 12,13 In both cases, the authors focused on the immersive learning experience through AR, but no human-human interaction function was provided. This research aims to develop an augmented reality application that promotes interaction and spatial visualization for learning calculus and to evaluate the effect of the interactive learning system on learning performance.
Literature review
Implementation of interactive learning systems in class to engage students in learning has become a common focus in the pedagogical arena. A system that promotes interpersonal interaction and provides a sense of others' presence can be considered an interactive system. 4
Interactivity
In an attempt to increase class interactivity, different types of technologies have been implemented in class. These can be categorized as synchronous and asynchronous mediums of interaction. 25 Synchronous media are used for interaction during class, especially in carrying out in-class discussion, whereas asynchronous media facilitate learning remotely, which may happen after class. Interactive whiteboards 2 and virtual-reality-based systems for learning language 4 are used in synchronous learning. An AR-based interactive book 5 is a tool used in asynchronous learning. On the other hand, interactivity can also be categorized as human-system and human-human, as technologies allow both in a learning environment. 3 The framework for the interactive learning application is developed based on these two types of interaction. Human-human interactivity is further divided into teacher-student and student-student interactivity. The past studies in Table 1 show the interactivities promoted by different studies. These studies focused primarily on human-system interactivity, while a few provided human-human interactivity.
AR has become a trending technology in the pedagogical sector for providing students with an immersive experience. AR combines a real and a virtual experience to allow users real-time interaction. 16 The 3D representation of a virtual object in a real environment can make learning more engaging as it provides a visual learning experience. This can be helpful to the majority of students, as studies have found that there are more visual learners than auditory or kinesthetic learners. 26 Besides, haptic interaction happens when users are allowed to interact with the 3D object through AR technology, 27 which also addresses kinesthetic learners, as they learn through physical involvement. The past studies in Table 2 show the interactivities promoted by different AR studies.
Of the past studies depicted in Table 2, five papers adopted mobile AR systems, while the remaining three were desktop-based. Table 2 also shows that most systems solely focused on interactivity through AR and did not implement any human-human interactivity.
Learning Experience
Learning experience can be defined as the experience a learner goes through while learning content set by an institution. 28 Including active engagement and collaboration as a part of it affects learning performance. 29,30 Furthermore, the implementation of technology in class affects the learning experience positively.
Research Design and Participants
This study conducted a pre-assessment evaluation of the respondents' knowledge, skills, or understanding in the AR area related to the system before they started using it. 32 One of the numerous forms of quasi-experimental design is pre- and post-test research. "Quasi" refers to something that resembles an experimental study. Studies evaluating an educational curriculum, a treatment system or simulation training commonly apply pre- and post-test evaluation. 33 Since this study compares students' learning experiences and academic performance before and after using the AR system, a pre- and post-test quasi-experimental research method was chosen.
Convenience sampling is a nonprobability or non-random sampling method in which researchers can conveniently reach and recruit members of the target population for the study. 34 Usually, convenience samples of university students are used in academic surveys. 35 Since the respondents were students from a Malaysian university taking a Calculus subject, a nonprobability convenience sampling method was used for this study.
Sample size calculation was done following the 10 times rule, where the sample size is required to be at least 10 times the maximum number of structural paths directed towards a latent variable. 36 As such, the minimum sample for this study should be larger than 20, as two paths are directed towards the latent variable. The sample size for this research was 59, but after eliminating missing data and straight-line answers, data from 55 respondents were analyzed.
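To make the sampling rule concrete, the following minimal Python sketch (not part of the original analysis) applies the 10 times rule; the function name and values are illustrative only.

```python
def min_sample_size_10x(max_paths_into_a_construct: int) -> int:
    # "10 times rule": at least 10 x the largest number of structural paths
    # pointing at any single latent variable in the model.
    return 10 * max_paths_into_a_construct

# In this framework, LE receives two paths (from HHI and HSI), so the
# minimum is 20; the 55 usable responses comfortably exceed it.
print(min_sample_size_10x(2))  # 20
```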
Quantitative research designs include quasi-experiments, true experiments, causal-comparative research, survey research and experimental research. Quantitative research is likely to test theories by investigating the relationships among variables. 37 Therefore, the primary research method for this study was quantitative.
Structural equation modeling (SEM) combines multiple regression and factor analysis. It is employed to examine the relationships between a group of observable variables and latent concepts. There are two main types of SEM. In this study, instead of the traditional Covariance-Based SEM (CB-SEM), Partial Least Squares SEM (PLS-SEM) was chosen to analyze the data, based on its capability to handle complex models and small sample sizes. Data analysis using PLS-SEM was carried out by defining clear research objectives and hypotheses, data preparation, structural model analysis, hypothesis testing, structural model assessment, and predictive accuracy, effect size and relevance evaluation. 36 PLS-SEM is often chosen for longitudinal studies due to its effectiveness in handling small sample sizes, which are common in these studies, especially when compared to cross-sectional studies. 36 Figure 1 illustrates the research design of this study.
Research framework
This research has identified two types of interactivities: human-human and human-system. Figure 2 depicts the research framework of this study. An interactive learning system based on AR was developed. Human-system and human-human interactivities were included as part of the application functions. Human-system interaction was promoted by haptic interaction based on marker-based AR technology, whereas human-human interactivity was implemented through a discussion platform function.
The framework was developed to examine how these two interactivities affect the learning experience, the first dependent variable, and how the learning experience incurred by the promoted interactivities in turn affects students' academic performance.
This study hypothesizes: H1: Human-system interaction is positively associated with students' learning experience.
H2: Human-human interaction is positively associated with students' learning experience.
H3: Improvement of learning experience improves the performance of learning.
Survey Design and Procedures
This research was conducted in three phases. In the first phase, data were collected from students through a pre-questionnaire and a quiz before using the application; in the second phase, they used the application and explored AR; and in the third phase, post-test data were collected through the same questionnaire.
Figure 3 shows students exploring the AR function of the application.
Figure 3. Implementation of AR system in class.
The survey questionnaire was divided into three segments: section (A) contained demographic questions; section (B) evaluated the two independent variables; and section (C) measured learning performance.
Section A comprises six questions regarding participants' general information, including gender, ethnicity, age and whether they had used any educational AR application before. All the questions in this section were self-designed.
Section B evaluates the variables of the research framework and comprises self-developed questions and questions adopted from multiple sources. For the independent variable human-human interaction, measurement questions were formed based on the idea of peer-peer interaction and student-teacher interaction (Table 3). How interactive technology promotes both types of interactivities in class was also examined by another study that targeted the usage of clickers in a classroom. 3 For the second independent variable, human-system interaction, the questions were adopted 38 and are depicted in Table 4.
The idea of human-system interaction is based on how satisfied the users are with using a particular system. For this purpose, this study adopted measurement items covering suitability to the task (how efficiently the intended tasks can be performed), controllability, suitability for learning the usage of the application, and the self-descriptiveness of the system. 38 The learning experience variable was evaluated based on the questionnaire adopted from 31 and shown in Table 5. In their study, they measured learning motivation as part of the AR learning experience. Motivation was measured with a model of attention, relevance, confidence, and satisfaction. 39 The self-developed questionnaire is listed in Table 6. All the questions for the aforementioned variables were measured on a 5-point Likert scale.
Section C was developed by a calculus subject expert to evaluate learning performance via a quiz. The quiz included five true/false questions and one formative question on the solid of revolution chapter. The quiz was used in the pre- and post-test evaluation to measure the learning performance factor. To avoid question bias, the same questions were used, but the order of the true/false questions was changed to make sure students did not simply follow the same pattern of answers from the pre-test.
Data analysis techniques
This study conducted two types of analysis, using Smart PLS 3.0 for structural model assessment and SPSS 22 (IBM SPSS Statistics, RRID: SCR_019096) for the pre- and post-test comparison of variables.
R is an open-source alternative software (R Project for Statistical Computing, RRID: SCR_001905) for Smart PLS.
Demographic profile
This study collected 55 valid responses from the selected sample of 59 students from a Malaysian private university (Table 7). Out of the 55 respondents, 46 were male (83.6%) while 9 were female, constituting 16.4% of the total respondents. This result is in line with other research findings, as the gender ratio significantly skews towards males in the technology field. 40,41 Among the three main ethnic groups, 43 students were Chinese (78.2%), followed by six Malay and six Indian students, constituting 10.9% each. As the sample consisted of pre-university students, the age range was 18 to 22. The majority of students were 19 years old (60%), while 18 and 22 were the smallest age groups, constituting 3.6% each. Students' prior experience with similar types of learning applications indicates whether users are experienced with this type of application. It was found that the students scored the same grades in the pre-assessment, indicating that they had the same initial level of understanding in the studied area before using the system.
Inferential statistics
The study adopted a PLS-SEM approach to maximize the explained variance of the defined framework's endogenous construct. 42 Structural model analysis was used to test the hypotheses along with their predictive accuracy and effect size. All the constructs were measured with five or more indicators. As the model is reflective, the constructs are reflective too. "Internal consistency reliability", "convergent validity" and "discriminant validity" are the three criteria used to assess the constructs. 37
Internal consistency reliability
Internal consistency reliability is used to investigate the reliability of the indicators that measure a latent variable. From Table 8, each construct satisfied the criterion of composite reliability (CR) ≥ 0.700. 43 As such, it can be concluded that the constructs met the internal consistency reliability criteria. All the self-developed questions under the human-human interaction variable satisfied the reliability criterion, as their loadings are >0.600. 36 Of the three self-developed questions (LE10, LE11, and LE12) under the learning experience variable, two of the questions, LE11 and LE12, also satisfy this criterion.
Convergent validity
Convergent validity is measured by the Average Variance Extracted (AVE) and the factor loadings. 43 AVE indicates what percentage of a construct's variance is explained by its indicators. 43 The AVE of each construct must be ≥ 0.500 to be acceptable. From the AVE values depicted in Table 8, all constructs satisfied the convergent validity criterion. A factor loading of each indicator ≥ 0.6 is acceptable. 44
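For readers who want to verify these reliability and validity thresholds outside SmartPLS, the sketch below computes composite reliability and AVE from a construct's standardized outer loadings using the standard formulas; the loading values are hypothetical and do not come from this study.

```python
import numpy as np

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE = mean of the squared standardized loadings
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Hypothetical standardized outer loadings for one construct (illustrative only).
loadings = [0.75, 0.72, 0.78, 0.70, 0.74]
print(composite_reliability(loadings) >= 0.700)        # internal consistency criterion
print(average_variance_extracted(loadings) >= 0.500)   # convergent validity criterion
print(all(l >= 0.6 for l in loadings))                 # indicator loading criterion
```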
Discriminant validity
Discriminant validity is measured by the Fornell and Larcker Criterion (FLC), cross-loading comparison and the HTMT technique. 36 FLC requires that the square root of the AVE for each reflective construct, shown on the diagonal, must be larger than its correlations with all other constructs. Table 9 shows that all constructs satisfied the FLC, where the square roots of the AVEs for the reflective constructs HHI (0.751), HSI (0.726) and LE (0.711) satisfied this criterion.
From the cross-loadings in Table 10, all indicators load higher on their own constructs than on the others. This confirms that the constructs are distinct from each other, indicating discriminant validity. All the items, including the self-developed questions, satisfy this validity criterion.
The HTMT approach to assessing discriminant validity requires the HTMT value to be lower than 0.850 under the stringent criterion and 0.900 under the conservative criterion. 36 From Table 11, all the constructs satisfy these criteria, confirming discriminant validity.
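The discriminant validity checks can likewise be reproduced from correlation matrices. The sketch below is a hedged illustration: fornell_larcker_ok takes construct-level AVEs and correlations, while htmt expects the indicator (item) correlation matrix and the column indices of the two constructs' items; all numeric inputs shown are hypothetical, not the study's values.

```python
import numpy as np

def fornell_larcker_ok(ave, construct_corr):
    # sqrt(AVE) of each construct must exceed its correlations with every other construct.
    sqrt_ave = np.sqrt(np.asarray(ave, dtype=float))
    r = np.abs(np.asarray(construct_corr, dtype=float))
    np.fill_diagonal(r, 0.0)
    return bool(np.all(sqrt_ave > r.max(axis=1)))

def htmt(item_corr, idx_a, idx_b):
    # Mean between-construct item correlation divided by the geometric mean
    # of the mean within-construct item correlations.
    r = np.abs(np.asarray(item_corr, dtype=float))
    hetero = r[np.ix_(idx_a, idx_b)].mean()
    def mono(idx):
        sub = r[np.ix_(idx, idx)]
        n = len(idx)
        return (sub.sum() - np.trace(sub)) / (n * (n - 1))
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Hypothetical construct-level inputs for HHI, HSI and LE (not the study's values).
ave = [0.56, 0.53, 0.51]
corr = [[1.00, 0.45, 0.60],
        [0.45, 1.00, 0.55],
        [0.60, 0.55, 1.00]]
print(fornell_larcker_ok(ave, corr))            # Fornell-Larcker criterion

# Hypothetical 4-item correlation matrix: items 0-1 belong to one construct, 2-3 to another.
item_corr = [[1.00, 0.60, 0.30, 0.25],
             [0.60, 1.00, 0.35, 0.30],
             [0.30, 0.35, 1.00, 0.55],
             [0.25, 0.30, 0.55, 1.00]]
print(htmt(item_corr, [0, 1], [2, 3]) < 0.850)  # stringent HTMT criterion
```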
Based on all the criteria of the confirmatory factor analysis, the research model fits adequately to be accepted. As such, this measurement model with its specified latent variables was analyzed with the SEM criteria.
Structural Model Assessment
Path analysis was performed to test the hypothesized relationships. The results for the collinearity assessment and hypothesis tests are shown in Table 12. Both latent variables, Human-Human Interaction (HHI) and Human-System Interaction (HSI), have positive effects on Learning Experience (LE). The variance inflation factor (VIF) results in Table 12 show that lateral multicollinearity meets the criteria of being above the threshold of 0.2 and below the threshold of 5, implying collinearity was not an issue in the structural model. 43 A study by 43 suggested that, for a one-tailed test at a significance level of five per cent (α = 0.05), "t-values" are required to be greater than 1.645. The results indicated that both exogenous constructs, HHI and HSI, have a "t-value" of >1.645.

Predictive accuracy, effect size and relevance
"Predictive accuracy" is evaluated through the "coefficient of determination, R²". R² values imply the predictive power of exogenous constructs over endogenous ones. From the analysis, the R² value of the LE construct was found to be 0.576, which means the exogenous constructs substantially explain 57.6% of LE's variance; predictive power can be considered substantial if R² is greater than 0.260, 45 although 43 has stated that an R² value of less than 0.670 is moderate.
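As an illustration only, the following sketch shows how R² and VIF are computed on latent-variable scores; the scores are simulated, ordinary least squares stands in for the PLS-SEM structural model, and the bootstrapped t-values reported by SmartPLS are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical standardized latent-variable scores for 55 respondents (simulated,
# not the study's data).
rng = np.random.default_rng(0)
hhi = rng.standard_normal(55)
hsi = 0.5 * hhi + 0.8 * rng.standard_normal(55)
le = 0.6 * hhi + 0.3 * hsi + 0.7 * rng.standard_normal(55)

X = sm.add_constant(np.column_stack([hhi, hsi]))
fit = sm.OLS(le, X).fit()
print("R^2 of LE:", round(fit.rsquared, 3))   # predictive accuracy of the endogenous construct

# VIF for each exogenous construct (column 0 of X is the constant; values should stay below 5).
for i, name in enumerate(["HHI", "HSI"], start=1):
    print(name, "VIF:", round(variance_inflation_factor(X, i), 3))
```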
To evaluate the effect size of the exogenous constructs, Cohen's f² value was obtained from the model analysis, 45 which states that an f² value of 0.350 is considered a substantial effect whereas 0.150 is considered moderate. From Table 12 it can be seen that the HHI construct has a substantial effect on LE (0.463) while HSI has a moderate effect (0.157).
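Cohen's f² itself is a simple function of two R² values, as the hedged sketch below shows; the second R² (with HHI omitted) is a hypothetical value chosen only to land near the reported effect size.

```python
def cohens_f2(r2_included: float, r2_excluded: float) -> float:
    # f^2 = (R^2 with the predictor - R^2 without it) / (1 - R^2 with it)
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# Illustrative values only: R^2 of LE with both predictors (0.576, as reported)
# and a hypothetical R^2 with HHI removed.
print(round(cohens_f2(0.576, 0.380), 3))   # ~0.462 -> substantial (> 0.350)
```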
In addition, the Q² value of the endogenous construct LE was found to be 0.269, indicating moderate "predictive relevance". 43 Besides, as the Q² value is larger than 0, it can be concluded that the exogenous constructs HHI and HSI have "predictive relevance" for the endogenous construct LE. 46

Pre-test post-test
To assess the significance of students' pre- and post-test performance, a paired sample t-test was carried out and the result is shown in Table 13. From Table 13 it can be concluded that there is a significant difference between the results of the pre- and post-tests, as P < 0.050. 32 The post-test mean implies that students' performance in the post-test is higher than in the pre-test, signifying a positive improvement in the performance of learning.
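The pre/post comparison can be illustrated with SciPy's paired t-test; the scores below are simulated and stand in for the actual quiz data, so only the procedure, not the reported statistics, should be read from this sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-test quiz scores for 55 students (not the study's data).
rng = np.random.default_rng(1)
pre = rng.normal(loc=3.0, scale=1.0, size=55)
post = pre + rng.normal(loc=0.8, scale=1.0, size=55)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.050 and post.mean() > pre.mean():
    print("Significant improvement from pre-test to post-test.")
```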
Discussion
The study aims to develop an augmented reality application that promotes interaction and spatial visualization for learning calculus and to evaluate the effect of the interactive learning system on the performance of learning. Human-human interaction is evaluated as part of the chat function of the application, and human-system interaction is analyzed as part of the augmented reality implementation through the mobile application.
The first hypothesis was accepted, as human-system interactivity positively affected the learning experience factor. As this study developed an augmented reality application for learning calculus, here human-system interaction relates to the interactivity promoted by the augmented reality application. Studies implementing augmented reality to improve the learning experience found that the didactic experience of this technology makes students more engaged, hence providing an enriched learning experience. 12,14,16,21,27 This study also found that, among the human-system interaction items, no student disagreed that the system provided them with the feeling of a real-world 3D object.
A study using a similar questionnaire also found that it has a significant impact on the learning experience in general and motivation in particular. 39 So, the result of this study is aligned with prior relevant studies.
The second hypothesis result found that human-human interaction significantly influences the learning experience. The acceptance of this hypothesis indicates that the learning experience is shaped by human-human interaction, which can come from student-teacher or student-student interaction. In the study design, both student-teacher and student-student interaction facilities were provided via the application's chat function. So, it can be said that both types of human-human interaction are positively associated with the learning experience. This result is in line with findings where both types of interaction led to learning satisfaction. 47 Another study found that human-human interactivity increases learning engagement. 1-3 In addition, another study has claimed that learning confidence, as part of the learning experience, also increased through interaction with peers and teachers. 1 From the findings of these studies, it can be concluded that the result of human-human interaction affecting the learning experience is in line with existing research.
The third hypothesis of the study was tested using a paired sample t-test, where learning performance means were compared between the pre-test and the post-test. The result shows that there is a significant difference between the scores, and the post-test mean is higher than the pre-test mean. Other studies have also found that the learning experience resulting from human-system and human-human interactivity increases learning performance. 3,15,47 So, it can be claimed that, in terms of hypothesis acceptance, this study is in line with the existing literature.
Conclusions
The role of interactivity in the learning experience has been established by many studies before, but implementing both human-human and human-system interactivities in an augmented reality application was overdue. This research has done exactly that and analyzed the effectiveness of the research framework using PLS. From the results, it can be concluded that the model was fit to analyze the research framework, with substantial predictive accuracy and a moderate effect size of the exogenous variable, with moderate relevance for the endogenous variable LE. All three hypotheses are confirmed, as the P values for all three are at a satisfactory level. These results imply that human-human and human-system interactions positively affect the learning experience and, through an enhanced learning experience, the performance of learning.
Ethical considerations and consent
All the procedures performed in this study involving human participants were in adherence to the ethical policies of the University as approved by the Technology Transfer Office of Multimedia University under ethical approval number: EA0702021.
Written consent was also obtained from all individual participants involved in the study.Personal data from individuals was promised to be kept confidential and strictly restricted for use in this study only.
Lukman Hakim Muhaimin
Universitas Pendidikan Indonesia, Bandung, Indonesia

The study design, methodology, data interpretation, and conclusion sections are appropriate and effectively convey the research intent. I understand the researcher focuses here on discussing interactive media, with implicit reference to AR in the title. However, I suggest adding a discussion on the theories related to AR and linking them with interactive learning within the theoretical framework section. Furthermore, I recommend including a brief depiction of the interactive media (such as displaying images) to enhance the readers' visualization.
Lastly, I'd like to inquire: If these media were removed from the process of understanding calculus concepts, would students still be able to grasp those concepts?I hope the media developed by the researcher serves as an extended cognition, not merely a teaching aid that, if removed, would leave students unable to understand mathematical concepts.
Are sufficient details of methods and analysis provided to allow replication by others? Yes
If applicable, is the statistical analysis and its interpretation appropriate?I cannot comment.A qualified statistician is required.
Are all the source data underlying the results available to ensure full reproducibility? Yes
Are the conclusions drawn adequately supported by the results?
Is the work clearly and accurately presented and does it cite the current literature? Yes
Is the study design appropriate and is the work technically sound?Yes
If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility?Yes
Are the conclusions drawn adequately supported by the results? Yes
Competing Interests: No competing interests were disclosed.
Introduction:
In this section, explanations related to the following things need to be explained to construct a more complete research motivation:
1. What problems are still encountered in the application of learning media based on 3D visualization with AR technology?
2. What systems currently exist to support learning calculus using AR and what are their weaknesses?
3. What are the existing systems that support visualization and interaction and what are the problems with these systems?
Literature study:
The paper needs to include an explanation of the following points to add clarity to the problem and state-of-the-art research on learning using visualization:
1. Utilization of AR technology in calculus learning.
2. Various interaction models exist in learning applications and their advantages and disadvantages.
3. Why visualization systems must be supported by human-human interaction in learning applications?
4. What prevents current AR applications from providing student-student and teacher-student interaction?
Methodology:
To ensure the homogeneity of the sample, it was mentioned in the paper that none of the students had ever used AR applications before.However, the student's initial level of understanding in the area studied before they used the provided AR applications is unknown.Different starting points have the potential to provide different endpoints as well.
Discussion:
In the discussion section, the author should not only emphasize the findings obtained from the statistical approach that has been carried out.A more in-depth study to find out what aspects make human-human interaction give better student performance results than human-system interaction needs to be presented with comparisons from previous studies.
Using previous studies as a reference in the discussion/comparison of research results will provide clarity on the contribution of research in this field of study.
Are sufficient details of methods and analysis provided to allow replication by others? Partly
If applicable, is the statistical analysis and its interpretation appropriate?I cannot comment.A qualified statistician is required.
Are the conclusions drawn adequately supported by the results? Partly
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: My research interest is in software engineering education, software visualization, and project management.
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Author Response 02 Feb 2024
Sook Ling Lew
Comment 1: This research explores the benefits of human interaction in interactive learning systems with visualization through augmented reality.The subject exposed is calculus on the topic "Revolution of Solids".The research compares the impact of human-system interaction support with human-human interaction support on student learning experiences.In this paper, the author presents information that is quite valuable, however, several important things need to be detailed to get a more comprehensive point of view.Response 1: We have revised the manuscript carefully based on all the comments and issues raised.Although there are some studies that have addressed the issues of student interaction with the system there is still a lack of teacher-student or student-student interaction in interactive learning systems [1, 2, 3].Many systems have promoted a single type of interactivity [4, 5, 6, 7, 8, 9, 10].Literature Review -Interactivity -Paragraph 1: Past studies in Table 1 shows promoted interactivities by different studies.These studies focused primarily on human-system interactivity while a few provided human-human interactivity.Literature Review -Interactivity -Paragraph 2: Past studies in Table 2 depicts that most systems solely focused on interactivity through AR and not implementing any human-human interactivity.
(2) Lacking Spatial Visualization Introduction -Paragraph 2: Application of integration is one of the chapters in Calculus that requires spatial visualization and the 2D medium of the traditional classroom does not provide a proper solution [11, 12, 13].Introduction -Paragraph 3: After consulting with the subject teacher on what the student struggle most in learning calculus, it was suggested that students face difficulties in understanding solid of revolution through traditional classroom method as it requires 3D visualization.The existing interactive methods proposed to overcome these problems are included as: AR based interactive book [5] is a tool used in asynchronous learning.Past studies in Table 1 shows promoted interactivities by different studies.These studies focused primarily on human-system interactivity while a few provided human-human interactivity.
Introduction -Paragraph 4:
Advantages of the existing methods are included as: Interactive technology has made learning more personalized and kept students engaged through various applications [23].This paper also suggested that interactive technology can make learning more active, and intensive for students where students can communicate easily with each-others and also with the teacher [23].
Disadvantages of the existing methods are included as:
Recently implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers where the emphasis has mainly been kept on 3D visualization.Many have implemented AR in geometry for school students [14, 19, 24] while some other implemented it for calculus [12, 13].In both cases the authors have focused on the immersive learning experience through AR but no human-human interaction function was provided.
Comment 4: What are the existing systems that support visualization and interaction and what are the problems with these systems?Response 4: Introduction -Paragraph 3: Although researchers have implemented AR to assist in visualization, they are mainly focused on human-system interaction, either through haptic or non-haptic interaction [14, 15, 16, 17].Besides, agility is another issue that some systems could not addresses as they are desktop-based systems [17, 18, 19].Under the theory of learner constructing their knowledge Lev Vygotsky theorized that learning occurs through personalization and socialization [20].So it is crucial to provide the function to interact among the learners.
Comment 5: Literature study: The paper needs to include an explanation of the following points to add clarity to the problem and state-of-the-art research on learning using visualization: Utilization of AR technology in calculus learning.Response 5: Advantages of the existing methods are included as: Interactive technology has made learning more personalized and kept students engaged through various applications [23].This paper also suggested that interactive technology can make learning more active, and intensive for students where students can communicate easily with each-others and also with the teacher [23].
Disadvantages of the existing methods are included as:
Recently implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers where the emphasis has mainly been kept on 3D visualization.Many have implemented AR in geometry for school students [14, 19, 24] while some other implemented it for calculus [12, 13].In both cases the authors have focused on the immersive learning experience through AR but no human-human interaction function was provided.
Comment 6: Various interaction models exist in learning applications and their advantages and disadvantages.Response 6: Under the theory of learner constructing their knowledge Lev Vygotsky theorized that learning occurs through personalization and socialization [20] .So it is crucial to provide the function to interact among the learners.Unfortunately, majority of the AR learning application only focused on providing content visualization resulting in not a holistic application in learning that promote both type of interaction [14, 15, 21, 22].
Comment 7: Why visualization systems must be supported by human-human interaction in learning applications?Response 7: Under the theory of learner constructing their knowledge Lev Vygotsky theorized that learning occurs through personalization and socialization [20] .So it is crucial to provide the function to interact among the learners.
Comment 8: What prevents current AR applications from providing student-student and teacher-student interaction?Response 8: The current AR applications emphasized on immersive learning experience through AR that lacking in providing student-student and teacher-student interaction.The information is included as follows: Recently implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers where the emphasis has mainly been kept on 3D visualization.Many have implemented AR in geometry for school students [14, 19, 24] while some other implemented it for calculus [12, 13].In both cases the authors have focused on the immersive learning experience through AR but no human-human interaction function was provided.
Comment 9: Methodology:
To ensure the homogeneity of the sample, it was mentioned in the paper that none of the students had ever used AR applications before.However, the student's initial level of understanding in the area studied before they used the provided AR applications is unknown.Different starting points have the potential to provide different endpoints as well.Response 9: Homogeneity of the sample is included as follows: Methods-Research Design and Participants This study conducted a pre-assessment evaluation of the respondents' knowledge, skills, or understanding in the AR area related to the system before they started using it [32].
Descriptive Statistics-Demographic Profile It is found that the students scored the same grades in this pre-assessment, indicating that they had the same initial level of understanding in the studied area before using the system.
Comment 10: Discussion: In the discussion section, the author should not only emphasize the findings obtained from the statistical approach that has been carried out.A more in-depth study to find out what aspects make human-human interaction give better student performance results than human-system interaction needs to be presented with comparisons from previous studies.Response 10: The discussion was rewritten with comparisons from previous studies as follows: The study aims to develop an augmented reality application that promotes interaction and spatial visualization for learning calculus and evaluates the effect of the interactive learning system on the performance of learning.Human-human interaction is evaluated as part of the chat function of application and human-system interaction is analysed as part of augmented reality implementation through mobile application.
The first hypothesis was accepted as human-system interactivity positively affected the learning experience factor.As this study developed an augmented reality application for learning calculus, here human-system interaction is related to the interactivity promoted by the augmented reality application.Studies implementing augmented reality in improving learning experience founds that the didactic experience of this technology make students more engaged and hence providing an enriched learning experience [12, 14, 16, 21, 27].This study has also found that among the human-system interaction all students univocally cancelled out disagreement that it provided them with real world 3D object feelings.The study using similar questionnaire have also found that it has significant impact on the learning experience in general and motivation in particular [39].So, the result of this study is aligned with the prior relevant studies.
The second hypothesis result found that human-human interaction significantly influences learning experience.The implication from this hypothesis acceptance indicate that learning experience is shaped by human-human interaction which can comes from student-teacher or student-student.In the study design both student-teacher and student-student interaction facilities were provided via application chat function.So, it can be said that both type of human-human interactions are associated positively with learning experience.This result is in line with the findings where both types of interaction led to learning satisfaction [47].Another study found that human-human interactivity increases learning engagement [1, 2, 3].In addition to that another study has also claimed that learning confidence as part of learning experience has also increased through interaction with peers and teachers [1].From the findings of these research, it can be concluded that the result of human-human interaction affecting learning experience is in line with existing research.
The third hypothesis of the study was tested by using paired sample "t-test" where learning performance means were compared from 'pre-test' to 'post-test scenario.The result shows that there is significant relation between the score and "post-test" mean is higher than the 'pre-test' one.Other studies have also found that learning experience as result of humansystem and human-human interactivity increase learning performance [3, 15, 47].So, it can be claimed that in terms of hypothesis acceptance, this study is in line with existing literature.
Introduction
This section should include the following information:
○ What are the existing problems in learning Calculus?
○ Why is "Revolution of Solids" proposed in this research?
○ A discussion on the design and development of the interactive methods should be presented here.
Participants -this study uses quasi-experimental design, the participants should be assigned into control and experimental group.This information is not available.How did you assign participants into these groups?How did you select the participants for this study?
For measurement items, what are the scales used in this study?Please list the sources.For the self-developed items, reliability and validity tests should be conducted and reported.
What is the structural equation modelling (SEM)?Why is SEM used in this research?How did you conduct the data analysis?You should also discuss the data analysis for longitudinal study using SEM.
Results
As the measurement items were adopted from many different sources, instrument reliability and validity tests using Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) should be conducted and reported.
For self-developed items, procedures to establish instrument reliability and validity should be presented.
The result section needs to do a major revision with appropriate analyses.Please include relevant Table(s) with appropriate details such as paired t test results, significant value, etc. for each of the test conducted.
Discussions
The author(s) should discuss/relate/compare the findings with prior relevant studies.Please also explain how the results in this study relate to previous findings, whether in support, contradiction, or simply as added data.The discussion should be aligned with the research objectives.
Additional comments:
○ Page 5, figure 1. Research design - Causal not casual relationship
○ The development of research hypotheses is missing.
○ Did not justify the adopted questionnaire.
○ Did not specify the questions for the quiz. Are the same or different sets of questions used for pre and post? How many questions?
○ Did not mention if measurement model and structural model were assessed based on data collected in the first phase or the third phase?
○ Ensure consistency in reporting the number of decimals throughout the manuscript (including in Tables).
Sook Ling Lew
Comment 1: In this paper, the authors introduced the interactive learning system for learning Calculus.There is a great deal of good work in this paper.However, it does require a significant overall revision to make the best of this and present the work in a clear and well-structured way, particularly on the literature review, the methodology, the research design and procedures used in this study, measurement instruments and data analysis techniques to achieve the research objectives.Additionally, the discussion section is not exhaustive to provide good attempt to explain many of the study results.In fact, the discussion is not aligned with the research objectives.
Response 1: We have revised the manuscript carefully based on all the comments and issues raised.
Comment 2: Abstract Background -the problems of the interactive system are not clear.The authors should point out the existing problems of the interactive system in learning mathematics, particularly in learning Calculus.Please include the main objective(s) or research question(s) of this study in this section.The research objectives/research questions are not clear.Response 2: Existing problems of the interactive system in learning mathematics, particularly in learning Calculus are added as follows: However, calculus learning augmented reality application has limitation in promoting a human collaboration in learning.
The main objective is included in Purpose and Introduction: This research develops an augmented reality application that promotes interaction and spatial visualization for learning calculus and evaluates the effect of the interactive learning system on the performance of learning.Lacking Human-Human interactivity.Although there are some studies that have addressed the issues of student interaction with the system there is still a lack of teacher-student or student-student interaction in interactive learning systems [1, 2, 3].Many systems have promoted a single type of interactivity [4, 5, 6, 7, 8, 9, 10].Past studies in Table 1 shows promoted interactivities by different studies.These studies focused primarily on human-system interactivity while a few provided human-human interactivity.Past studies in Table 2 depicts that most systems solely focused on interactivity through AR and not implementing any human-human interactivity.
Lacking Spatial Visualization
Application of integration is one of the chapters in Calculus that requires spatial visualization and the 2D medium of the traditional classroom does not provide a proper solution [11, 12, 13].After consulting with the subject teacher on what the student struggle most in learning calculus, it was suggested that students face difficulties in understanding solid of revolution through traditional classroom method as it requires 3D visualization.
Comment 5: Why "Revolution of Solids" is proposed in this research?Response 5: The reasons of "Revolution of Solids" are included as follows: In understanding revolution of solids around an axis, visualization of the solid in-question is very important.Spatially related 3D images cannot be drawn on a 2D board or projected over a computer.These hinder students in visualizing the solid and result in difficulties in conceptualizing the concept.
After consulting with the subject teacher on what the student struggle most in learning calculus, it was suggested that students face difficulties in understanding solid of revolution through traditional classroom method as it requires 3D visualization.
Comment 6: Why interactive approach is proposed to overcome these problems?Response 6: The reasons of interactive approach are included as follows: Although researchers have implemented AR to assist in visualization, they are mainly focused on human-system interaction, either through haptic or non-haptic interaction [14, 15, 16, 17].Besides, agility is another issue that some systems could not addresses as they are desktop-based systems [17, 18, 19].Under the theory of learner constructing their knowledge Lev Vygotsky theorized that learning occurs through personalization and socialization [20].So it is crucial to provide the function to interact among the learners.Unfortunately, majority of the AR learning application only focused on providing content visualization resulting in not a holistic application in learning that promote both type of interaction [14, 15, 21, 22].As such, developing an interactive AR application that facilitates both human-human and human-system interactivities and assists in conceptualizing and understanding math better is needed.Recently implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers where the emphasis has mainly been kept on 3D visualization.Many have implemented AR in geometry for school students [14, 19, 24] while some other implemented it for calculus [12, 13].In both cases the authors have focused on the immersive learning experience through AR but no human-human interaction function was provided.
Comment 9: How are the proposed interactive methods (augmented reality and chat functions) able to solve the existing problem in learning Revolution of Solids?Response 9: Introduction -Paragraph 3 So, it is crucial to provide the function to interact among the learners.Unfortunately, majority of the AR learning application only focused on providing content visualization resulting in not a holistic application in learning that promote both type of interactions.The interactive methods (augmented reality and chat functions) are proposed as follows: 1. AR -to implement Spatial Visualization 2. Chat -to improve Human-Human interactivity.
Comment 10: Please include the main objective(s)/ research questions of this study in this section.Response 10: Main objectives are included as follows: This research aims to develop an interactive AR application for learning calculus that promotes human-system interaction via AR and human-human interaction through chat functions.
Comment 11: Include a brief description of the methodology and system used in this study.Response 11: A brief description of the methodology and system used in this study are included as follows: Pre-test and post-test data from students learning calculus in the classroom using the AR interactive learning system were gathered using a questionnaire and a quiz to evaluate the impact of the developed AR system.Comment 12: Literature review/Background study In this section, discussion on the following should be included: Major problems in learning calculus (visualization) Response 12: Discussion on the major problems in learning calculus (visualization) are included as follows: 1.
Lacking Human-Human interactivity.
2.
Lacking Spatial Visualization Comment 13: How are the interactive activities able to solve the existing problem in learning calculus?Response 13: Under the theory of learner constructing their knowledge Lev Vygotsky theorized that learning occurs through personalization and socialization [20].So, it is crucial to provide the function to interact among the learners.Unfortunately, majority of the AR learning application only focused on providing content visualization resulting in not a holistic application in learning that promote both type of interaction [14, 15, 21, 22].
Comment 14: What are the existing interactive methods proposed to overcome these problems?Should discuss the advantages and disadvantages of the existing methods.Response 14: The existing interactive methods proposed to overcome these problems are included as: AR based interactive book [5] is a tool used in asynchronous learning.Past studies in Table 1 shows promoted interactivities by different studies.These studies focused primarily on human-system interactivity while a few provided human-human interactivity.
Advantages of the existing methods are included as: Interactive technology has made learning more personalized and kept students engaged through various applications [23].This paper also suggested that interactive technology can make learning more active, and intensive for students where students can communicate easily with each-others and also with the teacher [23].
Disadvantages of the existing methods are included as: Recently implementation of Augmented Reality (AR) in pedagogy has been adopted by many researchers where the emphasis has mainly been kept on 3D visualization.Many have implemented AR in geometry for school students [14, 19, 24] while some other implemented it for calculus [12, 13].In both cases the authors have focused on the immersive learning experience through AR but no human-human interaction function was provided.
Comment 15: What are the major issues of the existing interactive methods?Response 15: Unfortunately, majority of the AR learning application only focused on providing content visualization resulting in not a holistic application in learning that promote both type of interaction [14, 15, 21, 22].Past studies in [37].Therefore, the primary research method for this study was quantitative.
Comment 19: A discussion on the design and development of the interactive methods should be presented here.Response 19: A discussion on the design and development of the interactive methods is included as follows: Interactivity can also be categorized as human-system and human-human, as technologies are allowing both in a learning environment [3].The framework for interactive learning application is developed based on these two types of interactions.
Comment 20: Participants -this study uses quasi-experimental design, the participants should be assigned into control and experimental group.This information is not available.How did you assign participants into these groups?How did you select the participants for this study?For measurement items, what are the scales used in this study?Please list the sources.For the self-developed items, reliability and validity tests should be conducted and reported.Response 20: Additional information of participants is added in Research Design and Participants as follows: One of the numerous forms of quasi-experimental design is pre-and post-test research."Quasi" refers to something that resembles experimental study.Studies evaluating a curriculum for education, a treatment system or a simulation training commonly apply preand post-test evaluation [33].Since this study is to compare students' learning experiences and academic performance before and after using the AR system, a pre-and post-test quasi-experimental research method was chosen.Convenience sampling is a nonprobability or non-random sampling in which it is able to conveniently reach and recruit members of the target population for the study [34].Usually, convenience samples of university students are used in academic surveys [35].Since the respondents were students from a Malaysian university taking Calculus subject, a nonprobability convenience sampling method was used for this study.
For measurement items, the scales used in this study are included in Survey Design and Procedures -Section B as follows: All the questions for the aforementioned variables were measured through 5-point Likert scale.
For the self-developed items, reliability and validity tests were conducted and included in Internal consistent reliability, as follows: All the self-developed questions under human-human interaction variables satisfied the reliability as the loading is >0.6 [33].For the three self-developed questions (LE 10, LE11, and LE12) under learning experience variable two of the questions LE11, LE12 also satisfy this criterion.
Comment 21: What is the structural equation modelling (SEM)?Why is SEM used in this research?How to conduct the data analysis?You should also discuss the data analysis for longitudinal study using SEM.
Response 21: Structural equation modeling (SEM) combines multiple regression and factor analysis.It is employed to examine the relationships between a group of observable variables and latent concepts.There are two main types of the SEM.In this study, instead of using the traditional Covariance-Based-SEM (CB-SEM), Partial Least Squares-SEM (PLS-SEM).PLS-SEM is chosen to analyze the data based on its capability to handle complex models and small sample sizes.Data analysis using PLS-SEM was employed by defining clear research objectives and hypotheses, data preparation, structural model analysis, hypotheses testing, structural model assessment and predictive accuracy, effect size and relevance evaluation [36].PLS-SEM is often chosen for longitudinal studies due to its effectiveness in handling small sample sizes, which are common in these studies, especially when compared to crosssectional studies [36].
Comment 22: Results
As the measurement items were adopted from many different sources, instrument reliability and validity tests using Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA) should be conducted and reported.Response 22: CB-SEM and PLS-SEM are two widely accepted second generation data analysis approaches.PLS-SEM is used to analyze the data as the objective of the study is to confirm the hypotheses to explain the relationship between the exogenous and endogenous constructs for the following reasons [39]: PLS-SEM can be applied for exploratory research-when "theory is less developed".PLS-SEM can handle complex cause-effect structural models.Additionally, data characteristics, such as small sample size and non-normal data, can be some of the reasons to choose PLS-SEM.
Comment 23: For self-developed items, procedures to establish instrument reliability and validity should be presented.Response 23: For the self-developed items, reliability and validity tests were conducted and included in Internal consistency reliability, as follows: All the self-developed questions under human-human interaction variables satisfied the reliability as the loading is >0.6 [33].For the three self-developed questions (LE 10, LE11, and LE12) under learning experience variable two of the questions LE11, LE12 also satisfy this criterion.
Comment 24: The result section needs to do a major revision with appropriate analyses.Please include relevant Table(s) with appropriate details such as paired t test results, significant value, etc. for each of the test conducted.Response 24: Tables 8 to 13 are included to report the results of paired t test results, significant value and other tests conducted.
Comment 25: Discussions The author(s) should discuss/relate/compare the findings with prior relevant studies.Please also explain how the results in this study relate to previous findings, whether in support, contradiction, or simply as added data.The discussion should be aligned with the research objectives.Response 25: The discussion was rewritten with comparisons from previous studies and aligned with the research objectives as follows: The study aims to develop an augmented reality application that promotes interaction and spatial visualization for learning calculus and evaluates the effect of the interactive learning system on the performance of learning.Human-human interaction is evaluated as part of the chat function of application and human-system interaction is analysed as part of augmented reality implementation through mobile application.
The first hypothesis was accepted as human-system interactivity positively affected the learning experience factor.As this study developed an augmented reality application for learning calculus, here human-system interaction is related to the interactivity promoted by the augmented reality application.Studies implementing augmented reality in improving learning experience founds that the didactic experience of this technology make students more engaged and hence providing an enriched learning experience [12, 14, 16, 21, 27].This study has also found that among the human-system interaction all students univocally cancelled out disagreement that it provided them with real world 3D object feelings.The study using similar questionnaire have also found that it has significant impact on the learning experience in general and motivation in particular [39].So, the result of this study is aligned with the prior relevant studies.
The second hypothesis result found that human-human interaction significantly influences learning experience.The implication from this hypothesis acceptance indicate that learning experience is shaped by human-human interaction which can comes from student-teacher or student-student.In the study design both student-teacher and student-student interaction facilities were provided via application chat function.So, it can be said that both type of human-human interactions are associated positively with learning experience.This result is in line with the findings where both types of interaction led to learning satisfaction [47].Another study found that human-human interactivity increases learning engagement [1, 2, 3].In addition to that another study has also claimed that learning confidence as part of learning experience has also increased through interaction with peers and teachers [1].From the findings of these research, it can be concluded that the result of human-human interaction affecting learning experience is in line with existing research.The third hypothesis of the study was tested by using paired sample "t-test" where learning performance' means were compared from 'pre-test' to 'post-test scenario.The result shows that there is significant relation between the score and "post-test" mean is higher than the 'pre-test' one.Other studies have also found that learning experience as result of humansystem and human-human interactivity increase learning performance [3, 15, 47].So, it can be claimed that in terms of hypothesis acceptance, this study is in line with existing literature.The development of research hypotheses is missing.
3. Did not justify the adopted questionnaire.
4. Did not specify the questions for the quiz. Are the same or different sets of questions used for pre and post? How many questions?
5. Did not mention if measurement model and structural model were assessed based on data collected in the first phase or the third phase?
Comment 2 :
Introduction:In this section, explanations related to the following things need to be explained to construct a more complete research motivation: What problems are still encountered in the application of learning media based on 3D visualization with AR technology?Response 2: The problems are included as follows:(1) Lacking Human-Human interactivity.Introduction -Paragraph 1:
Comment 3 :
What systems currently exist to support learning calculus using AR and what are their weaknesses?Response 3: Literature Review -Interactivity -Paragraph 1:
Comment 3 :Comment 4 :
Introduction This section should include the following information: Response 3: The existing problems in learning Calculus are included Introduction -Paragraph 1, 2, 3, Literature Review -Interactivity -Paragraph 1 & Paragraph 2; as follows: What are the existing problems in learning Calculus?Response 4: 1.
Comment 7 :
What are the existing interactive methods have been proposed to solve Mathematics problem in particular Calculus (Revolution of Solids)?Response 7: The existing interactive methods are focused on providing content visualization and included in the Introduction as follows:
Comment 8 :
What are the weaknesses of the existing interactive methods in solving mathematics (Calculus-Revolution of Solids) problems?Response 8: The weaknesses of the existing interactive methods are included in Introduction -Paragraph 4 as follows:
Table 1. Interactivities of different studies.
Table 3. Questionnaire of human-human interactivity factor.
Table 4. Questionnaire of human-system interactivity factor.
Table 5. Questionnaire of learning experience factor.
21 It was a pleasure to work on such a well-designed lesson | I felt good to interact with AR object
22 It was a pleasure to work on such a well-designed lesson | I enjoyed the audio visual content so much that I would like to know more about this topic
23 It was a pleasure to work on such a well-designed lesson | I felt good after sharing information through discussion platform
Table 6. Self-developed questionnaire of learning experience factor.
No | Self-developed
24 | Video of object formulation through revolution of solid has deepen my understanding
25 | AR object has facilitated in visualizing the solid better
Table 8. Convergent validity and composite reliability.
Table 12. Collinearity assessment and hypothesis test.
Table 13. Pre and post test analysis of performance.
Table 2 depicts that most systems focused solely on interactivity through AR and did not implement any human-human interactivity. We have re-arranged the sub-sections into the suggested sections, namely Study Design and Procedures, Research Framework, Survey Design and Procedures, and Data Analysis Techniques, in the Methodology.
Comment 18: Study design and procedures: you should discuss the choice of the quantitative research method in this research. What is the quasi-experimental design? Why is a quasi-experimental design used in this research? What are the procedures to conduct this research?
Response 18: The reasons a quantitative research method was used are added in Research Design and Participants as follows: Quantitative research designs include quasi-experiments, true experiments, causal-comparative research, survey research, and experimental research. Quantitative research is likely to test theories by investigating the relationships among variables. | 2022-03-16T15:33:13.489Z | 2022-03-14T00:00:00.000 {
"year": 2024,
"sha1": "1101d8a030a6cb420795b3e438a958587e1e3e0a",
"oa_license": "CCBY",
"oa_url": "https://f1000research.com/articles/11-307/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8f45f36dba48b66b7edf3ff8e38dd7aba0a5fcd3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
59391330 | pes2o/s2orc | v3-fos-license | Acquisition of Letter Naming Knowledge, Phonological Awareness, and Spelling Knowledge of Kindergarten Children at Risk for Learning to Read
This study measures letter naming, phonological awareness, and spelling knowledge in 2,100 kindergarten students attending 63 schools within a large, urban school district. Students were assessed across December, February, and May of the kindergarten year. Results found that, by May, 71.8% of students had attained full letter naming knowledge. Phonological awareness emerged more slowly with 48% of students able to reliably segment and blend phonemes in words. Spelling development, a measure of phonics knowledge, found that, by May, 71.8% of students were in the partial-alphabetic phase. A series of regression analyses revealed that by the end of kindergarten both letter naming and phonological awareness were significant predictors of spelling knowledge (b = .332 and .518 for LK and PA, resp.), explaining 52.7% of the variance.
Introduction
Becoming a competent reader is critical to academic achievement [1].However, the National Assessment of Educational Progress (NAEP) reports 74% of fourth-grade students attending the nation's largest school districts score below the proficient level in reading with this percentage climbing to 83% for African-American children [2].Unfortunately, the same data finds that a percentage of those struggling with reading in fourth grade continue to struggle through the secondary grades.This strongly suggests that students must acquire the foundational literacy skills prior to fourth grade that will set them on track for appropriate reading development.The purpose of the present study is to investigate the growth of reading subskills in approximately 2,000 kindergarten children attending a large urban district where an emphasis has been placed on the teaching of letter naming knowledge, phonological awareness, and lettersound correspondence in kindergarten.
1.1.Theoretical Framework.The verbal efficiency theory and related lexical quality hypothesis [3][4][5][6], as well as the connectionist model of reading [7], make the case that to be a successful reader phonological, orthographic, and semantic representations must be efficiently integrated.Perfetti maintains that of the various subskills involved in the reading process, fast recognition of letters and the letter-sound combinations found within words can be trained to a high level.It follows then that mastery of letter name knowledge, the ability to isolate and manipulate phonemes, and explicit instruction in letter-sound correspondence will predict conventional reading and spelling.
1.2.Letter Name Knowledge.Many kindergarten children who come from disadvantaged backgrounds often enter formal schooling lagging behind others in their early literacy development.As a result, they are at risk for later reading difficulties [8].Research syntheses [9] have found that success in early literacy subskills such as letter naming knowledge and phonological awareness requires explicit instruction.That may be essential to closing the early literacy development gap.
Children lacking competent alphabet knowledge upon entry into kindergarten need explicit instruction focused on letter identity, letter naming, and writing of letters.These capabilities enable them to successfully transition into letter sounds and spellings [10].Letter naming knowledge (LNK) requires children to master the recognition of upper-and lowercase shapes of each of the 26 graphemes of the alphabet and is a landmark accomplishment for successful reading acquisition [11,12].Further support for the importance of LNK is a long line of research that advances the idea of causality for letter name knowledge to more rapid learning of sounds associated with letters and letter combinations [13][14][15][16].As LNK acquisition typically occurs before phonemic awareness [17], it is critical that the child makes the connection that printed letters represent the sounds in speech, a concept called the alphabetic principle [18,19].In addition to automaticity in pronouncing letter names, LNK has been reported to provide access to phonemic knowledge about the letter when in the initial or final positions of words [15,20,21].
Critical to LNK is the ability to identify the features of letters that distinguish them from each other.Those who automatically retrieve both upper and lowercase letters from long-term memory are less likely to make letter identification errors and misread fewer words, making LNK an important predictor of a child's success with various literacy tasks [22][23][24].LNK has also been found to be important to early encoding processes developed through invented spelling instruction [25][26][27][28][29].It is thought that this skill may tap the same central processes that facilitate reading fluency and predict reading achievement [30,31].An example of the importance of LNK is found in a study by Share et al. [32].The authors measured 39 variables in beginning kindergarten students including IQ, socioeconomic status, and vocabulary knowledge with results showing letter naming knowledge to be the best predictor of individual end-of-year reading achievement and the second-best predictor behind phonemic awareness of first-grade reading scores.
Additional insight into the role of letter naming is found in its relationship to spelling development.Models of developmental spelling have identified the letter naming stage as the one where the reader relies on LNK to identify words [33,34].Reliance on consonant names in this stage is due to the early reader's difficulty with disentangling syllables and rimes into individual phonemes.As children develop the ability to segment rimes into phonemes, they become less reliant on LNK.As such, LNK contributes to the child's accuracy with consonant identification and allows the emergence of spelling knowledge that is based on sound, although they may still be unaware of sound at the phoneme level.This suggests the transition of reading and spelling away from a visual-cue strategy to one using phonetic cues.
1.3.Phonological Awareness.An individual has phonological awareness when they are aware that words have constituent sounds and that those sounds do not always hold meaning within a word [35,36]. While research discussions have involved whether phonological awareness is composed of one or two constructs [37], it is now thought to be a unitary construct [38][39][40][41]. Phonological awareness develops on a continuum that moves from large to increasingly smaller units of sounds within words. This awareness ends with the identification of phonemes, the smallest unit of sound in the English language. This makes phonemic awareness a subset of phonological awareness, and it is present in the individual when they can isolate and manipulate individual sounds within words [37,40]. Emergent readers acquire phonological awareness through instruction in a fairly predictable manner that begins at the syllable level, progresses to the recognition of onsets and rimes (as in c-at), and ends with the awareness of phonemes as in /c/ /a/ /t/ [42]. While some evidence suggests that phonological awareness may not be required for letter-sound acquisition [13], it is clear that phonological awareness and its subcomponent phonemic awareness are an important predictor of learning to read and spell words [11,37,[43][44][45][46].
Adams [11] uses five levels to describe the developmental progression of phonological awareness.The first begins with hearing the sounds of words, followed by the ability to compare and contrast like-sounding words in what is called the oddity task.For example, the teacher might ask the student "Which word sounds different?/cat/, /mat/, or /dog/?"The third dimension is the awareness that words can be split into syllables (to-day) and then blended back together.The fourth is the ability to split words into phonemes and put them back into a word (/dog/ into /d/ /o/ /g/ and then back to /dog/).The fifth and final dimension is the most difficult, to isolate a phoneme within a word, delete it, and then replace it with another phoneme to form a new word.Schatschneider et al. [39] add a sixth dimension where children develop sensitivity to alliteration, the ability to identify the beginning of words.
1.4.Spelling Knowledge.Following the seminal study of Read [28], researchers have clearly established that readers acquire spelling knowledge along a developmental continuum [47][48][49][50][51][52].Invented spelling occurs when one uses their selfdirected attempt to write words using print [28].As their reading development progresses, the student uses the knowledge of phonology and orthography to write increasingly accurate word spellings.Evidence suggests that invented spelling may be an independent predictor of literacy outcomes [26].Because the same lexical system is used in both reading and spelling [53], readers apply their orthographic knowledge to both tasks [54].Consequently, analyzing students' spelling gives insight into their orthographic knowledge and their understanding of reading [48][49][50].However, the contribution of these processes to effective spelling is not equal as when children grow in their spelling knowledge they shift their reliance from phonological to orthographic and morphological information [34,48,[55][56][57][58][59].
Henderson [24] has identified five stages that he labeled preliterate, letter name, within-word, syllable juncture, and derivational constancy.The stages are described by Henderson and Templeton [60] where the preliterate stage finds that children may scribble freely and attempt to match certain sounds with marks.In the letter name stage children attempt to spell alphabetically by matching letters to sounds.As they acquire an increasing inventory of sight words spellings become more accurate as the child learns to examine words systematically around specific, salient features.Students in this stage are recognizing initial and final consonants, blends, and diagraphs, short vowels, affricates, and final consonant blends and diagraphs.In the early within-word stage students can provide the correct representation of short vowels, including words containing both a sounded and silent vowel (e.g., "take").Also in this stage students are beginning to read silently.Cognitively, this stage is a large leap forward as students move from letter-to-letter analysis to reading units or groups of letters.In the syllable juncture stage students learn more complex letter features including consonant doubling, e-drops for ed and ing, and r-controlled vowels.The final stage, derivational constancy, consists of silent and sounded consonants and Latin-derived suffixes and prefixes.Understanding and mastering these various combinations of developmental spelling patterns suggests the phonological and orthographic knowledge acquired by the reader which has been shown to be related to becoming a fluent reader [61].Read.In learning to read we ask children to match sounds to letters to learn graphophonemic relationships.This skill builds a foundation that is helpful as children learn to recognize letter patterns repeated across words [62].When encountering sounds in a word, readers can tap their knowledge of letter-sound correspondences to identify letters and letter combinations [63].It is not surprising then that children who are taught to segment words into their phonological parts acquire word reading skills at a faster pace than do children without these skills [32].Additionally, the effect of phonological training has recently been found to continue through elementary school and into the sixth grade, with effects extending to ninth-grade comprehension [64].In a study assessing the direct instruction of phonemic awareness and letter-sound knowledge, these two skills were found to fully mediate differences in word-level reading skills some five months later, thus establishing a causal connection between the two [65].Caravolas et al. [66] found that across four languages LSK and phonemic awareness were the strongest predictors of early reading skill over a 10-month period.
The Present Study.
Research has established that letter naming knowledge and phonemic awareness facilitate the learning of letter-sound correspondences.Of importance is that these two skills must be explicitly taught to students.Of interest in the present study is the extent to which this skills become evident in kindergarten students attending school within a large, urban district and from backgrounds that put many of them at risk for reading acquisition.Our interest is to study the emergence and relationships between letter identification, phonological awareness, and spelling development in kindergarten students.Our research questions are as follows: (1) What is the extent of letter identification knowledge, phonological awareness, and spelling knowledge acquisition in kindergarten students across the latter half of the school year?
(2) To what extent do letter identification knowledge and phonological awareness predict spelling ability in kindergarten children?
The study sample consists of N = 2,100 kindergarten students instructed by the 91 teachers participating in the improvement project. The intention was to include every student instructed by each of the participating teachers in the study sample. Due to issues such as student mobility and students not being available during the assessment window, not every student is included in the sample. The mean age of students at the time of the December assessment was 5 years and 8 months, while the mean age in the spring (May) was 6 years and 1 month.
Letter Naming Knowledge.
To determine the ability to read aloud the letters of the alphabet, students are asked to complete a test of letter naming knowledge (LNK) by reading aloud 26 letters in both lower- and uppercase form. The lowercase letters "a" and "g" are provided in two different scripts, accounting for a total of 28 lowercase letters for a total of 54 letters. To begin, the child is provided with a sheet with the 26 uppercase letters printed in random order. With no assistance from the teacher, the child then reads aloud each letter from left to right. While the child is reading, the teacher records any letters read incorrectly or omitted by the student. After the uppercase letters are read, the student is then provided with a sheet containing 28 letters written in lowercase form. These letters are also arranged in a random order on the page. Again, without teacher assistance, the student reads aloud each letter while the teacher records misread and omitted letters. The student's score is the number of letters out of 54 that were read correctly. An assessment of reliability found high reliability, where Cronbach's α = .852.
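For readers who want to run the same kind of internal-consistency check on their own data, a minimal sketch of Cronbach's alpha is given below; the item-by-student matrix and all variable names are hypothetical, and only the textbook formula is assumed.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_students, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of students' total scores
    return (n_items / (n_items - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 6 students x 4 letter items scored 1 (correct) or 0 (incorrect).
demo = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
])
print(round(cronbach_alpha(demo), 3))
```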
Phonological Awareness.
Students were assessed individually for phonological awareness using the Phonological Awareness Test (PAT) from the Classroom Reading Inventory [67]. The PAT is an informal, 77-item, individually administered assessment containing three subtests. The first subtest assesses the ability of the student to identify initial consonant sounds (IC), with a range of correct answers from 0 to 10. The phoneme segmentation test (PST) has a range from 0 to 15 and assesses one's ability to segment a word into its constituent sounds. The blending sounds test (BST) has a range from 0 to 55 and asks the student to combine or blend individual sounds to make a complete word. Because of the length of the blending sounds test, the number of test items was reduced to 30. This resulted in an assessment where the total number of items equaled 55. To score the PAT, the student is awarded 1 point for each item completed correctly. Reliability was assessed using Cronbach's α and resulted in α = .820.
Spelling Knowledge.
The Kindergarten Inventory of Developmental Spelling (KIDS) [68] is a 5-word spelling test designed to measure the child's knowledge of letter-sound correspondences. The assessment consists of five consonant-vowel-consonant (CVC) words such as jam, rob, and let. The assessment can be administered individually or to groups of students using paper and pencil. Administration begins with the teacher modeling an example word on the board, such as /map/, using a think-aloud strategy. The teacher demonstrates how to stretch out or rubber-band the example word to better hear the individual sounds within the word. The teacher then writes the letter corresponding to each sound in the word. Following this demonstration, the teacher pronounces aloud the first word, followed by a sentence using the word, after which the teacher pronounces the word again. For example, the teacher would say "Jam. I had jam on my toast. Jam." Students respond by writing the word on their paper. No further modeling is provided by the teacher beyond the initial example. This procedure is repeated for the remaining four words. To determine a score, each of the five words is graded on a scale ranging from 0 to 6, with specific directions provided by the test author. A score of 0 would reflect a word written using scribbles, waves, or letter-like symbols. A score of 1 indicates the use of random letters to spell the word. To earn a score of 2, the student must spell the ending consonant correctly or use an acceptable alternate identified by the author, such as a P instead of a B in the word /rob/. The student must also use any random letters for the other two sounds. To earn a 3, the student must use the correct beginning consonant (or an acceptable substitute) and include any random letters for the vowel and ending consonant. To earn a score of 4, the student must write the correct beginning and ending consonants (or an acceptable substitute). A score of 5 reflects the correct beginning consonant, vowel, and ending consonant (or the acceptable letter substitute). A score of 6 reflects the correct spelling of the word. To assess test reliability, Cronbach's alpha was computed and resulted in α = .91.
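As an illustration of how the 0-6 rubric described above could be encoded for tallying, here is a small sketch; the boolean feature flags are hypothetical inputs that a human scorer would still have to judge, and the function mirrors only the rubric as summarized here, not the author's official scoring directions.

```python
def kids_word_score(uses_letters: bool, correct_initial: bool, correct_vowel: bool,
                    correct_final: bool, fully_correct: bool) -> int:
    """Map rubric judgements for one CVC word onto the 0-6 KIDS scale described above."""
    if fully_correct:
        return 6          # conventional spelling
    if correct_initial and correct_vowel and correct_final:
        return 5          # all three sounds represented (acceptable substitutes allowed)
    if correct_initial and correct_final:
        return 4
    if correct_initial:
        return 3
    if correct_final:
        return 2
    if uses_letters:
        return 1          # random letters only
    return 0              # scribbles, waves, or letter-like symbols

# Hypothetical example: a child writes "JM" for "jam" (correct initial and final, no vowel).
print(kids_word_score(uses_letters=True, correct_initial=True,
                      correct_vowel=False, correct_final=True, fully_correct=False))  # -> 4
```

A child's KIDS total would then be the sum of the five word scores, giving the 0-30 range used in the results.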
Assessment Administration.
As part of their 90 hours of classroom training, teachers were taught to administer assessments for letter naming knowledge (LNK), phonological awareness, and spelling knowledge.Training consisted of an explanation of each assessment including what subskill it measured and the protocol for its administration.Teachers then practiced administering each assessment with a classroom peer under the guidance of the instructor.During the following two weeks teachers were observed by a literacy coach experienced in the administration of each assessment while they assessed two students in their classroom.Literacy coaches used a Likert-scaled rubric to grade the teacher on the administration of each assessment.These rubrics were then reviewed by the class instructor to insure the quality of administration.In cases where required benchmarks were not achieved, the teacher was remediated by the instructor and reevaluated for fidelity by the literacy coach.Because letter naming, phonological awareness, and spelling knowledge are learned through instruction, we did not measure these skills during the first few months of the school year.Our first measurement period took place in December in order to give time for children to benefit from instruction.Our second and third measurement periods occurred in February and May.During each of the three administration periods teachers were given three weeks to assess the students in their class.Teachers then submitted the scores for their students through transmission of an Excel spreadsheet to a school-wide literacy coach.Coaches had also been trained on all assessment instruments and provided instruction by the researchers on insuring teachers reported their data correctly.Data was submitted to the research team by the coaches for each school.In instances where data questions arose, coaches confirmed test results with the teacher in question.
Results
Means for the measured variables are shown in Table 1 while the bivariate correlations are in Table 2. Letter naming knowledge (LNK) means reveal the majority of growth had occurred by December with slower growth coming in February and May.Both phonological awareness (PA) and spelling knowledge (SK) showed the strongest growth between February and May.Bivariate correlations for December reveal a moderate correlation (.491) between LNK and PA while the correlation with SL is very small (.084).By February the correlation between all variables was similarly moderate, while in May correlations had strengthened to .432 between LNK and PA and to .556 between PA and SK.For letter naming knowledge (LNK) December period reveals a mean of 46.26; however, Figure 1 shows only 35.9% of students had mastered all 54 letters.In February, the mean grew to 50.20 while 56.1% of students knew all letters, and by May the mean of 51.9 resulted in 71.8% of students knowing all letters.Phonological awareness (PA), the second variable under consideration, was measured with a ceiling equal to 55. Examination of the PA means (Figure 2) reveals that in December 15.5% of students had scored 50 or higher.By February 24.8% of students had reached criterion and by May this percentage increased to 48.0%.Attainment for spelling knowledge (Figure 3) (range of 0 to 30) shows that in December 11.5% of students had attained a score of 24 or higher.By February 20.1% of students score 24 or higher, while three months later in May, 72.1% of all students scored 24 or higher.Table 2 shows the bivariate correlations.Of note is the strengthening of the relationship between LNK and spelling knowledge across the three months from .084 in December to .278 in February and .556 in May.Also of interest is the increase in correlation between PA and spelling knowledge from .168 in December, to .102 in February, to .662 in May. Figure 1 shows the growth of the three measured variables across the three measurement periods.
Research Question One.
Our first research question asks how letter identification knowledge (LNK), phonological awareness (PA), and spelling knowledge (SK) emerge across the latter half of kindergarten. Figure 4 plots the changes in the measured variables across the three measurement periods. To answer this question we conducted a repeated measures analysis for time (December, February, and May) for each of the three variables. An assumption of repeated measures when three or more conditions are present is that the variances for each should be similar. Each of our three variables resulted in a significant Mauchly's test, indicating that the assumption of sphericity was not met.
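To make the repeated-measures analysis concrete, the sketch below runs a one-within-factor ANOVA over the three measurement periods with statsmodels; the simulated scores, sample size, and column names are hypothetical, and a Mauchly sphericity test would require an additional package (e.g., pingouin), so it is not shown here.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 60  # hypothetical number of students

# Simulate correlated scores that grow across the three periods.
dec = rng.normal(46, 8, n)
feb = dec + rng.normal(4, 2, n)
may = feb + rng.normal(2, 2, n)

# Long format: one row per student per measurement period.
long = pd.DataFrame({
    "student": np.repeat(np.arange(n), 3),
    "period": np.tile(["December", "February", "May"], n),
    "lnk": np.column_stack([dec, feb, may]).ravel(),
})

res = AnovaRM(long, depvar="lnk", subject="student", within=["period"]).fit()
print(res)  # F test for the within-subject effect of time
```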
Research Question Two.
The second question of interest is the extent to which letter naming knowledge (LNK) and phonological awareness (PA) predict spelling knowledge (SK) in kindergarten children. To answer this question, we conducted a series of hierarchical regression analyses where we regressed spelling knowledge onto letter naming knowledge and phonological awareness. To gain insight into the predictive value of these variables over time, this same model was constructed for each of the three measurement periods. Table 4 displays the results for each measurement period.
For December, only phonological awareness was a significant predictor of spelling knowledge, explaining 2.8% of the variance (test statistic = 6.78, p < .001). For the February time period, letter naming knowledge becomes the sole significant predictor of spelling knowledge, explaining 7.7% of the variance (test statistic = 12.41, p < .001). By May, both letter naming knowledge and phonological awareness are statistically significant predictors of spelling knowledge. Letter naming knowledge accounts for 31% of the variance in spelling knowledge (test statistic = 20.30, p < .001), while phonological awareness predicts 21.8% of the variance in spelling knowledge (test statistic = 31.65, p < .001). Together, letter naming knowledge and phonological awareness explain 52.7% of the variance in spelling knowledge. Also of interest are the changes in the standardized betas across time. In December, the beta for letter naming knowledge equals .001 (nonsignificant) while phonological awareness equals .168 (p < .001). In February, the standardized beta for letter naming knowledge equals .270 (p < .001) while phonological awareness (beta = .027) is a nonsignificant predictor. In May, the standardized beta for letter naming knowledge equals .332 (p < .001) while phonological awareness equals .518 (p < .001).
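A hedged sketch of this kind of hierarchical regression is given below using statsmodels: predictors are entered in two steps, variables are z-scored so the coefficients read as standardized betas, and the R-squared increment is inspected. The simulated data frame and column names are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # hypothetical sample size
lnk = rng.normal(50, 5, n)
pa = 0.4 * lnk + rng.normal(0, 5, n)
sk = 0.3 * lnk + 0.5 * pa + rng.normal(0, 4, n)
df = pd.DataFrame({"LNK": lnk, "PA": pa, "SK": sk})

# z-score all variables so the OLS coefficients are standardized betas
z = (df - df.mean()) / df.std(ddof=0)

step1 = sm.OLS(z["SK"], sm.add_constant(z[["LNK"]])).fit()
step2 = sm.OLS(z["SK"], sm.add_constant(z[["LNK", "PA"]])).fit()

print(f"Step 1 R^2 = {step1.rsquared:.3f}")
print(f"Step 2 R^2 = {step2.rsquared:.3f} (increment = {step2.rsquared - step1.rsquared:.3f})")
print(step2.params)    # standardized betas for LNK and PA
print(step2.pvalues)   # their p-values
```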
Discussion
The goal of this study was to investigate the emergence of early literacy skills associated with instruction, across a large urban school district in the latter part of kindergarten.In pursuit of this we measured the growth of letter naming knowledge (LNK), phonological awareness (PA), and spelling knowledge (SK) of 2,100 kindergarten students, across three time periods, who attended schools where students typically struggle with reading acquisition.We found first that the acquisition of LNK came slowly, with just over 33% of students knowing all 54 letters by December.By February 56.1% of students knew all letters, and by the end of May just under 72% of students knew every letter.This meant that over 28% of end-of-kindergarten students could not name all letters of the alphabet in upper-and lowercase form by the end of kindergarten, a critical benchmark for reading acquisition.
Phonological awareness would be expected to emerge more slowly than letter naming knowledge and our results supported this.At the end of December, 15.5% of students [34] suggest that as the child's ability to discriminate phonemes in words becomes developed, their ability to identify consonants increases.This phenomenon is seen in our results where increasing PA scores led to large increases in SK, suggesting students were relying on their phonemic awareness skills to correctly identify the beginning and ending consonants in the SK assessment.
Our second research question investigated the extent to which LNK and PA predicted SK.We found that in December LNK was a nonsignificant predictor of SK while PA accounted for 2.8% of the variance in spelling knowledge.While this is a small amount of variance that may be driven by our large sample size, it does suggest an emerging relationship between PA and SK.In February, phonological awareness was no longer a significant predictor of spelling knowledge while letter naming predicted 7.7% of the variance.While this is a curious result, the percentage of students able to correctly name all letters increased by over 50% to 56.1% of all students and may be driving the results.While the percentage of students achieving PA improved by 60%, the overall percentage was just less than 25%.This suggests a tipping point in the data where LNK becomes a stronger predictor than PA as LNK increases.Between February and May LN grew by 28% to a point where nearly 72% of students could correctly name all letters, while PA grew by almost 94% to a point where 48% of students attained a score ≥ 50 out of 55.Our regression results showed that both LNK and PA were significant predictors of SK, accounting for 31.0%and 21.8% of the variance, respectively, and explaining a total of 52.7% of the variance.In May both LN and PA were significant predictors of SK, with the betas equal to .332 and .518,respectively.Our results align with those of Ouellette and Sénéchal [26] who also found that LNK and PA predicted SK.The theoretical framework informing this study is drawn from the verbal efficiency and lexical quality hypotheses [3][4][5][6] and the connectionist model as proposed by Seidenberg and McClelland [7].Our results support the primary hypotheses of these two theories that as reading subskills increase in efficiency, reading outcomes improve.
Study Limitations.
The results of this study should be carefully interpreted with the following limitations in mind.The sample of students was not randomly sampled and represents many who come from backgrounds of poverty.It could be that a different student who was randomly drawn might exhibit very different growth trajectories on the measured variables.Also, a population not from atrisk backgrounds may also show very different development patterns from those seen in our study.While we described the teachers participating in the study it is not appropriate to attribute any differences in student growth to instruction as there is no control group of teachers to which a comparison can be made.The results of this study reflect our study sample and should not be generalized to other districts or populations of students.
Future Research.
This study measured the emergence of several critical reading subskills related to effective reading. The value of this study is that data on 2,100 kindergarten students were gathered at three time points within a large, diverse school district, with students attending 63 urban schools. Future research efforts could implement a longitudinal approach that follows students across first grade and possibly beyond. Other studies could employ additional measures to capture other subskills known to be important to reading, such as rapid letter naming, working memory, and language factors. Of interest to future research could be further investigation to more precisely define critical tipping points in letter identification and phonological awareness critical to spelling knowledge.
Figure 1: Percentage of students by number correct for letter naming knowledge (LNK) for December, February, and May.
Figure 2: Percentage of students by number correct for phonological awareness (PA) for December, February, and May.
Figure 3: Percentage of students by number correct for spelling knowledge (SK) for December, February, and May.
Table 1: Means and standard deviations of the measured variables.
Table 2: Bivariate correlations of the measured variables. Note. LNK = letter naming knowledge. PA = phonological awareness. SK = spelling knowledge. All correlations were significant at p < .001.
Table 4: Hierarchical regression results when regressing spelling knowledge onto letter identification knowledge and phonological awareness. | 2018-12-23T12:28:06.427Z | 2018-03-28T00:00:00.000 {
"year": 2018,
"sha1": "7ac54b5119de3b8587d6a598bc794f825d6702ae",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/cdr/2018/2142894.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7ac54b5119de3b8587d6a598bc794f825d6702ae",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
10781254 | pes2o/s2orc | v3-fos-license | Methodological issues in estimating sodium intake in the Korea National Health and Nutrition Examination Survey
For policy goal setting, efficacy evaluations, and the development of related programs for reducing sodium intake, it is essential to accurately identify the amount of sodium intake in South Korea and constantly monitor its trends. The present study aimed to identify the status of sodium intake in South Korea and to review the methods and their validity for estimating sodium intake in each country; through this, we aim to determine more accurate methods for determining sodium intake and to monitor the trend in sodium intake for Korean citizens in the future. Using 24-hour dietary recall data from the 2012 Korea National Health and Nutrition Examination Survey (KNHANES) to estimate daily sodium intake, the average daily sodium intake among Koreans was 4,546 mg (men, 5,212 mg; women, 3,868 mg). In addition to the nutrition survey that uses the 24-hour dietary recall method, sodium intake can also be calculated from the amount of sodium excreted in 24-hour urine, 8-hour overnight urine, and spot urine samples. Although KNHANES uses the 24-hour dietary recall method to estimate the sodium intake, the 24-hour dietary recall method has the disadvantage of not being able to accurately determine the amount of sodium intake owing to its unique characteristics of the research method and in the processing of data. Although measuring the amount of sodium excreted in 24-hour urine is known to be the most accurate method, because collecting 24-hour urine from the general population is difficult, using spot urine samples to estimate sodium intake has been suggested to be useful for examining the trend of sodium intake in the general population. Therefore, we planned to conduct a study for estimating of 24-hour sodium excretion from spot urine and 8-hour overnight urine samples and testing the validity among subsamples in the KNHANES. Based on this result, we will adopt the most appropriate urine collection method for estimating population sodium intake in South Korea.
INTRODUCTION
Excessive sodium intake is a risk factor for hypertension, cardiovascular diseases, kidney diseases, and gastric cancer, and indirectly functions as a factor that can increase the risks of obesity, kidney stones, and osteoporosis. Reducing sodium intake significantly decreases the prevalence and mortality rates of chronic diseases [1]. It is well known that Koreans consume too much sodium from traditional foods, such as kimchi, soy sauce and paste, salt-fermented seafood, soups, and stews; hence, policy-based approaches that can reduce sodium intake are urgently needed. Furthermore, in order to establish reduction goals and evaluate the effectiveness of sodium reduction programs, it is essential to accurately estimate sodium intake and continue to monitor the trend in sodium intake in South Korea. Sodium intake can be investigated with nutrition surveys using 24-hour dietary recall or food records, as well as through the amount of sodium excreted in 24-hour urine, 8-12 hour overnight urine, and spot urine samples; moreover, the methods for estimating sodium intake vary for each country, with each having its own advantages and disadvantages [2].
The aims of this study were to identify the status of sodium intake in South Korea and to review the methods for estimating sodium intake for each country and their validity; through this, we will establish a method for accurately estimating and monitoring sodium intake in a representative Korean population.
STATUS OF SODIUM INTAKE IN SOUTH KOREA
According to the 2012 Korea National Health and Nutrition Examination Survey (KNHANES) [3], the daily sodium intake for Koreans was 4,546 mg (men, 5,212 mg; women, 3,868 mg), which was more than twice the recommended maximum daily intake of 2,000 mg [4] established by the World Health Organization (WHO) (Figure 1). Looking at the trend in sodium intake (aged 1 year and older, age-standardized) since 1998, when KNHANES was first conducted, mean daily intake of sodium increased from 4,582 mg in 1998 to 5,260 mg and then continuously decreased to 4,752 mg in 2011 and 4,546 mg in 2012. The daily sodium intake in men was higher than that in women by approximately 1,000-1,500 mg, which was attributable to higher consumption of food in men compared to women; however, annual trends for both men and women showed similar patterns.
In the 2012 KNHANES, the percentage of excessive sodium intake compared to the goal for sodium intake (9 years or older, 2,000 mg) established by the Korean Dietary Reference Intake was 93.3% for men and 79.8% for women aged 9 years or older ( Figure 2); moreover, the percentage of excessive sodium intake in people in their 30s and 40s appeared to be the highest at 91.9% (men, 96.9%; women, 86.7%). Regardless of area of residence or household income level, over 85% of the subjects showed excessive sodium intake.
ADVANTAGES, DISADVANTAGES, AND VALIDITY OF METHODS FOR ESTIMATING SODIUM INTAKE
Daily sodium intake can be measured with nutrition survey methods, such as 24-hour dietary recall and food records, and through the amount of sodium excreted in 24-hour urine, 8-12 hour overnight urine, and spot urine samples.
Among these methods, the 24-hour urinary sodium excretion is known to be the gold standard method for measuring sodium intake, because 85-95% of consumed sodium is excreted through urine and sodium in urine is highly correlated with sodium intake from food [5]. Nevertheless, the 24-hour urine collection has the disadvantages that urine collection without loss for 24hour from the general population that engages in free lifestyle is difficult and imposes a high burden on the participants [2]. In the National Diet and Nutrition Survey (England), the North Karelia Salt Project (Finland), FINMONICA Study (Finland), and the national FINRISK Study (Finland), 24-hour urine from a representative sample has been continuously collected to monitor the average sodium intake of their citizens.
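As a back-of-the-envelope illustration of how a 24-hour urinary measurement maps onto estimated intake, the sketch below uses the 85-95% urinary recovery cited above; the chosen 90% recovery and the example excretion value are illustrative assumptions, not KNHANES figures.

```python
def intake_from_24h_urine(urinary_na_mmol: float, recovery: float = 0.90) -> float:
    """Estimate daily sodium intake (mg) from 24-hour urinary sodium excretion (mmol).

    1 mmol of sodium weighs about 23 mg; `recovery` is the assumed fraction of
    dietary sodium excreted in urine (the text above cites 85-95%).
    """
    urinary_na_mg = urinary_na_mmol * 23.0
    return urinary_na_mg / recovery

# Illustrative example: 180 mmol/day excreted -> roughly 4,600 mg/day intake at 90% recovery.
print(round(intake_from_24h_urine(180), 0))
```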
A nutrition survey using 24-hour dietary recall or food records can be a relatively useful method in a large-scale population study, but it is subject to several limitations [6]; there are difficulties in recalling exactly how much of what type food was consumed for the past 1 day, as well as determining exactly how much salt was added during cooking. In addition, by applying the same value of sodium in dish to all, it has the disadvantage of not being able to reflect individual differences in salt intake. Moreover, since the food composition table is amended every 5 years, it presents limitations in making comparisons in annual trends. Sodium intake has been estimated from a 24hour dietary recall for 1-day in KNHANES, 24-hour dietary recall for 1-2 days in the National Health and Nutrition Examination Survey in the US, and a semi-weighted food record method for 1 day (3 days before 1995) in the Japanese National Nutrition Survey. The validity of 24-hour dietary recall methods for estimating sodium intake varies for each country, based on primary food consumed and cooking characteristics (Table 1) [7][8][9][10][11]. In the West, the primary source of sodium intake is processed foods (US, 77% [12]; UK, 65-70% [13]), which allows determination of a significant portion of individual sodium intake from food intake alone; thus, the correlation coefficients with 24-hour urinary sodium excretion were approximately 0.3-0.4 [7,9,11]. However, in a study on Koreans, the results indicated that the correlation coefficient between the sodium intake from 24-hour dietary recall and 24-hour urinary sodium excretion was 0.11 [10]. One of the reasons for such a low correlation is considered to be difficulties associated with the fact that the primary sources of sodium intake for Koreans are cooked dishes, such as kimchi, soups, and stews; the amount of ingredients and the amount of seasonings added could not be identified in the cooked foods, despite the fact that there can be significant differences among individuals based on sensitivity to salt and the amount of salt added during cooking. It makes difficult to estimate individuals' sodium intake with the application of a standardized amount of ingredients in cooked dishes. The use of sodium concentration in spot urine as a method for estimating daily sodium intake has the advantage of placing fewer burdens on participants, but it is limited in accurately measuring an individual's sodium intake due to the fact that sodium concentration can easily be affected by the volume of fluid ingested [2]. Despite this disadvantage, spot urine has been considered a useful method for estimating population mean value of sodium intake. It makes it possible to compare the sodium intake between different population and monitor trends over time within a specific population [14]. The correlation coefficients between sodium concentration in spot urine and sodium concentration in 24-hour urine were 0.28-0.86 [14][15][16][17][18][19], with differences observed in the time and frequency of spot urine collection (Table 2). An 8-12 hour overnight urine method has the advantage of low-burden urine collection than 24hour urine method, but it has the disadvantages of needing to collect the urine under strict time constraints and requires the assumption that urine excretion is constant day and night despite of usual diurnal pattern in excretion of sodium (i.e., higher excretion during the day). 
In the case of overnight urine, its correlation coefficients with 24-hour urine were 0.59-0.78 [20-22], which indicates that it is more valid as a substitute for 24-hour urine than spot urine (Table 2).
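The validity comparisons summarized here can be reproduced in spirit with a simple correlation between a proxy estimate and the 24-hour reference; the paired values below are hypothetical and serve only to show the computation.

```python
import numpy as np
from scipy import stats

# Hypothetical paired sodium estimates (mg/day) for the same participants.
urine_24h = np.array([3900, 4500, 5200, 3100, 4800, 4100, 5600, 3700])
overnight_estimate = np.array([3600, 4700, 4900, 3300, 4600, 4400, 5300, 3500])

r, p = stats.pearsonr(urine_24h, overnight_estimate)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# Beyond correlation, the mean bias of the proxy is often reported (Bland-Altman style).
bias = (overnight_estimate - urine_24h).mean()
print(f"Mean bias = {bias:.0f} mg/day")
```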
IMPROVEMENT OF METHODS FOR ESTIMATING SODIUM INTAKE IN KOREA NATIONAL HEALTH AND NUTRITION EXAMINATION SURVEY
Although a 24-hour dietary recall method has been used to estimate the average sodium intake of Koreans in KNHANES, as previously mentioned, a 24-hour dietary recall method has limitations in accurately measuring sodium intake owing to its unique characteristics as a research method and in the processing of data. As such, in order to improve upon the limitations of the nutrition survey, we planned to adopt methods for estimating sodium intake using urine in KNHANES; however, prior to this introduction, an examination of the feasibility and validity of various urine sodium measurement methods is required. Accordingly, a study on the estimation methods for sodium intake in KNHANES is currently underway as a research project of the Korea Centers for Disease Control and Prevention, and 24-hour urine, 8-hour overnight urine, and spot urine samples from 300 adults who had participated in KNHANES are being collected to develop a formula for estimating sodium intake for Koreans and to test the validity of the sodium intake estimation method using spot urine and 8-hour overnight urine samples through comparisons with 24-hour urinary sodium excretion. Through the present study, guidelines for the most appropriate urine collection method, collection time, and collection amount for estimating sodium intake of Koreans will be developed and adopted in KNHANES. It is anticipated that the sodium intake estimated with spot urine or 8-hour overnight urine samples will complement the results of the 24-hour dietary recall method used in KNHANES; moreover, the sodium intake by Koreans estimated through this method is expected to be used as the basis for promoting policies on sodium reduction, developing health policies, and evaluating their effectiveness. | 2016-05-12T22:15:10.714Z | 2014-11-28T00:00:00.000 {
"year": 2014,
"sha1": "76eb44fb94be7663a583704aba8686fadfb8828f",
"oa_license": "CCBY",
"oa_url": "http://www.e-epih.org/upload/pdf/epih-36-e2014033.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "070e74b9f5a90ff69da71fb0229e515e893cbcf6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245011701 | pes2o/s2orc | v3-fos-license | Risk of depression in patients with oral cancer: a nationwide cohort study in Taiwan
This study investigates the association between oral cancer and the risk of developing depression. A total of 3031 patients with newly diagnosed oral cancer and 9093 age-, sex-, and index-year-matched controls (1:3) were selected from the National Health Insurance Research Database (NHIRD) of Taiwan for the period 2000 to 2013. After adjusting for confounding factors, multivariate Cox proportional hazards analysis was used to compare the risk of depression over a 13-year follow-up. Of the patients with oral cancer, 69 (2.28%, or 288.57 per 10^5 person-years) developed depression compared to 150 (1.65%, 135.64 per 10^5 person-years) in the control group. The Cox proportional hazards regression analysis showed that the adjusted hazard ratio (HR) for subsequent depression in patients diagnosed with oral cancer was 2.224 (95% Confidence Interval [CI] 1.641–3.013, p < 0.001). It is noteworthy that, in the sensitivity analysis, the adjusted HR was 3.392 in the group with depression diagnosed within the first year of follow-up and 2.539 in the "Tongue" oral cancer subgroup. This study shows that oral cancer was associated with a significantly increased risk of developing subsequent depression, and early identification and treatment of depression in oral cancer patients is crucial.
depression among new and follow-up cancer patients 16 . Overall, depression adversely affects patient quality of life and may also interfere with treatment and rehabilitation 17,18 . Previous studies have reported an association between depression and survival in patients with head and neck (HN) cancer 19,20 . Up to 2021, only one systematic review 21 has linked oral cavity cancer, as a risk factor for depression, with varied severity of symptoms after cancer treatment. Among the previous studies, Jansen et al. (2018) 22 and Kuma et al. (2018) 23 , using validated questionnaires, found that oral cancer is associated with the risk of depression. In addition, Rana et al. (2015) 24 compared the risk of depression in patients with oral squamous cell carcinoma and oral lichen planus.
We hypothesize that oral cancer is associated with an increased risk of subsequent depression. Therefore, we conducted this study to identify the association between oral cancer and depression and the potential effect modifiers, using data from a nationwide health insurance database, the Taiwan National Health Insurance Research Database (NHIRD).
Results
Figure 1 shows the flowchart of study sample selection from the National Health Insurance Research Database in Taiwan. Figure 2 shows the Kaplan-Meier analysis for the cumulative incidence of depression in the study and control groups. At the first year of follow-up, the difference between the oral cancer group and the non-cancer group became significant (log-rank test p < 0.001), whereas the difference between the oral cancer group and the non-oral cancer group did not reach significance (log-rank test p = 0.199). Table 1 shows the gender, age, comorbidities, urbanization and area of residence, and income of the study subjects and controls. Most of the oral cancer patients were middle-aged men with lower insured premiums (NT < 18,000). Compared to the controls (non-cancer (C1)), the study subjects tended to have lower rates of alcohol abuse (p = 0.001), IHD (p < 0.001), stroke (p < 0.001), and anxiety (p = 0.004).
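As a hedged illustration of the Kaplan-Meier and log-rank comparison reported above, the sketch below uses the lifelines library on invented follow-up data; none of the times, events, or group labels come from the NHIRD.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical table: follow-up time in years, depression event flag, and group label.
df = pd.DataFrame({
    "time": [1.2, 3.5, 7.0, 0.8, 5.5, 9.1, 2.3, 6.7],
    "event": [1, 0, 1, 1, 0, 0, 1, 0],
    "group": ["oral_cancer", "oral_cancer", "oral_cancer", "oral_cancer",
              "control", "control", "control", "control"],
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["time"], sub["event"], label=name)
    print(name, kmf.survival_function_.tail(1))  # survival estimate at the last observed time

oc = df[df["group"] == "oral_cancer"]
ct = df[df["group"] == "control"]
res = logrank_test(oc["time"], ct["time"],
                   event_observed_A=oc["event"], event_observed_B=ct["event"])
print(f"log-rank p = {res.p_value:.3f}")
```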
Compared to both controls (non-cancer (C1) and non-oral cancer(C2)), the study subjects tended to have lower rates of DM (C1 p < 0.001; C2 p = 0.001), hyperlipidemia, HTN, COPD, and renal disease (p < 0.001); and more lived in the highest urbanized areas, southern areas of Taiwan (p < 0.001), and more receive treatment in a hospital center with higher medical costs (NT$) and had longer length of days (p < 0.001). Table 2 shows the results of Cox regression analysis of the factors associated with the risk of developing depression. In oral cancer vs. non-cancer group, the crude HR was 2.181 (95% CI = 1.637-2.906, p < 0.001). After adjusting for age, gender, comorbidities, geographical area of residence, urbanization level of residence, and monthly income, the adjusted HR was 2.224 (95% CI = 1.649-3.028, p < 0.001).
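The crude and adjusted hazard ratios above come from Cox proportional hazards models; a minimal lifelines sketch is shown below, with a hypothetical analysis table and only a small subset of the covariates the study adjusted for.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: one row per subject with follow-up time (years),
# depression event flag, exposure indicator, and one covariate.
df = pd.DataFrame({
    "time": [2.1, 5.4, 8.0, 1.3, 6.2, 9.5, 3.8, 7.7, 4.4, 10.0],
    "depression": [1, 0, 1, 1, 0, 0, 1, 0, 0, 0],
    "oral_cancer": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "age": [52, 61, 47, 58, 50, 63, 45, 59, 55, 49],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="depression")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) is the hazard ratio
```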
In oral cancer vs. non-cancer group, the adjusted HR's for the patients with alcohol abuse, anxiety, and sleep disorder for developing depression were 10.631 (p < 0.001), 5.978 (p < 0.0013), and 3.109 (p < 0.001), respectively. The patients with DM and renal disease tended to have a lower risk for developing depression before adjustment; however, this became insignificant after adjustment.
The male patients adjusted HR was 0.687 (p = 0.047) and received treatment in hospital centers (0.573 (p = 0.001)) and regional hospitals (0.697 (p = 0.025)) tended to have a lower risk for developing depression than those who visited a local hospital. An increase in stay by 1 day also increases the risks for developing depression by 1.9% (p < 0.001). Table S2 illustrated that the depression, with the validation by the suicide (ICD-9-CM codes: E950-E959) and the usage of the antidepressants, which revealed that oral cancer is associated with the depression in the subgroups either with or without suicides or the usage of antidepressants. Table S3 depecits the different models on the matching between the oral cancer group and the controls.The adjusted SHR's would be closer to the crude SHR's while the covariates were more extensively matched. Table 3 shows the results of oral cancer subgroup by using Cox regression. The adjusted HR's for developing depression from areas such as and indicated: lips 2.379 (p = 0.001), tongue 2.539 (p < 0.001), gums 2.163 (p = 0.020), floor of mouth 2.791 (p = 0.031), Cheek mucosa 1.872 (p = 0.029), and Others 2.326 (p < 0.001). Table 4 shows the results of sensitivity analysis for tracking interval, in oral cancer vs. non-cancer group, interval group < 1 year 3.392 (p < 0.001), ≧1 year, < 3 years 2.604 (p < 0.001), ≧3 year, < 5 years 1.498 (p = 0.001), and ≧5 year 1.364 (p = 0.013) were associated with a higher risk developing depression and hazard ratio show decline tendency as follow-up time was longer.
Discussion
This study showed that patients with a diagnosis of oral cancer have a nearly 2.2-fold increased risk of developing depression. Kaplan-Meier analysis revealed that the study subjects had a significantly lower 13-year depression-free survival rate than the controls. In addition, it took 1 year for the adjusted HR to become significant; therefore, the duration of 13 years appears to be a reasonable period over which to follow patients with newly diagnosed oral cancer. To the best of our knowledge, this is the first population-based study on the increased risk of depression in patients with oral cancer.
A study done in Hong Kong concluded that approximately 8% of the oral cancer study participants met the clinical cut-off for depression 25 . Another previous study on Taiwan focused on early post-surgery stage patients with oral cancer and showed that 18.2% met the definition of depression 26 . A study about the Taiwan NHIRD further indicated that newly diagnosed oral cavity cancer patients age > = 50 had a significantly lower 5-year depression event-free survival rate with 2.53-fold hazard ratios higher than controls 27 .
Our study results show a 2.224-fold greater risk than controls that had a 2.28% oral cancer diagnosed with depression. Further, our study indicated a greater number of patients, broader age groups, and a longer follow-up period. More detailed data about each subgroup according to ICD9 area and sensitivity analysis was collected and provided. A low prevalence of depression was reported in this study, which might be rooted in the cultural www.nature.com/scientificreports/ context. Taiwan male adults tend to demonstrate a kind of stoicism that may be reflected in a much lower percentage in help-seeking behaviors 28 .
In this study, a patient diagnosed with oral cancer in the lip, tongue, and other cancer subgroups show a higher significant hazard ratio than others ( Table 3). The other cancer subgroup were mostly composed of hard palate, soft palate, and retromolar areas. Oral cancer treatment for the lip subgroup may include surgery that involved removal of these facial features. An altered facial appearance and trouble speaking can lead to social isolation and psychological distress 29,30 .
The tongue and soft palate are the most important organs in the oral cavity and oropharynx for speech and swallowing 31 . Patients who underwent 3/4 or total anterior glossectomy had poorer functional outcomes than Table 1. Characteristics of study in the baseline. Significant values are in bold. P: Chi-square/Fisher exact test on category variables and t-test on continue variables. NT$ New Taiwan Dollars, DM diabetes mellitus, HTN hypertension, COPD chronic obstructive pulmonary disease, IHD ischemic heart disease. *The pair of subsets showed a significant difference at p < 0.05. **The pair of subsets showed a significant difference at p < 0.001. www.nature.com/scientificreports/ www.nature.com/scientificreports/ those who underwent either 1/4 or 1/2 glossectomy 32 . A previous study showed that depression and swallowing function were highly correlated 33 . Patients with tumors of the tongue had worse functional dysphagia than those with cancers in other areas of the oral cavity 34 . Dysphagia contribute to malnutrition, dehydration, weight loss, reduced functional abilities, and fear of eating and drinking may also lead to depression and reduced quality of life 35 . Early identification and management of dysphagia can improve treatment outcomes and reduces the development of depression 31 . The direction of the relationship among depression, swallowing function, and quality of life for HNC patient remains unclear 33 . A recent study showed that dysfunction in salivation, problems with eating, and problems with social contact were major risk factors for the development of depression 36 .
Variables/group
Oral cancer represent a psychosocial challenge for patients, their families, and the diagnosing provider 29 .In Taiwan, the majority of oral cavity cancer patients belonged to a lower economic group and had a long history of smoking, chewing betel nut, and drinking alcohol 37 , which were also correlated with depression [38][39][40] . A synergistic effect on depression might develop from the sudden cessation of the afore-mentioned substance used after receiving the cancer diagnoses 26 .
Our study indicated that most oral cancer patients come from a lower annual income backgrounds, which may contribute to increasing the hazards for subsequent depression. A previous study showed a large percentage of patients who had received oral cancer treatment indicated a significant financial impact (treatment costs, work absences, medication prices, and other miscellaneous expenses), especially for patients with limited financial resources 41 . Financial struggle causes significant stress in patients undergoing oral cancer treatment 42 . Oral cancer survivors do not always return to work after rehabilitation 43 . Patients may encounter several physical challenges that may influence patient employability and an ability to maintain or continue employment 44 .
Oral cancer surgery often causes significant loss of function, and further radiotherapy and chemotherapy caused side-effects such as nausea, vomiting, mucositis, pharyngitis-related dysphagia, masticatory disorders, and possible loss of complete physiological functions 45,46 . During treatment, a higher consumption of painkillers, especially those with narcotic analgesics, has been reported to have a higher risk of depression 47,48 . Therefore, both oral cancer and its treatment disrupts core aspects of daily life, which may further damage a patient's selfesteem and encourage social isolation 49 .
According to another study, there is a high incidence of symptoms of depression in HNCA patients in the first 6 months to 1 year following definitive therapy 33,36 , which is similar to our study result especially in < 1 year group, i.e. they showed a significant 3.392 hazard ratio (Table 4). It remains not clear why depression develops so early in the present and other studies. One possible reason is that the oral cancer might be related to the physical discomfort and psychosocial distress, such as dysphagia, from the facial disfigurement due to the cancer itself, since more than half of the oral cancers occur on the tongues, cheeks and lips in the present study (as shown in 20 . Oral cancer patients with mental illness conferred a 1.58-times risk of mortality were less likely to undergo surgery with or without adjuvant therapy and had a poor prognosis when compared with those without mental illness 26 . Previous research has also shown that depression may compromise the immune system and may affect natural killer cells important to apoptosis, which leaves individuals at greater risk for cancer development 52 . It is noteworthy, Kim et al. found that HNC patients with pre-treatment depression had a decreased 3-year survival rate when compared to their non-depressed counterparts 53 .
The prognostic impact of comorbidity on overall survival is recognized and easily explained, but its impact on disease-specific survival is less well understood 54 . A previous study showed lower survival rates in early-stage oral cancer patients with comorbid conditions, which were attributed to the less aggressive cancer treatments offered to this group of patients 55 . Another population-based study from Denmark concluded that comorbidity had a negative prognostic impact on overall survival rates in HNC patients, but cancer-specific death was not significantly affected by comorbidity, suggesting that patients die from their comorbidities rather than from the cancer 56 . Table 2 showed that oral cancer patients with sleep disorders, anxiety, and alcohol abuse had significant hazard ratios for subsequent depression in the study population, which is consistent with previous studies [57][58][59] . Meanwhile, the study group showed fewer associations with other comorbidities, which might imply that the association between oral cancer and subsequent depression has a higher specificity.

Table 4. Sensitivity analysis for factors of depression by using Cox regression. HR hazard ratio, CI confidence interval, Adjusted HR adjusted for variables listed in Table 2.
Methods
Data sources. In this study, we used data from the NHIRD to investigate the relationship among a matched control group (non-cancer), an oral cancer group, a non-oral cancer group, and depression over a 13-year period using the Longitudinal Health Insurance Database (LHID) in Taiwan (2000-2013). The National Health Insurance (NHI) Program was launched in Taiwan in 1995, and as of June 2009 it included contracts with 97% of the medical providers in Taiwan and approximately 23 million beneficiaries, or more than 99% of the entire population of Taiwan 60 . The details of this program have been well documented in the literature [61][62][63][64] . The NHIRD uses the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes to record diagnoses 65 . All diagnoses of depression were made by board-certified psychiatrists, oral cancers were confirmed by oral surgeons or otolaryngologists, and other cancers were confirmed by medical experts who specialized in oncology. The Catastrophic Illness Certificates (CIC) status was used to verify the diagnosis of oral or other cancers. The NHI Administration randomly reviews the records of 1 in 100 ambulatory care visits and 1 in 20 in-patient claims to verify the accuracy of diagnoses 66 . Several studies have demonstrated the accuracy and validity of diagnoses in the NHIRD 67-69 .

Study design and sampled participants. This study was a retrospective matched-cohort design. Patients with newly diagnosed oral cancer were selected from 1 January 2000 to 31 December 2013 according to ICD-9-CM codes at the sites of the lips, tongue, major salivary glands, gums, floor of mouth, cheek mucosa, and other sites in the oral cavity (listed in Table S1). Patients younger than 20 years, with any previous diagnosis of cancer, or with depression diagnosed before the year 2000 were excluded. The oral cancer group was matched (1:3) with each of the two control groups: a group with cancers other than oral cancer and a non-cancer group. From a total of 989,735 enrolled patients, 3031 participants with oral cancer were identified, together with age-, sex-, and index year-matched control groups of 9093 patients with cancers other than oral cancer and 9093 non-cancer patients, respectively (Fig. 1).
The covariates included gender; age group in 10-year intervals (18-29, 30-39, 40-49, 50-59, 60-69, and ≥ 70 years); geographical area of residence (north, center, south, and east Taiwan); urbanization level of residence (levels 1-4); and monthly income (in New Taiwan Dollars (NTD): < 18,000, 18,000-34,999, and ≥ 35,000). The urbanization level of residence was defined according to the population and various indicators of the level of development. Level 1 was defined as a population > 1,250,000 and a specific designation of political, economic, cultural, and metropolitan development. Level 2 was defined as a population between 500,000 and 1,249,999 and as playing an important role in the political system, economy, and culture. Urbanization levels 3 and 4 were defined as a population between 149,999 and 499,999, and < 149,999, respectively 70 .
Outcome measures. All study participants were followed from the index date until the onset of depression (ICD-9-CM codes: 296.2x, 296.3x, 300.4, and 311), withdrawal from the NHI program, or through to the end of 2013 71 .
Statistical analysis. All analyses were performed using SPSS software version 22 (SPSS Inc., Chicago, Illinois, USA). Student's t-tests and χ 2 tests were used to evaluate the distributions of continuous and categorical variables, respectively. Multivariate Cox proportional hazards regression analysis was used to determine the risk of depression, and the results are presented as hazard ratios (HRs) with 95% confidence intervals (CIs). The difference in the risk of depression between the study (oral cancer) and control groups (non-oral cancer and non-cancer) was estimated using the Kaplan-Meier method with the log-rank test.
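For readers who want to reproduce this kind of survival analysis outside SPSS, the following is a minimal sketch using Python's lifelines library; the data file, column names (time, depression, oral_cancer, age, sex, income), and model specification are hypothetical illustrations, not the authors' actual code or variable list.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table: one row per matched participant.
# Covariates are assumed to be numeric or already dummy-encoded.
df = pd.read_csv("cohort.csv")  # columns: time, depression, oral_cancer, age, sex, income

# Multivariable Cox proportional hazards model: hazard of incident depression,
# adjusted for the matching variables and other covariates.
cph = CoxPHFitter()
cph.fit(df[["time", "depression", "oral_cancer", "age", "sex", "income"]],
        duration_col="time", event_col="depression")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs

# Kaplan-Meier estimate and log-rank test comparing oral cancer cases
# with matched non-cancer controls.
cases, controls = df[df.oral_cancer == 1], df[df.oral_cancer == 0]
km = KaplanMeierFitter()
km.fit(cases["time"], cases["depression"], label="oral cancer")
result = logrank_test(cases["time"], controls["time"],
                      event_observed_A=cases["depression"],
                      event_observed_B=controls["depression"])
print(result.p_value)
```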
Strengths. First, we conducted a population-based study using the LHID retrieved from the NHIRD, with a large sample size over a period of 13 years. The long-term observation lends more credibility to this study. Second, one recent study has validated the accuracy of the diagnostic code for major depressive disorder in the NHIRD, supporting the utilization of psychiatric diagnoses in claims databases 72 . Third, the validity of the diagnostic codes for cancers in the NHIRD has been confirmed, in accordance with the Taiwan National Cancer Registry (NCR) and the Catastrophic Illness Certificates (CIC) status, for oral cancer 73 and for a wide range of other cancers 74 . The accuracy of the diagnostic codes for depression and oral cancer provides a relatively reliable basis for the association between oral cancer and depression. Fourth, the validation by suicide and antidepressant usage and the sensitivity analysis also supported the association between oral cancer and depression. These findings support evidence of an association between oral cancer and the risk of depression.
Limitation. There were several limitations to this study. First, since this is a study of a Taiwanese population, the generalizability of the results to other countries may be limited. Second, in Taiwan, some specialists other than board-certified psychiatrists, such as neurologists or family medicine physicians, might prescribe antidepressants with a diagnosis of depressive disorder. In addition, depression is significantly underdiagnosed via ICD codes, and survey-based screening data would be necessary for a full assessment of depression risk. Third, even though patients with diagnoses of depression and oral cancer could be identified using the insurance claims data, data on the severity, stage, and impact on their caregivers and socio-economic status were not available. Fourth, tobacco cessation was not included in the NHIRD; therefore, it could not be matched in this study. In addition, details regarding cancer therapy and progression were not fully available in the NHIRD. More research may be necessary to clarify the long-term risk of developing depression in patients with oral cancer.
Ethical approval. This study was conducted in accordance with the Code of Ethics of the World Medical
Association (Declaration of Helsinki). The Institutional Review Board of Tri-Service General Hospital approved this study and exempted the need for individual written informed consent (TSGHIRB No: B-109-40).
Informed consent. The Institutional Review Board agreed that informed consent could be waived because these databases are encrypted databases.
Conclusion
In our study, oral cancer diagnoses were associated with a significantly increased risk for the subsequent development of depression. This study serves as a reminder for dentists and physicians that early identification and treatment of depression in oral cancer patients is crucial. Future studies are needed to develop better treatment strategies for oral cancer patients to help manage and stave off the development of depression.
Data availability
The dataset is deposited on the website of Taiwan's National Health Research Database (https://nhird.nhri.org.tw/en/). | 2021-12-09T06:22:23.173Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "db72f36fc8b5c4b4f2f63f596a59b4d6742ecb77",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-02996-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d655e5cdaf8c194719c1378a7a922e6acd287983",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267637811 | pes2o/s2orc | v3-fos-license | Alignment of substance use community benefit prioritization and service lines in US hospitals: a cross-sectional study
Background Non-profit hospitals in the U.S. are required by the 2010 Patient Protection and Affordable Care Act (ACA) to conduct a community health needs assessment (CHNA) every three years and to formulate an implementation strategy in response to those needs. Hospitals often identify substance use as a need relevant to their communities in their CHNAs and then must determine whether to create strategies to address such a need within their implementation strategies. The aim of this study is to assess the relationship between a hospital’s prioritization of substance use within its community benefit documents and its substance use service offerings, while considering other hospital and community characteristics. Methods This study of a national sample of U.S. hospitals utilizes data collected from publicly available CHNAs and implementation strategies produced by hospitals from 2018 to 2021. This cross-sectional study employs descriptive statistics and multivariable analysis to assess relationships between prioritization of substance use on hospital implementation strategies and the services offered by hospitals, with consideration of community and hospital characteristics. Hospital CHNA and strategy documents were collected and then coded to identify whether the substance use needs were prioritized by the hospital. The collected data were incorporated into a data set with secondary data sourced from the 2021 AHA Annual Survey. Results Multivariable analysis found a significant and positive relationship between the prioritization of substance use as a community need on a hospital’s implementation strategy and the number of the services included in this analysis offered by the hospital. Significant and positive relationships were also identified for five service categories and for hospital size. Conclusions The availability of service offerings is related both to a hospital’s prioritization of substance use and to its size, indicating that these factors are likely inter-related regarding a hospital’s sense of its ability to address substance use as a community need. Policymakers should consider why a hospital may not prioritize a need that is prevalent within their community; e.g., whether the organization believes it lacks resources to take such steps. This study also highlights the value of the assessment and implementation strategy process as a way for hospitals to engage with community needs.
Introduction
While the global coronavirus pandemic has monopolized much of the world's healthcare-related attention since 2020, drug-related overdose deaths have continued to rise, largely fueled by the ongoing opioid epidemic. Opioid-related overdose deaths have increased by over eight times since the turn of the millennium; data from the Centers for Disease Control and Prevention put the number of deaths by opioid overdose at nearly 69,000 for 2020 [1]. That same year, American life expectancy fell by almost two years. While this drop was propelled by the death toll from the new virus, it continued a recent downward trend fueled by drug overdose deaths [2]. Nationwide, substance use disorders (SUDs) cost more than $500 billion per year [3]. As the clinical and public health interventions focused on Covid-19 are reabsorbed into our health systems' routine operations, renewed attention should be focused on the multidimensional and cross-sector efforts to develop and scale up evidence-based prevention and treatment strategies for SUDs.
While this issue is national in scope, many meaningful interventions are occurring at the local level, where hospitals are uniquely poised to address the impacts of untreated substance use disorders in their surrounding communities. Hospitals are on the frontlines of treatment because they treat acute overdoses in the emergency department (ED) as well as infections and injuries secondary to substance use [4,5]. This has created a need for hospitals to effectively screen, initiate evidence-based treatment, and transition patients to care in community-based settings [6].
A variety of formal service approaches have been utilized. For example, harm-reduction programs, such as syringe service programs and naloxone distribution, housed within EDs have been shown to increase (in some studies, quite significantly) patients' pursuit of further care, both clinical and behavioral [7]. Some hospitals have developed interdisciplinary addiction consult services, which have shown promise in reducing readmissions [8] and increasing trust in the healthcare system [9], even in a telehealth context utilized by many hospitals during the Covid-19 pandemic [10]. But even without personnel devoted specifically to addiction medicine, the initiation of medications for opioid use disorder (MOUD) within the hospital has demonstrated improved outcomes compared with other interventions, such as referral to outside providers [11].
Hospitals are also addressing SUDs beyond formal service provision, which has received less attention in the literature. Substance use is associated with socioeconomic factors such as unemployment, work-related injuries, and chronic stress [12,13]. Some of hospitals' impact comes through their ambient role as "anchor institutions" that influence their local economic, professional, and social environments, which can in turn moderate some of the social determinants of substance use [14,15]. Hospitals also adopt important targeted preventive initiatives aimed at the public health of their local environment [16], and they are further incentivized to do so via federal tax structures and state policy [17,18]. But whether considering services offered within the hospital or initiatives within the broader community, the question to consider is whether it is need alone that contributes to a hospital's decision to address SUDs, or whether other structural factors come into play.
Nearly 60% of hospitals in the US are non-profit entities and, as such, are required by the 2010 Patient Protection and Affordable Care Act (ACA) to conduct a community health needs assessment (CHNA) every three years and to formulate an implementation strategy (also referred to as a strategy) in response to those needs [19,20]. This process requires hospitals to engage with community stakeholders, particularly those with health expertise, to assess the needs of their community and to make their findings public, but is otherwise largely unstandardized [21]. Hospital CHNAs often identify OUD, or substance use disorders more generally, as needs relevant to their communities [22,23], and hospitals must then determine whether to create strategies to address such a need within their implementation strategy.
Using these public hospital administrative documents as a starting point, the aim of this study was to assess the relationship between a hospital's investment in addressing substance use within its community benefit documents and its formal offerings of substance use services in the clinical setting, while also taking other hospital and community characteristics into consideration. Doing so provides insight into how hospitals make decisions in response to needs identified in the CHNA process, and what factors shape these decisions. To that end, we utilized a novel dataset constructed from a national sample of hospital CHNA and implementation strategy documents, paired with hospital administrative data, which provides the first national comparison of hospital service lines to address SUDs and hospitals' community benefit programs related to substance use.
Sample
The sample for this study consists of a stratified, random sample of 20% of nonprofit hospitals in each state, rounded up to the nearest hospital (N = 601). We relied on a sample, rather than the full hospital population, due to the intensive work of coding CHNA and strategy documents.
We downloaded the publicly available CHNA and strategy from each hospital's website. If these documents could not be located, we contacted the hospital by phone and/or email to request a copy. Hospitals that did not respond were removed from the sample. We utilized a coding strategy that was first used by the research team to analyze CHNAs and implementation strategies from an earlier wave of data for a previous study [22,24]. This involved coding the top five health needs identified in the hospital CHNA, and the top five health needs addressed in the corresponding strategy. We also coded dichotomous variables for whether substance use was addressed in the strategy.
Because a team of coders was involved with this labor-intensive task, we measured intercoder reliability after a period of structured training and supervision. CHNAs and strategies typically follow a structured approach to reporting identified needs and related program implementation. We reached 100% reliability during a process of test coding. In situations where reports did not conform to the typical structure, we met as a team on a biweekly basis to collaboratively code all reports that were flagged. We also compared our sample to the national population of hospitals, using t-tests and chi-square tests, to assess consistency of characteristics such as bed size, system membership, teaching status, and rural location. We found our sample to have a slightly higher average bed size (mean of 236 beds, compared to 178 in the AHA population), but the sample appeared to be representative otherwise.
The dataset was created by merging CHNA data with data from the 2021 American Hospital Association Annual Survey, [25] County Health Rankings, [26] and Area Resource File, [27] in order to incorporate organization and community variables for each hospital within the sample. Hospitals that did not have publicly available CHNA or implementation strategies were removed from this analysis, as were hospitals that did not respond to the substance use services section of the AHA Annual Survey. Hospitals that do not provide this information tend to be systematically smaller and less-resourced than those that do [28,29].
The final analytic sample was 354 hospitals, after removing those with missing data after the merging of datasets and those that did not meet inclusion criteria (e.g., children's hospitals). Because our analysis relies entirely on secondary and organizational data and does not utilize any identifiable private information, our institutional review board indicated that our study did not require human subjects review.
Measures
The dependent variables for this study are whether hospitals offer certain SUD-related services. We analyzed each service category individually, and also the categories collectively as a count variable (0-6; see Table 1 for descriptions of the six service categories). The variables medications for opioid use disorder (MOUD), substance use consulting, and substance use screening were constructed by taking the setting-specific (primary care, ER, inpatient, and extended care) or treatment-specific (OUD and SUD) variables, and then creating dichotomous variables for the service, regardless of setting (Yes/No), before being calculated into the count. The main independent variable is whether the hospital includes substance use disorders (SUD) in its implementation strategy (0 = no, 1 = yes). Additionally, we have included hospital characteristics (bed size, teaching status, and system membership) and county characteristics (community health indicator composite score of 1 to 4, uninsured rate, whether the hospital is in a rural-designated county, and the county overdose mortality rate), as well as controlling for region, in order to understand the organizational and environmental context in which the hospital is operating.
Analytic plan
We employed descriptive statistics to establish frequencies, percentages or medians, and range, where appropriate, for each variable. All frequency and percentage variables were coded as 0/1, and all median and range variables were continuous variables. The analysis also utilized chi-square tests to assess significant associations between individual service offerings and the presence of SUD in a strategy. Additionally, a Poisson regression model was used to assess the relationship between the count of service offerings and the presence of SUD in a strategy, while controlling for hospital and community characteristics.
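A minimal sketch of how such an analytic plan could be implemented in Python (using scipy and statsmodels) is shown below; the file name, column names, and model formula are invented for illustration and are not the authors' code or variable list.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

# Hypothetical analytic file: one row per hospital.
# Illustrative columns: service_count (0-6), sud_in_strategy (0/1), moud_offered (0/1),
# bed_size, teaching, system_member, rural, uninsured_rate, overdose_mortality,
# health_composite, region.
df = pd.read_csv("hospital_sample.csv")

# Chi-square test: one individual service offering vs. SUD prioritization.
table = pd.crosstab(df["moud_offered"], df["sud_in_strategy"])
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)

# Poisson regression for the count of SUD service categories,
# controlling for hospital and community characteristics.
model = smf.glm(
    "service_count ~ sud_in_strategy + bed_size + teaching + system_member"
    " + rural + uninsured_rate + overdose_mortality + health_composite + C(region)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())
```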
Results
Our findings indicate that the majority of hospitals included in the sample (77%) have programming in SUD as part of their implementation strategy. An even higher number of hospitals (92%) offered at least one formal medical service to treat SUD (Table 2).
The mean number of service categories offered by hospitals is 2.57, with two-thirds of the hospitals in the sample offering between one and three of the categories analyzed. Just under 8% reported no availability of the specified services, while just over 5% reported offering all six (see Fig. 1). When considering the service categories individually, each of them was positively associated with SUD being included in the implementation strategy, and all but inpatient services showed statistically significant relationships (see Fig. 2).
Poisson regression analysis showed a significant and positive relationship between SUD prioritization in a hospital strategy and the count of service offerings, indicating that hospitals that prioritize SUD in their community benefit documents offer more services relevant to SUD. Other variables that show significant relationships to the count of services are hospital size, with larger hospitals offering more services; and region, with hospitals in the Northeast offering a greater number of services than the reference region of the South (Table 3).
Discussion
The decisions hospitals make regarding service offerings and community benefit programs are often complex and multi-faceted. The Affordable Care Act introduced new requirements for hospitals to conduct CHNAs and develop implementation strategies, in part, to ensure that hospitals are developing community health programs that are in line with local needs, rather than organizational priorities. Our findings suggest that a hospital's service offerings are significantly associated with whether they invested in addressing SUDs in prior years. Because our data did not allow us to consider whether the service pre-dated the strategy or not, we cannot conclude whether the service is a result of the commitment to address substance use in their community, though that is certainly possible. However, we can recognize a clear relationship between the willingness to engage with a need at a community level and the ability to provide services within the clinical setting. Beyond the number of services offered, it is notable that the only service without statistical significance is the one perhaps considered to be the most traditional (inpatient care) and that the relationship to hospital investments in the community is stronger amongst those services that could be viewed as taking a more preventive approach to SUDs. The findings of this study provide possible context for the decision-making process within hospitals, by indicating that hospitals may consider formal service offerings when addressing substance use in the community. In other words, hospitals may consider their formal service offerings as a foundation to build upon when publicly committing to address SUDs as part of their community benefit requirements. On the other hand, limitations in resources may limit both hospital service availability and community benefit investments. Small hospitals, in particular, may not have the economic resources or staffing to broaden their scope, no matter how high the need in the community may be [30]. Such findings provide support for ideas such as the "liability of smallness" [31]. It is plausible that addressing SUD within the broader community would help to control costs in the long run, given the potential to prevent overdoses and costly infections, but developing a community benefit strategy to address substance use may require considerable startup resources and personnel. Yet, clinical services may not be the only way to address substance use, and there may be hospitals that recognize the need and are willing to engage with it through partnerships with other local stakeholders [32,33]. Such efforts also have the potential to create effective change, and hospitals that feel limited in their service offerings could consider such alternative options.
If hospitals lack the direct resources to create SUD interventions in the community, there is potential for state and local policymakers to direct funds to address such gaps. Many patients with SUD do interact with the medical system for a separate condition or injury, but do not have their primary SUD addressed [5,34]. To effectively address the SUD epidemic, additional resources must be deployed to educate, train, and support health care organizations to undertake both preventive programming and formal service provision.
Limitations
The primary limitation of this study is that it relies on the self-reported nature of implementation strategies. By gathering data from these reports, we are relying on the expressed intentions of hospitals, without verification that hospitals did adhere to their reported strategies. It is possible that some expressed strategies never came to fruition, although hospitals are required to provide an update on each strategy in their next round of reporting. It is also possible that hospitals did not report SUD as a priority at the time of the strategy, but later sought to address the issue in some way. A final limitation is the use of cross-sectional secondary data with a primary dichotomous outcome, with yes/no responses for key variables, which does not allow us to assess the degree of implementation or assess causality between formal service lines and community benefit investments.
Conclusion
Hospital CHNAs and implementation strategies provide extensive information about the needs hospitals identify within their communities, and the steps they intend to take to address these health needs. Analysis of these documents can also help researchers and policymakers understand hospital decision-making and the range of factors associated with such decisions; identify where gaps between those needs and actions exist; and begin to understand what the implications are for improving community health. Organizations may view themselves as equipped to address community health needs when they believe they have the ability and appropriate resources to implement effective interventions. Organizations that recognize limitations in their resources may feel constrained both in their ability to offer key services and in their ability to stretch into new areas of prevention, even when high needs are shown. With this understanding, policymakers can consider new actions to support organizations that have limited resources but serve high-need areas where new preventive efforts are critical.

Fig. 1 Percent of hospitals by count of SUD hospital services

Table 1 Key measures and descriptions
• Medications for opioid use disorder (MOUD), for substance use and/or opioid use disorder: the use of medications, in combination with counseling and behavioral therapies, to provide a "whole-patient" approach to the treatment of substance use disorders. Medications used in MOUD are approved by the Food and Drug Administration (FDA), and MOUD programs are clinically driven and tailored to meet each patient's needs.
• Substance use consultation: addiction/substance use disorder consultation and liaison services in the settings of ER, inpatient care, primary care, and/or extended care. Consultation-liaison psychiatrists, medical physicians, or advanced practice providers (APPs) work to help people suffering from a combination of mental and physical illness by consulting with them and liaising with other members of their care team.
• Substance use screening: substance use disorder screening in the settings of ER, inpatient care, primary care, and/or extended care.
• SUD inpatient services: provides diagnosis and therapeutic services to patients with alcoholism or other drug dependencies. Includes care for inpatient/residential treatment of patients whose course of treatment involves more intensive care than provided in an outpatient setting or where the patient requires supervised withdrawal.
• SUD outpatient services: organized hospital services that provide medical care and/or rehabilitative treatment services to outpatients for whom the primary diagnosis is alcoholism or other chemical dependency.
• Telehealth addiction services: telepsychiatry can involve a range of services including psychiatric evaluations, therapy, patient education, and medication management.

Table 3 Multivariable Poisson regression; association of hospital and community characteristics with hospital count of offered SUD services (N = 354) | 2024-02-14T14:12:14.507Z | 2024-02-13T00:00:00.000 | {
"year": 2024,
"sha1": "19edc405bb247d3ca33826103316f6322164443f",
"oa_license": "CCBY",
"oa_url": "https://ascpjournal.biomedcentral.com/counter/pdf/10.1186/s13722-024-00442-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "47de05816862613442d739e4dab219a51be36309",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6096934 | pes2o/s2orc | v3-fos-license | Notch Signaling in T Helper Cell Subsets: Instructor or Unbiased Amplifier?
For protection against pathogens, it is essential that naïve CD4+ T cells differentiate into specific effector T helper (Th) cell subsets following activation by antigen presented by dendritic cells (DCs). Next to T cell receptor and cytokine signals, membrane-bound Notch ligands have an important role in orchestrating Th cell differentiation. Several studies provided evidence that DC activation is accompanied by surface expression of Notch ligands. Intriguingly, DCs that express the delta-like or Jagged Notch ligands gain the capacity to instruct Th1 or Th2 cell polarization, respectively. However, in contrast to this model it has also been hypothesized that Notch signaling acts as a general amplifier of Th cell responses rather than an instructive director of specific T cell fates. In this alternative model, Notch enhances proliferation, cytokine production, and anti-apoptotic signals or promotes co-stimulatory signals in T cells. An instructive role for Notch ligand expressing DCs in the induction of Th cell differentiation is further challenged by evidence for the involvement of Notch signaling in differentiation of Th9, Th17, regulatory T cells, and follicular Th cells. In this review, we will discuss the two opposing models, referred to as the “instructive” and the “unbiased amplifier” model. We highlight both the function of different Notch receptors on CD4+ T cells and the impact of Notch ligands on antigen-presenting cells.
INTRODUCTION

Following signals from both antigen-presenting cells (APCs) and the micro-environment, activated CD4 + T cells are triggered to initiate secretion of specific effector cytokines. Since the original observation in 1986 that, upon antigenic stimulation, naive CD4 + T cells can differentiate into T helper 1 (Th1) or Th2 effector T cells depending on polarizing cytokine signals (1), various additional Th subsets have been recognized. These include Th9, Th17, Th22, follicular T helper cells (Tfh), and regulatory T cells (Tregs), each characterized by a unique cytokine production profile and a key transcription factor [see for recent review Ref.
(2)]. These Th subsets play a crucial role in appropriate immune responses during host defense, but are also involved in the pathogenesis of inflammatory diseases (3, 4).
Th1 cells mainly produce IFN-γ and TNF-α and are associated with the elimination of intracellular pathogens. Th1 development is facilitated either by IL-12 and STAT4 or by IFN-γ, STAT1, and the key Th1 transcriptional regulator T-box-containing protein (T-bet), encoded by Tbx21 (5). Th2 cells control helminth infections and are implicated in allergic immune responses such as allergic asthma. They are potent producers of Th2 cytokines that induce IgE synthesis (IL-4), recruit eosinophils (IL-5), and cause smooth muscle hyperreactivity and goblet cell hyperplasia (IL-13). Therefore, Th2 cells are central in the orchestration and amplification of inflammatory events in allergic asthma. The master transcription factor Gata3 is necessary and sufficient for Th2 cytokine gene expression in Th2 cells (6). Because Th2 differentiation is driven by IL-4, this raises the paradox that IL-4 is required to generate the cell type that is its major producer. But the origin of the first IL-4 required for Th2 cell induction remains unclear. While a range of cell types are able to produce IL-4, Th2 cell responses can still be generated when only T cells can make IL-4, arguing against an essential role for an external source of IL-4 (7,8).
An accumulating number of studies suggest that the Notch signaling pathway, which also plays a crucial role in early hematopoietic development and at multiple steps of T lineage development, is essential for Th cell differentiation [for recent review see Ref. (9)]. Currently, two opposing models have been proposed that explain how Notch ligands can influence Th subset differentiation. According to the "instructive" model, Jagged and delta-like ligands (DLL) on APCs induce Th2 and Th1 differentiation, respectively (10). Alternatively, the "unbiased amplifier" model proposes that Notch ligands are not instructive but rather function to generally amplify Th cell responses (11). In this review, we will discuss these two contrasting hypotheses on the role of Notch signaling. We will focus on both Notch receptor expressing T cells and Notch ligand-expressing cells.
THE NOTCH SIGNALING PATHWAY
There are five Notch ligands: two Jagged (Jagged1 and Jagged2) and three DLL (DLL1, DLL3, and DLL4), which are bound by four receptors, Notch1-4. For these ligands to be functional, their ubiquitination by Mindbomb1 or Neuralized within the cell is required (12). Details of the Notch signaling pathway are discussed in various excellent reviews (13,14). Briefly, following ligand-receptor binding, the Notch intracellular domain (NICD) is cleaved by a γ-secretase complex and translocates to the nucleus and binds to the transcription factor recombination signal binding protein for immunoglobulin Jκ region (RBPJκ; Figure 1). Finally, additional co-activating proteins are recruited, such as mastermind-like proteins (MAML1-3) and p300 to induce transcription of target genes. Notch signaling does not only induce Th lineage-defining transcription factors and cytokines (described below) but also general pathways critical for T cell activation, including IL-2 production, upregulation of the IL-2 receptor, and glucose uptake (15)(16)(17)(18). Notch signaling potentiates phosphatidylinositol 3-kinase-dependent signaling downstream of the T cell receptor (TCR) and CD28 by inducing activation of Akt kinase and mammalian target of rapamycin, which enhances T cell effector functions and survival and allows them to respond to lower antigen doses (16,19,20). Notch signaling can be enhanced by the protein kinase PKCθ, which is crucial for TCR and CD28 signaling and regulation of the actin cytoskeleton (21). Moreover, upon TCR stimulation NICD interacts with other proteins in the cell in a non-canonical, RBPJκ-independent pathway that leads to NFκB activation (22,23).
Th2 Cells
Notch signaling can initiate Th2 cell differentiation by direct activation of (i) a 3ʹ enhancer of the Il4 gene and (ii) an upstream promoter of Gata3 (10,(53)(54)(55). Several studies using mice expressing a dominant negative (DN) MAML transgene have demonstrated that Notch signaling is essential for Th2 cell differentiation and function (54,56). When γ-secretase inhibitors (GSI) were used to block Notch signaling in OVA-induced asthma or food allergy models, Th2 cytokine production by T cells was inhibited while IFN-γ production was increased (57)(58)(59). Moreover, upon gene ablation of Notch1/Notch2 or RBPJκ, IL-4 production was abrogated and functional responses against parasitical pathogens were reduced (10). At the same time, IFN-γ expression was unaffected, supporting an instructive role for Notch signaling. In line with an instructive model, DLL4 was demonstrated to have a regulatory role in Th2 responses to cockroach allergen, OVA, RSV, or Schistosoma mansoni egg antigen ( Table 1) and in an experimental autoimmune encephalomyelitis (EAE) model (49). A protective Th1 response to RSV in the lungs was converted into an allergic Th2 response by DLL4-neutralization in vivo (36).
However, defective Th2 responses against the intestinal helminth Trichuris muris in DN-MAML transgenic mice were restored when mice received anti-IFN-γ antibodies, indicating that Notch functions to optimize rather than to initiate the Th2 response (11). Moreover, decreased Th2 responses were found Notch ligands act as an unbiased amplifier, thereby sensitizing cells to the environment to ensure that activated CD4 + T cells overcome a Th cell commitment threshold. Notch induces activation, proliferation, enhances anti-apoptotic signals, and is simultaneously recruited to Th1, Th2, and Th17 genes. So, in this hypothesis Notch acts as an enabler of differentiation, whereby the outcome depends on signals of the environment, such as cytokines. when DLL4 was blocked in a mouse model for RSV-mediated allergic asthma exacerbations (60). Finally, we very recently found that whereas mice with RBPJκ-deficient T cells failed to develop HDM-driven allergic airway inflammation (AAI) and airway hyperreactivity, mice with a DC-specific conditional deficiency of both Jagged1 and Jagged2 developed normal AAI following in vivo HDM-exposure (32). Although most studies using bmDCs would support an instructive role for Jagged in the induction of Th2 cell differentiation and function (Table 1), our studies indicate that induction of Th2 responses in HDM-driven AAI is dependent on Jagged expression on other cell types than DCs or alternatively on cooperation between Jagged and DLL on DCs.
Taken together, although several lines of evidence indicate that DCs use the Notch pathway to instruct Th cell fates, Notch may also act as an unbiased amplifier of Th cell differentiation.
Th1 Cells
The signature Th1 genes Ifng and Tbx21 were identified as direct Notch targets (11,61). Mice in which T cells were Notch1/Notch2 double-deficient showed impaired IFN-γ secretion by Th1 cells during in vivo Leishmania major parasite infection, but reports employing DN-MAML transgenic or conditional RBPJκ knockout mice demonstrated that Th1 cell function was unaffected (32,53,54,56,62). Therefore, these findings suggest that signals that regulate Th1 differentiation involve RBPJκ-independent functions of Notch. Studies using GSI showed that Th1 differentiation was impaired in an in vivo EAE model (11,61). By contrast, an increase in Th1 differentiation (and a concomitant decrease in Th2 cytokine production) was seen in an OVA-driven AAI model (58). The interpretation of these apparently conflicting findings remains complicated, because effects of GSI are not limited to Notch signaling and, e.g., also involve HLA-A2 expression and cadherins (63). The capacity of DLL1/DLL4 to induce Th1 cell differentiation is supported by many in vitro and in vivo experiments, as outlined in Table 1. For example, anti-DLL4 antibodies reduced IFN-γ and TNF-α secretion by T cells in vivo (47,49,50). DLL1-blockade decreased Th1 cell numbers in an allograft model (64). Conversely, Jagged1-Fc had no effect and anti-Jagged1 antibodies worsened EAE disease (24,48). Gene ablation of Jagged1 or Mindbomb1, which is critical for expression of functional Notch ligands, did not affect Th1 differentiation in vitro (28,30).
In conclusion, although most studies would support an instructive role for DLL1/DLL4 in Th1 induction, the role of Notch signaling in Th1 cell differentiation remains incompletely understood.
Other T Helper Cell Subsets
Given the increasing complexity of T cell subset biology, it is not unexpected that the bipotential instructional model is not sufficient to fully explain the function of Notch signaling in Th cell differentiation. For example, Notch signaling cooperates with TGF-β to induce Th9 cell differentiation and IL-9 expression via Jagged2 ligation (65). The Rorc, Il17, and Il23r gene promoters are direct Notch targets and, accordingly, Th17 cell differentiation is impaired when Notch signaling is blocked (66)(67)(68)(69)(70). Hereby, DLL1, DLL3, and DLL4 ligands were found to be essential (49,50,52,60,71), but a role for Jagged1 remains controversial (72)(73)(74). Remarkably, addition of DLL3 enhanced Th17 differentiation in vitro (75), although it was shown that DLL3 cannot activate Notch in adjacent cells, but inhibits signaling when expressed in the same cell as the Notch receptor (76). Differentiation and function of Tregs require Notch signaling in T cells (77)(78)(79)(80), whereby both DLL and Jagged ligands can promote Treg expansion (81)(82)(83)(84)(85)(86)(87)(88). Although the key Treg transcription factor Foxp3 is a direct Notch target (89), the role of Notch in Tregs seems rather complex, because targeting of DLL4 or Treg-specific components of the Notch pathway was associated with an increase of Tregs in in vivo autoimmune models (49,90,91). Moreover, hepatocytes and plasmacytoid DCs can induce IL-10 production in T cells via Jagged1 and DLL4, respectively (85,92,93). Finally, the finding that the absence of Notch receptors on T cells or DLL4 on lymph node stromal cells resulted in a deficiency of Tfh cells (94,95), implicates Notch signaling in Tfh cell differentiation.
"inSTRUCTive" veRSUS "UnBiASeD AMPLiFieR" MODeL As summarized in Table 1, considerable evidence supports an "instructive model" whereby pathogens direct Th1 and Th2 differentiation via upregulation of DLL or Jagged ligands on DCs (Figure 1). This implies that different Notch ligands induce distinct cellular responses in T cells, largely by the same signaling components. Although it has been speculated that different ligands might induce qualitatively different signals, e.g., RBPJκdependent or independent, or signals that differ in strength or kinetics (96), the molecular mechanisms involved are currently unknown.
It has been shown that DLL4 induces a stronger Notch signal than DLL1 or Jagged1 (86). Also, the ability of ligands to induce Notch signaling is dependent on the glycosylation status of the extracellular domain of Notch: Notch receptors carrying N-acetylglucosamine preferentially signal via delta ligands, while Jagged binding is inhibited (97). Absence or overexpression of Fringe glycosyltransferase proteins alters Th1 and Th2 differentiation (60,98). Another possibility would be that different ligands preferentially activate different Notch receptors, which may each have unique downstream nuclear targets to induce distinct cellular programs. Indeed, it has been reported that whereas Notch1 and Notch2 activate Th2 differentiation, Notch3 promotes Th1 differentiation and IFN-γ production (46,53). The expression of all these Notch receptors is induced on T cells upon TCR stimulation (62,99,100). Because different NICDs have different target gene preferences (101), distinct ligand-receptor combinations may produce quantitatively or qualitatively distinct signals (102). However, this is not supported by the findings that both Th1 and Th2 differentiation is affected in T cells that are Notch1/Notch2 double-deficient (53,62) and that retroviral expression of Notch1 as well as Notch3 was associated with increased Th1 responses (46,61). This issue is further complicated by the observation that individual Notch receptors are up regulated with different kinetics (103). It is, therefore, conceivable that they have distinct functions depending on the phase of the response.
Several studies are in apparent conflict with the "instructive model. " For example, DLL were reported to promote Th2 responses or Jagged ligands were implicated in Th1 induction (60,104). Neither Jagged1 nor DLL1 could instruct Th2 or Th1 cytokine differentiation in vitro in the absence of polarizing cytokines (105). Importantly, Bailis et al. showed that Notch signaling simultaneously induced Th1, Th2, and Th17 gene transcription, also under polarizing conditions that were described to favor only one of the differentiation outcomes (11). In addition, Notch signaling via DLL4 was shown to boost antigen sensitivity of CD4 + T cells via promoting co-stimulatory signals in T cells (16). Together, this would suggest that Notch acts as a co-stimulating factor that orchestrates multiple Th cell programs by sensitizing cells to exogenous cytokines, thereby ensuring that activated CD4 + T cells overcome a Th cell commitment threshold. In support of a role for Notch as an unbiased amplifier (Figure 1), Notch signaling was shown to be required for optimal T cell expansion, CD25 and IL-2 induction in vitro of both Th1 and Th2 cells (15,16,18,105). Finally, Notch signals promote survival by enhancing anti-apoptotic signals and glucose uptake (17,106).
It is conceivable that minor differences in experimental design or conditions form the basis of the discrepant results that support one of the two opposing models for Notch function in Th differentiation. Many studies on Notch ligands on APCs have employed GM-CSF cultured bmDCs (Table 1), which were recently shown to contain not only DCs, but also monocyte-derived macrophages (107). In our own studies, we found that Jagged expression was required for the induction of a Th2 response in the lung when in vitro HDM-pulsed bmDCs were used for allergen sensitization, but not when mice were in vivo sensitized by endogenous airway DCs (32). Moreover, studies are complicated by the finding that Notch ligands are not only induced on DCs, but also on macrophages, B and T cells, or lymph node stromal cells (24,95,99,108). Stimulation via CD46 and CD3 was shown to up regulate Jagged1 on human T cells (109), suggesting that T cells can provide Notch signals to each other. However, it is of note that normally several mechanisms, including lateral inhibition, are used to regulate Notch activity when similar cell types express both ligand and receptor. By lateral inhibition signal-sending cells actively repress their Notch signaling pathway (110), which would hamper concerted Notch-mediated differentiation and polarization of adjacent T helper cells. Finally, Notch receptors can become activated independent of ligand binding (111). Indeed, spontaneous Notch cleavage has been observed upon TCR triggering (15,18,22). Ligand-independent Notch signaling would also be supported by the recent identification of a PKCθ-dependent mechanism that enhances Notch activation (21). More experiments targeting Notch ligands in various cells types are required to determine how the Notch signaling pathway is activated in T cell subsets in vivo.
Another concern is that some gain-of-function approaches, involving overexpression of Notch receptors or ligands, may be associated with strong or prolonged, less physiological Notch signals. In this context, it is interesting that variable Notch signal strength allows induction of distinct responses by the same signaling pathway (112,113), paralleling previous experiments demonstrating Th1 or Th2 cells are induced by strong or weak TCR signals, respectively (114,115). Therefore, in studies on the effects of Notch ligands on Th differentiation, it may be critical to use a range of antigen doses. Finally, since it has recently been shown that Th2 inflammation also crucially involves IL-4producing Tfh cells (116,117), findings of impaired in vivo Th2 cell differentiation may point at Tfh rather than Th2 defects and should, therefore, be interpreted with care.
CONCLUSIONS AND FUTURE DIRECTIONS
Given the increasing number of characterized Th subsets, it is unlikely that Notch signaling simply acts as a bimodal molecular switch for the induction of either Th1 and Th2 differentiation, based on DLL and Jagged expression on DCs, respectively. Nevertheless, many studies described above support the notion that individual Notch ligands have differential effects on Th cell differentiation, which cannot be explained by the unbiased amplifier model. The two models, however, may not necessarily be mutually exclusive. Effects of Notch signaling could be quite different during induction and during maintenance of Th subset differentiation. Moreover, the finding that there is quite some plasticity between Th subsets (2) and that Th2 differentiation may involve a Tfh phase has further complicated the role of Notch signaling in Th differentiation. We also conclude that the elucidation of the role of Notch ligands on particular cell types requires comprehensive in vivo studies, using cell-specific knockout of individual Notch ligands or combinations.
Since Notch signaling is involved in the differentiation of basically all Th subsets, it could serve as a potential therapeutic target, for example, by inhibiting Th2 responses in allergies or Th1/Th17 responses in autoimmune diseases. However, because effects of GSI are not limited to Notch signaling, it will be valuable to develop more specific compounds targeting Notch signaling components. Indeed synthetic, cell-permeable stabilized peptides that target a critical protein-protein interface in the Notch transactivation complex (118)(119)(120) as well as specific antibodies that target Notch receptors (121)(122)(123) or Notch ligands (24,124) have been designed. Promising results were obtained with Notch pathway blocking antibodies in cancer patients (125) and future studies should explore whether these antibodies are beneficial for allergic or autoimmune patients.
Interestingly, GSI administration during only the challenge in asthma models was sufficient to decrease Th2 cytokine production (58,59). These findings imply that Notch signaling is not likely critical to initiate IL-4 production in activated T cells and thus the initial source of IL-4, for example in AAI, remains unclear. While several cells including basophils, Tfh cells, NKT cells, and ILC2 are capable of producing IL-4 (55,116,(126)(127)(128)(129)(130)(131), mice deficient for NKT cells, ILC2, or basophils are still capable of inducing Th2 responses (132)(133)(134), suggesting that IL-4 production by Tfh cells could be crucial for Th2 cell induction. Nevertheless, the finding that in animal models allergic disease symptoms are reduced by GSI administration during challenge only indicates that Notch signaling is important in maintaining rather than inducing Th2 cell responses. This makes Notch signaling an interesting target for development of therapeutic strategies in allergic asthma.
AUTHOR CONTRIBUTIONS
IT and MP performed the literature research, wrote the paper, and designed the figure. RH critically revised the manuscript. | 2017-05-17T19:04:17.857Z | 2017-04-18T00:00:00.000 | {
"year": 2017,
"sha1": "351861e5f833dd545304cdbaaf8c95407a945d38",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2017.00419/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "351861e5f833dd545304cdbaaf8c95407a945d38",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221684826 | pes2o/s2orc | v3-fos-license | Solutions for impulsive fractional pantograph differential equation via generalized anti-periodic boundary condition
Introduction
Fractional differential equations are widely applicable in fields such as chemistry, mechanics, fluid systems, electronics, and electromagnetics; for an overview, the reader should see the literature on fractional differential equations, e.g., [3,4,12,17,21,24,26,[33][34][35][36]39] and the references therein. Fractional and impulsive differential equations have been used as a powerful method to gain insight into certain emerging problems from various science and engineering fields [32,40,46]. In particular, much attention has been given in recent years to theoretical studies such as the existence, uniqueness, and stability of analytical solutions (we refer, for example, to [1, 2, 6, 14-16, 38, 43]).
Several works on boundary value problems for impulsive differential equations with anti-periodic boundary conditions have been conducted, and results on the existence of solutions for mixed-type fractional integro-differential equations have been established (see, e.g., [5,19,30,42,51,52,54,55]). More recently, the theory of existence, uniqueness, and stability analysis for impulsive fractional differential equations with different kinds of fractional operators and initial/boundary conditions has attracted the attention of many researchers; for an overview of the literature, we refer the reader to [8-11, 49, 50]. For example, in [47] Wang and Lin investigated the impulsive fractional anti-periodic boundary value problem with constant coefficients. Recently, motivated by [47], Zuo et al. [56] investigated existence results for an equation with impulsive and anti-periodic boundary conditions described by

c D q u(t) + λu(t) = f (t, u(t), Tu(t), Su(t)), t ∈ J ′ = J\{t 1 , . . . , t m },
Δu| t=t k = I k (u(t k )), k = 1, 2, . . . , m,
u(0) = -u(1), (1.1)

where c D q is the Caputo fractional derivative of order q ∈ (0, 1), λ > 0, I k ∈ R, 0 = t 0 < t 1 < · · · < t m < t m+1 = 1, f ∈ C(J × R 3 , R), J = [0, 1], R is the set of real numbers, Δu| t=t k denotes the jump of u(t) at t = t k , and S and T are linear operators. Under Lipschitz and nonlinear growth conditions, they established sufficient conditions for the existence and uniqueness of a solution to (1.1) using the Banach mapping principle and Krasnoselskii's fixed point theorem.
On the other hand, in the deterministic situation there is a very special delay differential equation known as the pantograph equation, given by

x ′ (t) = ax(t) + bx(λt), t ≥ 0, x(0) = x 0 ,

where 0 < λ < 1. It is used in various fields of applied and pure mathematics, such as number theory, probability, dynamical systems, and quantum mechanics. In particular, important studies were conducted on the properties of both the analytical and numerical solutions of this equation (see [18,22,23,31]); multi-pantograph and generalized nonlinear multi-pantograph equations were also recently studied in [29,37,53]. Owing to the increasing interest in and importance of this equation, Balachandran et al. [13] established the solution of an abstract fractional pantograph equation via fractional calculus techniques and fixed point methods. They considered the following equation:

c D p x(t) = f (t, x(t), x(λt)), t ∈ J = [0, 1], x(0) = x 0 , (1.2)

where 0 < p < 1, 0 < λ < 1, and f : J × X × X → X is a continuous function.
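Because the delayed argument λt with 0 < λ < 1 always points backward in time, the classical (integer-order) pantograph equation can be integrated forward numerically using only values that have already been computed. The sketch below is our illustration, not part of the original paper; the coefficients, step size, and interpolation scheme are arbitrary example choices.

```python
import numpy as np

def solve_pantograph(a, b, lam, x0, T=1.0, h=1e-3):
    """Forward Euler for x'(t) = a*x(t) + b*x(lam*t), x(0) = x0, 0 < lam < 1.

    Since lam*t <= t, the delayed value x(lam*t) is interpolated from the
    part of the solution that has already been computed.
    """
    n = int(T / h)
    t = np.linspace(0.0, n * h, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        # interpolate x at the delayed time lam*t[i] from known grid values
        x_delayed = np.interp(lam * t[i], t[: i + 1], x[: i + 1])
        x[i + 1] = x[i] + h * (a * x[i] + b * x_delayed)
    return t, x

# Example run with a = -1, b = 0.5, lam = 0.5, x(0) = 1
t, x = solve_pantograph(a=-1.0, b=0.5, lam=0.5, x0=1.0)
print(x[-1])  # approximate value of x(1)
```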
In this paper, motivated by [13,47,56], we consider the following impulsive fractional pantograph differential equation:

c D α x(t) + λx(t) = f (t, x(t), x(γ t)), t ∈ J ′ = J\{t 1 , . . . , t k }, J = [0, 1],
Δx| t=t m = I m (x(t m )), m = 1, 2, . . . , k,
ax(0) + bx(1) = 0, (1.3)

where c D α is the Caputo fractional derivative of order α ∈ (0, 1), λ > 0, 0 < γ < 1, f ∈ C(J × R × R, R), I m ∈ C(R, R), a and b are real constants, 0 = t 0 < t 1 < · · · < t k < t k+1 = 1, and Δx| t=t m = x(t + m ) − x(tm ), with x(t + m ) and x(tm ) representing the right and left limits of x(t) at t = t m .
The main aim of this paper is to establish the existence and uniqueness of solutions for the boundary value problem (1.3), by using the contraction principle of Banach and the fixed point theorem of Krasnoselskii. Presently, different techniques have been extensively applied in obtaining solutions to the impulsive fractional differential equations (see, e.g., [20,41,44]). However, in this article, we adopt the solution approach used in [47] to solve the impulsive fractional equation (1.3).
We highlight the main contributions of this paper as follows: • We consider the impulsive pantograph fractional differential equation.
• We consider more general anti-periodic boundary value problems with constant coefficients. In general, this paper contributes toward the development of qualitative analysis of fractional differential equations.
This paper is organized as follows: the statement of the problem, preliminaries, and some useful lemmas that will be required in the later sections are presented in Sect. 2; the main existence and uniqueness results are established in Sect. 3, followed by illustrative examples; conclusions are drawn in the final section.
Preliminaries and lemmas
Let PC(J, R) = {x : J → R : x is continuous at t ∈ J \ {t_1, . . . , t_k}, and x(t_m^+) and x(t_m^-) exist, m = 1, 2, . . . , k, with x(t_m^-) = x(t_m)} denote the space of piecewise continuous real-valued functions on the interval J. Then, clearly, PC(J, R) is a Banach space with the norm ‖x‖_PC = sup{|x(t)| : t ∈ J}, and let the norm of a measurable function ϕ : J → R be defined by:
‖ϕ‖_{L^p(J)} = (∫_J |ϕ(t)|^p dt)^{1/p}, 1 ≤ p < ∞.
Then L^p(J, R) is a Banach space of Lebesgue-measurable functions ϕ with ‖ϕ‖_{L^p(J)} < ∞.
Definition 2.1 (see [26]) The fractional integral of order ρ with the lower limit zero for a function g is defined as
I^ρ g(t) = (1/Γ(ρ)) ∫_0^t (t - s)^{ρ-1} g(s) ds, t > 0, ρ > 0,
provided the right-hand side is pointwise defined on [0, ∞), where Γ(·) denotes the Gamma function.
Definition 2.2 (see [26]) The Riemann-Liouville derivative of order ρ with the lower limit zero for a function g is defined as
D^ρ g(t) = (1/Γ(n - ρ)) (d^n/dt^n) ∫_0^t (t - s)^{n-ρ-1} g(s) ds, t > 0, n - 1 < ρ < n,
provided the function g is absolutely continuous up to order (n - 1) derivatives, where Γ(·) denotes the Gamma function.
Definition 2.3 (see [26]) The Caputo derivative of order ρ > 0 with the lower limit zero for a function g is defined as
c D^ρ g(t) = D^ρ (g(t) - Σ_{j=0}^{n-1} (t^j/j!) g^(j)(0)), t > 0, n - 1 < ρ < n.
Remark 2.4 (see [28]) If g ∈ C^n[0, +∞), then the Caputo derivative can be written as
c D^ρ g(t) = (1/Γ(n - ρ)) ∫_0^t (t - s)^{n-ρ-1} g^(n)(s) ds = I^{n-ρ} g^(n)(t), t > 0, n - 1 < ρ < n.
Since in this paper we deal with an impulsive problem, Definition 2.3 is appropriate.
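As a rough numerical illustration of Definitions 2.1-2.3 and Remark 2.4, the fractional integral and the Caputo derivative of order ρ ∈ (0, 1) can be approximated on a grid by a simple midpoint quadrature. The following Python sketch is only illustrative; the function names, step counts, and the finite-difference approximation of g' are choices made here, not part of the text.

import math

def frac_integral(g, t, rho, n=2000):
    # Midpoint rule for I^rho g(t) = 1/Gamma(rho) * int_0^t (t - s)^(rho - 1) g(s) ds
    h = t / n
    total = 0.0
    for j in range(n):
        s = (j + 0.5) * h
        total += (t - s) ** (rho - 1) * g(s) * h
    return total / math.gamma(rho)

def caputo_derivative(g, t, rho, n=2000, eps=1e-6):
    # For 0 < rho < 1: cD^rho g(t) = 1/Gamma(1 - rho) * int_0^t (t - s)^(-rho) g'(s) ds
    def gprime(s):
        return (g(s + eps) - g(s - eps)) / (2 * eps)   # central-difference derivative
    h = t / n
    total = 0.0
    for j in range(n):
        s = (j + 0.5) * h
        total += (t - s) ** (-rho) * gprime(s) * h
    return total / math.gamma(1.0 - rho)

# Check against the exact values I^0.5 t^2 = 2 t^2.5 / Gamma(3.5) and cD^0.5 t^2 = 2 t^1.5 / Gamma(2.5)
g = lambda s: s ** 2
t, rho = 1.0, 0.5
print(frac_integral(g, t, rho), 2 * t ** (2 + rho) / math.gamma(3 + rho))
print(caputo_derivative(g, t, rho), 2 * t ** (2 - rho) / math.gamma(3 - rho))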
Lemma 2.6 collects properties of the Mittag-Leffler function E_{α,α}(-λt^α) that are used below, in particular its continuity in t on J (item (2)) and a further estimate valid for any λ > 0 and t_1, t_2 ∈ J (item (3)).
Lemma 2.7 (see [27]) Let P be a closed, convex, and nonempty subset of a Banach space X, and let F_1, F_2 be operators such that: (1) F_1 x + F_2 y ∈ P whenever x, y ∈ P; (2) F_1 is compact and continuous; (3) F_2 is a contraction mapping. Then there exists z ∈ P such that z = F_1 z + F_2 z.
Lemma 2.8 (see [48]) Let X be a Banach space, and let J = [0, T]. Suppose that W ⊂ PC(J, X) satisfies the following conditions: (1) W is a uniformly bounded subset of PC(J, X); (2) W is equicontinuous on each subinterval (t_m, t_{m+1}), m = 0, 1, . . . , k; (3) the sets {x(t) : x ∈ W}, {x(t_m^+) : x ∈ W}, and {x(t_m^-) : x ∈ W} are relatively compact subsets of X. Then W is a relatively compact subset of PC(J, X).
Lemma 2.9 (see [47]) Let g : J → R be a continuous function. The function u given by the Mittag-Leffler-type integral representation of [47] is a solution of the associated linear impulsive fractional equation. It follows from Lemma 2.9, by using the boundary condition ax(0) + bx(1) = 0, that the solution of (1.3) can be expressed by a corresponding integral representation.
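Because the solution representation and the equicontinuity argument below rely on the two-parameter Mittag-Leffler function E_{α,α}(-λt^α), a short Python sketch of its evaluation by a truncated power series may be helpful; the truncation length and tolerance are illustrative choices.

import math

def mittag_leffler(z, alpha, beta, max_terms=200, tol=1e-12):
    # E_{alpha,beta}(z) = sum_{j>=0} z^j / Gamma(alpha*j + beta), truncated when terms are tiny
    total = 0.0
    for j in range(max_terms):
        term = z ** j / math.gamma(alpha * j + beta)
        total += term
        if abs(term) < tol:
            break
    return total

alpha, lam = 0.5, 2.0
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(t, mittag_leffler(-lam * t ** alpha, alpha, alpha))   # varies continuously in t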
Main results
We impose the following hypotheses. (C_1) There exists a positive constant L_1 such that |f(t, x, u) - f(t, y, v)| ≤ L_1(|x - y| + |u - v|) for all t ∈ J and x, y, u, v ∈ R. (C_2) There exists a positive constant L_2 such that |I_m(x) - I_m(y)| ≤ L_2|x - y| for all x, y ∈ R and m = 1, . . . , k.
Theorem 3.1 Assume that conditions (C_1) and (C_2) hold and that the constant η appearing in the proof below satisfies η < 1. Then the boundary value problem (1.3) has a unique solution.
Proof Define a mapping T : PC(J, R) → PC(J, R) through the solution representation obtained above; we then show that T has a fixed point, which is a solution of problem (1.3). The expression for (Tx)(t) involves integrals of f(s, x(s), x(γs)) against the Mittag-Leffler kernel together with the impulsive and boundary terms.
Secondly, we show that the mapping T is a contraction. Indeed, given any x, y ∈ H_r and each t ∈ J, we estimate |(Tx)(t) - (Ty)(t)| by the corresponding integrals of |f(s, x(s), x(γs)) - f(s, y(s), y(γs))| and apply conditions (C_1) and (C_2).
This implies that ‖Tx - Ty‖_PC ≤ η‖x - y‖_PC. Thus, T is a contraction, and we conclude the proof by applying the Banach contraction principle.
Theorem 3.2 Assume that f is continuous and that conditions (C_4) and (C_5) below hold. Then problem (1.3) has at least one solution.
Proof It is easy to see that the set H_r = {x ∈ PC(J, R) : ‖x‖_PC ≤ r} is a closed, bounded, and convex set in PC(J, R) for all r > 0. Let M and N be two operators on H_r, defined by splitting the solution map (3.4) into two parts. It follows from condition (C_4) and the Hölder inequality that, for any x ∈ H_r and each t ∈ J, the term (Mx)(t) is bounded; repeating the same procedure, we obtain a corresponding bound for (Nx)(t). Next, we show that there exists r_0 > 0 with Mx + Ny ∈ H_{r_0} for x, y ∈ H_{r_0}. Suppose by contradiction that for each r > 0 there exist x_r, y_r ∈ H_r and t_r ∈ J such that |(Mx_r)(t_r) + (Ny_r)(t_r)| > r.
Dividing both sides by r and taking the lower limit as r → +∞ yields an inequality that contradicts condition (C_5). Hence, there exists r_0 such that Mx + Ny ∈ H_{r_0} for all x, y ∈ H_{r_0}. Thus, for all t ∈ J and x, y ∈ H_r, one obtains the corresponding contraction estimate: denoting by η* the resulting constant, which contains the factor (2 + κ), it follows from (C_5) that 0 < η* < 1 and ‖Nx - Ny‖_PC ≤ η*‖x - y‖_PC. Thus N is a contraction mapping.
Since f is continuous, the operator M is also continuous. To show that M is compact, we apply the same procedure as in Theorem 3.1. One can easily see that M(H_r) is uniformly bounded on PC(J, R). We now show that M(H_r) is equicontinuous on J_m (m = 1, . . . , k). Let Φ = J × H_r × H_r and f* = sup_{(t,x,y)∈Φ} |f(t, x, y)|; then, for any t_m < ξ_2 < ξ_1 < t_{m+1}, we estimate |(Mx)(ξ_1) - (Mx)(ξ_2)|. By (2) of Lemma 2.6, E_{α,α}(-t^α λ) is continuous on J, and thus uniformly continuous on J; hence, there is a sufficiently small δ > 0 such that the corresponding Mittag-Leffler terms differ by an arbitrarily small amount whenever |ξ_1 - ξ_2| < δ. Let ρ_1 = (2 - α)/(2(1 - α)) and ρ_2 = (2 - α)/α. Then ρ_1 > 1, ρ_2 > 1, and 1/ρ_1 + 1/ρ_2 = 1. Applying the Hölder inequality to the remaining integral terms shows that |(Mx)(ξ_1) - (Mx)(ξ_2)| → 0 as ξ_2 → ξ_1, which implies that M is equicontinuous on the interval J_m. Hence, M(H_r) is relatively compact on J. It now follows from the Arzelà-Ascoli theorem that M is compact. Therefore, we conclude from Lemma 2.7 that problem (1.3) has at least one solution.
We now present some examples to illustrate our result.
Example 1 Consider a fractional pantograph differential equation with impulsive and anti-periodic boundary conditions of the form (1.3). By comparing with problem (1.3), we identify the data α, λ, γ, f, I_m, a, and b. Then, for any x, y, u, v ∈ R and t ∈ J, the nonlinearity f satisfies a Lipschitz estimate of the type required in (C_1), with the bound expressed in terms of ‖x - y‖_PC. A simple calculation then shows that the resulting constant is strictly smaller than one, which implies that the hypotheses of Theorem 3.1 are satisfied and the problem has a unique solution.
Conclusions
Using the Banach and Krasnoselskii's fixed point theorems, we have established the existence and uniqueness of the solution of a fractional pantograph differential equation with impulsive and generalized anti-periodic boundary conditions. We note that it would be interesting to study this kind of problem for a certain kind of generalized fractional derivatives and integrals [7,25]. In addition, this is the first paper, to the best of our knowledge, dealing with a fractional pantograph differential equation with impulsive and generalized anti-periodic boundary conditions. Therefore, our result improves and generalizes the results in [13] and can be considered as a contribution to the development of the qualitative analysis of fractional differential equations. Lastly, we offered two examples to illustrate the obtained results. | 2020-09-10T11:18:39.003Z | 2020-09-09T00:00:00.000 | {
"year": 2020,
"sha1": "48a48159ecb4dcb46b17fe003203d04099948804",
"oa_license": "CCBY",
"oa_url": "https://advancesindifferenceequations.springeropen.com/track/pdf/10.1186/s13662-020-02887-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "48a48159ecb4dcb46b17fe003203d04099948804",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
61896752 | pes2o/s2orc | v3-fos-license | Operational Fault Diagnosis in Industrial Hydraulic Systems Through Modeling the Internal Leakage of its Components
In this study, a model of a high pressure hydraulic system was developed using the bond graph method to investigate the effect of the internal leakage of its main components (pump, cylinder and 4/2 way valve) on the operational characteristics of the system under various loads. All the main aspects of the hydraulic circuit (like the internal leakages, the compressibility of the fluid, the hydraulic pressure drop, the inertia of moving masses and the friction of the spool) were taken into consideration. The results of this modeling were compared with the experimental data taken from the literature and from an actual test platform installed in the laboratory. Modeling and experimental data curves correlate very well in form, magnitude and response times for all the system’s main parameters. This proves that the present method can be used to accurately model the response and operation of hydraulic systems and can thus be used for operational fault diagnosis in many cases, especially in simulating fault scenarios when the defective component is not obvious. This is very important in industrial production systems where unpredictable shutdowns of the hydraulic machinery have a considerable negative economic impact on cost.
General Description
The bond graph method is used to create a model of a high pressure hydraulic system in order to simulate various degrees of internal leakage in its 4/2 way direction control valve and hydraulic cylinder under different loads. For the modeling, all the main aspects of the hydraulic system were taken into consideration: the internal leakage in the pump, the cylinder, and the 4/2 way valve; the leakage flow caused both by the pressure gradient (Hagen-Poiseuille flow) and by the relative motion of the piston to the cylinder body (Couette flow); the compressibility of the hydraulic fluid; the pressure drop due to hydraulic losses; the inertia of moving masses in the cylinder; and the friction in the hydraulic cylinder spool. The results of modeling were compared against the experimental data from the literature and from an actual hydraulic test platform installed in the laboratory. The goal is to investigate whether bond graph modeling can be used as a tool for proactive fault finding in high pressure hydraulic systems, by accurately simulating various operating states of the system.
The fundamentals of hydraulic systems can be found in the textbooks of Meritt (1991), Costopoulos (2009) and Rabie (2009), while the subject of condition monitoring techniques for industrial high pressure hydraulic systems is examined by Skarmea (2003). As far as the modeling of hydraulic systems is concerned, in-depth research has been conducted by Kaliafetis and Costopoulos (1995) on the modeling of axial piston variable displacement pumps, whereas Noorbehesht and Ghaseminejad (2013) performed numerical simulation using a computational fluid dynamics method.
The Bond Graph method, albeit comparatively new, is rather popular for modeling physical systems, but its applications in high pressure hydraulic systems remain somewhat limited. A comparison of the Bond Graph method with object oriented modeling is done by Borutzky (2002), where it was found that both methods can be equally reliable, if they are used in appropriate applications and effectively implemented. The basics of bond graph based control and substructuring are analyzed by Gawthorp et al. (2009), while Krishnaswamy and Li (2006) used the Bond Graph method in the passive teleoperation of a hydraulic backhoe. Additional work in the field of bond graph modeling of nonlinear hydro-mechanical systems has been conducted by Margolis and Shim (2005), while Li and Ngwempo (2005) have used bond graph modeling in electro-hydraulic valves. The dynamic response of a high pressure hydraulic system using bond graphs is examined by Barnard and Dransfield (1977), while Dransfield (1981) dealt with the application of bond graphs to hydraulic systems. The concept of using bond graphs for the study of faults and failures in hydraulic systems has been investigated by Athanasatos and Costopoulos (2011) where bond graph modeling is used to predict the response of a hydraulic system with irregular motion of its spool. In the area of maintenance, quality and reliability of industrial systems several works have been published: Zotos and Costopoulos (2012) examined precision maintenance whereas Siam et al. (2012) examined the total quality management, Valarmathi and Chilambuchelvan (2012) investigated a wind turbine through a power quality analysis and Zotos and Costopoulos (2008) give specific orders for the increase of numerical stability and accuracy.
The Hydraulic System to be Modeled
The line drawing of the hydraulic system used as the test platform can be seen in Fig. 1. The main components and measuring instruments of the hydraulic system are explained in Table 1. The system can be used to simulate a variety of operating conditions. Due to the focus on the internal leakage of the hydraulic cylinder (26) and the 4/2 way direction control valve (11) additional flow control valves have been installed in order to simulate an increase of the internal leakage of these two components. The internal leakage on the hydraulic cylinder is simulated via throttle valve (H) which, when open, allows some amount of flow to bypass the cylinder. Similarly, the internal leakage in the 4/2 way valve is simulated via throttle valve (I) which, when open, allows flow directly from port "P" of the valve to port "T".
Symbol Nomenclature
The main subscripts that appear in the bond graph components and the resulting equations are the following:
l = internal leakage
m = mass inertia
o = outlet
P = valve "P" port
p = pump
T = valve "T" port
u = Coulomb friction
v = valve
rv = relief valve
set = opening pressure
Modeling the System's Main Components
To simplify the modeling procedure, each main component of the system was modeled separately. Due to the focus on the hydraulic cylinder and the direction control valve, detailed models are developed for these two components. In order to avoid an overly complex model of the hydraulic system, simpler models were used for the rest of the components, but without any sacrifice in terms of modeling accuracy. The components of the hydraulic system were modeled as following.
Electric Motor, Pump and Main Relief Valve
The electric motor is modeled as a constant flow source (angular velocity) mated to a transformer (pump), resulting in constant flow rate Q p . The pressure at the pump outlet is modeled as a "0" junction, on which the following are considered: The compressibility of the hydraulic fluid at the pump outlet (C p ), the pump internal leakage flow path (R lp ) and the flow path of fluid when the main relief valve of the system opens (R rv ).
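To illustrate how the "0" junction at the pump outlet translates into an equation, the pressure build-up can be viewed as a fluid-capacitance balance dP/dt = (β/V)(Q_p - Q_leak - Q_rv - Q_out). The Python sketch below integrates this balance with an explicit Euler step; the parameter values and the linear leakage and relief-valve laws are illustrative assumptions, not values taken from the test platform.

# Illustrative pump-outlet pressure dynamics (explicit Euler), assumed parameter values
beta = 1.4e9          # bulk modulus of the oil [Pa]
V = 2.0e-4            # fluid volume at the pump outlet [m^3]
Qp = 1.0e-3           # pump flow rate [m^3/s]
k_leak = 2.0e-12      # pump internal leakage coefficient [m^3/(s*Pa)]
P_set = 200e5         # relief valve opening pressure [Pa]
k_rv = 1.0e-10        # relief valve flow gain above the set pressure [m^3/(s*Pa)]
Q_out = 0.8e-3        # flow delivered to the rest of the circuit [m^3/s]

P, dt = 0.0, 1e-5
for step in range(20000):
    Q_leak = k_leak * P
    Q_rv = k_rv * max(P - P_set, 0.0)        # relief valve only passes flow above P_set
    dPdt = (beta / V) * (Qp - Q_leak - Q_rv - Q_out)
    P += dPdt * dt
print("steady pump-outlet pressure [bar]:", P / 1e5)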
Section from Pump Outlet to 4/2 Way Valve Inlet
The pressure drop from the pump outlet until the 4/2 way valve inlet is modeled as a negative effort (pressure) source mated to a common flow "1" junction.
4/2 Way Direction Control Valve
The 4/2 way direction control valve is modeled as a series of fluid flow paths, each with its own flow resistance. The flow resistance is a function of the valve spool position, meaning that the flow resistance varies from infinite (when a particular passage is blocked for a certain operating position of the valve) to a minimum value, when the spool is set in place. Also, the internal leakage of the 4/2 way valve is modeled as a fluid flow path from port "P" to port "T" of the valve.
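A minimal sketch of how a spool-position-dependent metering edge and a fixed P-to-T leakage path could be expressed is given below; the orifice law, the laminar leakage model, and all numerical values are illustrative assumptions rather than the paper's actual bond graph elements.

import math

rho = 870.0        # oil density [kg/m^3], illustrative
Cd = 0.7           # discharge coefficient, illustrative
w = 8e-3           # port width [m], illustrative
R_leak = 5e11      # equivalent laminar resistance of the P->T leakage path [Pa*s/m^3]

def orifice_flow(dp, opening):
    # Flow through a spool-controlled metering edge; zero when the edge is closed
    area = w * max(opening, 0.0)                  # opening area grows with spool travel
    return Cd * area * math.copysign(math.sqrt(2.0 * abs(dp) / rho), dp)

def leakage_flow(p_P, p_T):
    # Internal leakage from port P to port T, modeled as a laminar (linear) path
    return (p_P - p_T) / R_leak

print(orifice_flow(50e5, 0.5e-3), leakage_flow(100e5, 1e5))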
Section from Valve Outlet to Tank
The pressure drop from the 4/2 way valve outlet to the tank is modeled as a negative effort (pressure) source.
Hydraulic Cylinder and Equivalent Load
The hydraulic cylinder is essentially modeled as two transformers, one converting pressure and flow rate to linear motion and force (which represents the chamber connected to the pressure line) and another converting linear motion and force to pressure and flow rate (to model the connection of the chamber to the return line). For the modeling of the hydraulic cylinder, the internal leakage past the piston was included, taking into consideration both the leakage flow caused by the pressure gradient (Hagen-Poiseuille flow) as well as the leakage flow caused by the relative motion of the piston to the cylinder body (Couette flow). The compressibility of the fluid in the two cylinder chambers as well as the effects of friction (both Coulomb and viscous) and inertia due to the moving mass of the cylinder piston and rods were also included in the modeling. Finally, the equivalent load is simulated as a negative effort (force) source, exerted at the cylinder rod tip. Due to the reversal of the cylinder piston motion and the subsequent reversal in the flow of power, two different bond graphs for the cylinder and load were used: One shown in Fig. 2, for the modeling of motion to the left and another shown in Fig. 3, for modeling the power flow during motion to the right. More details of this modeling procedure may be found in Athanasatos and Costopoulos (2011).
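For the leakage past the piston, the combination of a pressure-driven (Hagen-Poiseuille) term and a drag-induced (Couette) term for an annular clearance can be sketched as follows; the viscosity, diameter, clearance, and land length are illustrative values, not the cylinder's actual data.

import math

mu = 0.04          # dynamic viscosity of the oil [Pa*s], illustrative
d = 50e-3          # piston diameter [m], illustrative
c = 20e-6          # radial piston-to-bore clearance [m], illustrative
L = 30e-3          # piston land length [m], illustrative

def piston_leakage(dp, piston_velocity):
    # Leakage past the piston: Hagen-Poiseuille (pressure-driven) plus Couette (drag) term
    q_poiseuille = math.pi * d * c**3 * dp / (12.0 * mu * L)
    q_couette = math.pi * d * c * piston_velocity / 2.0
    return q_poiseuille + q_couette

print(piston_leakage(100e5, 0.16))   # e.g. 100 bar across the piston at 0.16 m/s rod speed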
Check Valve "L"
Check valve "L" is used to bypass the electrically operated control valve used to simulate the load in the hydraulic system. The pressure drop it causes is modeled as a negative effort source during the return phase, whereas it is zero during the work phase (since the oil flows through the load control valve).
Hydraulic System Model
Bond Graph Model Equation Layout
By combining the bond graph models for the hydraulic system's components, we have created the bond graph model of the entire hydraulic system, as in Athanasatos and Costopoulos (2011).
The equations for the components of the bond graph are the following.
with -W ≤ X_i ≤ W
Section from Valve Outlet to Tank
Constants and Initial Conditions Definition
Based on manufacturer data and measurements performed, the following constants and initial conditions were determined:
Comparison of Model and Actual System during Normal Operation
To test the accuracy of the bond graph model, the results of the model were compared to the data of a test run performed with the actual hydraulic system.
For the test, load control valve "F" was adjusted so as to create a backpressure of 10 bar (equivalent to a constant load of 2.1 kN), and flow control valves "H" and "I" were completely shut off. The results of the model run compared to the data from the actual system are shown in Fig. 4-7.
As seen in Fig. 4, the motion of the rod lasts 6.8 sec, with the working phase lasting 3.5 sec and the return phase lasting 3.3 sec; these values are accurately predicted by the model as well.
In Fig. 5 we see the velocity graph of the hydraulic cylinder rod. The average rod velocity in the working phase is 0.158 m/s and in the return phase is 0.163 m/s.
The internal leakage flow rate through the hydraulic cylinder is shown in Fig. 6. The average internal leakage of the hydraulic cylinder is 1.8×10^-5 m^3/s, a value which is also accurately predicted by the model.
Finally, in Fig. 7 we see the internal leakage through the 4/2 way valve during the work phase. The average internal leakage flow rate is 8.5×10^-6 m^3/s, a value which is also predicted by the model. From the comparison of the actual data with the model data, it is evident that there is a very good correlation in the shape and maximum/minimum values of the curves for all of the main operating parameters of the hydraulic system.
Results for Internal Leakage of the Hydraulic Cylinder
The results of the model were compared to those of the actual system in terms of the equivalent internal leakage flow rate in the working phase and the ratio of the average speed in the working phase to the average speed in the return phase. The results are shown in the following Fig. 8 and 9, where there is a very good correlation between the experimental and the model data, and the regression curves for both the experimental (continuous line) and the model data (dashed line) have the form of a 3rd degree polynomial equation.
Fig. 11. Comparison of working to return piston velocity ratio between experimental and model data for F_load = 6.3 kN
Results for Internal Leakage in the 4/2 Way Direction Control Valve
Again, the values of equivalent clearances for the 4/2 way valve calculated before were used in the model and the results were compared with the ones of the actual system. In Fig. 10 and 11, the comparison of equivalent internal leakage values and the piston velocity ratio values of the model and the actual system for an equivalent load of 6.3kN are shown. The correlation coefficients of experimental and model data are again very high (r = 0.998), with the regression curves for both the experimental (continuous line) and the model data (dashed line) having the form of a 3rd degree polynomial equation.
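As an illustration of the 3rd degree polynomial regression relating the equivalent internal clearance to the leakage flow rate (or to the velocity ratio), the following Python sketch fits such a curve to synthetic points; the numbers are placeholders and not the measured values.

import numpy as np

# Placeholder data: equivalent internal clearance [mm] vs. leakage flow [m^3/s]
clearance = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
leakage = np.array([0.2e-5, 0.5e-5, 1.1e-5, 2.4e-5, 4.6e-5, 8.0e-5])

# Fit a 3rd degree polynomial, as suggested by the regression curves discussed above
coeffs = np.polyfit(clearance, leakage, deg=3)
fit = np.poly1d(coeffs)

print("polynomial coefficients:", coeffs)
print("predicted leakage at 0.25 mm clearance:", fit(0.25))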
CONCLUSION
The Bond Graph method, albeit comparatively new, is rather popular in the modeling of physical systems but its applications in high pressure hydraulic systems remain somewhat limited. A comparison of the Bond Graph method with object oriented modeling can be found in the literature, where it is found that both methods can be equally reliable, if they are used in appropriate applications and effectively implemented. In this study, a model of an actual high pressure hydraulic system has been developed, in order to be used for proactive fault finding. The main components of the hydraulic system were modeled separately and then they were combined to form the model of the entire system. The present model is focused on the effects of the internal leakage of the hydraulic cylinder and the 4/2 way direction control valve on the system's operation. The results of the model were compared to the results provided by an actual hydraulic system, modified in order to simulate various loads and degrees of internal leakage in the hydraulic cylinder and the 4/2 way valve. Comparisons were made both during normal operation of the system and during simulation of increased internal leakage in the cylinder and the valve. In all cases, the results of the model correlate very well to the data provided by the actual system both in shape and in minima and maxima of the curves. The equivalent internal leakage in the hydraulic cylinder and the 4/2 way valve is a function of the equivalent internal clearance that has the form of a 3rd degree polynomial equation, as is evident from the regression curves in both the experimental and the modeling data. Additionally, the ratio of the average velocity of the cylinder's piston in the working phase to the average velocity in the return phase is also a function of the equivalent internal clearance of the hydraulic cylinder and the 4/2 way valve in the form of a 3rd degree polynomial equation. This also means that this ratio can be used as an index for the assessment of the internal clearance of the cylinder and the valve, given that it is fairly easy to calculate, unlike the more complex procedure and needed instrumentation required to measure the internal leakage. This could prove useful in real life applications of high pressure hydraulic systems where it is evident that there is a problem, but the source of it cannot be easily traced. This is very important in industrial production systems where unpredictable shutdowns of the hydraulic machinery have a considerable negative economic impact on cost. In situations like this, the application of modeling techniques could also help the troubleshooting procedure even further, by allowing the simulation of "fault scenarios" in various components of the system, in order to locate the source of the problem. However, this application should be prepared by highly qualified personnel. | 2019-02-15T14:05:36.532Z | 2013-11-07T00:00:00.000 | {
"year": 2013,
"sha1": "436a7a960e593e2175bd132696b63dd81314ef13",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajassp.2013.1648.1659",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0d74a87e4c5ebe8cc3e0d826322c4539ab228380",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
16966527 | pes2o/s2orc | v3-fos-license | Design and validation of magnetic particle spectrometer for characterization of magnetic nanoparticle relaxation dynamics
The design and validation of a magnetic particle spectrometer (MPS) system used to study the linear and nonlinear behavior of magnetic nanoparticle suspensions is presented. The MPS characterizes the suspension dynamic response, both due to relaxation and saturation effects, which depends on the magnetic particles and their environment. The system applies sinusoidal excitation magnetic fields varying in amplitude and frequency and can be configured for linear measurements (1 mT at up to 120 kHz) and nonlinear measurements (50 mT at up to 24 kHz). Time-resolved data acquisition at up to 4 MS/s combined with hardware and software-based signal processing allows for wide-band measurements up to 50 harmonics in nonlinear mode. By cross-calibrating the instrument with a known sample, the instantaneous sample magnetization can be quantitatively reconstructed. Validation of the two MPS modes are performed for iron oxide and cobalt ferrite suspensions, exhibiting Néel and Brownian relaxation, respectively.
I. INTRODUCTION
Magnetic particle spectrometer (MPS) development is motivated to assess magnetic suspension suitability for magnetic particle imaging (MPI). 1,2 MPI is an emerging biomedical imaging technique addressing drawbacks found in nuclear imaging by using non-radioactive tracers, i.e. the magnetic nanoparticles, with theoretically higher resolution in a short process time. MPI detects nanoparticle density spatially by probing locally their dynamic magnetization in a spatial selection gradient field and finds application in real-time cardiovascular imaging, 3 stem cell tracking 4,5 and hyperthermia. 6 The MPS described herein has no spatial scanning capability, but can assess both particle suspension relaxation and saturation, related to their performance for MPI. 7 The designed system features the measurement of full time-series data, 8 as opposed to discrete FFT components using a lock-in amplifier; 9 multi-mode attenuation/cancellation 9 of the primary excitation signal (99.7 % of the feed-through, i.e. -50 dB); and an estimation of the instantaneous magnetization of the suspension, instead of just the induced voltage.
The dynamic response of the magnetic suspension depends on the strength and frequency of the applied magnetic field. Nonlinearity in the response increases with magnetic field amplitude because of particle magnetic saturation, while relaxation effects become more evident with increasing frequency. Observing and quantifying these phenomena is relevant for studying the dynamic response of magnetic suspensions, improving their synthesis, or inferring their suitability for diverse applications, such as MPI.
The designed MPS can operate in two modes, herein referred to as "linear DMS" (dynamic magnetic susceptibility) and "nonlinear MPS". At low applied field amplitudes, linear DMS probes only the linear magnetization regime of the nanoparticles, similarly to AC susceptometry. The response to a sinusoidal time-varying magnetic field is a sinusoidal magnetic moment change, from which the complex magnetic susceptibility, characteristic of the suspension rotational dynamics, can be determined. At higher applied field amplitudes, nonlinearity appears in the sample response due to magnetic saturation of the suspension. The magnetization saturates, yielding sharper, non-sinusoidal voltage changes when the magnetization flips. The measured spectrum presents odd harmonics of the fundamental frequency, characteristic of the suspension magnetic nonlinearity. This nonlinear MPS mode characterizes the nanoparticle suspension rotational dynamics, both in amplitude and in frequency, assessing both saturation and relaxation effects.
II. MPS SETUP DESIGN
The MPS provides a spatially uniform, sinusoidally time-varying magnetic field to a nanoparticle suspension in a 1 mL Eppendorf vial. The field is generated by a gapped solenoid excitation coil, with sinusoidal current supplied via a power amplifier fed by a computer-controlled data acquisition (DAQ) system (National Instruments PCI-6115, 12-Bit, 4 MS/s for multi-channel I/O) ( Figure 1). The nanoparticles rotate in response to the applied magnetic field, inducing a change in magnetic flux detected by a pick-up coil system. The induced voltage and driving current are recorded simultaneously by the DAQ.
A. Excitation field
The excitation coil dimensions (AWG 19, 206 turns, ∅ 30 mm, 73 mm long) are optimized to provide a large, homogeneous magnetic field (3 mT/A, 5 % inhomogeneity over the sample volume) while keeping its resistance and inductance low (0.58 Ω, 400 µH respectively), which allows more current to be provided by the power amplifier (AE Techron 7224, voltage gain of 20) at high frequencies. The DAQ outputs the sinusoidal excitation waveform to the power amplifier, and the excitation coil is either directly connected to the power amplifier terminals or connected via a resonant matching circuit in order to achieve higher field amplitudes. The direct connection mode provides wide-band measurements in the linear DMS mode, with maximum field amplitudes of 50 mT for frequencies up to 5 kHz, 10 mT up to 30 kHz, and 1 mT up to 120 kHz. The resonant matching circuit is implemented using pairs of high-voltage capacitors (Cornell-Dubilier) designed for a current gain of 3 at discrete frequencies in the nonlinear MPS mode. In this way, the system can reach 50 mT at 3, 10.8, 16, 19.6 and 24 kHz. Only measurements at 3 kHz and 24 kHz are presented in this paper using resonant matching circuits. The input excitation current shows -50 dBc/Hz phase noise at 1 Hz offset from the carrier (-70 dBc/Hz at 10 Hz offset) and total harmonic distortion of better than -63 dB across all cases, confirming high spectral purity of the excitation magnetic field.
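As a back-of-the-envelope check on the resonant matching approach, treating the circuit simply as a series LC resonance with the quoted 400 µH coil inductance (the actual matching network is also designed for a current gain of 3, which this estimate ignores), the capacitance needed to resonate at each drive frequency follows from C = 1/((2πf)^2 L):

import math

L = 400e-6                                      # excitation coil inductance [H], from the text
for f in [3e3, 10.8e3, 16e3, 19.6e3, 24e3]:     # drive frequencies used in nonlinear mode [Hz]
    C = 1.0 / ((2.0 * math.pi * f) ** 2 * L)
    print(f"{f/1e3:5.1f} kHz -> C ~ {C*1e9:8.1f} nF")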
FIG. 1. The nanoparticle suspension is driven by a time-varying magnetic field provided by the excitation field. The sample magnetization change induces a voltage in a pick-up coil system which is recorded by a DAQ.
B. Signal measurement and feed-through cancellation strategy
The DAQ simultaneously measures the current delivered to the excitation coil and the nanoparticle response as measured by a pick-up coil system. The excitation coil current is measured by a current probe (Tektronix, TCP305A probe with TCPA300 amplifier) to assess the reference phase of the excitation magnetic field. A primary design challenge for the sensing coil system is to negate the direct induction from the excitation coil (called "feed-through"), and ideally only measure the magnetic moment of the nanoparticle suspension. To mitigate feed-through, the sensing coil system consists of three coils: the pick-up coil sensing the sample magnetization change, a fixed balancing coil, and an adjustable fine-tuning coil as illustrated on Figure 1. The sensing coil system has a rms noise of 0.32 mV, and a self-resonance frequency of 1 MHz, which sets the upper frequency bound for measurements.
The pick-up coil is internally molded in epoxy resin (AWG 28, 40 turns, ∅10.5 mm) to minimize the distance between the coil and the sample, thereby maximizing the sensitivity to the nanoparticle sample while minimizing feed-through. Next, a balancing coil is mounted in series with the pick-up coil and wound in opposite direction. The excitation field induces a voltage with opposite phase that counteracts the main induction from the pick-up coil. The balancing coil, which is very sensitive to any displacement, is fixed at 25 mm from the pick-up coil, to avoid interaction with the sample. Furthermore, to allow fine cancellation adjustments, a movable short-circuited fine-tuning coil is placed close to the fixed balancing coil. The fine-tuning coil, being inductively coupled to both the excitation and balancing coils, modifies the balancing and further reduces the feed-through.
C. Signal processing for determination of instantaneous magnetization
For the data reported here, the first 0.25 s of all measurements are discarded to eliminate any transients from the electronics. For the linear mode, the sampling rate is set at approximately 30 times the excitation frequency (the DAQ imposes discrete sampling frequencies) and up to 5 s of data is used (minimum of 2500 cycles). For the nonlinear mode, 1 s of data is used (minimum of 3000 cycles) with a sampling rate ≥100 times the excitation frequency so that at least 50 harmonics can be extracted.
In addition to the sensing coil system, which minimizes but does not fully eliminate feed-through at the hardware level, additional feed-through is cancelled numerically during the post-processing. Two measurements are performed sequentially without (v_blank(t)) and with (v_sample(t)) the sample present. The fast Fourier transform (FFT) is applied to both signals (bin width in the FFT spectrum is approximately 1 Hz in all cases), generating frequency-domain spectra Ṽ_blank(f) and Ṽ_sample(f), each represented by phasor amplitudes A_j and phases ϕ_j. From here on, only the FFT coefficients at the excitation frequency f_0 and subsequent even and odd harmonics are considered; all other FFT coefficients are discarded, which acts to filter the relevant signal information. Next, Ṽ_suspension is obtained by subtracting Ṽ_blank from Ṽ_sample. 10 The time-domain voltage induced by the suspension is then reconstructed; it is linked to the time-rate-change of the magnetic moment m(t) by
dm(t)/dt = K_pickup v_suspension(t),
with K_pickup a sensitivity coefficient in A·m^2/V·s determined experimentally (described later).
The instantaneous magnetization M(t) is thus determined by integration of the induced voltage v_suspension(t):
M(t) = (K_pickup / Vol) ∫_0^t v_suspension(t') dt',
where Vol is the volume of the suspension. In the case of linear DMS, the magnetic moment can be defined by its complex susceptibility χ, the slope of the M-H curve, as m(t) = Vol χ H_ext(t). The susceptibility is projected into its real and imaginary components as χ = χ' - iχ'' = |χ| e^{-iΨ}, with |χ| = A_1 K_pickup / (Vol |H_ext| f_0) being the amplitude and Ψ = ϕ_1 + π/2 being the phase.
The calibration coefficient K pickup , which captures the pick-up coil sensitivity, depends on the coil and the sample container geometries and is independent of the magnetic sample. This coefficient is determined by measuring the susceptibility spectra of magnetic particle suspensions using both a commercial calibrated AC susceptometer and the linear DMS.
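A compact numerical sketch of the post-processing chain described above (harmonic selection, blank subtraction, and integration to the instantaneous magnetization) might look as follows; the synthetic signals, sampling rate, and calibration constant are illustrative stand-ins for the measured data.

import numpy as np

fs, f0, T = 400_000, 3_000, 0.1            # sampling rate [Hz], drive frequency [Hz], duration [s]
t = np.arange(int(fs * T)) / fs
K_pickup, Vol = 5e-3, 1e-7                 # sensitivity [A*m^2/(V*s)] and sample volume [m^3], illustrative

# Stand-ins for the two recorded traces: blank (feed-through only) and sample (feed-through + response)
v_blank = 0.2 * np.sin(2 * np.pi * f0 * t)
v_sample = v_blank + 0.05 * np.sin(2 * np.pi * f0 * t - 0.4) + 0.01 * np.sin(2 * np.pi * 3 * f0 * t - 1.1)

def harmonics_only(v, n_harm=50):
    # Keep only the FFT bins at f0 and its harmonics, discard everything else
    V = np.fft.rfft(v)
    freqs = np.fft.rfftfreq(len(v), 1 / fs)
    keep = np.zeros_like(V)
    for k in range(1, n_harm + 1):
        idx = np.argmin(np.abs(freqs - k * f0))
        keep[idx] = V[idx]
    return keep

V_susp = harmonics_only(v_sample) - harmonics_only(v_blank)   # numerical feed-through cancellation
v_susp = np.fft.irfft(V_susp, n=len(t))                       # reconstructed time-domain voltage
M = (K_pickup / Vol) * np.cumsum(v_susp) / fs                 # integrate to instantaneous magnetization
print("peak reconstructed magnetization [A/m]:", M.max())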
III. EXPERIMENTAL RESULTS AND DISCUSSION
Linear and nonlinear measurements are performed on two in-house magnetic nanoparticle suspensions obtained by thermal decomposition. 11 The first suspension is made of PEG-coated iron oxide nanoparticles with ∼55 nm hydrodynamic diameter, ∼14 nm magnetic diameter suspended in water, 12 while the second suspension is made of oleic-acid coated cobalt ferrite nanoparticles with ∼22 nm hydrodynamic diameter, ∼5 nm magnetic diameter suspended in 1-octadecene. 13 Figure 2 shows linear DMS measurements for iron oxide and cobalt ferrite particles tested at 1 mT from 500 Hz to 120 kHz. The graphs show the in-phase real (green) and out-of-phase imaginary (brown) susceptibility component spectra as a function frequency. The markers are MPS measurements, while the solid and dashed lines are measurements from a commercial AC susceptometer (Dynomag, Acreo). We observe very good agreement between the two instruments for both particles, but because the sensitivity of the inductive sensing method decreases linearly with the frequency, we observe some discrepancy at lower frequency.
A. Linear DMS characterization
The iron oxide susceptibility spectrum (Figure 2 a) is characteristic of particles relaxing predominantly by the Néel mechanism. The in-phase component remains significantly higher than the out-of-phase component. The cobalt ferrite susceptibility spectrum (Figure 2 b) is characteristic of particles relaxing predominantly by the Brownian mechanism. At low frequency, the in-phase component plateaus with a zero out-of-phase component. Around 9 kHz the in-phase susceptibility drops dramatically while the out-of-phase peaks to its maximum. Finally, both components asymptote to zero as the particle rotations do not respond to overly high frequency excitation field.
B. Nonlinear MPS characterization
Iron oxide and cobalt ferrite nonlinear MPS measurements at 3 kHz and 24 kHz from 5 to 50 mT are presented in Figure 3, displaying the measured time-varying induced voltage (a,d), the voltage FFT spectra (b,e) and the corresponding dynamic magnetic hysteresis curves (c,f). In both cases, the induced voltage increases linearly with magnetic field amplitude. At low excitation fields we see that the voltage response is almost sinusoidal, with few odd harmonics and an almost linear instantaneous magnetization response. However, at high excitation field strength, the voltages change abruptly, resulting in slow decaying FFT spectra and magnetization saturation. For the iron oxide sample, the responses at 3 kHz and 24 kHz are very similar, consistent with Néel relaxing particles with characteristic peak frequency that is much higher than the frequency window of the measurement. The voltage induced at 24 kHz is 8 times higher than at 3 kHz, with the ratio corresponding to the frequency ratio as explained by the magnetic induction phenomena. Moreover, the voltage FFT spectra between 3 kHz and 24 kHz decay at similar rates, providing evidence of induced voltages with the same time variations.
For the cobalt ferrite sample, the responses at 3 kHz and 24 kHz are completely different, consistent with Brownian relaxing particles with a 9 kHz peak frequency that lies between the two studied frequencies. On one hand, the behavior at 3 kHz is similar to the response of the iron oxide suspension, albeit presenting broad voltage peaks and faster voltage FFT decays. On the other hand, the behavior at 24 kHz changes dramatically compared to 3 kHz, which is characteristic of a frequency that exceeds the inverse of the particle's Brownian relaxation time: the voltage switches are even less sharp, the voltage FFT decays more rapidly than at 3 kHz and the instantaneous magnetization reaches a lower magnetization. The two sets of measurements show a shift between two regimes as supported by the linear DMS measurements on Figure 2 since the 9 kHz out-of-phase peak is located between the two frequencies. Figure 4 compares the phase shift between the instantaneous magnetization and the magnetic field for all field amplitudes and frequencies. While increasing the magnetic field amplitude, the magnetic torque increases, shortening the relaxation time and diminishing the phase shift. Increasing the frequency increases the hydrodynamic torque and thus the phase shift. The phase shift change from 3 kHz to 24 kHz is relatively small for iron oxide, but dramatically larger for cobalt ferrite, supporting the dramatic behavior change with the frequency. The direct effect of the phase shift is the appearance of a magnetization curve opening, i.e. a non-zero dynamic remanence and coercivity in the dynamic magnetization curve. As a consequence, curve openings in Figures 3c,f are larger at 24 kHz than at 3 kHz and larger for cobalt ferrite than for iron oxide.
IV. CONCLUSION
This paper presents the design and validation of a magnetic particle spectrometer to study the linear and nonlinear behavior of magnetic nanoparticle suspensions. The setup design is detailed along with the post-processing and calibration procedures. Linear DMS measurements at 1 mT were realized in a wide frequency range (0.5-120 kHz) showing good agreement with a commercial AC susceptometer. Nonlinear MPS measurements require resonant and matching circuits to apply 50 mT at discrete frequencies from 3 kHz to 24 kHz. The pick-up coil system design and the feed-through cancellation procedure allows for fine measurements up to 50 harmonics. The two MPS modes are tested for iron oxide and cobalt ferrite suspensions, which exhibit very different magnetic relaxation behaviors. The measured time varying induced voltage, the voltage FFT, and the reconstructed instantaneous magnetization analysis were used to assess the magnetic suspension rotational dynamics and to investigate their relaxation and saturation effects. | 2018-04-03T05:18:45.532Z | 2017-03-02T00:00:00.000 | {
"year": 2017,
"sha1": "f39d506a5a510c644d4ce3c4d9e206b8e3f7e18a",
"oa_license": "CCBY",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/1.4978003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f39d506a5a510c644d4ce3c4d9e206b8e3f7e18a",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
43923934 | pes2o/s2orc | v3-fos-license | Amortized Inference Regularization
The variational autoencoder (VAE) is a popular model for density estimation and representation learning. Canonically, the variational principle suggests to prefer an expressive inference model so that the variational approximation is accurate. However, it is often overlooked that an overly-expressive inference model can be detrimental to the test set performance of both the amortized posterior approximator and, more importantly, the generative density estimator. In this paper, we leverage the fact that VAEs rely on amortized inference and propose techniques for amortized inference regularization (AIR) that control the smoothness of the inference model. We demonstrate that, by applying AIR, it is possible to improve VAE generalization on both inference and generative performance. Our paper challenges the belief that amortized inference is simply a mechanism for approximating maximum likelihood training and illustrates that regularization of the amortization family provides a new direction for understanding and improving generalization in VAEs.
Introduction
Variational autoencoders are a class of generative models with widespread applications in density estimation, semi-supervised learning, and representation learning [1,2,3,4]. A popular approach for the training of such models is to maximize the log-likelihood of the training data. However, maximum likelihood is often intractable due to the presence of latent variables. Variational Bayes resolves this issue by constructing a tractable lower bound of the log-likelihood and maximizing the lower bound instead. Classically, Variational Bayes introduces per-sample approximate proposal distributions that need to be optimized using a process called variational inference. However, per-sample optimization incurs a high computational cost. A key contribution of the variational autoencoding framework is the observation that the cost of variational inference can be amortized by using an amortized inference model that learns an efficient mapping from samples to proposal distributions. This perspective portrays amortized inference as a tool for efficiently approximating maximum likelihood training. Many techniques have since been proposed to expand the expressivity of the amortized inference model in order to better approximate maximum likelihood training [5,6,7,8].
In this paper, we challenge the conventional role that amortized inference plays in variational autoencoders. For datasets where the generative model is prone to overfitting, we show that having an amortized inference model actually provides a new and effective way to regularize maximum likelihood training. Rather than making the amortized inference model more expressive, we propose instead to restrict the capacity of the amortization family. Through amortized inference regularization (AIR), we show that it is possible to reduce the inference gap and increase the log-likelihood performance on the test set. We propose several techniques for AIR and provide extensive theoretical and empirical analyses of our proposed techniques when applied to the variational autoencoder and the
importance-weighted autoencoder. By rethinking the role of the amortized inference model, amortized inference regularization provides a new direction for studying and improving the generalization performance of latent variable models.
Variational Inference and the Evidence Lower Bound
Consider a joint distribution p_θ(x, z) parameterized by θ, where x ∈ X is observed and z ∈ Z is latent. Given a uniform distribution p̂(x) over the dataset D = {x^(i)}, maximum likelihood estimation performs model selection using the objective
max_θ E_{p̂(x)}[ln p_θ(x)]. (1)
However, marginalization of the latent variable is often intractable; to address this issue, it is common to employ the variational principle to maximize the following lower bound,
ln p_θ(x) ≥ max_{q∈Q} E_{q(z)}[ln p_θ(x, z) - ln q(z)] = ln p_θ(x) - min_{q∈Q} D(q(z) ‖ p_θ(z | x)), (2)
where D is the Kullback-Leibler divergence and Q is a variational family. This lower bound, commonly called the evidence lower bound (ELBO), converts log-likelihood estimation into a tractable optimization problem. Since the lower bound holds for any q, the variational family Q can be chosen to ensure that q(z) is easily computable, and the lower bound is optimized to select the best proposal distribution q*_x(z) for each x ∈ D. [1,9] proposed to construct p(x | z) using a parametric function g_θ ∈ G(P) : Z → P, where P is some family of distributions over x, and G is a family of functions indexed by parameters θ. To expedite training, they observed that it is possible to amortize the computational cost of variational inference by framing the per-sample optimization process as a regression problem; rather than solving for the optimal proposal q*_x(z) directly, they instead use a recognition model f_φ ∈ F(Q) : X → Q to predict q*_x(z). The functions (f_φ, g_θ) can be concisely represented as conditional distributions, where p_θ(x | z) denotes g_θ(z) evaluated at x and q_φ(z | x) denotes f_φ(x) evaluated at z.
Amortization and Variational Autoencoders
The use of amortized inference yields the variational autoencoder, which is trained to maximize the variational autoencoder objective
max_{θ,φ} E_{p̂(x)} E_{q_φ(z|x)}[ln p_θ(x, z) - ln q_φ(z | x)] = max_{f∈F, g∈G} E_{p̂(x)} E_{f(x)(z)}[ln p(z) g(z)(x) - ln f(x)(z)]. (5)
We omit the dependency of (p(z), g) on θ and f on φ for notational simplicity. In addition to the typical presentation of the variational autoencoder objective (LHS), we also show an alternative formulation (RHS) that reveals the influence of the model capacities F, G and distribution family capacities Q, P on the objective function. In this paper, we use (q_φ, f) interchangeably, depending on the choice of emphasis. To highlight the relationship between the ELBO in Eq. (2) and the standard variational autoencoder objective in Eq. (5), we shall also refer to the latter as the amortized ELBO.
Amortized Inference Suboptimality
For a fixed generative model, the optimal unamortized and amortized inference models are
q*_x = argmax_{q∈Q} E_{q(z)}[ln p_θ(x, z) - ln q(z)] and f* = argmax_{f∈F} E_{p̂(x)} E_{f(x)(z)}[ln p_θ(x, z) - ln f(x)(z)].
A notable consequence of using an amortization family to approximate variational inference is that Eq. (5) is a lower bound of Eq. (2). This naturally raises the question of whether the learned inference model can accurately approximate the mapping x → q*_x(z). To address this question, [10] defined the inference, approximation, and amortization gaps as
∆_inf(p̂) = E_{p̂(x)}[ln p_θ(x) - L(x)], ∆_ap(p̂) = E_{p̂(x)}[ln p_θ(x) - L*(x)], ∆_am(p̂) = ∆_inf(p̂) - ∆_ap(p̂),
where L(x) denotes the amortized ELBO for sample x and L*(x) denotes the ELBO evaluated at the per-sample optimal proposal q*_x. Studies have found that the inference gap is non-negligible [11] and primarily attributable to the presence of a large amortization gap [10].
The amortization gap raises two critical considerations. On the one hand, we wish to reduce the training amortization gap ∆ am (p train ). If the family F is too low in capacity, then it is unable to approximate x → q * x and will thus increase the amortization gap. Motivated by this perspective, [5,12] proposed to reduce the training amortization gap by performing stochastic variational inference on top of amortized inference. In this paper, we take the opposing perspective that an over-expressive F hurts generalization (see Appendix A) and that restricting the capacity of F is a form of regularization that can prevent both the inference and generative models from overfitting to the training set.
Amortized Inference Regularization in Variational Autoencoders
Many methods have been proposed to expand the variational and amortization families in order to better approximate maximum likelihood training [5,6,7,8,13,14]. We argue, however, that achieving a better approximation to maximum likelihood training is not necessarily the best training objective, even if the end goal is test set density estimation. In general, it may be beneficial to regularize the maximum likelihood training objective.
Importantly, we observe that the evidence lower bound in Eq. (2) admits a natural interpretation as implicitly regularizing maximum likelihood training,
E_{p̂(x)}[max_{q∈Q} E_{q(z)}[ln p_θ(x, z) - ln q(z)]] = E_{p̂(x)}[ln p_θ(x)] - R(θ ; Q), where R(θ ; Q) = E_{p̂(x)}[min_{q∈Q} D(q(z) ‖ p_θ(z | x))].
This formulation exposes the ELBO as a data-dependent regularized maximum likelihood objective. For infinite capacity Q, R(θ ; Q) is zero for all θ ∈ Θ, and the objective reduces to maximum likelihood. When Q is the set of Gaussian distributions (as is the case in the standard VAE), then R(θ ; Q) is zero only if p_θ(z | x) is Gaussian for all x ∈ D. In other words, a Gaussian variational family regularizes the true posterior p_θ(z | x) toward being Gaussian [10]. Careful selection of the variational family to encourage p_θ(z | x) to adopt certain properties (e.g. unimodality, fully-factorized posterior, etc.) can thus be considered a special case of posterior regularization [15,16].
Unlike traditional variational techniques, the variational autoencoder introduces an amortized inference model f ∈ F and thus a new source of posterior regularization.
In contrast to unamortized variational inference, the introduction of the amortization family F forces the inference model to consider the global structure of how X maps to Q. We thus define amortized inference regularization as the strategy of restricting the inference model capacity F to satisfy certain desiderata. In this paper, we explore a special case of AIR where a candidate model f ∈ F is penalized if it is not sufficiently smooth. We propose two models that encourage inference model smoothness and demonstrate that they can reduce the inference gap and increase log-likelihood on the test set.
Denoising Variational Autoencoder
In this section, we propose using random perturbation training for amortized inference regularization.
The resulting model-the denoising variational autoencoder (DVAE)-modifies the variational autoencoder objective by injecting ε noise into the inference model,
max_{θ,φ} E_{p̂(x)} E_{p(ε)} E_{q_φ(z|x+ε)}[ln p_θ(x, z) - ln q_φ(z | x + ε)]. (13)
Note that the noise term only appears in the regularizer term. We consider the case of zero-mean isotropic Gaussian noise ε ∼ N(0, σI) and denote the denoising regularizer as R(θ ; σ). At this point, we note that the DVAE was first described in [17]. However, our treatment of DVAE differs from [17]'s in both theoretical analysis and underlying motivation. We found that [17] incorrectly stated the tightness of the DVAE variational lower bound (see Appendix B). In contrast, our analysis demonstrates that the denoising objective smooths the inference model and necessarily lower bounds the original variational autoencoder objective (see Theorem 1 and Proposition 1).
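A minimal sketch of one DVAE training step in the spirit of Eq. (13), assuming a Gaussian encoder and a Bernoulli decoder, is shown below; the tiny network sizes, the use of PyTorch, and all hyperparameters are illustrative choices rather than the actual experimental setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, sigma = 784, 16, 0.1          # data dim, latent dim, denoising noise level (illustrative)
enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def dvae_step(x):
    x_noisy = x + sigma * torch.randn_like(x)                    # noise enters the inference model only
    mu, log_var = enc(x_noisy).chunk(2, dim=-1)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)     # reparameterized sample from q(z | x + eps)
    logits = dec(z)
    rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)  # ln p(x | z), clean target
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1)                    # D(q(z | x + eps) || p(z))
    loss = -(rec - kl).mean()                                    # negative amortized ELBO with noisy encoder input
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x = torch.rand(32, x_dim).bernoulli()                            # toy batch of binarized data
print(dvae_step(x))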
We now show that 1) the optimal DVAE amortized inference model is a kernel regression model and that 2) the variance of the noise ε controls the smoothness of the optimal inference model. Lemma 1. For fixed (θ, σ, Q) and infinite capacity F, the inference model that optimizes the DVAE objective in Eq. (13) is a kernel regression model with Nadaraya-Watson weights
w_σ(x, x^(i)) = k_σ(x, x^(i)) / Σ_j k_σ(x, x^(j)),
where k_σ(x, x') = exp(-‖x - x'‖^2 / (2σ^2)) is the RBF kernel.
Lemma 1 shows that the optimal denoising inference model f*_σ depends on the noise level σ through the weighting w_σ(x, x^(i)), which in turn depends on the distance ‖x - x^(i)‖ and the bandwidth σ. When σ > 0, the amortized inference model forces neighboring points (x^(i), x^(j)) to have similar proposal distributions. Note that as σ increases, w_σ(x, x^(i)) → 1/n, where n is the number of training samples. Controlling σ thus modulates the smoothness of f*_σ (we say that f*_σ is smooth if it maps similar inputs to similar outputs under some suitable measure of similarity). Intuitively, the denoising regularizer R(θ ; σ) approximates the true posteriors with a "σ-smoothed" inference model and penalizes generative models whose posteriors cannot easily be approximated by such an inference model. This intuition is formalized in Theorem 1. Theorem 1. Let Q be a minimal exponential family with corresponding natural parameter space Ω. With a slight abuse of notation, consider f ∈ F : X → Ω. Under the simplifying assumption that p_θ(z | x^(i)) is contained within Q and parameterized by η^(i) ∈ Ω, and that F has infinite capacity, then the optimal inference model in Lemma 1 is f*_σ(x) = Σ_i w_σ(x, x^(i)) η^(i), and the Lipschitz constant of f*_σ is bounded by O(1/σ^2).
We wish to address Theorem 1's assumption that the true posteriors lie in the variational family. Note that for sufficiently large exponential families, this assumption is likely to hold. But even in the case where the variational family is Gaussian (a relatively small exponential family), the small approximation gap observed in [10] suggests that it is plausible that posterior regularization would encourage the true posteriors to be approximately Gaussian.
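To make the smoothing effect of σ concrete, the following small Python sketch computes RBF-type weights w_σ(x, x^(i)) for made-up one-dimensional training points and shows them flattening toward 1/n as σ grows; the data and bandwidths are arbitrary illustrative values.

import numpy as np

x_train = np.array([0.0, 0.5, 1.0, 2.0, 4.0])    # made-up training inputs
x_query = 0.8

for sigma in [0.1, 0.5, 2.0, 10.0]:
    k = np.exp(-((x_query - x_train) ** 2) / (2 * sigma ** 2))
    w = k / k.sum()
    print(f"sigma={sigma:5.1f}  weights={np.round(w, 3)}")
# As sigma grows, the weights approach the uniform value 1/n = 0.2.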
Given that σ modulates the smoothness of the inference model, it is natural to suspect that a larger choice of σ results in a stronger regularization. To formalize this notion of regularization strength, we introduce a way to partially order a set of regularizers {R_i(θ)}. Definition 1. Suppose two regularizers R_1(θ) and R_2(θ) share the same minimum min_θ R_1(θ) = min_θ R_2(θ). We say that R_1 is a stronger regularizer than R_2 if R_1(θ) ≥ R_2(θ) for all θ ∈ Θ. Note that any two regularizers can be modified via scalar addition to share the same minimum. Furthermore, if R_1 is stronger than R_2, then R_1 and R_2 share at least one minimizer. We now apply Definition 1 to characterize the regularization strength of R(θ ; σ) as σ increases: for σ' ≥ σ, the regularizer R(θ ; σ') is stronger than R(θ ; σ) (Proposition 1).
Lemma 1 and Proposition 1 show that as we increase σ, the optimal inference model is forced to become smoother and the regularization strength increases. Figure 1 is consistent with this analysis, showing the progression from under-regularized to over-regularized models as we increase σ.
It is worth noting that, in addition to adjusting the denoising regularizer strength via σ, it is also possible to adjust the strength by taking a convex combination of the VAE and DVAE objectives. In particular, we can define the partially denoising regularizer R(θ ; σ, α) as
R(θ ; σ, α) = α R(θ ; σ) + (1 - α) R(θ ; Q), α ∈ [0, 1].
Importantly, we note that R(θ ; σ, α) is still strictly non-negative and, when combined with the log-likelihood term, still yields a tractable variational lower bound.
Weight-Normalized Amortized Inference
In addition to DVAE, we propose an alternative method that directly restricts F to the set of smooth functions. To do so, we consider the case where the inference model is a neural network encoder parameterized by weight matrices {W_i} and leverage [18]'s weight normalization technique, which proposes to reparameterize the columns w_i of each weight matrix W as
w_i = s_i · v_i / ‖v_i‖,
where v_i ∈ R^d, s_i ∈ R are trainable parameters. Since it is possible to modulate the smoothness of the encoder by capping the magnitude of s_i, we introduce a new parameter u_i ∈ R and define s_i as a bounded function of u_i whose magnitude never exceeds H. The norm ‖w_i‖ is thus bounded by the hyperparameter H. We denote the weight-normalized regularizer as R(θ ; F_H), where F_H is the amortization family induced by an H-weight-normalized encoder. Under similar assumptions as Proposition 1, it is easy to see that min_θ R(θ ; F_H) = 0 for any H ≥ 0 and that R(θ ; F_H) is a stronger regularizer than R(θ ; F_{H'}) whenever H ≤ H'. We refer to the resulting model as the weight-normalized inference VAE (WNI-VAE) and show in Table 1 that weight-normalized amortized inference can achieve similar performance as DVAE.
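A sketch of such a norm-capped, weight-normalized linear layer is given below; the particular squashing choice s_i = H·tanh(u_i) is one illustrative way to enforce |s_i| ≤ H and is not necessarily the parameterization used here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CappedWeightNormLinear(nn.Module):
    # Linear layer whose weight rows are w_i = s_i * v_i / ||v_i|| with |s_i| <= H
    def __init__(self, in_dim, out_dim, H=2.0):
        super().__init__()
        self.H = H
        self.v = nn.Parameter(torch.randn(out_dim, in_dim))
        self.u = nn.Parameter(torch.zeros(out_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        s = self.H * torch.tanh(self.u)                     # illustrative way to cap the scale at H
        w = s.unsqueeze(1) * F.normalize(self.v, dim=1)     # each row has norm |s_i| <= H
        return F.linear(x, w, self.bias)

layer = CappedWeightNormLinear(784, 256, H=1.5)
print(layer(torch.randn(4, 784)).shape, layer.H)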
Experiments
We conducted experiments on statically binarized MNIST, statically binarized OMNIGLOT, and the Caltech 101 Silhouettes datasets. These datasets have a relatively small amount of training data and are thus susceptible to model overfitting. For each dataset, we used the same decoder architecture across all four models (VAE, DVAE (α = 0.5), DVAE (α = 1.0), WNI-VAE) and only modified the encoder, and trained all models using Adam [19] (see Appendix E for more details). To approximate the log-likelihood, we proposed to use importance-weighted stochastic variational inference (IW-SVI), an extension of SVI [20] which we describe in detail in Appendix C. Hyperparameter tuning of DVAE's σ and WNI-VAE's F H is described in Table 7. The denoising and weight normalization regularizers have respective hyperparameters σ and H that control the regularization strength. In Figure 1, we performed an ablation analysis of how adjusting Table 1: Test set evaluation of VAE, DVAE, and WNI-VAE. The performance metrics are loglikelihood ln p θ (x), the amortized ELBO L(x), and the inference gap ∆ inf = ln p θ (x) − L(x). All three proposed models out-perform VAE across most metrics. Figure 1: Evaluation of the log-likelihood performance of all three proposed models as we vary the regularization parameter value. The regularization parameter is defined in Table 7. When the parameter value is too small, the model overfits and the test set performance degrades. When the parameter value is too high, the model underfits.
the regularization strength impacts the test set log-likelihood. In almost all cases, we see a transition from overfitting to underfitting as we adjust the strength of AIR. For well-chosen regularization strength, however, it is possible to increase the test set log-likelihood performance by 0.5 ∼ 1.0 nats-a non-trivial improvement. Table 1 shows that regularizing the inference model empirically benefits the generative model. We now provide some initial theoretical characterization of how a smoothed amortized inference model affects the generative model. Our analysis rests on the following proposition. Proposition 2. Let P be an exponential family with corresponding mean parameter space M and sufficient statistic function T (·). With a slight abuse of notation, consider g ∈ G : Z → M. Define q(x, z) =p(x)q(z | x), where q(z | x) is a fixed inference model. Supposing G has infinite capacity, then the optimal generative model in Eq.
How Does Amortized Inference Regularization Affect the Generator?
Proposition 2 generalizes the analysis in [21], which determined the optimal generative model when P is Gaussian. The key observation is that the optimal generative model outputs a convex combination of {φ(x^(i))}, weighted by q(x^(i) | z). Furthermore, the weights q(x^(i) | z) are simply density ratios of the proposal distributions {q(z | x^(i))}. As we increase the smoothness of the amortized inference model, the weight q(x^(i) | z) should tend toward 1/n for all z ∈ Z. This suggests that a smoothed inference model provides a natural way to smooth (and thus regularize) the generative model; a minimal numerical sketch of this weighting is given below.
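To make the weighting concrete, the following sketch (our own naming and setup, not code from the paper) computes the normalized density-ratio weights q(x^(i) | z) from a fixed proposal and forms the corresponding convex combination of per-example statistics; as the proposal becomes smoother, the weights flatten toward 1/n.

```python
# Illustrative sketch of the weighting in Proposition 2: for a latent z, the optimal
# mean parameter is a convex combination of per-example statistics, weighted by
# q(x_i | z) = q(z | x_i) / sum_j q(z | x_j) under a uniform empirical p(x).
import numpy as np

def optimal_mean_parameter(z, xs, log_q_z_given_x, T):
    """xs: training points; log_q_z_given_x(z, x): log q(z|x); T(x): per-example statistic."""
    log_w = np.array([log_q_z_given_x(z, x) for x in xs])
    w = np.exp(log_w - log_w.max())
    w = w / w.sum()                       # density ratios -> convex weights q(x_i | z)
    stats = np.stack([T(x) for x in xs])  # e.g. the sufficient statistic of each x_i
    return w @ stats                      # smoother q(z|x) pushes w toward 1/n
```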
Amortized Inference Regularization in Importance-Weighted Autoencoders
In this section, we extend AIR to importance-weighted autoencoders (IWAE-k). Although the application is straightforward, we demonstrate a noteworthy relationship between the number of importance samples k and the effect of AIR. To begin our analysis, we consider the IWAE-k objective, where {z_1, . . . , z_k} are k samples from the proposal distribution q_φ(z | x) to be used as importance samples. Analysis by [22] allows us to rewrite it as a regularized maximum likelihood objective, where f̃_k (or equivalently q̃_k) is the unnormalized distribution and D̃(q ‖ p) = ∫ q(z)[ln q(z) − ln p(z)] dz is the Kullback-Leibler divergence extended to unnormalized distributions. For notational simplicity, we omit the dependency of f̃_k on (z_2, . . . , z_k). Importantly, [22] showed that the IWAE with k importance samples drawn from the amortized inference model f is, in expectation, equivalent to a VAE with 1 importance sample drawn from the more expressive inference model f̃_k.
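For reference, the sketch below computes the standard k-sample importance-weighted bound for a single data point with a diagonal-Gaussian proposal; the encoder and joint-density functions are placeholders rather than the paper's implementation.

```python
# Minimal sketch of the IWAE-k objective for one data point x: the log of the
# average importance weight p_theta(x, z_i) / q_phi(z_i | x) over k proposal samples.
import math
import torch

def iwae_bound(x, encoder, decoder_log_joint, k):
    mu, log_sigma = encoder(x)                      # amortized proposal q_phi(z|x)
    eps = torch.randn(k, mu.shape[-1])
    z = mu + log_sigma.exp() * eps                  # k reparameterized samples
    log_q = (-0.5 * eps.pow(2) - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)
    log_p = decoder_log_joint(x, z)                 # log p_theta(x, z_i), shape (k,)
    log_w = log_p - log_q                           # importance weights
    return torch.logsumexp(log_w, dim=0) - math.log(k)
```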
A notable consequence of Proposition 3 is that as k increases, AIR exhibits a weaker regularizing effect on the posterior distributions {p_θ(z | x^(i))}. Intuitively, this arises from the phenomenon that although AIR is applied to f, the subsequent importance-weighting procedure can still create a flexible f̃_k. Our analysis thus predicts that AIR is less likely to cause underfitting of IWAE-k's generative model as k increases, which we demonstrate in Figure 2. In the limit of infinite importance samples, we also predict AIR to have zero regularizing effect since f̃_∞ (under some assumptions) can always approximate any posterior. However, for practically feasible values of k, we show in Tables 2 and 3 that AIR is a highly effective regularizer. Tables 2 and 3 extend the model evaluation to IWAE-8 and IWAE-64. We see that the denoising IWAE (DIWAE) and weight-normalized inference IWAE (WNI-IWAE) consistently out-perform the standard IWAE on test set log-likelihood evaluations. Furthermore, the regularized models frequently reduced the inference gap as well. Our results demonstrate that AIR is a highly effective regularizer even when a large number of importance samples are used.
Experiments
Our main experimental contribution in this section is the verification that increasing the number of importance samples results in less underfitting when the inference model is over-regularized. In contrast to k = 1, where aggressively increasing the regularization strength can cause considerable underfitting, Figure 2 shows that increasing the number of importance samples to k = 8 and k = 64 makes the models much more robust to mis-specified choices of regularization strength. Interestingly, we also observed that the optimal regularization strength (determined using the validation set) increases with k (see Table 7 for details). The robustness of importance sampling when paired with amortized inference regularization makes AIR an effective and practical way to regularize IWAE.

Figure 2: Test performance as we vary the regularization parameter value (see Table 7 for definition) and number of importance samples k. To compare across different k's, the performance without regularization (IWAE-k baseline) is subtracted. We see that IWAE-64 is the least likely to underfit when the regularization parameter value is high.
Are High Signal-to-Noise Ratio Gradients Necessarily Better?
We note the existence of a related work [23] that also concluded that approximating maximum likelihood training is not necessarily better. However, [23] focused on increasing the signal-to-noise ratio of the gradient updates and analyzed the trade-off between importance sampling and Monte Carlo sampling under budgetary constraints. An in-depth discussion of these two works within the context of generalization is provided in Appendix D.
Conclusion
In this paper, we challenged the conventional role that amortized inference plays in training deep generative models. In addition to expediting variational inference, amortized inference introduces new ways to regularize maximum likelihood training. We considered a special case of amortized inference regularization (AIR) where the inference model must learn a smoothed mapping from X → Q and showed that the denoising variational autoencoder (DVAE) and weight-normalized inference (WNI) are effective instantiations of AIR. Promising directions for future work include replacing denoising with adversarial training [24] and weight normalization with spectral normalization [25]. Furthermore, we demonstrated that AIR plays a crucial role in the regularization of IWAE, and that higher levels of regularization may be necessary due to the attenuating effects of importance sampling on AIR. We believe that variational family expansion by Monte Carlo methods [26] may exhibit the same attenuating effect on AIR and recommend this as an additional research direction.
A Overly Expressive Amortization Family Hurts Generalization
In the experiments by [10], they observed that an overly expressive amortization family increases the test set inference gap, but does not impact the test set log-likelihood. We show in Table 4 that [10]'s observation is not true in general, and that an overly expressive amortization family can in fact hurt test set log-likelihood. Details regarding the architectures are provided in Appendix E.

Table 4: Performance evaluation when an over-expressive amortization family is used (i.e. a larger encoder). Comparison is made against models that use a smaller encoder. The results show that using a large encoder consistently hurts generalization by over 1 nat.
B Revisiting [17]'s Denoising Variational Autoencoder Analysis
In [17]'s Lemma 1, they considered a joint distribution p_θ(x, z). They introduced an auxiliary variable z′ into their inference model (here z′ takes on the role of the perturbed input x̃ = x + ε; to avoid confusion, we stick to the notation used in their Lemma) and considered the corresponding inference model. They considered two ways to use this inference model. The first approach is to marginalize the auxiliary latent variable z′; this defines a marginalized inference model and yields the lower bound L_a. Next, they considered an alternative lower bound L_b. [17]'s Lemma 1 claims that (1) L_a and L_b are valid lower bounds of ln p_θ(x), and (2) L_b ≥ L_a.
Using Lemma 1, [17] motivated the denoising variational autoencoder by concluding that it provides a tighter bound than marginalization of the noise variable. Although statement 1 is correct, statement 2 is not. We indicate the mistake in their proof of statement 2 with "?="; the proof relied on an assumed equality whose RHS is ill-defined, since it does not take the expectation over z′, whereas the LHS explicitly specifies an expectation over z′ ∼ q_ψ(z′ | x). This difference, while subtle, invalidates the subsequent steps. If we fix Eq. (28) and attempt to see if the rest of the proof still follows, we find that the inequality in fact points the other way. Their conclusion that marginalizing over the noise variable results in a looser bound is thus incorrect.
In the text (beneath [17] Eq. (11)), they further implied that the denoising VAE and standard VAE objectives are not comparable. We show in Proposition 1 that the denoising VAE objective is in fact a lower bound of the standard VAE objective.

Figure 3: Evaluation of IW-SVI versus IWAE-k for a fixed generative model. IW-SVI out-performs IWAE-k on both computation time and number of importance samples needed. Similar to [11], we conclude that IWAE-k's poor approximation of the log-likelihood is attributable to an overfit amortized inference model. Fig. 3a) IW-SVI computation time depends on the number of gradient update steps; IWAE-k computation time depends on the number of importance samples k. IWAE-100000 still under-performs IW-SVI (k = 5000, ℓ = 1, T = 100), demonstrating the efficacy of IW-SVI. Fig. 3b) Comparison of IWAE and IW-SVI (T = 3000) for different values of k. Fig. 3c) Comparison of IW-SVI (k = 5000) for different values of T.
C Importance-Weighted Stochastic Variational Inference
We propose a simple method to approximate the marginal ln p_θ(x). A common approach for approximating the log marginal is IWAE-5000 [7,8,27,28], which computes L_5000(x; θ, φ). However, this approach relies on the learned inference model q_φ(z | x), which might overfit to the training set. To address this issue, we propose to perform importance-weighted stochastic variational inference (IW-SVI), in which the proposal is refitted per data point as q*_{x,ℓ} = arg max_{q∈Q} L_ℓ(x; θ, q).
The optimization in Eq. (42) is approximated with T gradient steps. As k and ℓ increase, the approximation approaches the true log-likelihood. We approximate the log-likelihood over the entire test set using E_{p_test(x)} L_k(p_θ, q*_{x,ℓ}; x). To reduce time and memory cost during the per-sample optimization in Eq. (42), we use a large k = 5000 but a smaller ℓ = 8, and approximately solve the optimization problem using T = 3000 gradient steps. In comparison to IWAE-5000, we consistently observe significant improvement in the log-likelihood approximation. IW-SVI provides a simple alternative to Annealed Importance Sampling, requiring minimal modification to any existing IWAE-k implementation; a rough sketch of the procedure follows.
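The following sketch shows one plausible implementation of IW-SVI for a single test point, with our own function names, a diagonal-Gaussian variational family, and Adam for the per-sample optimization; the paper's exact optimizer settings may differ.

```python
# Rough sketch of IW-SVI (our naming): refine per-example variational parameters with
# T gradient steps on an l-sample importance-weighted bound, then estimate the
# log-likelihood with a larger k-sample bound using the refined proposal.
import math
import torch

def iw_svi(x, init_mu, init_log_sigma, log_joint, l=8, k=5000, T=3000, lr=1e-2):
    mu = init_mu.clone().requires_grad_(True)
    log_sigma = init_log_sigma.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, log_sigma], lr=lr)

    def bound(n_samples):
        eps = torch.randn(n_samples, mu.shape[-1])
        z = mu + log_sigma.exp() * eps
        log_q = (-0.5 * eps.pow(2) - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)
        log_w = log_joint(x, z) - log_q
        return torch.logsumexp(log_w, 0) - math.log(n_samples)

    for _ in range(T):                 # per-sample optimization, as in Eq. (42)
        opt.zero_grad()
        loss = -bound(l)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return bound(k)                # final k-sample log-likelihood estimate
```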
D Are High Signal-to-Noise Ratio Gradients Necessarily Better?
Our paper shares a similar high-level message with a recent study by [23]: that approximating maximum likelihood training is not necessarily better. However, we approach this message in very different ways. [23] observed that importance sampling weakens the signal-to-noise ratio of the gradients used to update the amortized inference model. In response, they proposed to increase this ratio by increasing the number of Monte Carlo samples m used to estimate the expectation in Eq. (5). Under a fixed budget of T ≥ mk (where k is the number of importance samples and m is the number of Monte Carlo samples), they observed that it may be desirable to trade off k in order to increase m. Given an infinite budget, however, [23]'s hypothesis would still recommend increasing k as much as possible in order to approximate maximum likelihood training.
In contrast, we argue that it may be inherently desirable to regularize the maximum likelihood objective, and that amortized inference regularization is an effective means of doing so. From the perspective of generalization, it is also worth wondering whether high signal-to-noise ratio gradients are necessarily better. The desirability of noisy gradients for improving generalization is an active area of research [29,30,31,32], and an extensive investigation of the role of gradient stochasticity in regularizing the amortized inference model is beyond the scope of our paper. To encourage future exploration in this direction, we show in Figure 4 that the effect of gradient stochasticity is non-negligible. For the standard VAE, we observed that increasing m can cause the model to overfit (on the amortized ELBO objective) over the course of training. Interestingly, we observed that DVAE does not experience this overfitting effect, suggesting that AIR is robust to larger values of m.

Datasets. Dataset details are provided in Table 5. The MNIST validation set was created by randomly holding out 10000 samples from the original 60000-sample training set. The OMNIGLOT validation set was similarly created by randomly holding out 1345 samples from the original 24345-sample training set.

Training parameters. Important training parameters are provided in Table 6. We used the Adam optimizer and exponentially decayed the initial learning rate as a function of the current iteration t ∈ {0, . . . , T − 1}, where T is the total number of iterations. Early-stopping is applied according to IWAE-5000 evaluation on the validation set.

Table 6:
MNIST (Appendix A) | d1000-d1000-d1000-z64 | d300-d300-x784 | 10^-3 | 1.5 × 10^6 | 100
MNIST | d300-d300-z64 | d300-d300-x784 | 10^-3 | 1.5 × 10^6 | 100
OMNIGLOT | d200-d200-z64 | d200-d200-x784 | 10^-3 | 1.5 × 10^6 | 100
CALTECH | d500-z64 | d500-x784 | 10^-4 | 4 × 10^5 | 10

Regularization strength tuning. The denoising and weight normalization regularizers have hyperparameters σ and H, respectively. See Table 7 for hyperparameter search space details. We performed a basic grid search and tuned the regularization strength hyperparameters based on the validation set.
F Proofs
Remark. Some of the proofs mention the notion of an infinite capacity F, G or Q. To clarify, we say that F has infinite capacity if it is the set of all possible functions that map from X to Q. Analogously, G has infinite capacity if it is the set of all possible functions that map from Z to P. We say that Q has infinite capacity if it is the set of all possible distributions over the space Z.

Lemma 1. For fixed (θ, σ, Q) and infinite capacity F, the inference model that optimizes the DVAE objective in Eq. (13) is the kernel regression model f*_σ(x) = arg min_{q∈Q} Σ_{i=1}^n w_σ(x, x^(i)) · D(q(z) ‖ p_θ(z | x^(i))), where w_σ(x, x^(i)) = K_σ(x, x^(i)) / Σ_j K_σ(x, x^(j)) and K_σ(x, y) = exp(−‖x − y‖² / (2σ²)) is the RBF kernel.
Proof. Define x̃ = x + ε and p̂(x, x̃) = p̂(x)N(x̃ | x, σI). Rewrite the objective accordingly and recall that F has infinite capacity. The resulting lower bound is tight since we can select f*_σ ∈ F such that f*_σ(x̃) = arg min_{q∈Q} E_{p̂(x|x̃)} D(q(z) ‖ p_θ(z | x)).
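A small numerical illustration (ours, not the paper's code) of the kernel-regression weights appearing in Lemma 1: as σ grows, the weights flatten toward 1/n, which is exactly the smoothing effect exploited by DVAE.

```python
# RBF kernel responsibilities w_sigma(x, x_i) over the training set.
import numpy as np

def kernel_weights(x, train_xs, sigma):
    d2 = np.array([np.sum((x - xi) ** 2) for xi in train_xs])
    logK = -d2 / (2.0 * sigma ** 2)   # log K_sigma(x, x_i) for the RBF kernel
    w = np.exp(logK - logK.max())
    return w / w.sum()                # normalized kernel-regression weights
```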
Theorem 1. Let Q be a minimal exponential family with corresponding natural parameter space Ω.
With a slight abuse of notation, consider f ∈ F : X → Ω. Under the simplifying assumption that p_θ(z | x^(i)) is contained within Q and parameterized by η^(i) ∈ Ω, and that F has infinite capacity, the optimal inference model in Lemma 1 returns f*_σ(x) = η ∈ Ω, where η = Σ_{i=1}^n w_σ(x, x^(i)) · η^(i) (15), and the Lipschitz constant of f*_σ is bounded by O(1/σ²).
Proof. Proof provided in two parts. | 2018-05-30T00:56:27.339Z | 2018-05-23T00:00:00.000 | {
"year": 2018,
"sha1": "fe12eac2d72ed1e2f77578637edfe6563c6187af",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "02853eaf34209269d893e7200098b37962d345ea",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
17763054 | pes2o/s2orc | v3-fos-license | Coexistence in neutral theories: interplay of criticality and mild local preferences
Neutral theories have played a crucial and revolutionary role in fields such as population genetics and biogeography. These theories are critical by definition, in the sense that the overall growth rate of each single allele/species/type vanishes. Thus each species in a neutral model sits at the edge between invasion and extinction, allowing for the coexistence of symmetric/neutral types. However, in finite systems, mono-dominated states are ineludibly reached in relatively short times owing to demographic fluctuations, thus leaving us with an unsatisfactory framework to rationalize empirically-observed long-term coexistence. Here, we scrutinize the effect of heterogeneity in quasi-neutral theories, in which there can be a local mild preference for some of the competing species at some sites, even if the overall species symmetry is maintained. As we show here, mild biases at a small fraction of locations suffice to induce overall robust and durable species coexistence, even in regions arbitrarily far apart from the biased locations. This result stems from the long-range nature of the underlying critical bulk dynamics and has a number of implications, for example, in conservation ecology as it suggests that constructing local specific "sanctuaries" for different competing species can result in global enhancement of biodiversity, even in regions arbitrarily distant from the protected refuges.
Introduction
Statistical mechanics models on lattices or networks constitute a deep-rooted theoretical framework in areas of science where the subjects of study are ensembles of many interacting "building blocks" such as particles, spins, individuals, agents, and so forth [1,2,3]. In particular, the study of genuinely non-equilibrium models, with different types of collective ordering, paved the way for the development of interdisciplinary applications -far beyond traditional physics problems- in biology, ecology, epidemiology, and social sciences [4,5,6,7]. Within this framework, neutral theories came out -first in population genetics [8,9] and then in ecology [10,11,12,13], and epidemiology [14]- as analytically tractable null models, aimed at capturing the main collective and emerging properties of communities of interacting individuals belonging to a limited set of interchangeable types (alleles, species, pathogens, opinions, etc.). For instance, in the case of population genetics, neutral theories are able to reproduce with remarkable accuracy patterns of relative abundance of different alleles as a result of pure stochasticity and, thus, without making any reference to specific intrinsic differences between them nor to natural selection, leading to a deep conceptual revolution in the field [8]. A similar revolution shattered theoretical ecology after Hubbell's neutral theory of biogeography and biodiversity [10].
Different models fall under the common name of "neutral" theories; for instance, some of them are spatially explicit, while others are not. However, they all necessarily share two common important traits: symmetry upon species exchange and the existence of absorbing or quiescent states, which account for the constraint that once all individual elements are identical (e.g. a given allele fixated through a population or a monodominated forest) the system remains indefinitely unaltered, at least in the absence of mutation, immigration, or other external perturbations. The most paradigmatic example of this class of models is the exactly solvable voter model [1], a two-species parameter-free competition model characterized by two symmetric absorbing states (representing the extinction of one species and the subsequent mono-dominance of the remaining one), with very irregular domain frontiers (which in physical terms stem from the absence of surface tension [1,15,16,17]), and logarithmic coarsening [3,18]. It is noteworthy that owing to the symmetry between the two species, the average growth rate of each of them necessarily vanishes, and thus the voter model sits by construction right at a critical point, with diverging characteristic length and time scales [1,15,16,19]. Variants of the voter model in which ordered and disordered phases emerge as a control parameter is varied have been studied in the literature (see e.g. [16,20] and references therein); right at their critical point these models behave like the pure voter model, confirming that this constitutes a robust, generalized voter (GV), universality class. Owing to the existence of strong fluctuations in the dynamics, any finite-size neutral system is doomed to eventually fall into one of the absorbing states. In the particular case of the voter model the typical time needed to reach absorption, T, can be exactly computed. It shows a power-law dependence on the size of the system, T ∼ N^α, with a dimension-dependent exponent α = α(d), with logarithmic corrections at the upper critical dimension d = 2: T ∼ N log(N) [21,3]. This behavior is shared by models in the GV class at criticality. Therefore, coexistence in the voter model -in the absence of mutation or immigration- is just transitory or "fragile". On the other hand, a strong signature of the existence of a phase of robust coexistence would be provided by the observation of exponential scaling of T with N as would correspond to the Arrhenius law for the escape from a potential well [2]. If one is interested in the ecological/biological interpretation of neutral theories, the transitory nature of coexistence leaves unanswered the question of how diversity (i.e. alleles/species coexistence) can be preserved over large time scales; thus, one needs to resort to relatively large mutation and/or migration rates -which might be unrealistic- to justify the empirically encountered rich diversity.
Alternative mechanisms fostering coexistence have been extensively searched for in the literature. Coexistence can be stabilized by considering the breaking of neutrality at local but not at global scales. For instance: the introduction of negative density dependence in the ability of a species to invade a new territory [22,23] or considering quenched environmental conditions which favor each of the species in some regions, but without an overall preference for any of them [24,25,23], lead to much larger extinction times (T ∼ exp(cN), where c > 0 is a constant) than those of the pure voter model, entailing truly stable or "robust" species coexistence. Similarly, models have been studied where the presence of "zealots" -i.e. sites which do not alter their state under any circumstances, thus breaking the local symmetry in a "hard" way- prevents the corresponding absorbing state from being reached, precluding extinction [26,27,28]. Keeping in mind the ecological interpretation of neutral theories, our aim here is to investigate the effect of locally breaking the neutral dynamics in a "soft" way. More precisely, we introduce a slight local bias towards one of the two states only at a few specific locations, with conflicting local preferences existing across the system. Contrary to the case of the "hard" constraint imposed by zealots (which do not ever alter their state), here all sites are allowed to take any of the two states even if there are local biases.
Definition of the model and Mean Field analysis
For the sake of clarity but without loss of generality, we consider the voter model [1] as a minimal model of neutral competition of two species (generalizations to S species are straightforward [29]). Sites on an arbitrary d-dimensional lattice are endowed with binary variables, σ_i ∈ {−1, 1}, i ∈ Z^d, encoding the type of species at each location; each node changes its state with a probability proportional to the number of neighbors in the opposite state (see below). Trivially, the model has two symmetric absorbing or mono-dominated states. The perturbation we consider is in the form of spatial environmental heterogeneities -which constitute a key and unavoidable aspect of real ecosystems [30]- that preserve the overall neutral symmetry, as well as the existence of the absorbing states, but that locally favor one of the competing species. This is modeled by a quenched external field (τ) or intrinsic preference for one of the two states at a limited fraction of the sites as follows. We partition the lattice in three disjoint sets: Λ+ where states ("spins") intrinsically tend to conform with the "up" state (i.e. τ = +1), Λ− with an intrinsic preference for the "down" state (i.e. τ = −1), and a neutral set Λ∅ with no preference (τ = 0). The relative size, |•|, of these three sets is fixed via a parameter η ∈ [0, 1/2], viz. |Λ+| = |Λ−| = ηN and |Λ∅| = (1 − 2η)N; in particular, the fraction η may diminish with system size if a non-extensive amount of biased sites is considered. In continuous time, the model is defined by the flipping rates at any given site i, given in Eq. (1), where z(i) is the set of nearest neighbors of vertex i, the "external fields" τ_i are quenched variables taking values 0, +1, −1 if i ∈ Λ∅, Λ+, Λ−, respectively, and 0 ≤ ε ≤ 1 is a constant parameter defining the strength of the local bias. Eq. (1) is the sum of a linear term representing the voter model dynamics and a term that lowers or enhances the flipping rate by a constant amount depending on whether the change results in alignment with τ_i or not. For ε = 0 or for η = 0, we recover the standard voter model. On the other extreme, for ε = 1, sites with a non-zero external field, τ_i, are always aligned with the field (i.e. are "zealots"; in this case, the mono-dominated (absorbing) states are explicitly removed, leading to a different family of models [26,28]). Observe that the model is symmetric in the sense that if the labels of all individuals and the direction of all external fields are switched the system remains unchanged.
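Since Eq. (1) is not reproduced here, the toy Monte Carlo sketch below implements only one plausible additive reading of the rule described in words (a voter term plus a constant ±ε shift depending on alignment with the local preference τ_i), with fully periodic boundaries for brevity; the authors' exact rates may differ.

```python
# Toy random-sequential-update sketch of a biased voter model on a 2D lattice.
import numpy as np

def step(sigma, tau, eps, rng):
    Lx, Ly = sigma.shape
    i, j = rng.integers(Lx), rng.integers(Ly)
    neigh = [sigma[(i + 1) % Lx, j], sigma[(i - 1) % Lx, j],
             sigma[i, (j + 1) % Ly], sigma[i, (j - 1) % Ly]]
    f_disagree = np.mean([n != sigma[i, j] for n in neigh])   # plain voter term
    bias = 0.0
    if tau[i, j] != 0:                                        # biased site: +eps if the flip
        bias = eps if -sigma[i, j] == tau[i, j] else -eps     # aligns with tau_i, -eps otherwise
    p_flip = float(np.clip(f_disagree + bias, 0.0, 1.0))
    if rng.random() < p_flip:
        sigma[i, j] *= -1

rng = np.random.default_rng(0)
L = 64
sigma = rng.choice([-1, 1], size=(L, L))
tau = np.zeros((L, L), dtype=int)
tau[0, :], tau[-1, :] = 1, -1          # mild opposite preferences at the two boundary rows
for _ in range(200000):
    step(sigma, tau, eps=0.03, rng=rng)
```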
To obtain analytical insight, we first consider the mean-field (MF) version of the model. For this, we consider the dynamics on a complete -fully connected- graph where all sites are nearest neighbors of each other, which can be interpreted as a model of mutually interconnected communities (metacommunities) [31]. For example, we could think of two competing species occupying an area composed of patches or islands, with individuals dispersing from one to the other; two of the islands could be more favorable for each of the two species respectively, while a third one could be neutral. In this three-island case the state of the system can be completely defined by three macroscopic variables. Let x be the fraction of sites in the whole system which satisfy the local preference for the up state, i.e. σ_i = τ_i = +1, y the fraction of sites satisfying the opposite preference σ_i = τ_i = −1, and z the fraction of sites that are in the up state in neutral sites, σ_i = +1, τ_i = 0. By construction, x and y are defined in [0, η] while z is defined in [0, 1 − 2η]. In the infinite size limit, the model can be easily verified to be ruled by the set of deterministic equations of Eq. (2). The system above has three fixed points; a standard linear stability analysis reveals that two of them -corresponding to the symmetric absorbing states at (x, y, z) = (0, η, 0), (η, 0, 1 − 2η)- are unstable, while the third one at (x*, y*, z*) = ((1 + ε)η/2, (1 + ε)η/2, 1/2 − η) is a stable attractor of the dynamics. The last point corresponds to a state of symmetrical coexistence, implying a non-trivial and rich biodiversity over all the ecosystem, independently of the local bias strength ε and of the fraction of biased nodes η (observe that the limit η → 0 of Eq. (2) is singular, as for η = 0 the variables x and y cannot be defined). This conclusion holds also for large but finite values of N (as can be seen by writing down a Fokker-Planck equation from the microscopic dynamics employing a large-N expansion [2]), where one obtains a stochastic equation with the same deterministic part plus a sub-leading noise term, confirming that the presence of some non-neutral patches prompts robust coexistence in metacommunities.

Figure 1: For the non-extensive case, η ∝ 1/N, the perturbation has no effect (in the infinite-size limit) and the coexistence is transitory or fragile, as it is in the pure voter model (VM); T(N) ∼ N ln(N). Instead, the exponential behavior in the extensive (with η = 1/10) and sub-extensive (η = 1/√N) cases reveals the existence of robust coexistence. In the inset, we show an example of the collapse of curves obtained varying ε for different sizes of the system in the extensive case using the ansatz in Eq. (3). Averages are performed over, at least, 1000 realizations.
In Fig. 1 we present results of computer simulations using the Gillespie algorithm for the scaling of T as a function of the total number of nodes N on the complete (i.e. fully connected) network for various values of η. In the case in which external fields are applied to an extensive number of sites (i.e. η = const.) it shows a clear exponential scaling, while in the non-extensive case (e.g. keeping a fixed number of biased sites as N is increased, η ∝ 1/N) it is linear, as expected for the mean-field pure voter model. A non-trivial situation arises when ηN is sub-extensive, i.e. η is neither a constant nor decreases as 1/N, as for instance η = N^{−1/2}. As we shall study in detail later, this is for example the case when an external field acts only at some system boundaries in a two-dimensional system. As shown in Fig. 1, the sub-extensive case still shows an exponential scaling of T with N, but weaker than in the extensive case. Mathematically this is related to the aforementioned singular limit, η → 0.

Figure 2: Observe, for instance, how in the lower row dark (−1) states tend to dominate in the lower half and clear (+1) states dominate above. Thus, a mild bias acting only at the system boundaries fosters overall phase coexistence for extremely long times by locally favoring one of the two, otherwise symmetric, species. See Fig. 3 for a more quantitative analysis.
To give an estimate of the scaling form of T, we can revisit the simpler case in which all nodes of a complete graph are exposed to a bias (half positive and half negative, i.e. η = 1/2), which was analyzed by some of us in [23]. In that two-variable case it can be analytically shown that T depends on N and ε via the approximate relation ln T ∼ Nε²/(1 − 2ε) plus sub-leading corrections [23]. In the present case, we need to include the presence of only a limited number of biased sites, as encoded in the parameter η. Taking logs on both sides, the simplest ansatz one can employ to study the generic situation with any value of η is Eq. (3), where we have heuristically replaced N by a reduced effective size, described by η^α N, where α is an unspecified positive constant and the sub-leading correction T_pure(N) gives the value of T in the pure version of the model (i.e. for ε = 0). Writing η = kN^α (α ≤ 0) and removing the sub-leading term ln T_pure(N), we can rewrite the equation above in a more compact form, Eq. (4), which leads to a quite good curve collapse for fully connected (i.e. mean-field) networks (see the inset of Fig. 1).
Biased boundaries in two dimensions
To go ahead and study the consequences of mild biases on spatially-explicit systems, going beyond mean-field predictions, we have considered the model of Eq. (1) on a two-dimensional square lattice where the biased sets Λ+ and Λ− are taken to be two one-dimensional chains with L = √N sites each, located at the upper and lower boundaries respectively. This corresponds to the sub-extensive case studied above, with η = 1/√N. Periodic boundary conditions are assumed along the other direction. Fig. 2 portrays an example of a single realization of the stochastic process with and without the soft boundary biases. In the bulk, the dynamics of the system is identical in both cases but -owing to the boundary effects- the biased system reaches the absorbing state in a much longer time and effectively stays in an active/coexistence state. Indeed, as shown in Fig. 4, robust, exponential coexistence can be observed, obeying a general collapse formula analogous to that of Eq. (4), in which the sub-leading term is simply ln T_pure(N) for the two-dimensional standard voter model [1,3]. To further investigate the origin of the non-trivial N^{3/4} factor we performed simulations on a rectangular system, where N = L_⊥ × L_∥, to discriminate the effective role of the two directions (L_∥ is the length of the biased boundaries and L_⊥ the distance between them). Results are shown in Fig. 5 and show that a good collapse -even if not of the same quality as above- is observed replacing N^{3/4} by L_⊥^{1/2} L_∥ (note that for a square lattice L_⊥ = L_∥ = √N, so L_⊥^{1/2} L_∥ = N^{3/4}). This suggests that characteristic times grow linearly with the size of the biased walls and proportionally to the square root of the distance between them. While the linear dependence on L_∥ seems intuitive, thus far we have not been able to explain the square-root dependence on L_⊥.
Finally, we have also measured the average value of σ (which in physical terms is the "magnetization", m) as a function of x_⊥ ∈ [0, L_⊥], the lattice position along the ⊥ direction. At the biased boundaries m is very close to ±1, depending on the respective external field, while in the bulk it varies linearly with the distance to the boundaries, showing that boundaries propagate their (short-range) influence to arbitrarily large distances. We have also measured the two-point correlation function and confirmed the presence of power-law, i.e. scale-free, decays with distance. Both the magnetization m(x_⊥) and the correlation function C(x_⊥) are computed averaging over the direction without biased boundaries (∥); these quantities are defined in Eq. (6). Clearly, all these are consequences of the bulk dynamics being critical, i.e. lacking a characteristic correlation length (Figure 3). Therefore, the interplay between mild biases at distant boundaries and bulk criticality affects the whole system and changes its overall properties, inducing, in particular, stable coexistence.

Figure 3: The magnetization m(x_⊥) and the correlation function C(x_⊥) of Eq. (6), both of them plotted as a function of the re-scaled distance x from the wall with negative bias. In the bulk the linearity of the magnetization is not perfect due to the highly fluctuating dynamics of the voter model. In both plots, the scaling is not perfect owing to the relatively small system sizes reported and thus to the persistence of corrections to scaling.

Figure 5: As in Fig. (4) but for rectangular landscapes with opposed biased boundaries of length L_∥, separated by a distance L_⊥ (note that ε is kept fixed, ε = 0.03, while in Fig. (4) it was variable). Averages are performed over at least 1000 realizations.
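As a rough illustration of the measurement just described (the precise definitions are those of Eq. (6), which is not reproduced here), the snippet below estimates the magnetization profile m(x_⊥) by averaging configurations along the parallel direction, together with one plausible row-to-row correlation estimator.

```python
# Sketch (ours) of the boundary-to-boundary profile measurement from simulation data.
import numpy as np

def magnetization_profile(sigma_samples):
    # sigma_samples: array of shape (n_realizations, L_perp, L_par) with entries +/-1
    return sigma_samples.mean(axis=(0, 2))        # m(x_perp), one value per row

def two_point_profile(sigma_samples, x_ref=0):
    # correlation of each row with a reference row, averaged along the parallel direction
    ref = sigma_samples[:, x_ref:x_ref + 1, :]
    return (sigma_samples * ref).mean(axis=(0, 2))
```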
Discussion
In summary, we have investigated the robustness of the critical voter-model behavior upon introducing soft biases which break locally the neutral (Z_2) symmetry. We have shown, both at a mean-field level and in spatially explicit two-dimensional systems, that, as long as the number of biased sites grows with system size either extensively or sub-extensively, this type of bias promotes the existence of a well-defined active quasi-stationary state, i.e. it stabilizes coexistence between the two competing species even in regions arbitrarily far apart from the biased boundaries. In particular, we have shown that mild biases at some locations can change the dependence of the characteristic extinction times on system size from power-law (with logarithmic corrections in d = 2) to exponential, thus preventing the collapse towards the mono-dominated state and greatly enhancing the coexistence of competing neutral species. This long-ranged global effect stems from the critical, i.e. scale-free, nature of the underlying neutral dynamics in the bulk. Our results are robust to the introduction of non-symmetrical biases, i.e. biases stronger for one of the species, except for the fact that the state of coexistence is no longer symmetric.
From the theoretical side, the two-dimensional situation discussed above bears some similarities with wetting phenomena [32,33,34,35]. In wetting problems boundary effects can control bulk features arbitrarily far from them [32,33,34,35], but, contrary to standard wetting problems, here interfacial descriptions are not useful, as interfaces separating the two phases are not well defined, i.e. they are too rough and have plenty of overhangs. Thus, theoretical descriptions of the phenomena described here remain elusive. In a future work we shall try to shed further light on these problems by analyzing a field-theoretical version of the voter model [16,19,36,37] equipped with adequate boundary conditions.
To conclude, let us remark that our findings here have a number of interesting implications in conservation ecology -where the concept of "distance of edge influence" quantifying the spatial scale up to which boundaries in fragmented environments have an impact, is highly relevant [38]-as well as in epidemics and social sciences where neutral dynamics plays a relevant role. In particular, it suggests that constructing local specific "sanctuaries" for each of the competing species in a given community can result in global enhancement of biodiversity, even in regions arbitrarily distant from the preserved refuges. | 2014-12-19T11:36:03.000Z | 2014-12-19T00:00:00.000 | {
"year": 2015,
"sha1": "9212cfcfb28eab8cc285c94978ddd3773e35a9f9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1412.6297",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c970f0ec0f0fdd3373e6b9e62366d06514b1836e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Biology"
]
} |
10673233 | pes2o/s2orc | v3-fos-license | An Overview of Risk Factors Associated to Post-partum Depression in Asia
Post partum depression (PPD) is an important complication of child-bearing. It requires urgent intervention, as it can have long-term adverse consequences for both mother and child if ignored. If PPD is to be prevented by a public health intervention, the recognition and timely identification of its risk factors is a must. In this review, we have tried to synthesize the results of Asian studies examining the risk factors of PPD. Some risk factors, which are unique to Asian culture, have also been identified and discussed. We emphasize early identification of these risk factors, as most of them are modifiable and this can have significant implications in preventing the emergence of post partum depression, a serious health issue of Asian women.
Introduction
Post-partum depression (PPD) is the most common complication of childbearing, occurring in 10-15% of women after delivery. 1 It usually begins within the first six weeks post partum and represents a considerable public health problem affecting women and their families. 2 The effects of postnatal depression on the mother, her married life, and her children make it an important condition to diagnose, treat and prevent. 3 Whilst very severe postnatal depression can easily be diagnosed, less severe presentations of depressive illness can be easily dismissed as normal or natural phenomena of childbirth. Consequently, this untreated postpartum depression can have long-term adverse effects, both on mother and her children. If PPD is to be prevented by clinical or public health intervention, its risk factors (i.e. factors which could significantly predict the occurrence of PPD) need to be faithfully identified. Studies in the developing world have found that risk factors are often culturally determined. This review aims at improving our understanding of the culturally determined risk factors of PPD within Asian culture.
Materials and Methods
We conducted a literature search using the PubMed and PsychINFO databases. The keywords used were risk factors, post partum, postnatal, after child birth, Asia and depression. Articles published in the English language or providing an abstract (with complete information) in English were included in this study. The included studies had to be original research articles in the form of observational cohorts and surveys. All the articles which focused on factors associated with PPD in the Asian context were then selected. The factors which showed a significant association with the development of PPD were considered risk factors for PPD. Assessment of PPD could be done either by structured clinical interview or by self-rated questionnaire in the selected studies. Studies that relied on special subgroups (e.g., pregnant or postpartum women with HIV infection) were eliminated; 27 studies were thus identified, and data regarding the number of subjects and the mode and time period of assessment were extracted from these studies, as mentioned below.
Results and Discussion
A number of risk factors were identified upon analyzing the selected studies. The following table (Table 1) shows the list of studies which have analyzed the possible risk factors for PPD in the Asian context. The risk factors thus identified from various studies can be grouped into the following categories.
Demographic factors
Among these, the factors which have shown a significant association with PPD are the age of the mother at the time of childbirth as well as older age at marriage. 4 Second, being a migrant and giving birth to a child overseas has also been identified as a risk factor for PPD. In a study examining Japanese women who were born and raised in Japan but gave birth to their child in Hawaii, USA, half of the participants experienced emotional dysfunction during their pregnancy. All primipara females experienced post-partum depression. The participants who had maternity blues tended to have PPD. 5 Another study assessing the incidence of PPD symptomatology in a sample of immigrant Asian Indian women found that there was a minor depressive symptomatology rate of 28% and an additional major depressive symptomatology rate of 24%. 6 Different health care attitudes in different cultures and distance from family leading to homesickness could be the possible reasons. However, no consistent association of PPD has been found with maternal education. 7 Lower socio-economic status has been found to be another factor associated with PPD. Although pregnancy and childbirth are generally viewed as a time of joy and pleasure in most families, they also impose a financial burden in the form of expenses for a new member of the family, especially among low-income or nuclear families, where the husband is the only provider of family income. 8,9 This can cause depression in new mothers. Only one study has found an association of religion with PPD. 10
Clinical factors
An important factor which can lead to PPD is parity. It has been observed that the frequency of primiparity is higher in women with PPD. 5,11 Having 5 or more children was responsible for the persistence of prenatal depression beyond the first few postnatal months. 8 On the contrary, one study found a significant association of multiparity with PPD. 12 Another significant factor is an unplanned/unwanted pregnancy or a negative attitude toward pregnancy. 13,14 A study by Limlomwongse and Liabsuetrakul (2006) found that negative attitudes towards the pregnancy doubled the risk of PPD. 10 Premarital pregnancy is another risk factor for PPD which is important in the context of Asian countries. It is considered highly unacceptable in most Asian cultures, the reason being a more conservative attitude toward sex among Asian people than among people in the West. Getting pregnant before marriage may indicate that the woman has experienced premarital sexual intercourse, which is considered a shame or taboo in many Asian countries. [31][32][33][34][35] However, unplanned or unwanted pregnancy as a risk factor for PPD should be interpreted very cautiously. It merely reflects the circumstances in which the pregnancy occurred and not the feelings of the woman towards the growing fetus. History of premenstrual symptoms, 15,16 previous depression or having depression during pregnancy, 9,12 high prenatal anxiety, 14 history of post-partum, 17 or maternity blues, 5,16 have all been consistently demonstrated as putative risk factors for PPD. The studies examining these factors provide preliminary evidence that some hormone-related phenomena are related to the occurrence of post-partum mood disorders. The results in a way support the notion that the etiology of post-partum mood disorders may be related to differential hormonal sensitivity. Such risk factors should be carefully assessed and evaluated clinically in detail when dealing with a woman with PPD.
Others include physical ill-health, 7 pregnancy complications or the woman's perception of having complications during this pregnancy, 10 preterm delivery, 18 and history of pregnancy loss. 12 A study by Dindar and Erdogan (2007) found smoking to be a significant risk factor for PPD in Turkish women. 9 However, there are some contradictory views in the literature too. In one study, the risk of PPD was not found to be related to age, level of education, employment status, planned/unplanned pregnancy, history of abortion and pregnancy-related complications, term and type of delivery, gender of the child, and mother's breast-feeding, with another study reporting no relation between method of delivery and risk of PPD. 7,11

Psychosocial factors

Psychosocial factors that were found to be associated with postnatal depression are living in mixed/conflicting influences of culture, 19 poor accommodation, 20 lack of social support, 17,21,22 lack of instrumental support or medical resources, 23 stressful life events during pregnancy, 9,15 lack of a confidant/friend, 7 conflicts with/being abused by in-laws, 4,12,24 and conflicts with relatives over child care. 23 There are different types of social support, for example informational support (where advice and guidance is given), instrumental support (practical help in terms of material aid or assistance with tasks) and emotional support (expressions of caring and esteem). As social support has been demonstrated to be important in the transition to motherhood and has an impact on emotional coping, lack of such social support can be a potent predictor of postpartum depression in some women. 21,23 Conflicts between mother- and daughter-in-law are notoriously common in Asian societies. 24 In Asian societies, traditionally, marriage means a daughter-in-law joining the family and adjusting accordingly rather than composing a new household for the newlyweds. The daughter-in-law is commonly entrusted to the supervision and control of her mother-in-law, who is generally portrayed as authoritarian. 24 Accordingly, some studies have demonstrated mother-in-law conflicts to be a significant problem among married women in those countries. 25,36 These conflicts may be responsible for the emergence of PPD.
Husband/marriage related factors
The factors which fall into this category are psychiatric illness in the husband, 25 current alcoholism, 9 poor educational status, 8 uncertainty about the husband's work/unemployment, 13,15 husband's polygamous relationships, 9 disturbed relationships with the husband or marital conflict, 12,14 lack of support from the husband, 15,26 regret for the marriage, 13 and low involvement of the husband in child care. 27
Child-related factors
Regarding child-related factors, health problems of the child, 15 dissatisfaction with the child's gender (birth of a girl child), 12,21 birth defects in the child, 13 the child's temper tantrums, 15 the child's feeding difficulties, 7 stress with child care, 28,29 and only a short period of rest/exhaustion after childbirth, 26 were all associated with PPD. In terms of the gender of the child, there are quite a few studies that suggest dissatisfaction with the infant's gender (birth of a baby girl) is amongst the risk factors for postpartum depression. This implies the significance of the infant's gender in the Asian family. In some Asian cultures, married couples are expected by their family to have at least one son to maintain the continuity of the bloodline. 36 In Turkey, a baby boy is seen as a source of income. Women who cannot give birth to a baby boy may be considered incapable, leading to serious problems in the marriage. 12 Logically, variables relating to the child can be measured postpartum only. It has been found that mothers suffering from postpartum depression give more negative descriptions of their children than control mothers and report more behavioral problems in their children. 37 Therefore, the mothers' symptoms may be a source of bias in the reporting of child characteristics, and the results of such studies examining child-related factors must be viewed with caution.
Miscellaneous
Poor body image with weight consciousness, 4 personality disorders (e.g. avoidant, dependent, and obsessive-compulsive), 11 and insecure attachment style, 20 are other factors which have been shown to increase the risk of emergence of PPD in women.
Conclusions
There are several risk factors for this highly prevalent problem of postpartum depression in Asian countries, some of which are unique to Asian culture. It is likely that an interplay of these factors plays a role in the causation of postpartum depression. Taking care of these largely modifiable risk factors can prevent the development of postpartum depression. A collaborative-care approach (for example, collaboration between a mental-health professional and an obstetrician) would be reasonable to identify mothers who are at high risk of developing PPD. Resolving marital and family conflicts before conception, helping mothers to establish a support plan, encouraging realistic expectations of birth and parenting, addressing self-esteem issues, and encouraging them to quit smoking could be some of the ways to prevent PPD.
Although an over-simplification, the following scenario of a typical at-risk mother can portray the results from the synthesis of the literature on risk factors of postpartum depression and can help in better understanding them: her clinical history may reveal previous experience of psychiatric illness, and she may have suffered from depressive or anxious symptoms during pregnancy. She may be experiencing difficulties through stressful life events and a poor marital relationship. She perceives that her partner, family and friends are not as supportive as they could be.
"year": 2014,
"sha1": "a354a7082c2a7c3281d6c4e347f4f79f7ee565d8",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4081/mi.2014.5370",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a354a7082c2a7c3281d6c4e347f4f79f7ee565d8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53208969 | pes2o/s2orc | v3-fos-license | A novel FC17/CESA4 mutation causes increased biomass saccharification and lodging resistance by remodeling cell wall in rice
Background Rice not only produces grains for human beings, but also provides large amounts of lignocellulose residues, which have recently been highlighted as a feedstock for biofuel production. Genetic modification of plant cell walls can potentially enhance biomass saccharification; however, it remains a challenge to maintain normal growth with enhanced lodging resistance in rice. Results In this study, the rice (Oryza sativa) mutant fc17, which harbors the substitution (F426S) at the plant-conserved region (P-CR) of the cellulose synthase 4 (CESA4) protein, exhibited slightly affected plant growth and 17% higher lodging resistance compared to the wild-type. More importantly, the mutant showed a 1.68-fold enhancement in biomass saccharification efficiency. Cell wall composition analysis showed a reduction in secondary wall thickness and cellulose content, and a compensatory increase in hemicelluloses and lignin content. Both X-ray diffraction and calcofluor staining demonstrated a significant reduction in cellulose crystallinity, which should be a key factor for its high saccharification. Proteomic profiling of wild-type and fc17 plants further indicated a possible mechanism by which the mutation induces cellulose deposition and cell wall remodeling. Conclusion These results suggest that the CESA4 P-CR site mutation affects cell wall features, especially cellulose structure, and thereby enhances biomass digestion and lodging resistance. Therefore, the CESA4 P-CR region is a promising target for cell wall modification to facilitate the breeding of bioenergy rice. Electronic supplementary material The online version of this article (10.1186/s13068-018-1298-2) contains supplementary material, which is available to authorized users.
Background
Plant cell walls are essential for plant growth and development and provide renewable biomass feedstock for biofuel production [1]. The recalcitrance of plant cell walls leads to costly biomass processing. To reduce recalcitrance, genetic modifications of wall polymers have been applied to enhance biomass saccharification [2][3][4]. However, alterations in wall polymers are mostly associated with defects in plant growth and development; it is therefore critical to identify a key gene for cell wall modification that does not substantially affect plant growth but leads to an enhancement in biomass saccharification [5].
Plants typically contain two different types of cell walls: namely, primary cell walls (PCWs) and secondary cell walls (SCWs). During growth, plant cells are surrounded by a strong yet adaptable primary cell wall, which is mainly composed of cellulose, hemicelluloses, pectins, and proteins [6]. Once growth has ceased, an additional secondary wall may be deposited, which is mainly composed of cellulose, hemicelluloses, and lignin. Cellulose, consisting of linear chains of β-1,4-linked glucan, is the principal substrate for bioethanol production. Through intra- and inter-chain hydrogen bonding, parallel linear glucan chains are crystallized to form cellulose microfibrils that provide plants with excellent toughness for normal plant growth [1].
Crystallinity index (CrI) is a parameter commonly used to quantify cellulose microfibril crystallinity [7][8][9], which is one of the most important characteristics of cellulose negatively affecting biomass enzymatic hydrolysis [10][11][12][13]. Hemicelluloses are a heterogeneous class of polysaccharides with various sugar units, and arabinoxylans comprise the majority of hemicelluloses in the mature tissues of rice [10,14]. Lignin is a complex phenolic polymer that provides plants with great stiffness for resistance to biotic or abiotic stress [15,16].
Cellulose biosynthesis is a tightly regulated process that is performed by plasma membrane-spanning cellulose synthase (CESA) complexes (CSCs) [1]. The CESA protein in different plant species shares common domains and motifs, such as the zinc fingers, the central cytoplasmic domain with D,D,D,QXXRW motif, and eight transmembrane domains [17,18]. The central cytoplasmic domain contains the plant-conserved region (P-CR) and class-specific region (CSR), which may play a role in CESA protein association and assembly [19,20]. Genetic and biochemical studies suggest that at least three distinct CESA isoforms are required to form functional CSCs [21]. In Arabidopsis thaliana, CesA1, CesA3, and CesA6-like (i.e., CesA2, 5, 6, and 9) CesAs synthesize primary wall cellulose, and CesA4, CesA7, and CesA8 comprise the CSCs necessary for secondary wall cellulose production [21][22][23]. Bioinformatics and mutational analysis have shown that OsCesA1, 3,8 and OsCesA4,7,9 are involved in the cellulose synthesis of the primary and secondary cell walls in rice, respectively [24][25][26].
To date, three OsCesA4 mutants have been identified in rice, and all of them show brittleness due to reductions in cellulose content [26][27][28]. Apart from the brittleness phenotype, the Tos17 insertional mutant of CESA4 displays semi-dwarfism and withering of the leaf apex [26]. Bc7 (t) has a two-base deletion in exon 10 and a five-base deletion in intron 10 and shows phenotypes indistinguishable from those of wild-type plants [28]. Different from BC7, the BC11 mutation occurs in the fifth transmembrane domain of CESA4 and causes severe dwarfism and low pollen fertility [27]. These studies demonstrate that mutations at different sites of CESA4 lead to diverse effects on plant growth, indicating a complex role of CESA4 in plant morphological development.
Rice is a major global food crop that produces enormous biomass residues for biofuels and chemical products [29]. In this study, we report on a novel CESA4 conserved site mutation that leads to increased lignocellulose enzymatic hydrolysis and lodging resistance by reducing cellulose CrI and remodeling cell wall.
Results
The fc17 mutant has a brittleness phenotype caused by a CESA4 conserved site mutation

A fragile culm 17 (fc17) mutant was isolated from a natural population of the Japonica cultivar ShenNong265 (Fig. 1a). The extension forces of culms and leaves in fc17 were reduced to ~ 20% and ~ 30%, respectively, of those in the wild-type (WT) (Fig. 1b). Despite brittleness and reduced extension force, fc17 did not show a significant reduction in the breaking force of the basal stem internode, an important physiological property associated with plant lodging resistance (Fig. 1c).
Using 950 F_2 mutant plants generated from a cross between fc17 and an indica cultivar MH63, we identified via map-based cloning a 91-kb region on chromosome 1 that contains the candidate gene. The region includes four putative candidate genes. The first is ORF1 (TIGR ID: LOC_Os01g54620), which encodes cellulose synthase catalytic subunit 4 (OsCESA4), involved in secondary wall cellulose biosynthesis (Fig. 1d). Sequencing analysis revealed a single-base substitution in the eighth exon, changing TTT to TCT and resulting in a change in the 426th amino acid from Phe to Ser (Fig. 1d). This substitution occurred in the P-CR region (Fig. 1e), which is conserved in all CESA family proteins of rice and Arabidopsis (Fig. 1f). Quantitative PCR analysis of CESA4 in multiple tissues and organs revealed that CESA4 is primarily expressed in secondary wall-rich tissues (stem, hull and spikelet) (Fig. 1g). The fc17 phenotypes were further complemented by transgenic expression of WT CESA4 in the fc17 mutant background (fc17 + CESA4), confirming that the missense mutation of CESA4 was responsible for the mutant phenotypes (Fig. 2).
The fc17 mutant exhibits normal plant growth with higher lodging resistance
Apart from brittleness, the fc17 mutant exhibited normal plant growth, similar to that of the WT (Fig. 2a). Despite its relatively short height (Fig. 2b), 2-year field experiments showed that the fc17 mutant had significantly improved plant lodging resistance (the lodging index was reduced by 17%) (Fig. 2c; Additional file 1). In addition, no significant differences between the WT and fc17 were observed in other important agronomic traits, including dry biomass, tiller number, dry spike, and 1000-grain weight (Fig. 2d-g).
The fc17 mutant displays enhanced biomass enzymatic saccharification
Using mature stem materials, we measured biomass enzymatic digestibility (saccharification) in the fc17 mutant by calculating the hexose yields released from enzymatic hydrolysis of pretreated biomass. The fc17 mutant exhibited significantly higher hexose yields, up to 1.45-fold those of the WT, with pretreatments using three concentrations of alkali (0.5%, 1%, and 4% NaOH) and acid (0.5%, 1%, and 2% H2SO4) (Fig. 3a).
To evaluate the effect of enzyme dosage on the saccharification of fc17 and WT plants, enzymatic hydrolysis was performed with cellulase loadings of 3, 6, and 12 filter paper units (FPU) per gram of cellulose. A more efficient conversion of lignocellulosic biomass to hexose was observed in the fc17 mutant, up to 1.68-fold higher than that of WT plants (Fig. 3b).
To confirm the enhanced biomass enzymatic digestibility of the fc17 mutant, we evaluated its stem cell tissues in situ and biomass residues in vitro under SEM (Fig. 4). Without any pretreatment and enzymatic digestion, no visible in situ differences between stems of the mutant and WT were observed (Fig. 4a). However, after 1% NaOH or 1% H2SO4 pretreatment and sequential enzymatic hydrolysis, the mutant exhibited more severe destruction and cell loss than the WT (Fig. 4a).
Fig. 4 Observation of biomass digestion. a Scanning electron microscopy (SEM) images of in situ enzymatic digestion of stems at heading stage after 1% NaOH or 1% H2SO4 pretreatment and sequential enzymatic hydrolysis. b SEM images of in vitro enzymatic digestion of biomass residues released from enzymatic hydrolysis after 1% NaOH or 1% H2SO4 pretreatment
In addition, the biomass residues of the mutant showed rougher surfaces in vitro compared to the WT after 1% NaOH or 1% H2SO4 pretreatment and sequential enzymatic hydrolysis (Fig. 4b), which agrees with findings in other grass plants [11,30]. Both in situ and in vitro observations thus confirm that the mutant has high biomass enzymatic digestibility.
The fc17 mutant shows altered cell wall structure and composition
To understand the improved lodging resistance and enhanced biomass digestibility of the fc17 mutant, we examined its cell wall structure and composition. Scanning electron microscopy (SEM) showed that fc17 had no alterations in culm cross-section structure and cell size but exhibited reduced wall thickness (Fig. 5a). Transmission electron microscopy (TEM) further revealed that the fc17 plants displayed thinner and uneven secondary cell walls compared to the WT plants (Fig. 5b).
To investigate the underlying cause of the altered cell wall structure in fc17 plants, we performed chemical measurements of three wall polymers in the secondary cell wall-rich stems of fc17 and WT plants. Compared to the WT, the fc17 mutant showed a 19% reduction in cellulose level, a 16% increase in hemicelluloses level, and a 14% increase in lignin level (Fig. 5c). Furthermore, GC-MS analyses revealed that the increase in hemicelluloses content in fc17 was mainly contributed by increases in hemicellulosic arabinose and xylose levels (Fig. 5d). In addition, fc17 exhibited contents of galacturonic acid (GalA) and glucuronic acid (GlcA) similar to those of the WT (Fig. 5e).
To confirm the wall polymer contents detected by the chemical approach, we compared equivalent transverse sections of fc17 and WT culm internodes labeled with antibodies against xylan (LM10, LM11, and CCRC-M147), homogalacturonan (HG; JIM5 and JIM7), and rhamnogalacturonan-I (RG-I; CCRC-M35). All antibodies directed to xylan showed a significantly increased occurrence of their epitopes in fc17, while the HG and RG-I epitopes did not show significant differences between the fc17 and WT plants (Fig. 5f).
The fc17 mutant produces structurally aberrant cellulose microfibrils
Mutations involving CESA proteins not only affect cellulose level but also influence cellulose structural features [31,32]. To explain the increased biomass saccharification in fc17, we measured the cellulose crystallinity of both the WT and the mutant using mature stems. X-ray diffraction analysis estimated a 20.8% reduction in cellulose crystallinity in the fc17 mutant compared to the WT (Fig. 6a). As cellulose CrI is positively correlated with its degree of polymerization (DP) [12,24], we further examined the cellulose DP of mature stems in both the WT and mutant plants. A 17.7% reduction in cellulose DP was observed in the fc17 mutant compared to the WT, suggesting that this may have contributed to the observed lower CrI (Fig. 6b). Cellulose crystallization can be prevented using dyes that interact with cellulose chains, such as Calcofluor White [33]. To confirm the decrease in cellulose CrI of fc17 observed in vitro, we treated mutant and WT seedlings after germination with three concentrations of Calcofluor and then measured their root lengths. Calcofluor dramatically repressed root growth in the WT at the intermediate (0.1%) concentration and completely prevented root elongation at the high (0.2%) concentration (Fig. 6c). In contrast, the fc17 mutant was highly resistant to Calcofluor even at the highest concentration (0.2%), indicating that its root growth was largely insensitive to the dye (Fig. 6c). The differences in the responses of fc17 and WT to Calcofluor support the conclusion that fc17 possesses an altered cellulose microfibril structure.
Fig. 6 Detection of cellulose structural features. a Lignocellulose CrI of mature stems using the X-ray diffraction (XRD) method. b Cellulose DP of mature stems using the viscometry method. c Root lengths of germinated seedlings treated with Calcofluor for 48 h (scale bar = 1 cm). *, ** Indicate significant differences between the wild-type (WT) and the fc17 mutant by t-test at p < 0.05 and 0.01, respectively, with the decreased percentage (%) calculated by subtracting the mutant value from the WT value and dividing by the WT value. Error bars indicate SD (n = 3)
Comparative proteomic analysis of WT and fc17 plants
To elucidate the molecular mechanism(s) by which FC17 regulates cell wall formation and maintains plant growth, we performed an iTRAQ-based (isobaric tags for relative and absolute quantification) proteomics analysis of rice stems at heading stage to identify changes in protein expression between the WT and fc17 plants. After iTRAQ and liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis, a total of 237,056 spectra were identified, of which 57,121 spectra could be matched to peptides in the database and 23,687 were unique peptides (Fig. 7a). A total of 6053 proteins were identified using the Oryza sativa UniProt database. We detected 695 differentially expressed proteins (DEPs) with expression changes > 1.5-fold between the WT and fc17. Of these proteins, 348 were upregulated and 347 were downregulated (Fig. 7b). Pearson correlation analysis showed good correlation among different replicates of the same sample, whereas significant differences were observed between the WT and fc17 plants (Fig. 7c).
Based on the considerable effects of the fc17 mutation on cellulose content and CrI, we quantified the expression levels of proteins involved in cellulose production using the iTRAQ proteome profile. The protein levels of CESA4, CESA7, and CESA9, which form the secondary wall CSC, were reduced by 18%, 15%, and 11%, respectively, in fc17 (Table 1), which was confirmed by western blot analysis of CESA4, CESA7, and CESA9 using specific antibodies (Fig. 7d). In contrast, the other CESA proteins (CESA1/2/3/6/8) in fc17 plants displayed expression levels similar to those in the WT (Table 1). We also examined the expression levels of BC (Brittle Culm) proteins, which contribute to both mechanical strength and cellulose synthesis in rice (Table 1). Most of the BC proteins quantified by the iTRAQ assay did not exhibit significant alterations between fc17 and WT plants, with the exception of BC12 (a KIF4 family protein), which was reduced by 12% in the fc17 mutant (Table 1). Therefore, the fc17 mutation probably regulates cellulose production by impairing the number or association of CESA subunits in the secondary cell wall CSCs.
We further employed Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis to reveal the pathways affected by the fc17 mutation (Table 2). Although the detected DEPs may be involved in 83 pathways, the categories of carbon fixation and metabolism (ko00710, ko01200), starch and sucrose metabolism (ko00500), phenylpropanoid biosynthesis (ko00940), and phenylalanine metabolism (ko00360) were apparently affected (Table 2). Most of the lignin biosynthesis-associated proteins involved in the phenylpropanoid biosynthesis and phenylalanine metabolism pathways were upregulated in fc17 (Additional files 2, 3). This result coincides with the higher lignin content in fc17 relative to the WT. All 54 DEPs present in both the carbon fixation and carbon metabolism pathways were upregulated in fc17 (Additional files 4, 5). In the starch and sucrose metabolism pathway, 12 of the 16 DEPs annotated as glycosyl hydrolases, sucrose synthases, and glucanotransferases were upregulated in fc17 (Additional file 6). These proteins produce more substrate for the biosynthesis of hemicelluloses and other polysaccharides and might participate in hemicelluloses modification. In addition, two proteins, IRX10 (LOC_Os01g70200) and IRX14 (LOC_Os06g47340), which have been reported to be involved in xylan biosynthesis, were upregulated in fc17 (Additional file 7) [34,35]. These results coincide with the higher hemicelluloses content in fc17 relative to the WT. These findings suggest that the fc17 mutation probably triggers feedback responses to the cell wall defects, which in turn regulate the cell wall for the maintenance of plant growth and development.
Discussion
In this study, we identified a novel CESA4 missense allele, fc17. Despite the defect in extension strength, fc17 plants exhibited a relatively normal morphological phenotype compared to the WT, in contrast to previously reported CESA4 alleles that showed significantly impaired plant growth [26,27]. Although the fc17 mutation caused a reduction in CESA4 protein level, the cellulose content in mature stems of fc17 plants decreased by only 19%, indicating that the mutant might produce enough cellulose to maintain normal plant growth. Apart from the CESAs, the majority of proteins involved in carbon synthesis and metabolism were upregulated in the fc17 mutant, which might provide sufficient UDPG substrate for the production of hemicelluloses and other polysaccharides. Indeed, the fc17 mutant exhibited increased hemicelluloses and lignin that partly compensate for the detrimental effect of reduced cellulose on cell wall structure. These data indicate that cell wall formation is relatively normal in the fc17 mutant, which is further supported by the SEM observations. Taken together, our results indicate that the CESA4 P-CR site mutation sustains normal plant growth by producing sufficient cellulose and by triggering a feedback response that supports cell wall formation.
Notably, the fc17 mutant exhibited much higher plant lodging resistance than the wild-type. Plant lodging resistance is a major, integrated agronomic trait that is predominantly influenced by plant height and stem stiffness (breaking force) [36,37]. Hence, the relatively short height of fc17 might contribute to its enhanced lodging resistance. The fc17 mutant displays reduced cellulose content, which provides plants with toughness, but its lignin content is significantly increased, which provides plants with stiffness for resistance to biotic and abiotic stress [16,38]. The increase in lignin content in fc17 is therefore an important contributor to its high lodging resistance, which is consistent with previous studies [38]. Importantly, cellulose crystallinity has also recently been demonstrated to be a main factor negatively determining plant lodging resistance in rice [10,24]. Therefore, the much higher lodging resistance of the fc17 mutant is likely due to its shorter height, increased lignin content, and lower cellulose CrI.
Table 1 Expression levels of CESA and BC (Brittle Culm) proteins in the wild-type (WT) and fc17 plants by iTRAQ analysis
Cellulose CrI has been considered a negative factor for lignocellulose digestibility because reduced cellulose CrI increases the accessibility of the enzyme to the lignocellulose substrate [30]. Consistently, the data presented here show that the fc17 mutant, which exhibited a reduction in CrI, displayed significantly improved (30-70% increase) lignocellulose enzymatic digestibility after various chemical pretreatments compared to the WT. These results were confirmed by observations of increased stem digestion of the fc17 mutant in situ using SEM. Furthermore, fc17 requires approximately three- to fourfold less enzyme, or a twofold lower concentration of NaOH and H2SO4, to release an equal or greater amount of fermentable sugar than the WT plants. These data demonstrate that the fc17 mutant is more environmentally friendly and can be an economical option as a biofuel feedstock compared with the WT.
The observed reduction in lignocellulose CrI in the fc17 mutant may be interpreted in three ways. First, as the fc17 mutation occurs in the P-CR region, it may affect the stability of the plasma membrane-localized CSC particles, thereby causing shorter cellulose chain lengths [19,20,24]. Indeed, fc17 exhibited a reduced cellulose DP, consistent with its lower cellulose CrI. Second, the CESA4 subunit mutation may result in intermittent synthesis and inconsistent cellulose chain lengths, thereby disrupting the ability of neighboring cellulose chains to form crystalline structures. Third, it has been reported that hemicelluloses content negatively affects cellulose CrI, and thus the increase in hemicelluloses level in fc17 should be an additional contributor to its lower lignocellulose CrI.
Genetic modification of plant cell walls has been considered a promising solution for reducing biomass recalcitrance while maintaining normal plant growth [5]. Since the transgenic approach has been used successfully, introducing the beneficial target gene into various varieties is a practical way forward. fc17 is a promising mutation due to its positive effects on biomass digestibility without any major adverse effects on plant growth. Based on these results, the mutated CESA4 could be further introduced into energy crops such as poplar and switchgrass to improve biomass conversion efficiency. Hence, our findings on the CESA4 P-CR region mutation may offer a potential route for breeding energy crops.
Conclusions
A new CESA4 allele, fc17, carries a substitution (F426S) mutation in the plant-conserved region of the CESA4 protein. Despite reductions in cellulose content and cell wall thickness, fc17 plants displayed relatively normal plant growth and higher lodging resistance compared to the wild-type. Multiple techniques further demonstrated that this mutation significantly reduces cellulose crystallinity, thereby enhancing biomass saccharification. Proteomic profiling based on the iTRAQ assay suggested that this mutation triggers a feedback response that forms a relatively intact cell wall to maintain normal plant growth. These results suggest that the plant-conserved region of CESA4 is a promising target of genetic modification for breeding energy crops and cost-effective biomass processing.
Plant materials
The fc17 mutant was isolated from a natural population of the Japonica cultivar ShenNong265. Homozygous fc17 mutant and wild-type (WT) plants were grown in experimental fields at Shenyang Agricultural University (Shenyang, China). Mature stem tissues were harvested, dried at 55 °C in an oven to a constant weight, ground through a 40-mesh screen (0.425 mm × 0.425 mm), and stored in a dry container until use.
Map-based cloning of fc17
A mapping population of 5000 F2 plants was generated from the cross between fc17 and MH63, an indica cultivar in China. All plants were cultivated in the experimental fields at Shenyang Agricultural University during the natural growing season. The segregation in the F2 population showed that normal plants and brittle culm plants segregated 3:1. The fc17 gene was localized to a 91-kb genomic region that contains the CESA4 gene. The CESA4 gene of the mutant and the corresponding WT was PCR amplified with KOD-PLUS (TOYOBO) and sequenced with a 3730 sequencer (ABI). For complementation analysis, an 8.53-kb genomic DNA fragment containing the entire CESA4 coding region, a 1.8-kb upstream sequence, and a 1.5-kb downstream sequence was cloned into the binary vector pCAMBIA1305 to generate the transformation plasmid. The binary plasmid was introduced into Agrobacterium tumefaciens strain EHA105 and transformed into fc17 mutant plants.
Measurements of plant mechanical properties and agronomic traits
The extension forces of rice culms and leaves were determined using a digital force/length tester (RH-K300, Guangzhou, China). The breaking force and plant lodging index were measured at 30 days after heading as previously described [10]. Rice dry spike, dry biomass, and 1000-grain weight were weighed after the samples were dried in an oven at 60 °C. All measurements were conducted using nine independent biological replicates.
Plant cell wall fractionation and determination
The plant cell wall fractionation procedure and the total cellulose and hemicelluloses assays were conducted as previously reported [11]. Soluble sugars, lipids, and starch were successively removed from the dry biomass powder samples with potassium phosphate buffer, chloroform-methanol (1:1, v/v), and DMSO-water (9:1, v/v). The remaining pellets were suspended in 4 M KOH containing 1.0 mg/mL sodium borohydride for 1 h at 25 °C, and the combined supernatants were regarded as hemicelluloses. The remaining pellets were regarded as total cellulose. All experiments were carried out in biological triplicate.
Total lignin content, including acid-insoluble lignin (AIL) and acid-soluble lignin (ASL), was determined by a two-step acid hydrolysis method as described previously [15]. Hemicellulosic monosaccharides and uronic acids (GalA and GlcA) were determined by GC-MS. Sample preparation and GC-MS analysis were conducted as previously described [11,39].
Immunolabeling and fluorescence imaging
The stem transverse section samples were incubated in PBS (pH 7.4) containing 5% (w/v) milk protein (MP/PBS) and a diluted antibody solution (1:50) for 1.5 h. Samples were then washed with PBS (pH 7.4) at least 3 times and incubated with a 300-fold diluted secondary antibody (GB21302, Wuhan servicebio technology Ltd., Wuhan, China) linked to fluorescein isothiocyanate (FITC) in PBS (pH 7.4) for 1.5 h in darkness.
Fluorescence was observed with a microscope equipped with epifluorescence irradiation and DIC optics (Eclipse C1, Nikon, Tokyo, Japan). Immunofluorescence (green) was observed at wavelengths between 510 and 560 nm. All micrographs were captured using equivalent settings, and representative micrographs are shown in this study.
Cellulose CrI and DP detections
The lignocellulose crystallinity index (CrI) was determined by the X-ray diffraction (XRD) method using a Rigaku D/MAX instrument (Ultima III; Japan) as previously described [12]. The relative DP of cellulose was measured by the viscometry method as described previously [12].
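As a point of reference for readers unfamiliar with XRD-based CrI estimation, the sketch below illustrates the widely used Segal approach, in which CrI is computed from the maximum intensity of the crystalline (200) reflection and the amorphous minimum of the diffraction profile. This is only an illustration in Python with assumed 2θ windows and made-up numbers; the exact CrI procedure applied in this study is the one described in [12].

```python
# Minimal sketch of a Segal-style crystallinity index from an XRD profile.
# The 2-theta windows below are assumptions for illustration only.

def segal_cri(two_theta, intensity):
    """Estimate CrI (%) as (I200 - Iam) / I200 * 100, where I200 is the
    maximum intensity near the (200) peak and Iam the amorphous minimum."""
    def pick(lo, hi, fn):
        window = [i for t, i in zip(two_theta, intensity) if lo <= t <= hi]
        return fn(window)

    i200 = pick(21.5, 23.5, max)   # assumed window for the (200) reflection
    iam = pick(17.0, 19.0, min)    # assumed window for the amorphous valley
    return (i200 - iam) / i200 * 100.0

# Example with invented intensities (not data from this study): ~82%
two_theta = [17.0, 18.0, 19.0, 22.0, 22.5, 23.0]
intensity = [210, 180, 220, 900, 1000, 870]
print(round(segal_cri(two_theta, intensity), 1))
```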
Microscopic observations
The second stem internode tissues (0.5 cm sections above the node) at heading stages were cut into 1-2 mm pieces, and were observed and photographed under a scanning electron microscope (SEM TM1000, Hitachi Ltd., Tokyo, Japan).
Observation of in vitro enzymatic digestion of biomass residues was conducted as previously described [11]. For in situ enzymatic digestion of plant tissues, second-stem transverse sections at heading stage were treated with 1% NaOH or 1% H2SO4, washed with distilled water until pH 7.0, and incubated with 1 g/L mixed cellulase for 2 h at 50 °C. After enzymatic hydrolysis, the tissue samples were observed and photographed under a SEM. The mixed cellulase containing β-glucanase (≥ 6 × 10^4 U), cellulase (≥ 600 U), and xylanase (≥ 1.0 × 10^5 U) was obtained commercially from Imperial Jade Bio-technology Co., Ltd (Ningxia, 750002, China).
Transmission electron microscopy (TEM) was used to observe the cell wall structures in the third leaf veins of three-leaf-old seedlings as previously described [24]. The samples were washed in PBS buffer, post-fixed in 2% (w/v) osmium tetroxide (OsO4) for 1 h, and embedded with Super Kit (Sigma-Aldrich, St. Louis, MO, USA). Sample sections were cut with an Ultracut E ultramicrotome (Leica) and picked up on formvar-coated copper grids. After post-staining with uranyl acetate and lead citrate, the specimens were viewed under a Hitachi H7700 (Hitachi Ltd., Tokyo, Japan) transmission electron microscope.
Biomass pretreatment and enzymatic hydrolysis
The chemical (H2SO4, NaOH) pretreatments and sequential enzymatic hydrolysis were performed as described previously [10]. For NaOH pretreatment, the ground biomass powder was mixed with NaOH at three concentrations (0.5%, 1%, 4%; w/v). For H2SO4 pretreatment, the biomass powder was mixed with H2SO4 at three concentrations (0.5%, 1%, 2%; v/v) and heated at 121 °C for 20 min. The chemically (NaOH, H2SO4) pretreated samples were shaken at 150 r/min for 2 h at 50 °C and centrifuged at 3000g for 5 min. The remaining pellet was washed with distilled water until pH 7.0, and the remaining residue was collected for enzymatic hydrolysis. All experiments were performed in biological triplicate.
Effect of calcofluor treatment on plant growth
Germinated seeds of the fc17 mutant and WT plants were transferred onto MS media containing Calcofluor White dye (Sigma-Aldrich Co. LLC, California, USA) at different concentrations (0.05%, 0.1%, and 0.2%). After a 24-h incubation, the root length was measured every 12 h.
Protein extraction and western blot
Fresh rice stem tissues at heading stage were ground to a fine powder in liquid nitrogen, and 0.5 g of powder was extracted using a Plant Total Protein Extraction Kit (Invent Biotech, Beijing, China; SD-008/SN-009) according to the manufacturer's instructions. The protein content was measured with a BCA protein assay reagent (Beyotime Institute of Biotechnology, Jiangsu, China). The protein levels of CESA4, CESA7, and CESA9 were determined by western blot analysis as described previously [24]. Anti-CESA4, anti-CESA7, anti-CESA9, and anti-tubulin antibodies (Beyotime Institute of Biotechnology, Jiangsu, China) were diluted 1:500, 1:500, 1:500, and 1:1000, respectively.
Protein digestion
Protein digestion was performed using the FASP method [40]. A total of 300 µg of protein from each sample was placed on an ultrafiltration filter (30 kDa cutoff, Sartorius, Gottingen, Germany) containing 200 µL of UA buffer (8 M urea, 150 mM Tris-HCl, pH 8.0). The filter was then centrifuged at 14,000g for 30 min and washed with 200 µL of UA buffer. About 100 µL of 50 mM iodoacetamide was added to the filter to block reduced cysteine residues. The samples were kept at room temperature for 30 min in the dark, followed by centrifugation at 14,000g for 30 min. UA buffer (100 µL) was used to wash the filters twice. Approximately 100 µL of dissolution buffer (Applied Biosystems, Foster City, CA, USA) was added to the filter, which was centrifuged at 14,000g for 20 min; this step was repeated twice. The protein suspensions were then subjected to enzyme digestion with 40 µL of trypsin (Promega, Madison, WI, USA) buffer (4 µg trypsin in 40 µL of dissolution buffer) for 16-18 h at 37 °C. The final filter unit was transferred to a new tube and spun at 14,000g for 30 min. The peptides were collected as the filtrate, and the peptide concentration was measured by optical density at 280 nm (OD280).
iTRAQ labeling and fractionation by strong cation exchange chromatography
Peptides (100 μg) from each group were labeled using the 8-plex iTRAQ reagents multiplex kit (ABI, Foster City, CA, USA), with isobaric tags 113, 114, and 116 for one sample group and isobaric tags 118, 119, and 121 for the other. The 8-plex iTRAQ reagents were allowed to reach room temperature, centrifuged, and reconstituted with 50 μL of isopropyl alcohol to dissolve the iTRAQ labeling reagent. The iTRAQ labeling reagents were added to the corresponding peptide samples and reacted at room temperature for 1 h. A 100 μL aliquot of water was added to stop the labeling reaction. A 1 μL aliquot of sample was removed from each group to test labeling and extraction efficiency, and the sample was subjected to matrix-assisted laser desorption ionization after ZipTip desalting. The six sample groups were pooled and vacuum-dried. Each pool of mixed peptides was lyophilized and dissolved in solution A (2% acetonitrile [ACN] and 20 mM ammonium formate, pH 10). The samples were then loaded onto a reverse-phase column (Luna C18, 4.6 × 150 mm; Phenomenex, Torrance, CA, USA) and eluted using a step linear elution program: 0-10% buffer B (500 mM KCl, 10 mM KH2PO4 in 25% ACN, pH 2.7) for 10 min, 10-20% buffer B for 25 min, 20-45% buffer B for 5 min, and 50-100% buffer B for 5 min at a flow rate of 0.8 mL/min. The samples were collected each minute and centrifuged for 5-45 min.
"year": 2018,
"sha1": "cfc158f69e6eedca6aff66520715328d61bd2fa6",
"oa_license": "CCBY",
"oa_url": "https://biotechnologyforbiofuels.biomedcentral.com/track/pdf/10.1186/s13068-018-1298-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cfc158f69e6eedca6aff66520715328d61bd2fa6",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
A Classification of Adaptive Feedback in Educational Systems for Programming
Abstract: Over the last three decades, many educational systems for programming have been developed to support the learning and teaching of programming. In this paper, feedback types that are supported by existing educational systems for programming are classified. In order to provide feedback, educational systems for programming have deployed various approaches to analyzing students' programs. This paper identifies analysis approaches for programs and introduces a classification of adaptive feedback supported by educational systems for programming. This classification of feedback is the contribution of this paper.
Introduction
Programming skills are becoming a core competence for almost every profession [1], and thus Computer Science education is being integrated into the curriculum of almost every study subject. However, it is well known that programming courses, which constitute an indispensable part of studies related to Computer Science, are considered a difficult subject by many students [2,3]. Addressing this problem, various approaches have been proposed to help students learn to solve programming problems. One of these approaches is to deploy effective technology-enhanced learning and teaching. As a consequence, researchers have identified this gap and have been developing various types of educational systems for programming that are able to analyze students' programs and provide feedback.
Feedback is one of the most powerful ways to enhance learning. In the behavioristic view, through feedback (stimulus), the behavior (output) of a student can be shaped. In an operant conditioning learning setting, feedback can serve as positive or negative reinforcement that shapes behaviors and thus supports learning [4]. In the cybernetic view, which assumes that a monitor compares the state of a system at various times with some standard and a controller adjusts the system's behavior, the aim of feedback is to reduce the gap between the student's current and desired states of learning [5]. In the constructivist view, which assumes that knowledge is constructed by each individual, feedback provides help so that each individual student can overcome a problem and progress in learning through individual activity, e.g., [6]. Narciss [7] developed an interactive tutoring feedback (ITF) model based on the view that a student is an active constructor of knowledge and that feedback aims at providing students with formative or tutoring information on their current state of learning, which helps them regulate their learning process. According to Narciss [7], feedback may have different functions: to acknowledge, confirm, and reinforce correct responses or high-quality learning outcomes, and to promote the acquisition of the knowledge and cognitive operations necessary for accomplishing learning tasks. In addition, on the motivational level, it can encourage students to maintain their effort and persistence.
Since this paper discusses adaptive feedback, it is necessary to review the notions of adaptive feedback. According to Dempsey and Sales [8], "adaptive feedback, unlike adapted and generic feedback, is dynamic. As they [learners] work through the instruction, different learners will receive different information from the computer. The determination of what feedback is provided, when, and to whom is made by the designer after careful consideration of all available information" ([8], p. 166). This notion of adaptive feedback has a broad sense. That is, instead of giving feedback such as "wrong" for different incorrect answers and "correct" for different correct answers, adaptive feedback should not only verify the correctness of an answer, but also provide different information for different answers. Also in this broad sense, feedback is considered adaptive when "different learners receive different information" [9]. For example, in the programming domain, individual information in a feedback message could be the location of the erroneous program code, the explanation of the wrongly applied programming concept, or an individual hint for improving the student's solution such as "the program is wrong because the code line X violates the concept Y". That means, in order to be able to provide adaptive feedback in the broad sense, an educational system needs to have the intelligent capability to analyze the error(s) in the student's solutions. Representative educational systems for programming that support adaptive feedback in this sense include the FIT Java Tutor [10] and ITAP [11]. Some other researchers proposed another notion of adaptive feedback. "Learners differ from each other in many ways including prior knowledge, meta-cognitive skills, motivational and affective state, or learning strategies and styles. Thus, these are many individual factors that may influence how feedback is processed by each learner. Consequently, a variety of individual factors can be used to support the design of personalized feedback strategies and implementation of AESs (Adaptive Educational Systems) which can deliver feedback messages that are tailored to the characteristics of an individual learner or a category of learners" ([12], p. 59). In this sense, adaptive feedback is generated by an adaptive educational system that requires a student model to represent the student's characteristics (e.g., knowledge, meta-cognitive skills, affective state, learning strategies and styles), and the student model serves to distinguish the level among different students [13]. This is a strict sense of adaptive feedback and, according to this notion, an adaptive educational system may provide adaptive feedback on the correctness or quality of a response (e.g., correct/incorrect; excellent/poor) or elaborated feedback (in terms of the feedback classification proposed by Narciss [7]) depending on the student's model. The student model and its adaptation algorithm determine the type of feedback as well as the feedback strategy. This notion of adaptive feedback is also referred to as "personalized feedback" [12]. Representative educational systems for programming that support adaptive feedback in this sense include efforts that gradually provide appropriate feedback information adapting to the students' needs, e.g., [12], and systems that provide feedback based on students' individual knowledge level, e.g., [14], or gender, e.g., [15].
In this paper, both senses of adaptive feedback are adopted to consider educational systems for programming. That is, adaptive feedback is provided differently for individual students by analyzing the student's action (e.g., the student's solution attempt), and/or adaptive feedback may be provided by an educational system that is based on a student model. Existing educational systems for programming are reviewed and the feedback types supported by these systems are classified. The classification of adaptive feedback used in educational systems for programming is distinguished from generic feedback classifications (e.g., Fleming and Levie [16], Economides [17], Narciss [7]) in that it serves researchers and developers in the domain of programming by focusing on specific types of feedback.
The remainder of this paper is structured as follows. The next section presents the method for identifying specific feedback types supported by educational systems for programming. A classification of feedback types for educational systems for learning programming is presented in Section 3. The impact of different feedback types is discussed in Section 4. Findings from the review of feedback types are summarized and the developed classification of feedback types for programming is compared with related work in Section 5. In addition, research directions are proposed in this section.
Method
With the purpose of classifying specific feedback types that are supported by educational systems for programming, scientific reports of systems that have been published since 1999 at scientific workshops and conferences and in journals related to the areas of learning science and computer technology for education were searched on the Internet. In addition, technical reports, Master's theses, and PhD dissertations that have passed an internal review process and have been published by their universities were taken into consideration. In addition to searching publications on the Internet, authors of the collected publications were contacted to ask for their latest evaluation reports. Only publications since 1999 were collected, because Deek and McHugh [18] published a detailed survey on educational systems for programming in 1998.
For the collected publications, there are two criteria: (1) publications from 1999 to 2009 have to show evidence of practical use in courses or a scientific evaluation of the proposed system; and (2) recent publications from 2010 onward have to document either an implementation or a scientific study of the proposed approach. The first criterion assures that only systems that have been demonstrated in a practical or lab setting are taken into account. The second criterion allows us to consider recently developed systems.
In addition to the criteria for publications, several criteria with respect to functionality have been specified to shape the space of reviewed systems. First, the functionality of reviewed systems has to go beyond what compilers (e.g., providing (detailed) information about syntactical mistakes) and integrated programming environments (e.g., supporting syntax highlighting, such as Eclipse) offer. This criterion filters out two types of systems: compilers and integrated programming environments. In addition, the systems to be reviewed must have at least one interactive functionality. Interactive activity means that the students have the option to produce something (e.g., an answer or a program) rather than to consume something. That is, the systems have to offer students more interactive experiences than simply providing online learning materials and test items.
With respect to the learning objectives, educational systems for programming that deal with program code, either by accepting students' solutions written as source code or by using source code as templates (e.g., implementing a fill-in-the-gap strategy), were considered. That means this criterion selects only systems that focus on the coding phase of programming and opts out systems that support students in pursuing other learning objectives in the analysis, planning/design, and test phases of programming, for example, [19][20][21][22].
The last criterion is selecting only educational systems that support adaptive feedback in either the strict sense or the broad sense discussed in the previous section.
In summary, in addition to technical reports and Master's and PhD dissertations, publications from the following conference/workshop proceedings, journals, or books were collected:
A Classification of Adaptive Feedback in Educational Systems for Programming
Before specific adaptive feedback types used in educational systems for programming are specified, classifications of generic feedback are briefly reviewed. Based on those classifications, the specific classification of adaptive feedback in educational systems for programming is developed. Fleming and Levie [16] classified five types of feedback based on the function dimension. Confirmation feedback indicates whether a solution is correct or incorrect. Corrective feedback provides information about a possible correct response. Explanatory feedback explains why a response is incorrect. Diagnostic feedback attempts to identify misconceptions by comparing the student's solution with common errors. Elaborative feedback provides additional related information. While Fleming and Levie [16] based their classification on the function dimension, Economides [17] based his on the content of feedback and distinguished seven types of (cognitive) feedback (actually, Economides [17] included a class of "no feedback" in the classification of feedback; we do not mention this class because feedback of this class does not contain any content): (1) Yes/No feedback; (2) answer until correct (feedback asks the student to try again until she answers correctly); (3) correct answer (feedback indicates the correct answer); (4) topic contingent (feedback elaborates on the general topic); (5) response contingent (feedback gives an explanation); (6) bug related (feedback presents common errors made by students); and (7) attribute isolation (feedback highlights the central attributes of the target concept). Narciss [7] also classified feedback types based on the content dimension: (1) knowledge of performance (e.g., percentage of correctly solved tasks; number of errors; grade); (2) knowledge of result (information about the correctness (e.g., "correct"/"incorrect") or quality of the actual answer or outcome); (3) knowledge of the correct response (the feedback content is a correct response or a sample solution to a given task); and (4) elaborated feedback (containing additional information besides knowledge of result or knowledge of the correct response (e.g., hints, guiding questions, explanations, worked examples)). The last feedback type (elaborated feedback) is divided into at least five sub-types, according to Narciss [7], depending on the content of the elaborated feedback: (a) knowledge about task constraints; (b) knowledge about concepts; (c) knowledge about mistakes; (d) knowledge about how to process the task; and (e) knowledge about meta-cognition. Some of these subtypes of elaborated feedback proposed by Narciss [7] are in accordance with feedback types 4-7 of the classification proposed by Economides [17]. Across the three classifications, there are commonalities, as Table 1 indicates. Are these types of feedback adaptive? If we adopt the broad sense of adaptive feedback, then corrective, explanatory, and diagnostic feedback may be considered adaptive, because their information could be different for different students' solutions, whereas confirmation feedback is not adaptive. If we adopt the strict sense of adaptive feedback, i.e., an educational system is able to adapt different types of feedback according to an individual student's preferences, knowledge level, or learning style, then confirmation feedback could also be considered adaptive, because the educational system may have a student model that represents the preferences, the knowledge level, or the learning style of a student.
In each of the following subsections, educational systems for programming that support a specific type of feedback are reviewed. The review of each system consists of two parts: an example of feedback (if an example of feedback supported by each system is available in the literature) and a brief description of the analysis technique that was developed to provide feedback. In Table 2, the educational systems for programming that are mentioned in this paper are summarized and a representative reference for each system is cited (because each system may have been reported in several papers).
Table 2 (fragment; only the evaluation entries are recoverable from the extracted text, the feedback-type columns are omitted):
- (system name not recoverable): Quantitative and qualitative evaluation; "The participants' knowledge did improve while interacting with the system, and the subjective data collected shows that students like the interaction style and value the feedback obtained"
- jTutor [32]: Technical evaluation
- Ludwig [33]: Quantitative and qualitative evaluation; "Seven students were able to finish the problem, one was almost completed, and two students were somewhat flummoxed and did not complete it"; "7 students thought that the style checker was an "excellent" idea; 2 thought it was a "good" idea; 1 thought it was an "okay" idea."
- MEDD [34]: Technical evaluation
- M-PLAT [35]: Technical evaluation
- QuizJet [36]: Quantitative evaluation; "working with the system students were able to improve their scores on in-class weekly quizzes"
- QuizPack [37]: Quantitative evaluation; "The students' work with QuizPACK significantly improved their knowledge of semantics and positively affected higher-level knowledge and skills"
- Radosevic's system [38]: Quantitative evaluation; "the tools for program analyses and debugging help them to find the cause of their errors"
- VC PROLOG [39]: Technical evaluation
Yes/No Feedback
The scarcest feedback type is confirmation feedback. Typical systems that provide this feedback type include QuizJet [36], QuizPack [37], and M-PLAT [35]. QuizJet and QuizPack provide individualized assessment questions in Java and C, respectively. "QuizJET [ . . .] was designed as a generic system to author, deliver, and assess a range of parameterized questions for Java programming language. QuizJET can work in both assessment and self-assessment modes and is able to cover a whole spectrum of Java topics from Java language basics, to such critical topics as objects, classes, interfaces, inheritance, and exceptions." [36]. QuizJET generates an individual Java program using the generic system and asks the student to analyze the presented program and find the result for a specific variable, for instance, "What is the final value of RESULT?" (RESULT is a variable name in the given program). Then, QuizJET evaluates the student's answer and presents a confirmation message "Wrong/Correct" and corrective feedback "Your answer is XXX. Correct answer is YYY" [36]. QuizPack works on the same principle for the programming language C. Since both QuizJET and QuizPack are able to generate parameterized questions, the systems provide adaptive feedback.
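To make the idea of parameterized questions concrete, here is a minimal sketch of how such a question could be generated and graded. The template, the variable names, and the wording of the messages are illustrative assumptions modeled on the QuizJET feedback quoted above; they are not the actual implementation of QuizJET or QuizPack (which target Java and C).

```python
import random

# Hypothetical question template; real QuizJET/QuizPack templates differ.
TEMPLATE = """int a = {a};
int b = {b};
int result = a * {a} + b;   // What is the final value of result?"""

def generate_question(seed=None):
    """Instantiate the template with random parameters and compute the key."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    code = TEMPLATE.format(a=a, b=b)
    correct = a * a + b
    return code, correct

def grade(student_answer, correct):
    """Confirmation plus corrective feedback in the style reported for QuizJET."""
    if student_answer == correct:
        return "Correct"
    return f"Wrong. Your answer is {student_answer}. Correct answer is {correct}."

code, correct = generate_question(seed=42)
print(code)
print(grade(11, correct))
```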
M-PLAT does not provide multiple-choice tests or fill-in-the-slot exercises; rather, the system requests students to submit a whole program, then evaluates the student's program and returns whether it is successful or not. Using M-PLAT to develop a program, "when a student has finished its program, the source code is sent to the server to be compiled and evaluated. Then, a response with the results is sent back to client-side application. After that, the student will receive one of the following messages: "Compilation errors have been found", "There are no compilation errors, but the provided program is not correct", "The program is correct"" ([40], p. 128). Such feedback messages are of the confirmation feedback type. Since M-PLAT is an adaptive educational system, as the authors claimed that "M-PLAT will adapt the level of requirements to the learning pace of each student" ([40], p. 127), the feedback messages are provided individually to students.
The common analysis technique deployed by the educational systems for programming reviewed above is that they compare the student's solution with pre-specified correct values. If the student's solution does not match the pre-specified correct values, then the system indicates that the student's solution is not successful. Otherwise, the system presents a feedback message indicating that the student's solution is correct.
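A minimal sketch of this checking scheme is shown below. It runs a submission and compares its output with a pre-specified correct value; the use of Python, the message wording (loosely following the M-PLAT messages quoted above), and the single-test setup are assumptions for illustration, whereas real systems typically compile Java/C programs and run several tests in a sandbox.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

def check_submission(source: str, stdin_data: str, expected_output: str) -> str:
    """Run a submitted program and compare its output with a pre-specified
    correct value, returning a confirmation-style message."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        run = subprocess.run([sys.executable, path], input=stdin_data,
                             capture_output=True, text=True, timeout=5)
    except subprocess.TimeoutExpired:
        return "The provided program is not correct (time limit exceeded)"
    finally:
        os.unlink(path)
    if run.returncode != 0:
        return "Errors have been found"
    if run.stdout.strip() != expected_output.strip():
        return "There are no errors, but the provided program is not correct"
    return "The program is correct"

student_code = textwrap.dedent("""
    n = int(input())
    print(n * n)
""")
print(check_submission(student_code, "4\n", "16"))
```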
Syntax Feedback
Syntax feedback is based on the compiler's output. Most systems just return the compiler's messages to students without additional explanation, e.g., Ludwig [33], Ask-Elle [23], and ELP [24]. "When the student offers the program for analysis, the system [Ludwig] will first attempt to compile and link the program; if this step fails, then the compilation failure messages will be displayed to the student" [33]. Ask-Elle presents "a syntax-error message generated by the Helium compiler" to the student in case there is a compilation error [41]. Using ELP, "if there are syntax errors, compiler error messages are returned to the student" [42].
Instead of solely returning the compiler's feedback, VC PROLOG [39] provides detailed explanations of syntactic errors that are associated with concepts in a defined ontology of Prolog. For example, VC PROLOG displays the following explanations for syntax errors instead of showing the compiler's output: "Probably a closing bracket is missing in this list" indicates that the student has forgotten a bracket; "A variable 'X' was written lower case, therefore it is interpreted as a constant" explains how the system interprets the lower-case "x" ([39], p. 72). Gross and colleagues [43] argued that students are often not able to write programs on their own (e.g., on paper) and do not understand the cause of errors produced by a compiler. Thus, the authors proposed a new tutoring approach, which initiates a dialogue-based discourse between a student and an intelligent tutor in case of a syntactic error.
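The following sketch illustrates the general idea of replacing raw compiler or interpreter diagnostics with concept-linked explanations, in the spirit of the VC PROLOG examples above. The diagnostic patterns and the explanation texts are invented for Python error messages and are not taken from VC PROLOG.

```python
import re

# Hypothetical mapping from raw diagnostic patterns to concept-linked hints.
EXPLANATIONS = [
    (re.compile(r"unexpected EOF|was never closed"),
     "A closing bracket or parenthesis is probably missing."),
    (re.compile(r"invalid syntax"),
     "Check the statement for a missing operator, colon, or comma."),
    (re.compile(r"name '(\w+)' is not defined"),
     "The identifier {0} is used before it is defined; check its spelling and case."),
]

def explain(diagnostic: str) -> str:
    """Return a friendlier explanation for a diagnostic, if one is known."""
    for pattern, hint in EXPLANATIONS:
        m = pattern.search(diagnostic)
        if m:
            return hint.format(*m.groups())
    return diagnostic  # no friendlier explanation available: return the raw message

print(explain("NameError: name 'lenght' is not defined"))
```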
Semantic Feedback
Feedback on this level points to an error in the student's program with respect to fulfilling the requirements of a programming task. An educational system that supports semantic feedback does not necessarily presuppose a structured problem-solving activity; problem solving can take place in an exploratory learning manner. Feedback messages of this type can be divided into two levels: intention analysis and code analysis. Analysis on the intention level means identifying or hypothesizing the solution strategy intended by the student and giving feedback based on the identified or hypothesized solution strategy. Code analysis means identifying erroneous code that does not fulfill the requirements of a given programming exercise. Some systems are able to provide both intention-based and code-based feedback, which is referred to as two-level feedback. Other systems are able to support only code-based feedback. In the following, these two types of semantic feedback are elaborated. Since semantic feedback results from the analysis of an individual student's program, semantic feedback can fall into one of the classes explanatory feedback, diagnostic feedback, or elaborative feedback in the classification of Fleming and Levie [16]; response contingent, bug-related, or topic contingent feedback in the classification of Economides [17]; or elaborated feedback in the classification of Narciss [7].
Two-Level Feedback: Intention-Based and Code-Based
On the intention-based level, Hong's PROLOG "tries to recognize programming techniques in the hierarchy of relevant programming techniques, which have been used in the student program" ([26], p. 519). Hong used grammar rules to model Prolog programming techniques, which serve to identify the intention (in terms of programming techniques) of the student. The system iteratively uses the sets of grammar rules to parse the student's program. If the parsing procedure does not finish successfully, the selected set of grammar rules has not been completely exploited and the strategy of the student's program cannot be identified. In this case, the system uses one of the possible solution strategies (in terms of programming techniques) specified for the given programming problem to guide the student. Otherwise, the solution strategy has been identified, and the system diagnoses errors in the student's program. The system uses the same set of grammar rules with which the solution strategy has been identified to parse a corresponding reference program. For each possible solution strategy specified for a problem, there is a corresponding reference program. The parse tree of the student's program is compared against the parse tree of the reference program. Hong's PROLOG is able to analyze a student's program not only on the intention level, but also on the code level. The system's domain model contains not only the domain's knowledge on the intention level (i.e., sets of programming techniques represented by grammar rules for each class of programs), but also coding-related knowledge, which is represented by reference programs for each anticipated programming technique. For example, the system returns the following code-based feedback to the student: "You have used an appropriate programming technique for this exercise. The base case in your program is, however, not quite right. It is supposed to say that reversing an empty list gets the empty list itself. The second argument of your base case, List, is a variable but it is supposed to be an empty list, []. Now can you correct your base case?" ([26], p. 523).
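The sketch below illustrates the two-step idea of intention recognition followed by selection of a strategy-specific reference program. Instead of Hong's grammar rules for Prolog, it uses simple AST-based recognizers for a Python "reverse a list" exercise; the strategy names, recognizers, and reference programs are assumptions made for illustration only.

```python
import ast

# Hypothetical strategy catalogue for a "reverse a list" exercise: each entry
# pairs a recognizer (does the program exhibit this technique?) with a
# reference program for that technique.
def uses_recursion(tree, name="reverse"):
    return any(isinstance(n, ast.Call) and getattr(n.func, "id", None) == name
               for f in ast.walk(tree)
               if isinstance(f, ast.FunctionDef) and f.name == name
               for n in ast.walk(f))

def uses_loop(tree):
    return any(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))

STRATEGIES = [
    ("recursive reverse", uses_recursion,
     "def reverse(xs):\n    return reverse(xs[1:]) + [xs[0]] if xs else []"),
    ("iterative reverse", uses_loop,
     "def reverse(xs):\n    out = []\n    for x in xs:\n        out.insert(0, x)\n    return out"),
]

def recognize(student_source):
    """Return the hypothesized strategy and its reference program, if any."""
    tree = ast.parse(student_source)
    for name, recognizer, reference in STRATEGIES:
        if recognizer(tree):
            return name, reference   # the reference is then compared with the student's code
    return None, None                # no strategy matched: guide the student towards one

print(recognize("def reverse(xs):\n    return [] if not xs else reverse(xs[1:]) + [xs[0]]")[0])
```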
For JavaBugs [29], unfortunately, sample feedback messages were not available in the literature. However, feedback may be inferred from the following excerpt: "The task of automatic bug library construction entails detecting the most similar correct program (intention expressed as reference programs), extracting the superficial differences (discrepancies) between the student's and the correct program and forming misconception definitions (error hierarchies) described by discrepancies based on similarity and causality heuristics." ([29], p. 185). The authors applied a multi-strategy machine learning approach to automatically construct a library of Java errors that novice programmers often make. Using this bug library, the developed educational system is able to examine "small" Java programs. The intention of the student is identified by comparing the student's program to a set of reference programs. After the student's intention has been identified, differences between the reference program and the student's solution are extracted. Misconceptions are learned based on the discrepancies identified in the first phase. These misconceptions are used to update the error library and to return appropriate feedback to the student.
Applying a similar multi-strategy machine learning approach to construct a library of errors, MEDD [34] is able to identify the student's intention in a Prolog program, whereas JavaBugs was developed to support Java learning. Intentions are defined by specific reference programs. The similarity between the student's program and the reference programs is determined by matching surface as well as abstract features. Both JavaBugs and MEDD use a bug library, which is built automatically by learning from incorrect solutions, in order to provide feedback on the code analysis level. Based on discrepancies between the incorrect student's program and its associated reference program, the systems provide semantic feedback on the code analysis level to students.
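A minimal sketch of the underlying mechanism, finding the most similar correct reference program and extracting the discrepancies between it and the student's program, is given below. The reference programs and the use of plain text similarity (difflib) are illustrative assumptions; JavaBugs and MEDD additionally learn misconception hierarchies from the collected discrepancies, which this sketch does not do.

```python
import difflib

# Hypothetical reference programs for a "reverse a list" exercise.
REFERENCES = {
    "accumulator reverse":
        "def rev(xs, acc=None):\n    acc = acc or []\n    if not xs:\n        return acc\n    return rev(xs[1:], [xs[0]] + acc)\n",
    "naive reverse":
        "def rev(xs):\n    if not xs:\n        return []\n    return rev(xs[1:]) + [xs[0]]\n",
}

def closest_reference(student):
    """Pick the reference program most similar to the student's program."""
    return max(REFERENCES.items(),
               key=lambda kv: difflib.SequenceMatcher(None, student, kv[1]).ratio())

def discrepancies(student, reference):
    """Line-level differences between the student's program and the reference;
    a bug library would generalize recurring discrepancies into misconceptions."""
    return [d for d in difflib.ndiff(reference.splitlines(), student.splitlines())
            if d.startswith(("+ ", "- "))]

student = "def rev(xs):\n    if not xs:\n        return xs\n    return rev(xs[1:]) + [xs[0]]\n"
name, ref = closest_reference(student)
print(name)
print(discrepancies(student, ref))
```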
Ask-Elle [23] is able to derive the intention of the student using a set of pre-specified reference programs. Then, using the identified programming strategy of the reference program, many program variants are generated. All generated solutions are normalized before they are used for comparison with the student's program. Through the comparison between the student's program and the generated program variants, the system not only has the ability to analyze the student's intention underlying a program, but is also able to analyze the student's program on the code level. The system is able to provide different types of feedback based on the identified intention of the student. It can provide a description of a particular reference program, propose an implementation for a function, or emphasize one particular implementation method. For instance, Ask-Elle shows elaborative feedback such as "Unexpected right hand side of f on line 3" ([41], p. 252); this feedback indicates the location of the error in the student's program (on the right-hand side of the function f).
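The following sketch illustrates what normalization before comparison can look like. Ask-Elle normalizes Haskell programs; here the idea is transferred to Python by renaming identifiers to canonical names so that programs differing only in variable naming compare as equal. The normalization shown is an assumption for illustration and is far simpler than Ask-Elle's program transformations.

```python
import ast

class Canonicalize(ast.NodeTransformer):
    """Rename variables and arguments to canonical names so that programs
    differing only in identifier choice compare as equal."""
    def __init__(self):
        self.mapping = {}
    def _canonical(self, name):
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")
    def visit_Name(self, node):
        node.id = self._canonical(node.id)
        return node
    def visit_arg(self, node):
        node.arg = self._canonical(node.arg)
        return node

def normalize(source):
    """Return a normalized textual form of the program used for comparison."""
    tree = Canonicalize().visit(ast.parse(source))
    return ast.dump(tree)

a = "def f(items):\n    total = 0\n    for x in items:\n        total += x\n    return total\n"
b = "def f(values):\n    s = 0\n    for v in values:\n        s += v\n    return s\n"
print(normalize(a) == normalize(b))   # True: treated as equivalent programs
```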
VC PROLOG is also able to provide semantic feedback by comparing the student's program with one or more reference programs. Unfortunately, an example of intention-based feedback provided by VC PROLOG could not be found in the literature. Only code-based feedback was presented: "A clause was terminated with a colon instead of a period, which causes the following fact to be interpreted as an addition goal." ([39], p. 72). The process of analyzing a student's program consists of three phases. First, a parser analyzes the student's program. In case there are syntax errors, they are marked and corrected. If there are several possible interpretations, the analyzer produces all alternatives. Second, the syntactically correct student's program is semantically analyzed. That is, the meaning of the student's program is compared against the meaning of the reference program. Through this phase, the system is able to derive the intention of the student's strategy and to give feedback not only on the intention level, but also on the code analysis level. The third phase is analyzing the layout and the structure of the student's program based on the reference program.
INCOM [27] is also able to provide intention-based feedback: "INCOM favours to formulate feedback messages in terms of programming techniques to help students develop the skills of using high level programming concepts in logic programming." ([27], p. 106). The author of INCOM proposed a weighted constraint-based model (WCBM) to hypothesize the solution strategy intended in the student's program. First, on the strategy level, the system generates hypotheses about the student's intention by iteratively matching the student's solution against the solution strategies that are specified in the semantic table. Next, once a solution strategy has been matched, the process initiates hypotheses about the student's solution variant by matching components of the student's solution against corresponding components of the selected solution strategy. As the final step, the hypotheses generated on the solution variant level are evaluated, and the most plausible variant of the student's solution (within a strategy) is chosen. Using the best hypothesis on the strategy level and the best hypothesis on the solution variant level, the errors in the student's solution are identified.
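A minimal sketch of the hypothesis-generation idea behind a weighted constraint-based model is given below: each candidate strategy is scored by the weighted share of its constraints that the student's program satisfies, the most plausible strategy is selected, and feedback is phrased in terms of that strategy's violated constraints. The strategies, constraints, weights, and crude textual checks are invented stand-ins, not INCOM's actual semantic-table constraints for logic programming.

```python
# Each strategy hypothesis is a list of (weight, description, check) triples.
STRATEGIES = {
    "recursion with accumulator": [
        (3.0, "predicate has an accumulator argument", lambda p: "acc" in p),
        (2.0, "base case returns the accumulator",     lambda p: "return acc" in p),
        (1.0, "recursive call present",                lambda p: "rev(" in p),
    ],
    "naive recursion": [
        (3.0, "base case returns an empty list",       lambda p: "return []" in p),
        (2.0, "result built by appending the head",    lambda p: "+ [" in p),
        (1.0, "recursive call present",                lambda p: "rev(" in p),
    ],
}

def best_hypothesis(program):
    """Score every strategy, pick the most plausible, report its violations."""
    scored = []
    for name, constraints in STRATEGIES.items():
        satisfied = [w for w, _, check in constraints if check(program)]
        violated = [d for _, d, check in constraints if not check(program)]
        score = sum(satisfied) / sum(w for w, _, _ in constraints)
        scored.append((score, name, violated))
    _, name, violated = max(scored)
    return name, violated   # feedback is phrased in terms of this strategy

program = "def rev(xs):\n    if not xs:\n        return xs\n    return rev(xs[1:]) + [xs[0]]\n"
print(best_hypothesis(program))
```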
We note that many systems used programming techniques, correct solutions, specific exemplars, model solutions, or example solutions to examine the semantic fulfillment of students' solutions. All these terms convey the notion of a reference program that captures the semantic requirements of a programming exercise.
Code Level Feedback
There is another class of systems that only support code-based feedback without analyzing the intention of students. J-Latte provides feedback on the code-based level, for example, "All statements require a semi-colon at the end (to say the statement is finished)" ([31], p. 59). The system uses pre-specified constraints that represent the principles of the Java domain in order to check the correctness of students' programs. JITS [30], which is based on the ACT-R cognitive theory [44], is able to display erroneous code of "small" Java programs for a specific programming problem. Although JITS first analyzes the intention of the student using the Intent Recognition module and then analyzes the programming code, this system only provides code-based feedback like "I think you meant the keyword 'for'. Is this correct?" ([45], p. 617).
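The sketch below illustrates the classic constraint-based pattern on which feedback such as J-Latte's relies: each constraint has a relevance condition, a satisfaction condition, and a feedback message that is shown when the constraint is relevant but not satisfied. The concrete constraints are invented and are checked on Python code rather than Java.

```python
import re

# Invented constraints: a constraint fires only if its relevance condition
# holds; it is violated when the satisfaction condition then fails.
CONSTRAINTS = [
    {"relevance":    lambda code: "while" in code,
     "satisfaction": lambda code: re.search(r"while\s+.+:", code) is not None,
     "feedback":     "A while statement needs a condition followed by a colon."},
    {"relevance":    lambda code: "def " in code,
     "satisfaction": lambda code: "return" in code,
     "feedback":     "The function never returns a value; add a return statement."},
]

def check(code):
    """Collect feedback messages of all relevant but unsatisfied constraints."""
    return [c["feedback"] for c in CONSTRAINTS
            if c["relevance"](code) and not c["satisfaction"](code)]

print(check("def countdown(n):\n    while n > 0:\n        n -= 1\n"))
```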
ELM-ART [14] provides exercises for problem solving. The domain knowledge of ELM-ART consists of a hierarchy of concepts and rules. Using this type of knowledge, the system is able to analyze the student's program on the code level, as the authors described: "The system gives feedback by providing a sequence of help messages with increasingly detailed explanation of the error or suboptimal solution." ([46], pp. 368-369).
jTutors [32] allows instructors to author exercises in the form of quizzes by specifying correct values and hints. Using these pre-specified correct values and hint messages, the system is able to test a student's knowledge by requesting him/her to fill in the blanks of a quiz with correct values. If incorrect values are detected, the instructor's authored hint message is provided. Or, "if the student struggles to answer a quiz, jTutors will provide an example or another quiz with lower difficulty level." ([32], p. 12).
Radosevic and colleagues [38] developed a specific learning interface for C++. They introduced the notion of a semaphore, which represents the number of new programming lines written after the last compilation. Radosevic's learning programming interface provides feedback not in textual form, but using three colors: green, yellow, and red. The semaphore works in such a way that each new programming line and each semicolon are counted. If the semaphore has 0-4 new lines, then the student's program is inside the green light. If the semaphore of the student's program has 5-10 new lines, then it is yellow. If the light is red, the student's program cannot be compiled and the student has to reduce the number of lines. It is not clear that this type of feedback is adaptive because many students may get the same color (green, yellow, or red) as feedback.
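The traffic-light logic itself is simple enough to be shown directly. The sketch below follows the thresholds stated above (0-4 new lines green, 5-10 yellow, otherwise red); the line-counting function and example are illustrative guesses, not the published implementation.

```python
# Hedged sketch of the "semaphore" idea: count how many new lines were added
# since the last successful compilation and map that count to a color.
def semaphore_color(lines_since_last_compile):
    if lines_since_last_compile <= 4:
        return "green"
    elif lines_since_last_compile <= 10:
        return "yellow"
    return "red"     # too much un-compiled code: compile (or trim) before continuing

def new_lines(previous_source, current_source):
    """Naive count of lines added since the previously compiled version."""
    return max(0, len(current_source.splitlines()) - len(previous_source.splitlines()))

print(semaphore_color(new_lines("int main(){}\n",
                                "int main(){\n  int a;\n  int b;\n}\n")))   # -> green
```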
Goshi's ITS for Hoare Logic [25] is able to provide appropriate advice or information on the code analysis level, as the author stated: "This feedback may include appropriate advice or information in addition to indicating whether the answer supplied was correct or incorrect" [25]. This kind of advice or information is based on assessing the student's behavior sequence.
JACK [28] follows a special way of providing feedback. The authors of JACK used automated trace generation for supporting students. A trace of a program execution is a list of single execution steps that are characterized by the system states in terms of variable values. The authors claimed that by reading the trace of a program execution, a student could understand how a program works, i.e., how the system state evolves from the given input to the final state. However, students do not need to read a trace from the beginning to the end. A trace may indicate the erroneous location in the student's program, and the student needs to "search for program steps that directly affected the final erroneous output. This way, students can try to identify the reasons for the erroneous output and correct the related program statements" ([28], p. 304). In this way, the trace of the student's program execution is a kind of feedback. Since the trace is different for different students' programs, feedback in the form of traces is adaptive.
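The idea of a trace as a list of per-step variable states can be illustrated with a small sketch. It is written in Python for illustration only; JACK targets Java, and the buggy example program, tracer, and output format below are assumptions, not JACK's mechanism.

```python
# Illustrative sketch of trace-based feedback: record local variable values
# after every executed line so a student can inspect how the state evolved.
import sys

def trace_program(func, *args):
    steps = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            steps.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer
    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, steps

def buggy_mean(values):                     # hypothetical student program
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)        # off-by-one bug

result, steps = trace_program(buggy_mean, [2, 4, 6])
for lineno, state in steps:
    print(lineno, state)
print("final (erroneous) output:", result)  # 6.0 instead of 4.0
```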
FIT Java Tutor [10] and ITAP [11] deploy a data-driven approach to diagnose errors in the student's solution. That is, a mass of correct and incorrect students' solutions for a programming problem is collected and used for analyzing a newly submitted student's solution. FIT Java Tutor is able to provide code-based feedback by comparing the student's solution with one of the pre-specified reference programs or students' correct solutions. The students' correct solutions have been collected in advance (possibly in previous courses) and should solve a given programming problem correctly. The students' solution attempts enhance the solution space of correct solutions for a programming problem. For example, the following feedback message indicates that the student's solution has been matched to one of the pre-specified reference programs or students' solution attempts: "The system has determined a certain similarity between your program and the program shown above. Compare the two programs and modify your program if necessary." ([10], p. 26). To provide automated feedback, the authors applied machine learning methods that compare the similarity between the student's solution and the pre-specified reference programs (or the collected students' correct solutions).
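A minimal sketch of similarity-based matching is shown below. The token-level ratio used here is only an illustration and is much simpler than FIT Java Tutor's actual machine-learning pipeline; the example solution pool and student submission are invented.

```python
# Hedged sketch: compare a student's submission against a pool of previously
# collected correct solutions and return the most similar one, which can then
# be shown next to the student's code as feedback.
import difflib

def most_similar(student_src, correct_solutions):
    def similarity(reference):
        return difflib.SequenceMatcher(None, student_src.split(), reference.split()).ratio()
    best = max(correct_solutions, key=similarity)
    return best, similarity(best)

pool = [
    "int max(int a, int b) { return a > b ? a : b; }",
    "int max(int a, int b) { if (a > b) return a; else return b; }",
]
student = "int max(int x, int y) { if (x > y) return x; return y; }"
reference, score = most_similar(student, pool)
print(round(score, 2), "->", reference)
```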
ITAP is able to provide code-based feedback, e.g., "in line 1 replace 'saturday' with 'Saturday' in the right side of the comparison operation" ([11], p. 17). In order to provide feedback, the system requires two kinds of knowledge: (1) at least one reference program for a given programming problem; and (2) a test method that can score, e.g., pairs of expected input and output of code. Based on the reference program, the system constructs a solution space for a problem. The authors defined a solution space as a graph of intermediate states that the student passes through as he/she works on a given problem. Each state is represented by the student's current code. The system starts the process of path construction by inserting new "correct" states into the solution space. The correct states are checked by the test methods. These "correct" states serve to define the optimal path of actions to solve a given problem and to generate feedback by recommending the action a student should take to move from their incorrect state to a correct one.
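The following sketch illustrates the solution-space idea in miniature: states are code snapshots, "correct" states are those that pass the test method, and feedback suggests the correct state closest to the student's current state. The test cases, states, and helper names are invented for illustration and do not reproduce ITAP's actual path-construction algorithm.

```python
# Illustrative sketch of a solution space with test-checked "correct" states.
import difflib

def passes_tests(source, test_cases, entry_point="weekend"):
    env = {}
    try:
        exec(source, env)                   # run the (hypothetical) student code
        return all(env[entry_point](x) == y for x, y in test_cases)
    except Exception:
        return False

def nearest_correct_state(current, solution_space, test_cases):
    correct = [s for s in solution_space if passes_tests(s, test_cases)]
    return max(correct, key=lambda s: difflib.SequenceMatcher(None, current, s).ratio())

tests = [("Saturday", True), ("Monday", False)]
space = [
    "def weekend(d):\n    return d == 'saturday' or d == 'Sunday'",   # incorrect state
    "def weekend(d):\n    return d == 'Saturday' or d == 'Sunday'",   # correct state
]
student = "def weekend(d):\n    return d == 'saturday' or d == 'Sunday'"
print(nearest_correct_state(student, space, tests))
# A diff between the two states yields feedback such as: replace 'saturday' with 'Saturday'.
```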
Layout Feedback
Layout feedback is intended to help students write code according to a specific coding convention so that the student himself/herself and peers can understand the student's code more easily. The second purpose of layout feedback is to help students keep their code clear so that they can find errors more easily if their code is erroneous. Feedback on the layout level is only supported by few existing systems, e.g., Ludwig and VC PROLOG. Ludwig can perform stylistic analysis on a student's program. The stylistic analysis includes proper indentation, cyclomatic complexity, global variables, etc. The authors stated that the stylistic tests could be individually enabled by the instructor because not all instructors would need them ([33], p. 58). Whereas Ludwig focuses on stylistic analysis, VC PROLOG provides layout feedback by comparing the layout and the structure of the student's program with a reference program.
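Two of the stylistic checks named above (indentation and global variables) can be sketched in a few lines. The rules and thresholds below are assumptions for illustration; a system like Ludwig applies far more elaborate, instructor-configurable tests.

```python
# Minimal sketch of a layout/style check: flag irregular indentation and
# module-level (global) variable definitions.
import ast

def style_report(source, indent=4):
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        stripped = line.lstrip(" ")
        if stripped and (len(line) - len(stripped)) % indent != 0:
            problems.append(f"line {lineno}: indentation is not a multiple of {indent} spaces")
    tree = ast.parse(source)
    for node in tree.body:                  # top-level (module) statements
        if isinstance(node, ast.Assign):
            problems.append(f"line {node.lineno}: global variable defined at module level")
    return problems

sample = "counter = 0\ndef inc():\n   global counter\n   counter += 1\n"
print("\n".join(style_report(sample)))
```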
In principle, if there were a coding convention for every programming language, a component for checking style for that specific programming language could be developed and integrated into a learning environment. It may help students structure and debug their programs more easily. Unfortunately, most of the systems reviewed in this paper did not implement layout analysis. A possible explanation for the rare support of layout feedback in educational systems for programming is that most researchers focus on instructing syntax and semantic issues of a programming language, while the layout of a program is a secondary issue.
Quality Feedback
ELP and JACK are the systems among the reviewed ones that support quality analysis based on software engineering metrics. Quality analysis focuses on how efficiently a program is executed by the corresponding machine model of a programming language. Quality analysis not only checks whether an algorithm works, but also whether that algorithm is efficient in terms of time and memory resources. For example, ELP is able to assess that the expression "if (a==true){}" is a poor programming practice and the expression "if (a){ }" is a good programming practice ([24], p. 114). ELP's quality analysis includes two phases: structural similarity analysis and software engineering analysis. Structural similarity analysis indicates the location of code that is highly complex, lengthy, or poor programming practice by comparing the student's program with a set of reference programs. Software engineering analysis also serves to identify the location of complex or poor code.
The author intended to apply both analysis approaches in order to enhance the analysis accuracy, i.e., the results of the structural similarity analysis are used to verify the results of the software engineering analysis.
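A check of the kind quoted above (flagging "if (a==true)" as poor practice) can be sketched as a simple pattern rule. The patterns and messages are invented for illustration; ELP's software engineering analysis, like modern linters, works on properly parsed code and covers many more rules.

```python
# Hedged sketch of a quality check: flag redundant boolean comparisons.
import re

POOR_PRACTICES = [
    (re.compile(r"==\s*true\b"),  'redundant comparison with "true"; write "if (a)" instead of "if (a == true)"'),
    (re.compile(r"!=\s*false\b"), 'redundant comparison with "false"'),
]

def quality_report(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in POOR_PRACTICES:
            if pattern.search(line):
                findings.append(f"line {lineno}: {message}")
    return findings

java_snippet = "if (a == true) {\n    doSomething();\n}\n"
print("\n".join(quality_report(java_snippet)))
```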
In addition to providing semantic feedback, JACK [28] is also able to provide quality analysis based on the generated trace. The system compares the trace of a reference program with the trace of the student's program. If the trace of the student's program is longer than that of the reference program, "in this case, the solution seems to be unnecessarily complicated. The code executed during the trace steps without alignment are consequently most likely superfluous" ([28], p. 306). Thus, the length of the trace is an indicator of the quality of the student's program.
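The trace-length comparison can likewise be sketched in a self-contained way. Counting executed steps with a tracer, and the "needlessly complicated" student solution shown here, are assumptions for illustration only; JACK aligns Java traces rather than merely counting steps.

```python
# Illustrative sketch: a much longer student trace than the reference trace
# suggests superfluous code.
import sys

def count_steps(func, *args):
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        if event == "line":
            steps += 1
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return steps

def reference_sum(values):
    total = 0
    for v in values:
        total += v
    return total

def student_sum(values):                # hypothetical, needlessly complicated solution
    total = 0
    for v in values:
        for _ in range(1):
            total += v
    return total

extra = count_steps(student_sum, [1, 2, 3]) - count_steps(reference_sum, [1, 2, 3])
if extra > 0:
    print(f"student trace is {extra} steps longer; the solution may be unnecessarily complicated")
```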
It seems that quality analysis is not a primary concern for the majority of programming novices. As a result, not many educational systems for programming have focused on this issue.
Impact of Adaptive Feedback
In this section, the impact of adaptive feedback on supporting students in developing their programming skills is investigated. For this purpose, the evaluation studies that have been conducted for the systems were taken into account.
The objectives of the evaluation studies conducted for the educational systems for programming varied widely. In general, two classes of evaluation objectives for those systems can be distinguished: (1) pedagogical evaluation studies that investigated the impact of systems on learning on cognitive, meta-cognitive and motivational dimensions; and (2) technical evaluation studies that investigated technical properties of a system (e.g., regarding the algorithms' accuracy, validity of test cases, and system performance) and its usability. For example, Hong evaluated his PROLOG Tutor regarding the recognition ability of programming techniques applied in students' solutions and error analysis based on identified programming techniques. MEDD's performance was evaluated regarding: (1) program classification; and (2) error discovery. JavaBugs' library for detecting bugs in students' Java programs was evaluated regarding its accuracy in identifying correct and incorrect students' programs, respectively (cf. Table 2). The methods for pedagogical evaluation studies that have been conducted for educational systems can be divided into quantitative and qualitative classes. Qualitative evaluation studies rate the systems by asking for users' attitudes using questionnaires or conducting interviews. Quantitative evaluation studies are usually conducted in laboratories or in real class settings where students are requested to use the educational systems. Data collected from interactions between the students and the systems serve to analyze the systems' effectiveness.
Table 2 lists the educational systems for programming that have been reviewed in this paper. In the case that a pedagogical evaluation study for an educational system has been conducted and demonstrated positive results, it is marked as "+" in the column "Result" and a quote is cited from the corresponding literature to prove this. If there was only a technical evaluation for a system, no results with respect to learning effectiveness were available and thus the column "Result" is empty. In addition, Table 2 also summarizes the feedback types that are supported by each educational system. "Y" indicates that a specific feedback type is supported. "I" and "C" stand for intention-based feedback and code-based feedback, respectively.
Summary, Related Work, and Research Directions
Five types of feedback that are supported by educational systems for programming have been identified: Yes/No feedback, syntax feedback, semantic feedback, layout feedback, and quality feedback. Semantic feedback can be divided into two levels: intention-based and code-based feedback. While Yes/No feedback and semantic feedback are in accordance with the generic feedback classifications [7,16,17], syntax feedback, layout feedback, and quality feedback are specific to the domain of programming. Thus, the classification of feedback types that have been identified in the reviewed educational systems for programming can be clearly distinguished from those generic feedback classifications.
The first lesson we can learn is that feedback comes in various forms. Most systems support textual feedback, while a few other systems provide feedback in special forms: e.g., the execution trace of a program (e.g., JACK) or semaphores (red, yellow, and green, e.g., Radosevic's system). To the best of my knowledge, no study has yet been conducted to compare the effectiveness of different feedback forms.
With respect to feedback types, we can learn that numerous systems support semantic feedback, while only three systems (QuizJET, QuizPack, and Radosevic's system) provide Yes/No feedback for improving the programming skills of students. Interestingly, there are several systems that support both semantic feedback and syntax feedback/layout feedback. Due to the specified selection criteria of the systems (cf. Section 2), any educational system for programming that supports syntax feedback must provide more information than a compiler; hence, all reviewed systems that support syntax feedback also provide semantic feedback, layout feedback (e.g., VC PROLOG, Ludwig), or quality feedback (e.g., ELP). To the best of my knowledge, no study in the literature has yet investigated the effectiveness of syntax feedback in isolation. Among the systems that support semantic feedback, several program analysis approaches (Ask-Elle, Hong's PROLOG, INCOM, JavaBugs, MEDD, VC PROLOG) are able to analyze the student's intention. Another open question to be investigated is whether systems that support intention-based feedback in addition to code-based feedback are better than systems that provide feedback only on the code level.
In order to diagnose errors in students' solutions and to provide appropriate feedback, various program analysis techniques have been developed: an approach using pre-specified correct values (QuizJET, QuizPack, M-PLAT), syntax analysis using programming language compilers (Ludwig, Ask-Elle, ELP), an approach using a concept ontology (VC PROLOG), program analysis using grammar rules (Hong's PROLOG), an approach using multi-strategy machine learning (JavaBugs, MEDD), an approach using program transformation (Ask-Elle), approaches based on cognitive learning theories (model-tracing (JITS), constraint-based modeling (J-Latte), the weighted constraint-based model (INCOM)), and data-driven approaches (ITAP, FIT Java Tutor). An interesting finding is that although many different modeling techniques have been devised, each modeling technique has been deployed in only a single system. That is, the developed modeling techniques do not have widespread use in practice or across several tools for programming education. Several reasons for this phenomenon can be suggested. First, each developed modeling technique might be applicable only to a specific programming concept or a specific programming language, whereas the number of programming concepts of a programming paradigm is high and the number of programming languages is not predictable. Second, the time required for devising a modeling technique for a programming language or a programming concept might be very high, so that researchers, after having tested a devised modeling technique, may not have the human and time resources for testing its applicability to a new programming language or a new programming concept. One thing the program analysis approaches have in common is that they either use a reference program to compare against the student's program (e.g., Ask-Elle, ELP, and Ludwig), or they deploy an abstract representation capturing required components to analyze students' programs (e.g., Hong's PROLOG, INCOM, and VC PROLOG). The reference program and the abstract representation of required components are required to understand a student's program and to detect errors in it.
From the evaluation studies of the reviewed educational systems for programming (Table 2), we can learn that most systems have been evaluated and have demonstrated their pedagogical effectiveness. To the best of my knowledge, no study has been conducted to compare the effectiveness of each feedback type, nor has a between-system study been carried out to compare the effectiveness of adaptive feedback. This can be explained by the fact that the positive impact of an educational system is contributed by many features of the whole system: e.g., adaptive feedback, usability, and the algorithms' accuracy.
The review presented in this paper addresses the adaptive feedback types that are supported by existing educational systems for programming, and this purpose has not been the focus of other existing reviews.
In 1988, Ducassé and Emde [47] reviewed 18 environments and classified them into: (1) systems related to the tutoring task; (2) general-purpose debugging systems; and (3) enhanced Prolog tracers. Although the purpose of this review was to classify the debugging knowledge and techniques deployed in these systems, only seven of the systems were for educational purposes.
One decade later, in 1998, Deek and McHugh [18] published a survey that reviewed 29 computer-supported systems for learning programming and identified four categories of tools: (1) programming environments; (2) debugging aids; (3) intelligent tutoring systems; and (4) intelligent programming environments. Since the detailed survey of Deek and McHugh [18], there have been several other surveys on computer-supported systems for learning programming; however, most of them focused on specific aspects. For instance, Deek and colleagues [48] reviewed web-based environments for program development. Guzdial [49] investigated programming environments for novices. Douce and colleagues [50] reviewed computer-supported systems that automatically assess students' programming assignments. While these surveys focused on intelligent features (e.g., program analysis and assessment of programming assignments) of systems for supporting students, it is worth noting that, on the contrary, Gomez-Albarran [51] focused their survey on less sophisticated and less intelligent approaches, that is, approaches to learning/teaching programming that support neither automatic program analysis nor automatic diagnosis of errors in students' programs. Gomez-Albarran reviewed nearly 20 "most outstanding" tools of the following types: (1) tools including a reduced development environment (e.g., BlueJ [52]); (2) example-based environments (e.g., ELM-ART [14]); (3) tools based on visualization and animation (e.g., ANIMAL [53]); and (4) simulation environments (e.g., Karel++ [54]). In another survey, Le and colleagues [55] reviewed AI-supported instructional approaches for learning programming and classified them into: example-based, simulation-based, collaboration-based, dialogue-based, program analysis-based, and feedback-based approaches.
The review presented in this paper focuses on the aspect of adaptive feedback in educational systems for programming. It is now the right time to investigate the feedback types and analysis techniques that have been deployed in existing educational systems for programming, for two reasons. First, feedback is one of the most important features of educational systems (especially in the domain of programming); it is used by programming learners to revise a solution for a programming problem. Second, over the last three decades, not only have new systems been developed, but new program analysis approaches have also been proposed to enhance diagnosis accuracy, while most existing surveys of educational systems for programming have rather focused on classifying the systems ([18,47,51]).
The classification of adaptive feedback that has been developed in this review serves researchers and developers of educational systems for programming in two ways. First, it helps researchers and developers apply and advance existing program analysis techniques that have been evaluated and validated; they may enhance these analysis techniques by optimizing the analysis accuracy. Second, the classification of adaptive feedback provides researchers and developers a means to focus on specific feedback types when developing an educational system for programming that is intended to support adaptive feedback.
Table 1. Commonalities between the three classifications of feedback.
Table 2. Impact of the educational systems on students' programming. | 2016-06-10T08:59:46.098Z | 2016-05-23T00:00:00.000 | {
"year": 2016,
"sha1": "3b2a34e6d00f341dd05fb5c5d5439fea5f589da9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-8954/4/2/22/pdf?version=1464008053",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3b2a34e6d00f341dd05fb5c5d5439fea5f589da9",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
223559639 | pes2o/s2orc | v3-fos-license | Evaluation of the antitumor effects of PP242 in a colon cancer xenograft mouse model using comprehensive metabolomics and lipidomics
PP242, an inhibitor of the mechanistic target of rapamycin (mTOR), displays potent anticancer effects against various cancer types. However, the underlying metabolic mechanism associated with the effects of PP242 is not clearly understood. In this study, comprehensive metabolomics and lipidomics investigations were performed using ultra-high-performance liquid chromatography-Orbitrap-mass spectrometry (UHPLC-Orbitrap-MS) in plasma and tumor tissue to reveal the metabolic mechanism of PP242 in an LS174T cell-induced colon cancer xenograft mouse model. After 3 weeks of PP242 treatment, a reduction in tumor size and weight was observed without any critical toxicities. According to the results, metabolic changes due to the effects of PP242 were not significant in plasma. In contrast, metabolic changes in tumor tissues were very significant in the PP242-treated group compared to the xenograft control (XC) group, and revealed that energy and lipid metabolism were mainly altered by PP242 treatment, as with other cancer inhibitors. Additionally, in this study, it was discovered that not only the TCA cycle but also fatty acid β-oxidation (β-FAO) for energy metabolism was inhibited, and a clear reduction in glycerophospholipids was observed. This study reveals new insights into the underlying anticancer mechanism of the dual mTOR inhibitor PP242, and could further facilitate the understanding of PP242 effects in clinical application.
Results
Physical observations. The body weights of the mice in each group (NC, normal control; XC, xenograft control; and PP242-treated) were measured once every two days and compared (Fig. 1). Compared to the NC group, the body weights of the XC group mice decreased, but the changes were not significant. In contrast, the body weights of the PP242-treated mice decreased significantly and quickly in the first couple of days after treatment compared to the NC group, and later, the body weights gradually recovered. However, no significant change in body weight was observed between the XC and PP242-treated groups, indicating that the change in tumor size and drug treatment did not seriously affect body weight within 21 days.
Tumor-bearing mice were treated with vehicle or PP242 over the course of 21 days, and the size of the tumor in the right flank region was measured at two-day intervals using a digital caliper in the LS174T xenograft groups (XC and PP242-treated) in order to evaluate the therapeutic effects of the drug in vivo. After 3 weeks of PP242 treatment, a very significant difference in tumor volumes was observed (p < 0.001) before the mice were sacrificed (Fig. 2A,B). The tumors were excised after sacrifice and weighed, which further demonstrated the significant decrease in tumor weight for the PP242 treatment compared to the XC group mice (Fig. 2C).
Serum biochemical parameters and histopathological tests.
The results of the serum biochemical parameters test are shown in Fig. 3. All the measured serum biochemical parameters related to liver toxicity were in the normal range for all groups. No significant abnormalities were observed in the kidney toxicity biomarkers CRE and BUN either. BUN was significantly reduced only in the PP242-treated group compared to the NC group, but all values were in the normal range. Alterations in cholesterol levels were observed, where LDL levels increased and HDL levels decreased in the XC group compared to the NC group. However, those levels did not recover significantly after 3 weeks of PP242 treatment.
Histological examination of the liver showed no abnormal findings in all subjects tested in the NC, XC and PP242-treated groups (Supplementary Table S1, Supplementary Fig. S1). In the histologic examination of the kidneys, no abnormal lesions were observed in any group except for cortical renal tubule hypertrophy, which was observed in only one mouse among the five mice of the PP242-treated group (Supplementary Table S1). However, it was thought that there was no kidney toxicity since the degree of hypertrophy was very minimal and the serum biochemical parameters for kidney function were normal (Supplementary Fig. S1).

Identification of altered endogenous metabolites. In order to evaluate the underlying mechanism of action of the dual mTOR inhibitor PP242, the LS174T xenograft mouse model was treated with PP242 for 3 weeks, and plasma and tumor samples were analyzed via metabolomics and lipidomics approaches. The representative base peak intensity (BPI) chromatograms of the plasma and tumor tissues for metabolomics and lipidomics studies in both positive and negative ion modes are displayed in Supplementary Figs. S2, S3. To clearly visualize the metabolic differences among the NC, XC, and PP242-treated groups in plasma and between the XC and PP242-treated groups in tumor tissues, multivariate statistical analysis was used to analyze the processed mass spectrometric data. The principal component analysis (PCA) and partial least-squares-discriminant analysis (PLS-DA) score plots of plasma and tumor tissue metabolomics and lipidomics are shown in Figs. 4, 5, and Supplementary Figs. S4, S5, respectively. In the plasma metabolomics and lipidomics analyses, the xenograft groups (XC and PP242-treated) were clearly separated from the NC group, indicating metabolic differences due to tumor formation, but the XC and PP242-treated groups slightly overlapped (specifically in positive-mode metabolomics; Fig. 4, Supplementary Fig. S4). In addition, both the PCA and PLS-DA score plots of tumor tissues displayed a clear separation between the XC and PP242-treated groups (Fig. 5, Supplementary Fig. S5), which clearly indicates that metabolism in tissue was altered due to the inhibition of mTOR by PP242 and that these metabolic changes were clearer in tissue than in plasma. The R2 and Q2 values for plasma metabolomics and lipidomics were in the ranges of 0.43 to 0.61 and 0.1 to 0.20, respectively, whereas those for tumor tissue were in the ranges of 0.82 to 0.99 and 0.04 to 0.87, respectively (Supplementary Fig. S5). R2 denotes the explanation capacity of the model, whereas Q2 represents its predictive ability; R2 and Q2 values close to 1 suggest an excellent model. The overall R2 and Q2 values showed that the models were reliable and had good predictability. The low Q2 values obtained could be due to the partial overlap of the XC and PP242-treated groups.
In the metabolomics and lipidomics profiling of plasma, comparisons were carried out among the NC, XC and PP242-treated groups. The xenograft groups (XC and PP242-treated) were compared with the NC group to investigate the metabolic differences that occurred due to tumor formation and how much was recovered after PP242 treatment. Initially, a total of 49 significantly altered metabolites were identified. Of these, 22 metabolites were finally selected and used for further analyses by considering a VIP value > 1.0 and a p value (obtained by Student's t test) < 0.05. Detailed information on the identified metabolites in plasma is listed in Table 1, and the metabolomics standards initiative (MSI) descriptions are listed in Supplementary Table S2. A heat map was also used to visualize the change patterns and is displayed in Fig. 6. The identified metabolites were mostly limited to glycerophospholipids, fatty acids and a few organic compounds. Compared to the NC group, the overall change patterns in metabolites in both xenograft groups were similar. Interestingly, none of the identified metabolites were significantly altered in the XC group compared with the PP242-treated group, and there was no consistency in the alteration pattern across metabolite classes. Consequently, a potential effect of PP242 was not observed in plasma. However, when we observed the tumor volume and weight (Fig. 2), the changes were significant after the treatment with PP242, indicating an obvious therapeutic effect of PP242. Hence, in order to investigate the underlying meaning of the PP242 effect on tumor size reduction, we further analyzed tumor tissues.
In tumor tissue, compared to the XC group, a total of 93 significantly altered metabolites were identified in the PP242-treated group via metabolomics and lipidomics analyses. Considering a VIP > 1.0 and a p value < 0.05, 59 metabolites were ultimately selected and used for further analysis (Table 2). [Figure caption fragment recovered from extraction: ... NC, XC, and PP242-treated groups, to check the adverse effects associated with PP242. Error bars are presented as mean ± SD. *p < 0.05; **p < 0.01; ***p < 0.001; n = 5 per group.]
Discussion
Mammalian target of rapamycin (mTOR) is frequently activated and overexpressed in a variety of cancers, including colon cancer. Hence, inhibition of mTOR is one of the crucial therapeutic steps in the course of cancer treatment. PP242 is an ATP-competitive inhibitor of mTOR, and the activity of PP242 in the inhibition of colon cancer growth in vitro and in vivo has been previously reported 14,[20][21][22]. However, the underlying metabolic mechanism behind the effects of PP242 is still not clear. In the present study, using comprehensive metabolomics and lipidomics approaches, we identified that the antitumor effects of PP242 are associated with the inhibition of energy metabolism pathways, including glycolysis, the TCA cycle, fatty acid β-oxidation (β-FAO) and glycerophospholipid metabolism, in an LS174T cell-induced colon cancer xenograft mouse model. According to the serum biochemistry and tissue histopathology test results, no unusual toxicological changes were observed in the liver and kidney when PP242 was administered at a daily dose of 60 mg/kg/day for 21 days, and these data are supported by previously published data 23. These results mean that the changes in metabolite levels were not caused by toxicity but by the effect of PP242. Metabolomic and lipidomic approaches were performed with plasma and tumor tissue before and after treatment with PP242. In plasma, a total of 22 metabolites were identified which were significantly altered in either the XC or PP242-treated group. However, no metabolites were significantly altered between the XC and PP242-treated groups, and in fact the changing patterns were inconsistent (Table 1). We observed that, despite the visible effect of PP242 in reducing tumor size (Fig. 2), the changes in the plasma metabolome were not sufficient to draw conclusions about the underlying mechanism.
On the other hand, 59 metabolites were significantly altered in tumor tissue after 3 weeks of PP242 treatment (Table 2), which indicates that the comprehensive metabolomics and lipidomics profiling of tumor tissues provided a better understanding and delineation of the underlying mechanism of PP242 compared to that provided by the plasma analysis.
The major perturbed metabolic pathways were mainly related to energy metabolism (glycolysis, the TCA cycle, and β-oxidation of mitochondrial fatty acids) and glycerophospholipid metabolism.
In order to maintain growth and survival, most cancer cells rely heavily on glycolysis to fulfill the elevated demand for nutrients and energy, which finally leads to the elevation of lactic acid levels [24][25][26]. This elevation in lactic acid levels causes acidosis in the extracellular tumor microenvironment to maintain pH homeostasis and supports the migration and invasion of cancer cells [27][28][29]. In the present study, after PP242 treatment, the lactic acid level was significantly decreased in the tumors, indicating inhibition of glycolysis and thereby inhibition of cancer growth and invasion. We also observed a significant reduction in the level of the TCA cycle intermediate aspartic acid after PP242 treatment. Aspartic acid is a degradation metabolite of the TCA cycle, which is produced from the glutaminolysis pathway. In the glutaminolysis pathway, the production of α-ketoglutarate from glutamine via glutamic acid is another main metabolic pathway for tumor growth and survival that replenishes the TCA cycle energy demand by acting as a carbon source 23,30. Thus, the decrease in the aspartic acid level reflects an alteration of metabolites that may reduce the tumor growth of colon cancer xenograft mice by inhibiting the supply of intermediate metabolites that replenish the energy demand of the TCA cycle.
In addition to glycolysis, to meet the increased energy demand, cancer cells carry out other metabolic strategies, such as β-FAO, to produce more energy to support cancer cell growth and survival. Carnitine plays a major role in transporting long-chain fatty acids across the inner membrane of mitochondria, which facilitates β-FAO to generate and supply acetyl-CoA to the TCA cycle for energy production [31][32][33]. Previous studies have reported that higher blood carnitine levels denote higher energy and functioning of cells, which is the main requirement of cancer cells; increased levels of carnitines have been reported in breast cancer and chronic lymphocytic leukemia 34,35. According to our results, as the levels of carnitines, including l-carnitine, l-acetylcarnitine, and l-palmitoylcarnitine, decreased after mTOR inhibition by PP242, the transport of fatty acids also decreased, resulting in an increase in fatty acid levels. Hence, it could be speculated that inhibition of mTOR signaling by PP242 acts by blocking β-FAO in cancer cells through reduced carnitine levels. Treatment with PP242 also significantly reduced the glycerophospholipid (PC, PE, and PI) levels. Phosphatidylcholine (PC) is a fundamental element of the cell membrane and plays a pivotal role in the structure and function of cell membranes 36,37. Upregulation of PC has been observed in numerous cancers, including colon cancer, and is considered one of the hallmarks of cancer growth and progression [38][39][40]. In this experiment, after PP242 treatment, the level of PC was significantly downregulated, suggesting that PP242 is able to inhibit cancer growth and progression. On the other hand, one LysoPC was significantly increased when the mice were exposed to PP242. LysoPC is normally generated from PC through the catalysis of phospholipase A2 (PLA2) and could increase due to the inhibition of its conversion from PC, which also decreases the level of PC 41. Previous studies reported that the downregulation of PC and upregulation of LysoPC could trigger the induction of apoptosis [42][43][44].

Table 1. List of altered identified metabolites in plasma due to the effects of PP242 using metabolomics and lipidomics approaches. VIP values were obtained from the PLS-DA model and p values were calculated using Student's t-test (*p < 0.05; **p < 0.01; ***p < 0.001). Fold change > 1 represents increased metabolites (given in bold), < 1 represents decreased metabolites (given in italics).

In addition, since PP242 shows an apoptotic effect on several cancer cells 10,45, the decrease in PC and increase in LysoPC levels could be due to the apoptotic effects of PP242. However, further study is required to confirm the relation of PC and LysoPC with the apoptotic effect of PP242. Phosphatidylethanolamine (PE) is another vital element of phospholipids; in mammalian cells, PE accounts for almost 15-25% of the total lipid content and is also associated with a vast number of physiological cellular processes [46][47][48]. In the normal cellular state, PE is found only in the inner leaflet of the cell membrane. However, upregulation of PE on the outer surface of cancer cells has been previously reported 47,49. PE was significantly downregulated after PP242 treatment in our study, suggesting the potential of PP242 in treating cancer by decreasing PE levels.
We also observed a significant reduction in all phosphatidylinositol (PI) species after exposure to PP242 for 3 weeks. PI is another important phospholipid class that accounts for approximately 5.6% of total lipids, mainly exists in the inner leaflet of the cell membrane, and plays a role in regulating cell survival, signaling and membrane trafficking 50,51. PI overexpression is also associated with cancer progression, and accordingly, a decrease in PI levels has been shown to suppress the growth of various cancers 52-54. This reduction in PI levels supports our results and indicates the potent antitumor activity of PP242.
Among the anionic phospholipids, phosphatidylserine (PS) is the most abundant and is located in the inner leaflet of the plasma membrane of most mammalian cells. It has also been reported that cancer cells possess elevated levels of PS on their external surface 55,56. Hence, the decrease in PS levels could suggest the suppression of cancer growth by the effects of PP242 treatment. Diacylglycerol (DG) and sphingomyelin (SM) were significantly increased and decreased, respectively. In cellular signaling and lipid metabolism, DG plays a very important role as an intermediate 35. SM plays important roles in maintaining cell barrier functions and fluidity as a structural component of the cell membrane, and regulates various cellular processes 57,58. Additionally, depending on the tumor biology, the sphingolipid level could be increased or decreased 59. Thus, the decrease in SM levels could reflect the therapeutic effects of PP242. However, the role of DG in this study is not clearly understood. A summary of the main altered metabolic and lipidomic pathways due to 3 weeks of PP242 treatment in LS174T-induced colon cancer xenografts is shown in Fig. 8.
In conclusion, herein, we observed that due to the antitumor effects of PP242, the metabolic changes were clearer in tumor tissue than in plasma of the LS174T cell-induced colon cancer xenograft mouse model. The metabolic and lipidomic investigation of tumor tissues revealed that PP242 displayed its antitumor activity by inhibiting energy metabolism (glycolysis, the TCA cycle, and β-FAO) and glycerophospholipid metabolism.

Table 2. List of altered identified metabolites in tumor tissues due to the effects of PP242 using metabolomics and lipidomics approaches. VIP values were obtained from the PLS-DA model and p values were calculated using Student's t-test (*p < 0.05; **p < 0.01; ***p < 0.001). Fold change > 1 represents increased metabolites (given in bold), < 1 represents decreased metabolites (given in italics).

Fifteen male BALB/c nude mice aged 6 weeks and weighing 23-27 g were purchased from Nara Controls Inc. (Seoul, Korea). The mice were then housed for 1 week to acclimate to the ambient environment, and during the acclimation period, the temperature (25 ± 2 °C), the humidity (50-60%) and a 12 h light/dark cycle (8:00 am-8:00 pm) were maintained. The mice were initially divided into two groups: the normal control group (NC; n = 5) and the tumor xenograft group (n = 10). After 1 week, one million LS174T cells were subcutaneously injected into the right flank of the nude mice for xenograft model development, and blank media was injected into the normal control group following the same xenograft model procedure. Once the tumor size reached approximately 150-200 mm³, the xenograft group was further divided into two groups: a xenograft control group (XC; treated with vehicle, n = 5) and a PP242-treated group (treated with PP242; n = 5). PP242 solution (formulated with 2% ethanol, 20% PEG400 and 1% Tween 80 in distilled water) was administered orally to the xenograft mice at a dose of 60 mg/kg/day for 21 continuous days. The same volume of vehicle was orally administered to the NC and XC group mice for the same duration. Measurements of tumor volume in the right flank region and body weight were carried out at 2-day intervals. The tumor size was measured using a digital caliper, and the tumor volume was calculated with the formula of ref. 60. After 21 days of treatment, on the morning of the next day, the tumor size was measured. After that, the mice were anaesthetized, and blood was collected through cardiac puncture. Liver, kidney and tumor tissues were collected and immediately snap frozen in liquid nitrogen. Plasma was then obtained from the blood samples upon centrifugation. All the samples were stored at − 80 °C until further analysis.
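The tumor-volume formula itself appears to have been lost during text extraction (only the citation marker to ref. 60 remains). For illustration only, a convention commonly used for caliper-measured xenograft tumors, which may or may not be the one cited, is V = (length × width²)/2:

```python
# Hedged illustration only: a commonly used caliper-based tumor-volume
# convention for xenograft studies. The exact formula used in this study
# (cited as ref. 60) was lost in extraction and may differ.
def tumor_volume_mm3(length_mm, width_mm):
    return 0.5 * length_mm * width_mm ** 2

print(tumor_volume_mm3(10.0, 7.0))   # 245.0 mm^3
```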
Serum biochemical parameters and histological analysis.
In order to investigate the toxicity-related effects of PP242, serum biochemical parameters were measured. Aspartate aminotransferase (AST), alanine aminotransferase (ALT) and total bilirubin were measured to determine liver health; blood urea nitrogen (BUN) and creatinine (CRE) were measured to evaluate kidney health; and low-density lipoprotein (LDL) and high-density lipoprotein (HDL) were measured to evaluate heart health.
Histopathological examination was conducted with hepatic and renal tissues fixed in a primary 10% neutral formalin solution. The left lobe of the liver and the kidneys were cut to an appropriate size and thickness and fixed in a secondary formalin solution. The cut tissue was embedded in paraffin through a general tissue treatment process. The paraffin-embedded tissues were then sectioned at a thickness of 3 μm and subjected to hematoxylin and eosin staining, and histopathological examination was performed under an optical microscope (Olympus CX41, Japan).
Sample preparation for metabolomics and lipidomics study.
Plasma. For the metabolomics study, plasma samples were processed through a simple protein precipitation technique using 150 µL of ice-cold methanol using plasma methanol ratio of 1:3. After the addition of methanol, the mixture was then vortexed and centrifuged at 20,800×g for 10 min at 4 °C. The resulting supernatant was transferred to a new tube and diluted with water containing IS (reserpine; 4 µg/mL) at a ratio of 2:1. Finally, 5 µL of sample was injected into the UHPLC-Orbitrap-MS system after slight vortexing and spinning down. A quality control (QC) sample was made by gathering identical volumes of plasma samples and then diluting them with water containing IS (reserpine; 4 µg/mL) using the same ratio mentioned above. QC samples were used to evaluate the repeatability and robustness of the instrumental system and were analyzed before starting the sequence for column conditioning and after every ten samples in the analytical batch. Additionally, test mixtures containing a few commercially available validated standards were run at the beginning, middle and end of the analytical batch. The test mixture contained the following compounds: caffeine (0.5 µg/mL) and acetaminophen (0.5 µg/mL) for positive mode and glycocholic acid (0.5 µg/mL) and hippuric acid (0.5 µg/mL) for negative mode.
For the lipidomics study, 25 µL of PC (16:0/18:1)-d 31 (4 µg/mL; internal standard for positive mode), 25 µL of arachidonic acid-d 8 (4 µg/mL; internal standard for negative mode), and 50 µL of 0.1 M NaCl were added to 50 µL of the plasma samples in an Eppendorf tube. Lipid extraction was then performed by the addition of 250 µL of ice-cold chloroform/methanol (1:2; v/v) to the plasma mixture. The mixture was then vortexed for 1 min, kept at room temperature for 1 h, and centrifuged at 20,800×g for 10 min at 4 °C. The clear supernatant was transferred to a new tube and evaporated to dryness under a nitrogen stream at 37 °C. Finally, the dried residue was reconstituted using 60 µL of ice-cold chloroform/methanol (1:1; v/v) before being injected into the instrument for analysis. A QC sample was also prepared by gathering identical volumes from each sample after reconstitution in order to assess the repeatability and robustness of the instrument. All QC samples were run following a sequence similar to that of the metabolomics analysis.
Tumor tissue. The snap-frozen tumor tissues were lyophilized and homogenized into powder. Approximately 10 mg of homogenized tissue was transferred to a 2 mL Eppendorf tube, and 400 µL of methanol was added. The mixture was then sonicated for 2 min, and 1 mL of MTBE (methyl tert-butyl ether) was added, followed by 1 h of shaking in a shaking water bath at room temperature. Separation was induced by the addition of 250 µL of water followed by incubation at room temperature for 10 min. After 15 min of centrifugation at 20,800×g at 4 °C, the upper (organic) and lower (polar) layers were separated from the precipitate and then mixed together in another 1.5 mL Eppendorf tube. For the metabolomics study, 200 µL of the mixture was taken and evaporated to dryness under a nitrogen stream at 37 °C. The dried residue was then reconstituted in 100 µL of 80% methanol, and 50 µL was loaded into the UHPLC-Orbitrap-MS system for analysis. For the tissue lipidomics study, the same volume of mixture was taken, processed following the same method and finally reconstituted in 100 µL of chloroform/methanol (2:1). In order to normalize the tissue metabolomics and lipidomics data, the protein concentration of each tumor sample was quantified. Each tumor sample was diluted with distilled water, and the protein concentration was measured by Nano-MD (SINCO, Korea) using 10 µL of sample. For both the tissue metabolomics and lipidomics analyses, QC samples were made by taking identical volumes from each sample and running the samples following a sequence similar to that of the plasma analysis.
Instrumental conditions. Instrumental analysis was carried out using an Ultimate 3000 UHPLC system coupled to an LTQ Orbitrap Velos Pro mass spectrometer system (Thermo Fisher Scientific, San Jose, CA, USA) with a heated electrospray ionization (HESI) source. The same instrumental conditions and methods were used for the plasma and tumor tissue metabolomics and lipidomics analyses. For the metabolomics analysis, an ACQUITY UPLC BEH C18 column (2.1 × 100 mm, 1.7 µm, Waters, Milford, MA, USA) was used for the chromatographic separation by maintaining the autosampler and column oven temperature at 4 °C and 50 °C, respectively. Formic acid (0.1%) in distilled water (v/v, mobile phase A) and methanol (v/v, mobile phase B) was used as the mobile phase and eluted at a flow rate of 0.4 mL/min throughout the entire analysis. The elution gradient was regulated as follows: the elution started with 100% A and was maintained for 1 min, then gradually decreased to 80% A over next 4 min, and a linear decrease of mobile phase A was made from 80 to 30% from 4 to 10 min. Mobile phase A was then decreased to 0% at 14 min followed by a rapid increase to the initial conditions for re-equilibration at the initial conditions for 2 min.
For the lipidomics analysis, chromatographic separations were performed using an ACE Excel 2 Super C18 column (2.1 × 100 mm, 1.7 µm, Advanced Chromatography Technologies Ltd., Aberdeen, Scotland, UK), and the autosampler and column oven temperature were maintained at 4 °C and 50 °C, respectively. The mobile phase was composed of 10 mM ammonium acetate in either 40% acetonitrile (v/v, mobile phase A) or acetonitrile: isopropanol (10:90, v/v, mobile phase B) and eluted at the same flow rate as that of the metabolomics study. The elution gradient was initiated using 60% A that was maintained for 1 min, and from 1 to 3 min, the concentration of A decreased to 35%. Over the next 2 min, mobile phase A decreased to 15%; then, over the next 4 min, it further decreased to 0%, and this condition was maintained for 3 min. Finally, mobile phase A was rapidly increased to reach the initial conditions for 0.5 min (60% A) and re-equilibrated for 3.5 min. The injection volume was 5 µL for all analyses.
Mass spectrometric (MS) conditions were the same for the metabolomics and lipidomics analyses of the plasma and tumor tissue samples, respectively. MS detection was operated in both positive and negative modes using a full scan ranging from m/z 50 to 1600, and data were acquired in centroid mode at a resolution of 60,000. High-energy collision dissociation mode was employed for the dissociation of metabolites using a normalized collision energy of 35% with an isolation width of 1 m/z and an activation time of 10 ms. The detailed MS parameters were as follows: heater temperature, 40 °C; sheath gas flow rate, 45 arb; auxiliary gas flow rate, 10 arb; spray voltage, 4 kV; capillary temperature, 320 °C; and S-lens RF level, 61%. For both the sheath gas and auxiliary gas nitrogen was used.
Data processing and statistical analysis. After data acquisition using the LTQ-Orbitrap, the raw data were imported and processed (peak alignment, detection and identification) using the Compound Discoverer 2.1 software system. The retention time and m/z data for each peak were determined by the software. For the plasma metabolomics and lipidomics analyses, all data were normalized using the peak areas of the internal standards. For the metabolomics and lipidomics profiling of tumor tissue samples, all data were normalized to the total protein concentrations. All multivariate data analyses were performed using the SIMCA version 14.1 software system (Umetrics, Inc., Umeå, Sweden) and MetaboAnalyst 4.0 (https://www.metaboanalyst.ca/). Multivariate data analyses such as principal component analysis (PCA) and partial least squares-discriminant analysis (PLS-DA) were used to visualize the separation between groups and were performed with Pareto scaling. PCA was used to visualize the general clustering among the groups, whereas PLS-DA was employed to identify the differential metabolites between groups in the plasma and tumor samples. The components with a variable importance in the projection (VIP) value exceeding 1.0 were selected as potential compounds that contributed remarkably to the clustering and showed differences between the groups. Student's t-test was used to evaluate the statistical significance of each group of metabolic changes, and changes were considered significant when the p value was less than 0.05.
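A minimal sketch of this kind of workflow (Pareto scaling, PCA for visualization, and a univariate t-test filter) is shown below using scikit-learn and SciPy. It is illustrative only, with simulated data, and does not reproduce the SIMCA/MetaboAnalyst analyses or the PLS-DA VIP computation used in the study.

```python
# Illustrative multivariate/univariate filtering sketch on simulated data.
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))            # 10 samples x 50 metabolite features
groups = np.array([0] * 5 + [1] * 5)     # e.g., XC vs PP242-treated
X[groups == 1, :5] += 2.0                # pretend the first 5 features truly differ

# Pareto scaling: mean-center, then divide by the square root of the SD.
Xp = (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

scores = PCA(n_components=2).fit_transform(Xp)   # score-plot coordinates
print("PC1/PC2 scores of the first sample:", scores[0])

# Univariate filter: Student's t-test per feature, keep p < 0.05.
t, p = stats.ttest_ind(Xp[groups == 0], Xp[groups == 1], axis=0)
print("features with p < 0.05:", np.flatnonzero(p < 0.05))
```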
Putative identification and searching were carried out based on the mass adducts ([M+H]+, [M+Na]+, [M−H]−, etc.), mass fragment (MS2) ions and retention time. The mass accuracy tolerance window was set to 10 ppm while searching for the metabolites. The metabolites were searched and determined from online databases such as METLIN (https://metlin.scripps.edu/), HMDB (https://www.hmdb.ca/), LipidBlast and KEGG (https://www.genome.jp/kegg) utilizing the detected m/z values and mass fragmentation patterns.
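The accurate-mass matching step with a ppm tolerance window can be sketched as follows. The proton mass is a standard value; the database entries and the observed m/z are made up for illustration, and real identification additionally uses MS2 fragments and retention time as described above.

```python
# Illustrative sketch of putative identification by accurate mass: match an
# observed m/z against database entries within a 10 ppm tolerance window.
PROTON = 1.007276

database = {                 # hypothetical entries: name -> monoisotopic mass
    "L-carnitine": 161.1052,
    "lactic acid": 90.0317,
}

def ppm_error(observed, theoretical):
    return (observed - theoretical) / theoretical * 1e6

def match_mz(observed_mz, mode="positive", tol_ppm=10.0):
    hits = []
    for name, mass in database.items():
        theoretical = mass + PROTON if mode == "positive" else mass - PROTON
        err = ppm_error(observed_mz, theoretical)
        if abs(err) <= tol_ppm:
            hits.append((name, round(err, 2)))
    return hits

print(match_mz(162.1125, mode="positive"))   # [M+H]+ of L-carnitine, within ~0.2 ppm
```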
Data availability
Most of the data generated during this investigation are included in this manuscript and the supplementary materials section. | 2020-10-18T13:05:40.821Z | 2020-10-16T00:00:00.000 | {
"year": 2020,
"sha1": "0bb1125e61911b883cbdbb4d6a92dace4485d025",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-73721-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68525bc97bc5d5e4aace4fa4136a9720fc222e8a",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
3605489 | pes2o/s2orc | v3-fos-license | The effects of phenylalanine on exercise-induced fat oxidation: a preliminary, double-blind, placebo-controlled, crossover trial
Background When combined with exercise, dietary amino acid (AA) supplementation is an effective method for accelerating fat mobilization. However, the effects of single AAs combined with exercise on fat oxidation remain unclear. We hypothesized that consumption of a specific amino acid, L-phenylalanine, may result in the secretion of glucagon, and when combined with exercise may promote fat oxidation. Methods Six healthy, active male volunteers were randomized in a crossover study to ingest either phenylalanine (3 g/dose) or placebo. Thirty minutes after ingestion, each subject performed workload trials on a cycle ergometer for 1 h at 50% of maximal oxygen consumption. Results Oral intake of phenylalanine caused a significant increase in the concentrations of plasma glycerol and glucagon during exercise. The respiratory exchange ratio was also decreased significantly following ingestion of phenylalanine. Conclusion These results suggested that pre-exercise supplementation of phenylalanine may stimulate whole-body fat oxidation. No serious or study-related adverse events were observed. Trial registration UMIN000027502 Registered 26 May 2017. Retrospectively registered.
Background
Regular exercise is an important strategy to implement to help prevent obesity [1]. According to the exercise prescription recommended in the American College of Sports Medicine [2] guidelines, 45-60 min of exercise should be targeted to ensure sufficient energy expenditure in obese people. In particular, the blood levels of several hormones including catecholamines, glucagon, growth hormone, and cortisol are increased during exercise compared with levels in rest periods [3]. Moreover, several studies reported on acute exercise and lipid utilization, and related hormones in human [4][5][6]. Therefore, we hypothesize that a combination of exercise and factors that affect the secretion of some of these hormones may be effective for increasing fat catabolism and energy expenditure.
When combined with exercise, dietary amino acid (AA) supplementation is an effective method for accelerating fat mobilization [7,8]. It has been reported previously that pre-ingestion of a single dose of a mixture of specific amino acids (AAs) enhanced lipolysis and hepatic ketogenesis during and after exercise by stimulating glucagon secretion [9,10]. However, the effects of single AAs combined with exercise on fat oxidation remain unclear. L-phenylalanine (Phe), a tyrosine precursor, is an essential amino acid and is a substrate for tyrosine hydroxylase, the enzyme that catalyzes the rate-limiting step in catecholamine synthesis [11]. It is known that Phe is a dietary requirement for protein synthesis. Nuttall et al. showed that a single oral administration of a large amount of Phe (1 mmol/kg lean body mass) acted as a nutrient-signaling molecule that stimulated an increase in insulin and glucagon concentrations and regulated glucose metabolism [12]. However, whether ingestion of small amounts of Phe, with or without exercise, has similar effects has yet to be determined. Therefore, the purpose of this randomized, double-blind, placebo-controlled, crossover trial was to investigate hormone secretion and substrate catabolism induced by Phe during exercise.
Trial design
This was a randomized, double-blind, placebo-controlled crossover study, conducted at Ritsumeikan University at Shiga, Japan. The study protocol was approved by the Ethics Committee for Human Experiments at Ritsumeikan University and the Meiji Institutional Review Board. All study participants provided written informed consent prior to participation in the study. The study was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki and its later amendments. The study protocol was registered in the UMIN Clinical Trials Registry (UMIN000027502) on May 26, 2017 (restrospectively registered).
Subjects
The inclusion criteria were healthy young men aged 20 to 40 years old. Exclusion criteria consisted of individuals with a history or current condition of severe disease (such as liver disorder, cardiovascular disorder, respiratory disorder, renal disorder, and hypertension), anemia, and those who were judged ineligible by the study physician due to medical examination consultation history or other reasons. Six healthy, active young men were recruited as study volunteers. The baseline characteristics (mean ± standard deviation (SD)) of the participants were: age, 23.7 ± 1.0 yr; height, 172.6 ± 7.9 cm; body mass, 67.7 ± 4.9 kg; body mass index (BMI), 22.7 ± 1.1 kg/m²; and maximum oxygen uptake, 52.4 ± 4.2 mL/min/kg.
Experimental procedures
The study involved three visits to the laboratory. At the first visit, maximal oxygen uptake (VO2max, mL/kg/min) and maximal heart rate (HR, beats/min) were measured using an incremental cycle exercise test on a cycle ergometer (828E Monark cycle ergometer). The incremental cycle exercise began at a work rate of 60 W (30-90 W), with power output being increased in 15 W·min−1 steps until the subject could not maintain a fixed pedaling frequency of 60 rpm. The subjects were encouraged to exercise at maximum intensity during the ergometer test. Heart rate and rating of perceived exertion (RPE) were monitored every minute during exercise. RPE was obtained using the modified Borg scale. VO2 was monitored by breath-by-breath assessment using a respiratory gas analyzer (Aeromonitor AE-310SRD, Minato Medical Science Co., Ltd., Osaka, Japan). The highest 30-s averaged value of VO2 during the exercise test was designated as VO2peak if three of the following four criteria were met: (1) plateau in VO2 with an increase in external work, (2) maximal respiratory exchange ratio ≥ 1.1, (3) HR ≥ 200 beats/min. The results of exhaustion testing were used to calculate the power output equivalent to 50% VO2max. The remaining two visits were separated by at least six days. Dietary intake was self-recorded by the subjects during the study period. The subjects were instructed to refrain from binge eating, strenuous exercise, or drinking alcohol for 24 h prior to each trial and were also instructed to sleep more than eight hours the evening before each visit. At approximately 21:00 on the day before the second and third visits the subjects consumed the same meals that contained 694 kcal (carbohydrate:fat:protein ratio; 57:28:15). The subjects had no food or drink except water between the last meal and the start of each trial. Individual trials were performed at a similar time of the day for each subject (±3 h) to avoid any influence of circadian rhythm on the results.
During the second and third visits the subjects participated in the main experimental trials. Blood samples were drawn from the antecubital vein. The subjects were then randomized to ingest 150 mL of ordinary tap water and either a cellulose capsule containing 3 g of Phe (Kyowa Hakko Bio Co, Ltd., Tokyo, Japan) as the active sample or an empty cellulose capsule (Matsutani Chemical Industry Co., Ltd., Hyogo, Japan) as the placebo (designated as 0 min). The treatments were switched at the crossover phase of the study. After sitting for 30 min (rest period), the subjects mounted a cycle ergometer and commenced cycling for 60 min at a constant power output equivalent to 50% VO2max (exercise period). After exercising, the subjects rested for 60 min in the supine position (post-exercise period). Blood samples were collected before ingestion of the test sample and at 30, 60, 90, and 150 minutes after ingestion. HR was recorded and exhaled air samples were collected throughout the rest, exercise, and post-exercise phases. The tests were conducted in a quiet environment in a controlled room at a temperature of 21 ± 2°C and humidity of 45 ± 5%. The study design is summarized in Fig. 1.
Exhaled gas analysis
Exhaled oxygen and carbon dioxide concentrations were measured by the breath-by-breath method using the same respiratory gas analyzer described above. Respiratory exchange ratio (RER) was calculated using the expiratory gas measurements 30 s before and after every 10 min period during exercise and every 30 min during recovery.
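For reference, the RER itself is simply the ratio of carbon dioxide produced to oxygen consumed; the following minimal Python sketch, using invented breath-by-breath values rather than study data, illustrates the calculation assumed here.

```python
# Minimal sketch of the RER computation assumed above: RER = VCO2 / VO2,
# averaged over the breath-by-breath samples in a measurement window.
# All numbers are illustrative placeholders, not study data.
vo2  = [2.10, 2.15, 2.08]   # L/min, breath-by-breath values within a 30-s window
vco2 = [1.85, 1.90, 1.82]   # L/min
rer = sum(vco2) / sum(vo2)
print(f"RER = {rer:.2f}")   # lower values indicate a larger share of fat oxidation
```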
Blood sampling
Whole blood was collected in a vacutainer containing sodium fluoride and ethylenediaminetetraacetic acid (EDTA)-2Na and stored at 4°C for later analysis of glucose and lactate concentrations. Whole blood from an EDTA-2Na vacutainer with added aprotinin was centrifuged immediately at 1200 g for 10 min at 4°C, and the plasma was separated and frozen immediately at −80°C for later analysis of glucagon concentration. Whole blood from an EDTA-2Na vacutainer without added aprotinin was centrifuged immediately at 1200 g for 10 min at 4°C, and the plasma stored at 4°C for analysis of cortisol concentration. Whole blood from a plain vacutainer was allowed to stand at room temperature for 20 min and then centrifuged at 1200 g for 10 min at 4°C, followed by separation of the serum into two vials. One vial was stored at 4°C for analysis of FFA and growth hormone concentrations, while the other vial was frozen at −80°C for later analysis of acetoacetic acid, 3-hydroxybutyrate, and glycerol concentrations.
Statistical analysis
Data were expressed as mean ± SD and analyzed using Microsoft Excel (Microsoft Corp., Redmond, WA, USA). All variables were tested for normal distribution by the F-test using StatView-J 5.0 software (Abacus Concepts, Berkeley, CA, USA). If the data were normally distributed, repeated measures two-factor analysis of variance (ANOVA; time × treatment) was used to examine differences between the biochemical parameters from the two trials. Moreover, when the ANOVA revealed significant effects or interactions between factors, Tukey's post-hoc test was used to detect significant differences between the two treatments. On the other hand, if the data showed a skewed distribution, the Friedman test was used to examine differences. Statistical significance was set at P values <0.05.
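As a minimal, hedged sketch of this analysis pipeline, the following Python code (statsmodels used as a stand-in for the Excel/StatView workflow) runs a repeated-measures two-factor ANOVA and a Tukey post-hoc comparison on randomly generated placeholder data; the time grid and values are illustrative only.

```python
# Sketch of the repeated-measures two-factor ANOVA and Tukey post-hoc test
# described above; the data frame is filled with random placeholder values.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
subjects = range(1, 7)                        # six participants (crossover design)
treatments = ["placebo", "phenylalanine"]
times = [0, 30, 60, 90, 150]                  # minutes after ingestion (hypothetical grid)

rows = [{"subject": s, "treatment": t, "time": m, "rer": rng.normal(0.90, 0.03)}
        for s in subjects for t in treatments for m in times]
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with treatment and time as within-subject factors
res = AnovaRM(data=df, depvar="rer", subject="subject",
              within=["treatment", "time"]).fit()
print(res.anova_table)

# Post-hoc pairwise comparison between treatments at a single time point
subset = df[df["time"] == 60]
print(pairwise_tukeyhsd(subset["rer"], subset["treatment"], alpha=0.05))
```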
Cardiorespiratory responses
In RER, there were no outliers, and the data were normally distributed. The changes in RER are shown in Table 1. ANOVA showed a significant treatment × time interaction for RER (treatment P = 0.286; time P < 0.01; interaction P < 0.01), with Tukey's post-hoc test showing significant differences between treatments at 40 and 120 min (P < 0.05).
Biochemical parameters
In blood glycerol, FFA, glucose, and ketone bodies, there were no outliers, and the data were normally distributed. However, in blood lactate, there were outliers, and the data were not normally distributed. The biochemical parameters of the two treatments are summarized in Table 2. ANOVA showed a significant treatment × time interaction for serum glycerol concentration (treatment P = 0.022; time P < 0.01; interaction P < 0.01), with Tukey's post-hoc test revealing significant differences between treatments at 90 min (P < 0.05). The blood concentrations of glucose, lactate, FFA, and ketone bodies were not significantly different between the two treatments throughout the experimental period.
Circulating hormones
In blood glucagon and cortisol, there were no outliers, and the data were normally distributed. However, in blood lactate, there were outliers, and the data were not normally distributed. The concentrations of circulating hormones during each treatment are shown in Table 3. ANOVA showed a significant treatment × time interaction for plasma glucagon (treatment P = 0.025; time P = 0.035; interaction P = 0.039), with Tukey's post-hoc test showing significant differences between treatments at 30, 60, 90, and 150 min (P < 0.05). Growth hormone and cortisol concentrations did not differ significantly between the two treatments throughout the experimental period.
Discussion
This study in healthy active young men investigated the acute effects of Phe supplementation combined with exercise on hormone secretion and substrate oxidation. The study showed that, compared with ingestion of a placebo, ingestion of the Phe supplement significantly increased the concentrations of glycerol and glucagon. The RER was also decreased significantly by Phe ingestion. These findings suggest that whole body lipid oxidation increased and that pre-exercise supplementation of Phe stimulated fat oxidation. To the authors' knowledge this is the first study to examine the effects of Phe on human fat oxidation combined with exercise. Although considerable evidence exists on the safety of Phe consumption in humans [13], there is no evidence on the functional effects related to fat oxidation combined with exercise. The data presented in this study lay the groundwork for further investigations on Phe supplementation in sport.
The main mechanism for the stimulation of fat oxidation following Phe administration may be via glucagon secretion. Glucagon is a key hormone involved in fat catabolism during exercise [14,15]. Previous reports have noted that several widely divergent effects of glucagon appear to be mediated by an effector, adenosine 3′,5′-cyclic monophosphate (cAMP) [16]. The best-known mechanism mediating lipolysis is the cAMP pathway, wherein increased levels of cAMP activate cAMP-dependent protein kinase A (PKA). In this pathway, hormone sensitive lipase (HSL) is phosphorylated by PKA and then translocates from the cytoplasm to the lipid droplet surface, where it interacts with perilipin A, the result of which is a subsequent release of free fatty acids [17,18]. HSL is the most important lipase in lipolysis and is subject to hormonal regulation [19].
In addition to HSL, adipose triglyceride lipase is expressed predominantly in adipose tissue and is considered the rate-limiting lipolytic enzyme in adipocytes [20,21]. Therefore, an additional effect by Phe supplementation combined with exercise may stimulate the cAMP-dependent cascade pathway.
Previous studies have suggested that glucagon secretion is regulated by gut hormones including glicentin, GLP-1, GIP, and GLP-2 and other peptides secreted by the gastrointestinal tract such as gastrin-releasing peptide (GRP), cholecystokinin (CCK), and secretin [22]. Further studies are necessary to determine whether or not these hormones are involved in the stimulation of fat oxidation caused by Phe administration. It has been reported that a single pre-ingestion dose of mixtures of either 17 specific AAs [9] or 3 AAs [10] enhanced lipolysis and hepatic ketogenesis during and after exercise by stimulating glucagon secretion. However, in the current study pre-exercise ingestion of Phe increased glycerol concentrations but not ketone body levels. The results suggest that pre-exercise ingestion of Phe markedly stimulates lipolysis but not hepatic ketogenesis. Taken together, these results indicate that ingestion of a specific AA stimulates fat oxidation efficiently during exercise.
In 1970 Harper et al. [23] published a review of the effects of disproportionate levels of AA intake, with the information subsequently being updated in 1984 by Benevenga and Steeles [24]. There is concern that Phe supplementation may be associated with abnormal brain development known to occur in humans with phenylketonuria, a condition that results in a buildup of Phe and its metabolites in the blood [25]. However, no adverse effects were noted in humans given either a single oral dose of up to 10 g [26], ~30 g i.v. [27], or 3-4 g orally as aspartame [26]. No serious or study-related adverse events were observed in the current trial and therefore we conclude that Phe supplementation may be a safe method for accelerating fat oxidation during exercise.
This study has several strengths worth mentioning. First, the trial was a placebo-controlled, double-blind, crossover, randomized, controlled design and therefore the findings are highly reliable. Second, evaluation of the respiratory exchange ratio is regarded as the gold standard for evaluating whole body fat oxidation. Third, the study only administered 3 g of Phe prior to exercise, a low dose of amino acid supplementation that could be used easily elsewhere.
In contrast, we must also note some limitations. First, the number of enrolled participants was only six young males; therefore, the robustness of the data may be limited. Second, we failed to measure blood insulin levels. Insulin signaling is a key factor for glucagon secretion [12], so further studies are needed to determine the effect of Phe combined with exercise on insulin secretion. However, stimulation of the sympathetic nervous system by exercise suppresses insulin secretion [3]. Moreover, in our previous report, pre-ingestion of an amino acid mixture containing 1.5 g of Phe did not stimulate insulin secretion during 50% VO2max exercise [10]. Therefore, we consider the influence of insulin secretion on the fat oxidation effect mediated by Phe ingestion combined with exercise to be small. Third, we did not present data on energy expenditure, so it remains unclear whether this higher fat oxidation leads to obesity prevention or weight control. However, ingestion of Phe may increase fat mobilization, and this effect might lead, at least in part, to an increase in fat oxidation during exercise. Further studies are needed to investigate whether this acute response induced by the administration of Phe is sustained if the supplement is ingested over several weeks.
Conclusions
In conclusion, pre-exercise ingestion of a Phe supplement significantly accelerated secretion of glucagon during both rest and exercise. Furthermore, serum glycerol levels increased significantly during exercise, indicating a shift towards fat oxidation. | 2017-09-13T01:59:47.630Z | 2017-09-12T00:00:00.000 | {
"year": 2017,
"sha1": "ff3c2f34f036f85ea149122343d5c7601de897f8",
"oa_license": "CCBY",
"oa_url": "https://jissn.biomedcentral.com/track/pdf/10.1186/s12970-017-0191-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff3c2f34f036f85ea149122343d5c7601de897f8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202807497 | pes2o/s2orc | v3-fos-license | Both in vitro T cell proliferation and telomere length are decreased, but CD25 expression and IL-2 production are not affected in aged men
Introduction Aging is a natural process involving dysfunction of multiple organs and is characterized by increased susceptibility to infections, cancer and autoimmune diseases. The functionality of the immune system depends on the capacity of lymphocytes to proliferate in response to antigenic challenges, and telomere length has an important role regulating the number of cell divisions. The aim of this study was to determine the possible relationship between telomere length, interleukin 2 (IL-2) production, CD25 expression and proliferation of peripheral blood mononuclear cells (PBMCs) in aged men. Material and methods Telomere length was measured by RT-PCR in PBMCs from young and aged men. IL-2 production and CD25 expression were determined by ELISA and flow cytometry, respectively. Cell proliferation was measured by CFSE dilution assays upon in vitro stimulation with concanavalin A (Con A). Results PBMCs from aged men showed a shorter telomere length and a reduced capacity to proliferate in vitro, compared to young men. In contrast, no significant differences in the level of CD25 expression on T lymphocytes, and in vitro production of IL-2 were detected in both groups. In addition, no significant correlation was detected between levels of CD25 expression, IL-2 production, cell proliferation, and telomere length in aged men. Conclusions In aged men the telomere length shortening and the reduced T cell proliferation are not related to the capacity of IL-2 production and CD25 expression on T lymphocytes.
Introduction
Aging is a natural process involving dysfunction of multiple organs and is characterized by increased susceptibility to infections, cancer and autoimmune diseases [1][2][3][4][5]. In this regard, the immune response in aged individuals is usually ineffective, inappropriate, and potentially harmful. The altered functionality of the immune system associated with aging has been termed immunosenescence, and is associated with an enhanced risk of severe infections, and a low efficacy of vaccination [6][7][8][9][10]. In addition, immunosenescence is associated with abnormalities in the proportions of different leukocyte subsets in primary and secondary lymphoid organs, as well as defects in humoral and cellular immune responses [11,12]. Moreover, thymus involution begins early in aging, and aged mice have shown a significant decrease in the development of T lymphocytes [13,14], with a progressive reduction of the proportion of naïve CD4+ T cells. In contrast, CD8+ T lymphocytes usually remain without any alteration in elderly men [15][16][17]. Immunocompetence in aging men has been assessed in vivo by delayed-type hypersensitivity tests to recall antigens [18] as well as by antibody production upon vaccination [19,20]. In addition, several in vitro studies have shown a poor T cell capacity to proliferate in response to specific antigens or mitogenic stimuli [21,22]. Furthermore, T lymphocytes from aging men have displayed short telomeres [23,24], a restricted T cell receptor repertoire [25], a diminished capacity to synthesize IL-2 and reduced expression of CD25 upon in vitro stimulation [22,26].
On the other hand, telomeres are specialized molecular complexes at the end of chromosomes, consisting of tandem hexanucleotide repeats (5'-TTAGGG-3') and associated proteins called shelterins, such as telomere repeat binding factors 1 and 2 (TRF1 and TRF2) and protection of telomeres 1 (POT1) [27]. In humans, telomere length ranges from 10 to 20 kb, and a shortening occurs during each cellular division. When telomere length reaches a critical point (4-5 kb) it may trigger cell cycle arrest, cellular senescence or apoptosis. Genetic and epigenetic factors, sex, hormones, cellular stress and inflammation are involved in telomere shortening, and an inverse correlation between telomere length and aging has been widely reported [28][29][30].
On the other hand, clonal expansion of T and B cells is necessary for an adequate immune response, and the ability of these cells to proliferate is strongly affected by telomere shortening. Interestingly, lymphocytes are able to up-regulate the activity of telomerase, an enzyme that stabilizes the telomeres during the process of DNA replication and cell division, increasing cellular survival [31]. Thus, telomere shortening in T lymphocytes could be inhibited by several factors including IL-7 and IL-15, which increase telomerase activity or induce the expression of proteins of the Bcl-2 family [32,33]. Interestingly, telomere attrition could be used as a marker for reduced proliferative reserve in hematopoietic progenitor cells [34].
The interleukin-2 receptor is composed of α (CD25), β (CD122), and γ (CD132) chains, which interact with IL-2 to regulate differentiation, function and homeostasis of several effector T-cell subsets; mainly CD4+ Th1, and CD8+ cytotoxic lymphocytes. Signaling through IL-2R plays a critical role regulating the adaptive immune system and controlling T cell proliferation and survival. The above mechanisms are required for maintenance of immune tolerance and cellular homeostasis [35][36][37]. Therefore, telomere dynamics is critical in maintaining immune cell function, immunosenescence and pathogenesis regulation of several diseases such as autoimmunity.
However, the possible relation between IL-2, CD25, telomere shortening and the proliferation capacity of T lymphocytes in aging has not been completely elucidated.
The aim of this study was to determine the possible association of telomere length with IL-2 production, CD25 expression and proliferation capacity of T lymphocytes in aging men.
Subjects
Blood samples were obtained from twenty healthy men from 20 to 25 years old (young men group), and 20 men from 60 to 65 years old (aged men group). All individuals included in this study were healthy and individuals with any clinical disease manifestation, smokers or those with alcohol or drug consumption were excluded. A standardized questionnaire was applied which included questions regarding personal medical history. Anthropometric measurements were taken by a standardized method. The blood pressure was taken using a mercury sphygmomanometer with the subjects seated for 10 min prior to measurement. To determine the body mass index (BMI) values, a daily calibrated digital scale and stadiometer were used to measure body weight and height (Table I). This protocol was approved by the local bioethics committee and written informed consent was obtained from each volunteer.
Isolation of mononuclear cells, CFSE dilution and in vitro cell stimulation assays
Peripheral blood mononuclear cells (PBMCs) were isolated by Ficoll-Hypaque density gradient centrifugation (Sigma-Aldrich, St. Louis, MO). PBMCs were loaded with the cell tracker dye CFSE (0.5 μM; Molecular Probes) to monitor proliferation. Cells were adjusted to a concentration of 1 × 10⁶/ml in RPMI-1640 tissue culture medium (GIBCO, Eugene, OR) supplemented with 10% fetal bovine serum, 2 mM L-glutamine, 50 U/ml penicillin, and 50 mg/ml streptomycin. Finally, CFSE-labeled PBMCs were cultured with or without 2.5 μg/ml of concanavalin A (Con A, Sigma Aldrich) for 72 h at 37ºC, 100% humidity and 5% CO2. At the end of cell culture, cell proliferation (by CFSE dilution) and CD25 expression were determined by flow cytometry on CD3+ gated cells.
Flow cytometry analysis
To determine the expression of CD25 on T lymphocytes, freshly isolated peripheral blood mononuclear cells were stained with anti-CD8 FITC, CD25 PE, CD4 PerCP and CD3 APC. In all cases, cells were analyzed with a FACSCalibur flow cytometer (Becton-Dickinson, San Jose, CA) using the Cell Quest software (Becton Dickinson). Also, for the histograms, the region of negativity (autofluorescence) was determined using a fluorescent isotype control. The lymphocytes were gated using forward and side scatter to exclude debris and dead cells, and 10,000 events were acquired in each assay.
Measurement of IL-2 in cell culture supernatant
Interleukin-2 was quantified in the supernatants from unstimulated and Con A-stimulated PBMCs from each individual, using Quantikine ELISA kits (R&D Systems) according to the manufacturer's recommendations.
Telomere length measurement
Total DNA was isolated from PBMCs before and after in vitro stimulation with Con A using phenol-chloroform extraction. DNA was quantified with PicoGreen using a Molecular Devices 96-well spectrophotometer, and the results were confirmed using a NanoDrop SD-1000 spectrophotometer. Relative average telomere length was assessed by a modified version of the real-time PCR-based telomere assay previously reported by Cawthon [38]. The telomere length (kb) was determined using an RT-PCR assay (LightCycler, Roche) to generate the standard curve for each run and to determine the dilution factors of standards corresponding to T and S amounts in each sample. Briefly, the telomere repeat copy number to single gene copy number (T/S) ratio was determined using an RT-PCR assay (LightCycler, Roche). The following primers were used: tel 1, 5'-G2T5GAG3TGAG3TGAG3TGAG3TGAG3T-3' and tel 2, 3'-TC3GACTATC3TATC3TATC3TATC3TATC3TA-5'. The constitutive gene primers 36B4u: 5'-CAGCAAGTG3A2G2TGTA2TCC-3' and 36B4d: 3'-C3AT2CTACATCA2CG3TACA2-5' were used as a control. For telomere determination the reaction conditions were as follows: 1 cycle at 95°C for 5 min, followed by 40 cycles at 95°C for 15 s, 54°C for 60 s, and 72°C for 28 s; and for the gene 36B4, 1 cycle at 95°C for 5 min, followed by 40 cycles at 95°C for 15 s, 58°C for 20 s, and 72°C for 28 s. All samples for both telomeres and 36B4 were run in triplicate, and the threshold value for both reactions was set to 0.5. Additionally, to assess and compensate for intra-assay variations in PCR efficiency, a four-point standard curve from 12.5 to 100 ng of genomic DNA was prepared.
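A hedged sketch of how a relative T/S value can be derived from such standard-curve-calibrated qPCR data is given below; the curve parameters, Ct values, and reference value are invented placeholders, and converting T/S to an absolute length in kb requires an additional calibration that is not shown.

```python
# Illustrative T/S ratio calculation from qPCR standard curves (placeholder numbers).
def amount_from_standard_curve(ct: float, slope: float, intercept: float) -> float:
    """Back-calculate the relative DNA amount from a Ct value, assuming the
    standard curve Ct = slope * log10(amount) + intercept."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical triplicate-averaged Ct values and curve fits for one sample
telomere_amount = amount_from_standard_curve(ct=27.2, slope=-3.4, intercept=34.0)
s36b4_amount    = amount_from_standard_curve(ct=29.0, slope=-3.3, intercept=34.5)

t_s_sample    = telomere_amount / s36b4_amount
t_s_reference = 1.0                     # T/S of the reference DNA run on every plate
print(f"relative T/S = {t_s_sample / t_s_reference:.2f}")
```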
Statistical analysis
The Mann-Whitney U and Wilcoxon's tests were used to determine significant differences between two groups. In addition, the Spearman correlation test was used to analyze the degree of association between two variables. To analyze the relation between telomere length and T cell proliferation, Fisher's exact test was used. Statistical analysis was performed using the PASW Statistics 18 software.
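For illustration, the tests listed above have direct SciPy equivalents; the sketch below uses small placeholder arrays rather than the study data.

```python
# Sketch of the statistical comparisons described above using SciPy; all values
# are invented placeholders.
from scipy import stats

telomere_young = [12.3, 12.5, 12.4, 12.6, 12.2]   # kb
telomere_aged  = [6.4, 6.3, 6.5, 6.2, 6.4]

# Unpaired comparison between the two groups
print(stats.mannwhitneyu(telomere_young, telomere_aged))

# Paired comparison within one group (e.g., fresh vs. Con A-stimulated cells)
fresh      = [12.3, 12.5, 12.4, 12.6, 12.2]
stimulated = [12.6, 12.8, 12.5, 12.9, 12.4]
print(stats.wilcoxon(fresh, stimulated))

# Association between two variables (e.g., telomere length vs. divisions reached)
divisions = [4, 5, 4, 5, 3]
print(stats.spearmanr(telomere_young, divisions))

# 2x2 contingency table: low proliferation (<=3 divisions) vs. short telomeres (<=6 kb)
table = [[8, 2],
         [3, 7]]
print(stats.fisher_exact(table))
```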
Results
Telomere length in PBMCs before and after Con A stimulation
As expected, in freshly isolated PBMCs shorter telomeres were detected in aged men compared to young men (6.38 ±0.06 and 12.42 ±0.08 kb, respectively; p < 0.05, Figure 1). When lymphocytes were stimulated in vitro with Con A, cells from young men tended to increase their telomere length upon mitogenic stimulation, whereas cells from aged men did not (Figure 1 A). No difference was detected between the telomere length of fresh and non-stimulated PBMCs (data not shown). Because Con A in vitro stimulation induces non-specific T lymphocyte proliferation, a slight increase in the proportion of CD3+ lymphocytes was observed in young men, but not in aged men (Figures 1 B and 2 B), identifying CD3+ T lymphocytes as the proliferating cells (Figure 2 B). While in young men the proportion of CD3- cells showed a slight decrease after Con A stimulation, this was not observed in aged men (Figure 1 B).
Figure 1. The telomere length in PBMCs from young or aged men was measured by RT-PCR in fresh cells and after in vitro Con A stimulation. The graph in A corresponds to the median telomere length in kilobases (kb) in PBMCs from twenty young and twenty aged men. *p < 0.05 by Wilcoxon test. In B, representative flow cytometry histograms show the proportions of CD3+ (M2) and CD3- (M1) PBMCs from young (upper panels) and aged subjects (lower panels), without stimulus (fresh cells) or after Con A stimulation. The lymphocytes were gated using forward and side scatter to exclude debris and dead cells, and 10,000 events were acquired in each assay.
Cell division after in vitro T cell stimulation
Freshly isolated PBMCs were in vitro stimulated with Con A and proliferation of CD3+ cells was determined. In most cases, a higher percent of PBMCs reached more divisions in young men than in aged men (Figure 3 A). Correspondingly, CD3+ lymphocytes were the proliferating cells in young men (Figures 2 A and 3 B). Likewise, a significant increase in the percentage of CD3+ CD4+ T cells was observed upon stimulation with Con A in young men, but not in aged men (Figure 2 B). Accordingly, a significant decrease in the percentage of CD8+ cells was observed in young men, while no difference in CD8+ cells was detected in aged men (Figure 2 C).
CD25 expression and IL-2 production by CD3+ lymphocytes
To explore the possible role of IL-2 in CD3+ lymphocyte proliferation, both the expression of CD25 on CD3+ lymphocytes and IL-2 production were determined upon in vitro stimulation with Con A. Thus, in non-stimulated cells, low levels of CD25 were detected on CD3+ lymphocytes from both young and aged men (Figure 4 A). After in vitro stimulation with Con A the percentage of CD3+CD25+ cells tended to be higher in young men, but no significant difference was detected with respect to aged men (Figure 4 B). Likewise, similar levels of CD25 expression were observed on CD4+ and CD8+ lymphocytes (data not shown).
On the other hand, IL-2 levels in cell culture supernatants increased upon Con A stimulation in both groups. However no significant differences in levels of this cytokine were detected in young and aged men (Table II).
Relation of T cell proliferation capacity and telomere length in young and aged men
Above we have shown that a higher percentage of CD3+ lymphocytes reached more divisions in young men than in aged men (Figure 3). Also, we detected shorter telomeres in cells from aged men compared to young men, both in freshly isolated cells and after Con A stimulation (Figure 1). Therefore, we analyzed the relation between telomere length and the proliferation capacity of CD3+ T lymphocytes. For this, we considered the percentage of subjects whose cells had reached ≥ 3 cell divisions, and a cut-off of 3 cell divisions was considered a low proliferation capacity. Also, a value of ≤ 6 kb was considered a shorter telomere length. Using Fisher's exact test, our data show a direct relation between low proliferation capacity (≤ 3 cell divisions) and shorter telomere length (≤ 6 kb) in aged men (Figure 5). In addition, in both aged and young men no significant correlation was detected.
Figure 2. PBMCs from young or aged men were cultured with or without Con A for 72 h. Then, cells were stained with anti-CD3, CD4, and CD8 monoclonal antibodies and analyzed by flow cytometry. A - Represents the arithmetic mean and SD of the percentage of non-stimulated (white bar) or Con A in vitro stimulated CD3+ lymphocytes (black bar). B and C - Represent the arithmetic mean and SD of the percentage of non-stimulated or Con A in vitro stimulated CD4+ and CD8+ cells gated on CD3+, respectively. *p < 0.05 by Mann-Whitney U test.
Discussion
Immunosenescence is characterized by several immunological alterations, which are associated with increased risk of infections, cancer and autoimmunity. The functionality of the immune system depends on the lymphocytes' capacity to respond to any antigenic challenge and induce clonal expansion; thus, telomere shortening may limit this capacity. However, studies of telomere attrition and cellular mechanisms of the immune system are uncertain and controversial. Because the proliferation of T cells is dependent on both IL-2 and the expression of its high affinity receptor CD25, we decided to assess the possible relation between the reduced capacity of T lymphocytes from aged men to proliferate and CD25 expression as well as in vitro production of IL-2. As previously reported by others [21,24,39], we detected that PBMCs from aged men have shorter telomeres as well as a significantly reduced proliferation capacity of T cells. However, Peres et al. [17] did not observe differences in mitogen-induced proliferation of lymphocytes from young and old individuals. Similar results were reported by Wallace et al. [40]. It is feasible that these discordant results may be due to the subjects' age in the studied groups, the number of individuals included, and the cell culture conditions used in each study.
Table II. Levels of IL-2 in the cell culture supernatants of non-stimulated and Con A stimulated PBMCs. PBMCs from young or aged subjects were cultured with or without Con A for 72 h, and the synthesis of IL-2 was quantified by ELISA. The results correspond to the arithmetic mean and SD of IL-2 in pg/ml. The levels of IL-2 in the cell culture supernatants increased upon Con A stimulation, and no significant differences in the synthesis of this cytokine were detected between young and aged men (Mann-Whitney U test).
It is well known that telomere length may be influenced by sex, hormones, reactive oxygen species, as well as by genetic and epigenetic factors. Thus, to control the influence of sexual hormones, only healthy men were included in our study. Therefore, telomere length shortening and reduced capacity of T cell proliferation observed in aged men could be attributed merely to aging, because these alterations were not observed in young men. However, it is worth mentioning that blood pressure tended to be higher in aged men included in our study compared to young men. Thus, our results might also be influenced by this phenomenon, since it has been reported that telomere length is affected by arterial hypertension and cardiovascular disease [41,42].
It is important to note that we evaluated the telomere length in PBMCs purified from fresh whole blood by Ficoll-Hypaque density gradient centrifugation. Most monocytes and granulocytes are excluded with this procedure. Therefore, the telomere length measurement corresponds to T and B lymphocytes, which are present in a greater proportion in purified PBMCs. Furthermore, the changes can be attributed mainly to the T lymphocyte subset, because the stimulus used in our experiments was Con A. Con A is a lectin that induces non-specific T cell proliferation, but has no effect on B cells. Consequently, upon PBMC in vitro Con A stimulation, the cells on which the telomere length could be modified by cell division are T cells. As we expected, fresh PBMCs or non-stimulated cells (data not shown) from aged subjects had shorter telomeres than cells from young men. Moreover, when lymphocytes from aged subjects were induced to undergo in vitro proliferation with Con A, increased shortening of telomeric DNA was observed, whereas in cells from young men it was not. On the other hand, it is feasible that cells from young men increased the level and activity of telomerase upon mitogenic stimulation, because an increase in telomere length was detected. Telomerase is an enzyme that stabilizes the telomeres in the process of DNA replication and cell division, extending the capacity for replication and inducing the expression of some anti-apoptotic proteins [23,31]. In this regard, it has been reported that aging subjects have a high percentage of memory T cells, which bear short telomeres, and have shown a decreased ability to respond to antigen recall [43,44]. Therefore, very low levels or low activity of telomerase could be associated with shorter telomeres as well as reduced capacity of T cell proliferation in aged men. Likewise, Yang et al. [45] observed the relation between telomere length and cellular proliferation as a function of donor age. They found that cells from adrenal tissue cultured in vitro from donors of different ages showed a strong age-related decline in total replication capacity.
On the other hand, IL-2 is an important growth and differentiation factor for T cells [36,37], and it has been shown that lymphocytes from aged individuals show a diminished capability to synthesize IL-2 and defective expression of CD25 [22,26]. However, our results showed that CD25 expression on CD3+ lymphocytes and in vitro production of IL-2 were similar in cells from young and aged men. These results are in agreement with previous reports showing no apparent differences in the function of the IL-2/CD25 axis in cells from young and aged subjects [17]. However, it is necessary to carry out further studies to explore the intracellular pathways induced through the IL-2/CD25 interaction in cells from aged men.
It is clear that age-related changes in telomere length of PBMCs in vivo occur at different rates in different individuals and cell types, revealing that changes in the length of telomeres in T cells with aging are influenced by several factors, such as telomerase activity and health conditions [46]. Although our study has some limitations, such as having measured telomere shortening in total PBMCs and not in purified T cell subsets, the fact that CD3+ T cell proliferation after in vitro stimulation is deficient in aging, while the expression levels of CD25 and IL-2 production were similar between young and aged individuals, suggests that the reduced responsiveness of CD3+ T lymphocytes is due to an inability to proliferate caused by telomere shortening and not to the IL-2/CD25 interaction. As mentioned above, future studies are needed to test this hypothesis. Additionally, it is well known that in vitro Con A stimulation strongly induces non-specific T cell proliferation. It is worth highlighting that in this study a low dose (2.5 μg/ml of Con A) was used to prevent overstimulation-induced cell death; despite the low proliferation index shown here, we were able to detect differences in lymphocyte proliferation capacity between young and aged men.
In conclusion, our data confirm that cells from aged men had a short telomere length and a restricted capability to proliferate; however, these phenomena are not related to defects in either IL-2 production or CD25 expression on CD3+ T lymphocytes. | 2019-09-17T02:59:26.804Z | 2019-09-04T00:00:00.000 | {
"year": 2019,
"sha1": "f39802a4f2d74016bf538f50ff5ed6ad3c7cff88",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.archivesofmedicalscience.com/pdf-91931-64243?filename=Both%20in%20vitro%20T%20cell.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65e67660ae14adb2aee12c1a3c95e039b71597b5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203033445 | pes2o/s2orc | v3-fos-license | Exploratory Research on Green Information Technology Knowledge
In this study we focused on the extent of green IT knowledge and the effort to minimize energy use and electronic waste (e-waste). It was conducted by distributing questionnaires to students of the Faculty of Engineering and Informatics, Universitas PGRI Semarang. Using a random sampling technique, we considered 91 students as the total sample size. The sample consisted of 20 participants from the informatics study program, 7 from architecture, 4 from electrical engineering, 20 from civil engineering and 33 from mechanical engineering. The results show that our participants are 16-25 years old. They have been using a laptop/PC for about 5-10 years and 3-5 hours per day. On average, they own only one laptop/PC. In a day, they print more than 50 sheets using one-sided mode. This implies that our students have a reasonable understanding of the essentials of green IT. Nevertheless, efforts to minimize e-waste must be increased.
Introduction
Along with the development of technology, electronic waste (the so-called e-waste) has become a worldwide problem. This is because it contains more than 1000 categories of hazardous and toxic substances (B3), such as heavy metals (mercury, lead, chromium, cadmium, arsenic, silver, cobalt, palladium, copper, etc.) [1]. Actually, there are many things that the government can do to create environmentally friendly practices, such as paperless policies, virtualization, switching to conference calls, replacing desktop computers with laptops, blank screensaver options, hibernation, and turning off lights and server space when they are not needed, and so on. However, up to now the government has not provided clear regulations and socialization on how to safely use electronic devices or how to wisely dispose of electronic waste. Green information technology, or so-called green IT, is all about research and practice in the design, production, and use of computers, software, hardware, and communication systems effectively and efficiently, with minimal or no negative impact on the environment. Green IT also covers the use of IT to support, guide, and raise environmental awareness [2]. This approach might be a solution to the above-mentioned problem. Production and use of IT equipment consume a lot of energy, and this contributes about 2% of total carbon emissions in the world [3].
In this work, we investigate the awareness and green IT knowledge among our students in the Faculty of Engineering and Informatics, Universitas PGRI Semarang, Central Java, Indonesia. In this case, the Faculty of Engineering and Informatics, Universitas PGRI Semarang, Central Java, Indonesia will be abbreviated as FTI UPGRIS for simplicity. In the first step, green IT behavior aimed at minimizing e-waste is investigated. The above problems are in line with a strategic issue of the UPGRIS Research Master Plan, namely research topics in the information and communication technology sector, especially the development of green technology. The purpose of this study was to find out the extent of green IT knowledge and the effort to reduce e-waste amongst our students in FTI UPGRIS. The advantages of this study are to improve green IT knowledge and to contribute to the implementation of policy making, especially in the information technology infrastructure sector. In addition, it can also be used as a reference in the formulation of green IT policy and in scientific studies related to green IT.
To our knowledge, many studies in this field have been reported. Among them, a paper entitled Impact of Green Computing Behavior on Efforts to Minimize E-Waste, reported by S. J. Prasetiono et al. in 2016, describes the green computing behavior of students [1]. T. B. Chiyangwa in 2014 published a paper entitled Belief and Actual Behavior in Green Information Technology within a South African Tertiary Institution. It evaluates the actual beliefs and behaviors of IT users regarding green IT in South Africa by conducting surveys. A hypothesis model based on the Theory of Planned Behavior (TPB) was used to evaluate the main factors that contribute to green IT awareness in empirical studies [4]. Another paper on green computing entitled "Future of Computers" was reported by J. M. Prakas in 2014. It described how the concept of green computing is applied to computers in industry and the relationship between the application of green computing and the system used. Finally, he obtained a key that was used to identify the most relevant and influential factors in the application of green computing, and he successfully used several approaches to support system performance [5]. Murugesan et al. also explained that there are four approaches to carrying out a green IT project: green use, green manufacturing, green design, and green disposal of IT systems. These approaches can be used to analyze the impact of IT usage.
Methods
The method employed in this work is a scientific method used to obtain data with specific purposes and uses [6]. Firstly, we defined the number of samples required in this exploratory study (see Table 1) using the relation n = N / (1 + N e²) (Eq. 1), where n, N, and e are the number of samples, the population size and the error limit, respectively. According to Eq. (1), the total sample size is 91. Secondly, we prepared the multiple choice questionnaire. In total, there are 36 questions which include the participant's age, the length of time of laptop use in total and per day, the number of laptops owned, the number of pages printed per day, the type of paper used for printing, and opinions regarding green IT. There are 3 types of answers for each question, scored 1-3. A score of 1 indicates a lack of green IT knowledge and e-waste effort, while a higher score of 2 or 3 indicates better understanding.
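A small sketch of the Slovin-type calculation assumed behind Eq. (1) is shown below; the population size and error limit used are hypothetical, since this excerpt reports only the resulting sample size of 91.

```python
# Sketch of the sample-size calculation in Eq. (1); inputs are hypothetical.
import math

def sample_size(population_n: int, error_limit: float) -> int:
    """n = N / (1 + N * e^2), rounded up to the next whole respondent."""
    return math.ceil(population_n / (1 + population_n * error_limit ** 2))

# Example: a population of 1000 students with a 10% error limit gives n = 91
print(sample_size(population_n=1000, error_limit=0.10))
```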
Before distributing them to the students, we tested the validity and reliability of the questions. The validity test was carried out to find out whether the questions are logical and reasonable. A question is considered valid if its calculated correlation coefficient r exceeds the critical table value r_table, where, for a number of samples n = 91 and a significance level of 5%, r_table = 0.204. The r values obtained for the 36 questions are in the range of 0.232-0.624. Therefore, all questions are valid.
On the other hand, the reliability test was carried out to find out whether the questions are consistent. It was calculated using the Cronbach's alpha approach [7]. The questions are considered reliable if the resulting Cronbach's alpha coefficient exceeds the acceptance threshold (commonly taken as greater than 0.6).
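The following minimal sketch computes Cronbach's alpha for a respondents-by-items score matrix; the responses are random placeholders (random, uncorrelated items will yield an alpha near zero), not the actual survey results.

```python
# Cronbach's alpha for a respondents x items matrix of scores (placeholder data).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
responses = rng.integers(1, 4, size=(91, 36))    # 91 respondents, 36 items, scores 1-3
print(f"alpha = {cronbach_alpha(responses):.2f}")
```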
Results and discussion
Based on the distributed questionnaires, we summarized the results as shown in Table 2. It shows that most of our participants are 16-25 years old. They have been using a laptop/PC for about 5-10 years and 3-5 hours per day. On average, our students own only one laptop/PC. They print more than 50 sheets per day using one-sided mode. Figure 1 shows the opinions regarding green IT knowledge and the effort to minimize e-waste amongst students in FTI UPGRIS. Regarding green IT knowledge, 40% of participants strongly agree, 44% agree and 16% disagree. On the other hand, regarding the effort to minimize e-waste, 67% strongly agree, 30% agree and 3% disagree.
Conclusion
We have successfully performed an exploratory study on the green IT knowledge and e-waste minimization efforts of our students in the Faculty of Engineering and Informatics, Universitas PGRI Semarang, Central Java, Indonesia. The results show that our students understand the importance of green IT and of minimizing e-waste reasonably well. This research is a good start and implies that our students have a high awareness of keeping our environment clean. Nevertheless, efforts to minimize e-waste are still not optimal. Therefore, we plan to conduct further research on how to categorize and re-use e-waste. | 2019-09-17T02:48:51.782Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "db4fe386cb21cbf49b8d7cb17de94c8ec930d126",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1179/1/012109",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b247e44be823ba76c47106fea36380b423adb415",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
202073137 | pes2o/s2orc | v3-fos-license | Study on Development of Fiber-enriched Noodles using Moringa Leaves (Moringa olifera
Though noodles are popular, they hold a notion of 'low fiber food' as they are generally prepared from refined wheat flour, which by its nature lacks fiber. Hence an attempt was made to develop fiber-enriched noodles by incorporating leaves of moringa (Moringa olifera) as a fiber source at different levels, viz., 3, 4.5 and 6%, of which 3% was found to be the best based on cooking characteristics and sensory evaluation. The noodles thus prepared were packed in flexible polyethylene pouches, stored at room temperature and further analyzed for physicochemical, cooking, and sensory characteristics at regular intervals of 60 days up to 180 days of storage. All the analyses showed non-significant differences during the storage period, indicating that the product remains acceptable up to 180 days of storage. The dietary fiber was found to increase from 3.3 to 4.1%, and the in vitro glycemic index (GI) analysis showed that the moringa fiber noodle was a low GI food when compared to the medium GI of the control. The MUFA and PUFA contents of the moringa fiber noodles were increased to 3.4 and 2.2% respectively when compared to the control.
Introduction
The consumption rate of noodles has been steadily increasing not only in Asia but in other parts of the world, making noodles a popular food product mainly due to their easy cooking procedure, short cooking time and appealing taste (Hou et al., 2010). There has been a drastic increase in the market size of noodles in India during the last decade, wherein the size increased from 0.31 to 1.87 billion US dollars from 2009 to 2019 (Anon, 2019). Noodles have been categorized into white salted noodles and yellow alkaline noodles based on the type of salt used, which affects the color, flavor, and texture of the final noodle product. Noodles made with regular salt (NaCl) are called white salted noodles and those made with alkaline salts (Na2CO3 or K2CO3) are yellow alkaline noodles (Fu, 2008 and Xiaoting et al., 2016). Common steps in noodle production involve dough kneading, conditioning, sheeting and compounding, cutting, steaming, and drying (Xiaoyan et al., 2013). These methods are common for all machine-made noodles, with minor adjustments being made by processors. Commercially, noodles are made with refined wheat flour, which by its nature lacks dietary fiber (Guoquan et al., 1998). Many research works have been conducted to incorporate various fiber sources in foods, such as carrot pomace (Prashant Sahini and Shere, 2017), wheat fiber and banana fiber. The present study involves the development of fiber-enriched noodles using leaves of moringa (Moringa olifera).
Moringa (Moringa olifera Lam.) is native to the Indian subcontinent and has become naturalized in the tropical and subtropical areas around the world. This plant can grow well in the humid tropics or hot dry lands and can survive even in less fertile soils and are also less affected by drought (Anwar et al., 2007). The tree is known by various regional names like benzolive, drumstick tree, horseradish tree, kelor, marango, mlonge, mulangay, sahijan, and sajna (Fahey, 2005). The tree is very important for its medicinal value because its various parts possess antitumor, antipyretic, anti-inflammatory, antiulcer, antidiabetic, diuretic, antioxidant, antibacterial, and antifungal properties. Apart from these properties, the leaves are also rich in dietary fiber content (Sodamade et al.,2013). Hence a study was carried out to develop fiber enriched noodles by incorporating the moringa leaves as a fiber source and to assess its shelf life during the storage period.
Materials
The raw materials, namely moringa leaves, refined wheat flour and salt, were procured from the local vegetable market. Utensils and accessories made of food grade stainless steel (SS 304) were used for the noodle preparation. The noodle making machine (make: M/s Atlas Pvt Ltd; completely made of food-grade stainless steel SS 304) consists of a set of plain rollers and two sets of cutting rollers (cutting width 4 mm and 7 mm). A removable arm handle is provided for easy rotation of the rollers, and an adjustable screw is provided on the plain rollers for adjusting the clearance between them (Plate 1).
Noodle Preparation and Storage
The ingredients for the preparation of fiber-enriched noodles were refined wheat flour, water, salt, and the fiber source. Salt was first dissolved in water, and the solution was added to the refined wheat flour along with moringa leaf powder at the rate of 3, 4.5 and 6% and mixed well. The resultant dough had a crumbly consistency similar to that of moist breadcrumbs. Conditioning of the noodle dough was carried out for about half an hour to make the dough suitable for sheeting. The dough was first formed into a sheet by folding and passing it through the plain rollers of the noodle machine several times. The thickness of the sheet was reduced stepwise by minimizing the roller spacing before cutting into strands 4 mm thick. Steaming of the freshly prepared noodles was done for about 15-20 minutes until the noodles were partially cooked. The steam-cooked noodles were arranged on trays and kept for sun drying for about 6-8 hours for effective drying, and thereafter kept in a hot air oven at 60ºC for 6-8 hours till the product had reached about 12% moisture content. The product was then allowed to cool for 30 minutes at ambient room temperature, thereafter packed in polyethylene bags, sealed and stored at ambient temperature. The flowchart for the preparation of fiber-enriched noodles is shown in Flow chart 1.
The fiber-enriched noodle samples were analyzed for physicochemical parameters viz., moisture (wet basis) and total ash as per the standard procedures described in AOAC, 2006 and the total and dietary fiber were analyzed as per the standard procedure AOAC, 2000.
Fatty Acid Profile
Gas chromatography is a technique applied for carrying out the separation and measurement of mixtures of materials that can be volatilized. The fatty acid profile was analyzed by gas chromatography (Jana et al., 2015).
Glycemic Index
Glycemic index (GI) measurement was carried out using an in vitro technique on the carbohydrate-containing foods. GI is a relatively new way of ranking carbohydrate-containing foods according to their effect on blood glucose levels (Crosbie and Ross, 2004).
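One widely used in vitro approach (the starch hydrolysis index regression of Goñi and co-workers) estimates GI from the area under the starch hydrolysis curve relative to a reference food; the sketch below follows that approach, which may differ from the cited Crosbie and Ross (2004) procedure, and uses invented hydrolysis values.

```python
# Hedged sketch of an in vitro GI estimate via the hydrolysis index (HI);
# hydrolysis percentages are invented placeholders, not measured data.
import numpy as np
from scipy.integrate import trapezoid

times = np.array([0, 30, 60, 90, 120, 150, 180])                # min
sample_hydrolysis    = np.array([0, 8, 14, 18, 21, 23, 24])     # % starch hydrolysed
reference_hydrolysis = np.array([0, 45, 65, 75, 80, 83, 85])    # reference food

hi = trapezoid(sample_hydrolysis, times) / trapezoid(reference_hydrolysis, times) * 100
gi = 39.71 + 0.549 * hi      # regression relating HI to estimated glycemic index
print(f"HI = {hi:.1f}, estimated GI = {gi:.1f}")   # ~53 here, i.e. a low-GI food
```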
Texture
Texture of uncooked noodles was measured in terms of gradient force, which acts opposite to the gravitational force and is therefore expressed as a negative value. The gradient force was determined with a texture analyzer using probe PS 06. For cooked noodles, stickiness was calculated using texture profile analysis with a compression batten probe (Fu, 2008).
Colour
Noodle color was measured with a Hunter colorimeter (spectrophotometry). L* denotes lightness, a* denotes redness or greenness, and b* denotes yellowness.
Cooking Characteristics
The cooking characteristics considered for optimization of the fiber level in noodles include the cooking time, gruel solid loss and water uptake, which were calculated by the methods given by Omeire (2015), Poongodi et al. (2010) and Taneya et al. (2014), respectively.
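The cited procedures are not reproduced here, but the hedged sketch below shows formulas commonly used for water uptake and gruel solid loss; the exact definitions in the cited methods may differ, and the weights are illustrative.

```python
# Commonly used definitions of two cooking characteristics (assumed forms,
# not necessarily identical to the cited procedures); weights are placeholders.
def water_uptake_percent(uncooked_g: float, cooked_g: float) -> float:
    """Weight gained on cooking, expressed relative to the uncooked weight."""
    return (cooked_g - uncooked_g) / uncooked_g * 100

def gruel_solid_loss_percent(sample_g: float, dried_residue_g: float) -> float:
    """Solids recovered after evaporating the cooking water, relative to sample weight."""
    return dried_residue_g / sample_g * 100

print(water_uptake_percent(uncooked_g=10.0, cooked_g=24.5))            # 145.0 %
print(gruel_solid_loss_percent(sample_g=10.0, dried_residue_g=0.65))   # 6.5 %
```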
Plate 1: Noodle making machine (A: plain roller, B: cutter rollers)
Sensory Evaluation
Sensory analysis was carried out by having six untrained panelists from the College of Food and Dairy Technology score the cooked noodle samples for color and appearance, taste, texture, chewability and overall acceptability; the scores were recorded on a 9-point hedonic scale.
Shelf Life Study
The packed samples were analyzed at intervals of 60 days up to 180 days of storage. Physicochemical properties like moisture content, crude fiber, and total ash were determined during the storage study on the 0th, 60th, 120th, and 180th day of storage. The color and texture of the noodle samples were analyzed on the 0th and 180th day of storage, whereas the glycemic index and fatty acid profile were analyzed at the initial stage (Hou et al., 2010).
Statistical Analysis
The data obtained were analyzed statistically using IBM SPSS® 20.0 for Windows® software as per the standard procedure of Snedecor and Cochran (1994).
Results and Discussion
The preliminary trials were conducted with three different levels of fiber incorporation for noodle preparation. The level of incorporation of the selected fiber was optimized based on the results of cooking characteristics, viz., minimum cooking time, lowest gruel solid loss, highest water uptake ratio, and superior sensory scores of cooked noodles (Tables 1 and 2). From Tables 1 and 2, it is obvious that 3% incorporation was the best level, and further studies were carried out on noodles prepared using the 3% level of incorporation. The physicochemical characteristics like moisture, crude fiber, total ash, and dietary fiber were analyzed at intervals of 60 days, and the results are given in Table 3. From the table, it is seen that there is an increase in moisture, crude fiber, and dietary fiber, whereas a reduction in total ash was found for moringa fiber noodles when compared to control. The glycemic index was measured by the in vitro method and the fatty acid profile was estimated by the chromatographic technique. The color (L*, a* and b*) and texture (gradient force) of both cooked and uncooked noodles were analyzed by the above-mentioned methods of analysis. The results are tabulated in Tables 1 to 5.
Conclusion
The study on fiber enrichment of noodles showed that the noodles prepared by incorporating moringa leaves have a higher fiber content than the control sample. The MUFA and PUFA contents were also increased in the noodles developed by incorporating moringa leaves. The developed noodles can also be categorized as a low glycemic food, as they possess a lower glycemic index. The ambient shelf life storage study was conducted for the moringa leaf incorporated noodles, and the storage studies for 180 days showed no significant difference in the characteristics of the fiber noodles. Therefore, the shelf life of the fiber-enriched noodles was found to be 180 days. | 2019-09-10T00:27:57.150Z | 2019-08-26T00:00:00.000 | {
"year": 2019,
"sha1": "9b420663a55256dcf3a1cf02fbc73addfcbba7f9",
"oa_license": null,
"oa_url": "https://doi.org/10.18805/ajdfr.dr-1451",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6c7c1a7e82b7f22c294cce34ee3bcfb4c568854c",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
232065109 | pes2o/s2orc | v3-fos-license | Fluoride release from two types of fluoride-containing orthodontic adhesives: Conventional versus resin-modified glass ionomer cements—An in vitro study
Introduction Development of white spot lesions (WSLs) during orthodontic treatment is a common risk factor. Fixation of the orthodontic appliances with glass ionomer cements could reduce the prevalence of WSL’s due to their fluoride release capacities. The purpose of this study was to evaluate differences of fluoride release properties from resin-modified and conventional glass ionomer cements (GICs). Methods The resin-modified GICs Fuji ORTHO LC (GC Orthodontics), Meron Plus QM (VOCO), as well as the conventional GICs Fuji ORTHO (GC Orthodontics), Meron (VOCO) and Ketac Cem Easymix (3M ESPE) were tested in this study. The different types of GICs were applied to hydroxyapatite discs according to the manufacturer’s instructions and stored in a solution of TISAB III (Total Ionic Strength Adjustment Buffer III) and fluoride-free water at 37°C. Fluoride measurements were made after 5 minutes, 2 hours, 24 hours, 14 days, 28 days, 2 months, 3 months and 6 months. One factor analysis of variance (ANOVA) was used for the overall comparison of the cumulative fluoride release (from measurement times of 5 minutes to 6 months) between the different materials with the overall level of significance set to 0.05. Tukey’s post hoc test was used for post hoc pairwise comparisons in the cumulative fluoride release between the different materials. Results The cumulative fluoride release (mean ± sd) in descending order was: Fuji ORTHO LC (221.7 ± 10.29 ppm), Fuji ORTHO (191.5 ± 15.03 ppm), Meron Plus QM (173.0 ± 5.89 ppm), Meron (161.3 ± 7.84 ppm) and Ketac Cem Easymix (154.6 ± 6.09 ppm) within 6 months. Analysis of variance detected a significant difference in the cumulative fluoride release between at least two of the materials (rounded p-value < 0.001). Pairwise analysis with Tukey’s post hoc test showed a significant difference in the cumulative fluoride release for all the comparisons except M and MPQM (p = 0.061) and KCE and M (p = 0.517). Conclusion Fluoride ions were released cumulatively over the entire test period for all products. When comparing the two products from the same company (Fuji ORTHO LC vs. Fuji ORTHO from GC Orthodontics Europe GmbH and Meron Plus QM vs. Meron from VOCO GmbH, Mannheim, Germany), it can be said that the resin-modified GICs have a higher release than conventional GICs. The highest individual fluoride release of all GICs was at 24 hours. A general statement, whether resin-modified or conventional GICs have a higher release of fluoride cannot be made.
Introduction
Multi-bracket (MB) orthodontic treatment is commonly associated with the development of white spot lesions (WSLs), with a reported occurrence mostly between 24% and 72.9% [1], although it can vary between 2% and 97% [2]. In comparison, the prevalence of WSLs in patients without orthodontic treatment is only between 11% and 24% [3]. Lateral incisors and canines are most often affected by WSLs [4], and the lesions usually occur around the fixed orthodontic appliances [5].
White spot lesions are demineralizations that occur due to bacterial plaque activity; the metabolism of carbohydrates results in acid release [6], which demineralizes the dental hard tissue [7] and appears as a patch on the tooth surface [8]. Bands, brackets and arches increase plaque retention [2], block access to the plaque-retaining areas and complicate adequate cleaning [9].
Therefore it is very important to implement and improve an effective remineralization mechanism [6], which counteracts this process, inhibits acid production and improves resistance to demineralization [7].
Due to their caries-preventive effect through the release of fluoride ions, glass ionomer cements are already widely used for the fixation of orthodontic bands [10] and could be an aid in prevention of white spot lesions.
In restorative materials, the fluoride released has an effective zone of 1 mm [11] and can inhibit demineralization up to 7 mm from the edge of the glass ionomer cement [12]. GICs are capable of maintaining a fluoride concentration of 0.03 ppm in oral saliva after one year [13]. Fluoride release rates in the range of 200-300 μg/cm 2 per month are considered sufficient to inhibit enamel demineralization [14].
Many studies only cover a time period of one day to 2 months [6,15,16]. However, there is little to no information on how fluoride release changes, especially in the first hours, after one month of the so-called "early burst" phase and the long-term release in the range of up to 6 months.
The purpose of this study was the evaluation of the continuous fluoride release of different GICs over a period of 6 months and to compare the differences between resin-modified GICs and conventional GICs.
Materials and method
In the present study, five different GICs were investigated. The first two groups were resin-modified GICs; groups 3 to 5 were conventional GICs. Group 6 was a compomer and, like the untreated group 7, served as a control group (Table 1).
Fluoride-free hydroxyapatite discs (HiMed Inc., Old Bethpage, USA) were used as carrier discs (5 mm x 2 mm) for the test materials. An analytical balance (Secura CPA124S, Sartorius AG, Goettingen, Germany) was used to control the exact amount of material applied. On average, 0.025 g of test material was used on each disc. If there was a notable difference in the weight of a test specimen, the process was repeated.
The test material was applied to the carrier discs according to manufacturer instructions (Table 1). Surface conditioning was only necessary for Fuji Ortho (FO) with Fuji Ortho Conditioner (GC Orthodontics Europe GmbH, Breckerfeld, Germany). No material was applied to the carrier discs of the seventh group.
Sample size consisted of ten test specimens per group. The selection of n = 10 per group was based on previously published studies [17-20]. The storage solution in which the carrier discs were placed contained a TISAB III solution (Total Ionic Strength Adjustment Buffer, Thomas Scientific, Swedesboro, USA) with bi-distilled water (Megro GmbH & Co. KG, Wesel, Germany) with a mixing ratio of 5%:95%.
For measurements, TISAB III was used to maintain a low pH value of 5-5.5 with constant ionic strength and to unmask the fluoride ions of the samples.
The fluoride measurement was performed with a fluoride electrode (Orion 9609BNWP, Thermo Fisher Scientific Inc., Chelmsford, USA). The measurement was repeated five times for each test specimen to account for fluctuations. Calibration of the electrode is essential to obtain reliable measured values. For this reason, four fluoride standard solutions were prepared with sodium fluoride and diluted TISAB III, containing 0.38, 3.8, 38 and 380 ppm of fluoride.
Thereby a two-point calibration could be carried out with solutions containing an exact ion concentration. If the measured values deviated by more than 5%, the calibration had to be restarted. The required slope of the electrode in mV mode had to be between 56 mV and 60 mV. A further solution served as a precision control to ensure consistency of the electrode on each measuring day. This precision control solution was obtained in preliminary tests, in which a pool of cryotubes containing a solution with known fluoride ion concentration was created. They were split into two separate pools P1 and P2, consisting of a group with a low and a group with a high amount of fluoride (P1: 0.6566 ppm, P2: 16.7 ppm).
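As an illustration of the calibration arithmetic described above, the following sketch (Python, not part of the original study) derives the electrode slope from two of the standards and converts a measured potential back to a fluoride concentration via the Nernstian response of the ion-selective electrode; the millivolt readings are hypothetical, and only the standard concentrations (0.38 and 3.8 ppm) are taken from the text.

```python
import math

def electrode_slope(c1_ppm, e1_mv, c2_ppm, e2_mv):
    """Slope (mV per decade) from a two-point calibration of a fluoride ISE."""
    return (e2_mv - e1_mv) / (math.log10(c2_ppm) - math.log10(c1_ppm))

def mv_to_ppm(e_mv, c_ref_ppm, e_ref_mv, slope_mv_per_decade):
    """Convert a measured potential to fluoride concentration (Nernstian response)."""
    return c_ref_ppm * 10 ** ((e_mv - e_ref_mv) / slope_mv_per_decade)

# Hypothetical readings for the 0.38 ppm and 3.8 ppm standards
slope = electrode_slope(0.38, 120.0, 3.8, 62.0)        # about -58 mV per decade
print(f"slope = {slope:.1f} mV/decade")                 # magnitude should lie in the 56-60 mV window
print(f"sample = {mv_to_ppm(75.0, 0.38, 120.0, slope):.2f} ppm")
```

A slope whose magnitude falls outside the 56-60 mV window would, following the procedure above, indicate that the calibration has to be repeated.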
The 70 test specimens for this study were each kept in an individually labeled cryotube with 1.5ml diluted TISAB III and were stored in an incubator (WTB Binder, Tuttlingen, Germany) at 37˚C for the entire duration of the experiment (MELAG Medizintechnik oHG, Berlin, Germany) except for measurement procedures. Fluoride ion measurements were carried out at the following times: After 5 minutes, 2 hours, 24 hours, 14 days, 4 weeks, 2 months, 3 months and 6 months. At each measurement time, the carrier discs were placed in new cryotubes containing 1.5ml diluted TISAB III and were replaced in the incubator at 37˚C.
After separation of the carrier discs, the old cryotubes were placed in a vortex mixer for 10 seconds and subsequently 500μl of the solution was removed with a pipette and inserted in an empty test tube (4ml) for the measurement of the fluoride ion concentration. This procedure was repeated five times per tube to detect any fluctuations.
The fluoride ion electrode was rinsed with bi-distilled water before and after every contact with the tested solution.
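To relate the measured concentrations to the areal release rates quoted in the introduction (μg/cm² per month), a rough conversion can be sketched as below. It assumes, purely for illustration, that only the coated face of the 5 mm disc releases fluoride into the 1.5 ml of diluted TISAB III; the discussion later notes that release is in fact three-dimensional, so the true areal value would be lower than this estimate.

```python
import math

SOLUTION_ML = 1.5           # diluted TISAB III per cryotube
DISC_DIAMETER_CM = 0.5      # 5 mm hydroxyapatite carrier disc
FACE_AREA_CM2 = math.pi * (DISC_DIAMETER_CM / 2) ** 2   # ~0.196 cm^2, coated face only (assumption)

def released_micrograms(conc_ppm, volume_ml=SOLUTION_ML):
    """ppm corresponds to ug/mL, so the mass released into the tube is conc * volume."""
    return conc_ppm * volume_ml

def areal_release(conc_ppm, area_cm2=FACE_AREA_CM2):
    """Fluoride release normalised to the assumed exposed GIC surface (ug/cm^2)."""
    return released_micrograms(conc_ppm) / area_cm2

# Example: the 24 h peak reported for FO (~62 ppm)
print(f"{released_micrograms(62.06):.1f} ug released into the tube")   # ~93 ug
print(f"{areal_release(62.06):.0f} ug/cm^2 within 24 h")                # ~470 ug/cm^2
```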
Descriptive analysis for the primary endpoint (cumulative fluoride release) was performed with boxplots and nonparametric location parameters (minimum, first quartile, median, third quartile, maximum) supplemented with arithmetic mean and standard deviation of the cumulative fluoride release for every material.
One factor analysis of variance (ANOVA) was used for the overall comparison of the cumulative fluoride release (from measurement times 5 minutes to 6 months) between the different materials (FOLC, MPQM, FO, M, KCE) as the primary statistical analysis of this study, with the overall level of significance set to 0.05. As there was no fluoride release for UBLOK and the control, these two materials were excluded from the ANOVA (see results section). Levene's test was used to assess the equality of variances (a p-value greater than 0.05 indicates that the equality of variances cannot be rejected). Quantile-quantile plots (QQ plots) were used to compare the probability distribution of the data with the normal distribution. Tukey's post hoc test was used for post hoc pairwise comparisons of the cumulative fluoride release between the different materials. Results are presented as differences in means with standard deviations and adjusted p-values. Additional descriptive analysis for every measuring time is presented with means and line plots. All statistical analysis of the data was done with R version 3.6.0 (R Core Team, 2019).
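The analysis itself was carried out in R; the following Python sketch only illustrates the same pipeline (Levene's test, one-way ANOVA, Tukey HSD) on simulated per-specimen values drawn to match the reported group means and standard deviations, since the raw data are not reproduced here.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated cumulative release (ppm, n = 10 per material) shaped like the reported summary statistics
rng = np.random.default_rng(0)
groups = {"FOLC": (221.7, 10.29), "FO": (191.5, 15.03), "MPQM": (173.0, 5.89),
          "M": (161.3, 7.84), "KCE": (154.6, 6.09)}
df = pd.DataFrame([(name, x) for name, (mu, sd) in groups.items()
                   for x in rng.normal(mu, sd, 10)],
                  columns=["material", "cumulative_ppm"])

samples = [g["cumulative_ppm"].values for _, g in df.groupby("material")]
print("Levene :", stats.levene(*samples))     # equality of variances
print("ANOVA  :", stats.f_oneway(*samples))   # overall comparison of the five materials
print(pairwise_tukeyhsd(df["cumulative_ppm"], df["material"], alpha=0.05))
```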
Results
After 5 minutes a fluoride ion release could already be observed for FOLC with 2.14 ± 0.47 ppm and KCE with 28.97 ± 1.76ppm.
A rise in fluoride ion release at the 2-hour measuring time was observed for all test materials except KCE and the control groups. After 24 hours, all materials with fluoride release (MPQM, FOLC, FO, M, KCE) showed their highest individual fluoride ion release. FO achieved the highest value at this time of measurement with a release of 62.06 ± 13.45 ppm, whereas the lowest value was seen for KCE with 31.66 ± 2.48 ppm. Between 24 hours and 14 days, a clinically relevant decrease in fluoride ion release was observed for every material with fluoride release. In the following period from 4 weeks to 6 months, a steady fluoride release plateau was established (Fig 2). There were only relatively small fluctuations within the intervals in the individual groups. The highest fluctuations were found for MPQM and the lowest for M.
Analysis of the control groups revealed a fluoride ion release for UBL after 5 minutes of 0.046 ± 0.02 ppm, while the untreated control group showed 0.059 ± 0.01 ppm. After 2 hours, the untreated control showed a small increase to 0.1 ± 0.25 ppm, with a subsequent decrease to a level of 0.01 ± 0.00 ppm after 24 hours. From 14 days to 6 months, changes in fluoride ion release for UBL and the untreated control group remained negligible. Compared to the other materials (MPQM, FOLC, FO, M), the control groups did not show any relevant fluoride release (Fig 1). FOLC and MPQM show a similar pattern of fluoride ion release over time, with a peak after 24 hours, a decline after 14 days and then a plateau. This additional analysis was carried out by descriptive means; no time-dependent statistical tests were performed for the comparison of FOLC and MPQM. However, the post hoc test in the primary analysis of this study showed a significant difference in cumulative fluoride release between these materials (p < 0.001) (Fig 1).
Primary analysis was done via one-factor (material) analysis of variance for the cumulative fluoride release. The following materials were included in this analysis: FOLC, MPQM, FO, M, KCE. The control material as well as UBLOK were excluded from the analysis of variance due to the missing fluoride release (see above). The normality assumption of the data was not rejected, as no QQ plot showed large or systematic deviations from the baseline. A significant difference in total cumulative fluoride ion release between at least two of the investigated materials FOLC, MPQM, FO, M and KCE could be found (p < 0.001).
Following pairwise comparisons with Tukey's post hoc test showed no significant difference (p > 0.05) in cumulative fluoride release between Meron and Meron Plus QM (Fig 1). All other materials had statistically different cumulative fluoride release in the pairwise comparison (p < 0.001).
The values of conventional GICs peak after 24 hours and drop after 14 days to lower values than those of resin-modified GICs. The pattern of fluoride ion release was comparable for all conventional GIC materials. As this additional analysis was conducted with descriptive statistics, no statistical tests were performed for the comparison of FO, M and KCE over time. However, the pairwise comparison (post hoc Tukey test in the primary analysis) of conventional and resin-modified glass ionomer cements of the same manufacturer showed a higher cumulative fluoride release from 5 minutes to 6 months in the resin-modified glass ionomer group. Cumulative release was significantly higher when comparing FOLC and FO (p < 0.001). MPQM showed a higher cumulative fluoride release than M, but the difference was not statistically significant (p > 0.05).
Cumulative fluoride ion release, over all measurement times, of all tested materials can be seen in Fig 2.
Discussion
Many studies have already investigated the fluoride release of both resin-modified and conventional glass ionomer cements but show differing results [6,21,22]. This may depend on several conditions of the experimental setup. Factors such as the size of the contact surface to the test medium [14], the ratio of powder to liquid during the mixing process and the preparation of the coated surface areas [23] should be taken into account, as well as the decisive influence on fluoride release of the length of light-curing time of the used GICs [24].
In artificial saliva, fluoride ion release has shown to be lower than in deionized water, because of its higher saturation caused by calcium and phosphate ions [25]. A varying pH value also influences the speed of fluoride release [26]. In solutions with lower pH values, fluoride is released more frequently [27]. To keep the pH value on a persistent level TISAB III with bi-distilled water was used as storage medium in this study. The usage as storage medium is in concordance with other previously published studies [15,28,29].
Many studies concerning the fluoride release of fluoride varnishes used hydroxyapatite discs as carrier objects [30][31][32]. Fluoride release is a three-dimensional process in which every surface area will release ions as long as there is contact with the surrounding solution. In in-vivo situations, GICs have a much smaller contact area with saliva due to the bracket and the adhesive gap. Therefore, the possibility to release fluoride ions is reduced in comparison to GIC-covered hydroxyapatite discs in in-vitro procedures.
The use of fluoride free hydroxyapatite discs as carrier for the GICs in this study minimizes the effective fluoride releasing surface and leads to an overall more realistic fluoride release measurement. Human or bovine teeth as a carrier object would bear the risk of possible higher fluoride release due to additional fluoride ion release from the tooth surfaces itself into the surrounding solution [33] and are therefore not suitable for this process.
Fluoride release is also highly dependent on the temperature at which the test samples are stored during the testing period. At 55°C the fluoride ion release is higher and at 4°C lower compared to 37°C [34][35][36]. Due to this effect, most studies do not use thermocycling and store the test specimens continuously at 37°C [16,37,38]. Thermocycling between 5 and 55°C has been advocated for dental materials over the last twenty years [39], and it has been shown that hot food intake can reach even higher temperatures [40]. For purposes of scientific standardization, this study investigated fluoride concentration dynamics in the surrounding solution at a constant 37°C rather than with 5-55°C thermocycling.
Fluoride release occurs through three mechanisms: washing off the surface, diffusion through the pores and cracks and mass diffusion from deeper layers [36]. A high release is desirable without changing the physical properties or provoking excessive cement degradation [14]. Apart from the influencing factors of the test setup, the filler composition of GICs, solubility of glass particles and reactivity of powder and liquid determine the amount of released fluoride [41,42]. Particle size also plays an important role. The smaller the filler particles, the higher is the amount of fluoride released. This is due to an increased surface area created by smaller particles [43].
An increased fluoride release of the resin-modified glass ionomer cements compared to conventional glass ionomer cements might be explained by higher porosity and larger pore size of resin-modified GICs [44][45][46].
Differences in size of particles or pores of the tested products could explain that a significant difference was found when comparing conventional and resin-modified GIC from the same manufacturer but no statistical significance could be shown in fluoride ion release performance when comparing different manufacturers to each other.
In principle, no general statement can be made by this study as to whether resin-modified or conventional glass ionomer cements release more fluoride. As the p-value of Levene's test was 0.3196 (greater than 0.05), the assumption of equality of variances could not be rejected.
Other studies could also not find a clear difference between these two groups [21,47].
In the present study, the comparison of resin-modified and conventional GICs from the same manufacturer showed a higher cumulative fluoride release for resin-modified GICs. A direct comparison is difficult to make due to the different compositions of the products and is not universally transferable. The potential fluoride release of resin-modified GICs is similar to that of conventional GICs but is affected by several variables, such as the formation of complex fluoride compounds, the type and quantity of resin used in the photochemical polymerization reaction and their interaction with polyalkenoic acids [48].
The highest amount of fluoride was released within the first 24 hours [21, 34,49], which is also determined in this study with the highest individual release being achieved at 24 hours across all groups. A reduction in the amount of fluoride ions released can be observed on the second day. This first phase, which can last up to a month, is called the "early or initial burst" phase [50,51] and could be confirmed by our findings.
After the initial burst, which may occur due to loosely bound fluoride [52], a difference in pattern can be seen between conventional and resin-modified glass ionomer cements (28 days, 2 months). It is stated that a stabilization phase of the released fluoride occurs after 7 to 11 days [53]. In this study, the stabilization phase of the released fluoride was found after 2 weeks. The plateau of fluoride release seems higher for resin-modified glass ionomer cements (Fig 1). A possible explanation has been described by Cabral et al., stating that the resin components slow down the acid-base reactions and thereby the ionic matrix is able to release more fluoride over time [54]. Interestingly, a minimal fluoride concentration of <0.01 ppm is sufficient to facilitate remineralization in in vitro studies; the minimum concentration in vivo could not be determined [55]. Therefore, the differences in plateaus might have clinical relevance.
As the discs were not coated simultaneously at a single time in this study, it can be assumed that there may be minimal differences in the mixing ratio of powder and liquid for the test materials. The measured values of fluoride ion concentration are in such a low range that even with the use of a highly sensitive fluoride electrode, as in this study, accuracy cannot be guaranteed. This could also be one of the reasons why a minimal amount of fluoride release was measured in the control group with hydroxyapatite discs. Contamination of the storage solution as a possible source of error could not be ruled out. Other studies have also shown minimal fluoride release from hydroxyapatite discs serving as a control group [54].
This study has an in-vitro character; therefore, further research should focus on clinical studies in order to validate our findings and their effect on the incidence of WSLs under clinical conditions.
Conclusion
All used products released fluoride ions over the entire test period. The highest release was achieved with Fuji Ortho LC (FOLC), the lowest with Ketac Cem Easymix (KCE). Comparing the resin-modified and conventional GICs from the same manufacturer (FOLC vs FO by GC Orthodontics and MPQM vs M by VOCO), it could be noted that resin-modified GICs showed a higher cumulative fluoride release than conventional GICs. However, comparing the performance of conventional GICs to resin-modified GICs of different manufacturers revealed no statistical significance. Therefore, a valid statement about a generally higher fluoride release capacity of resin-modified GICs cannot be supported. | 2021-02-28T06:16:47.365Z | 2021-02-26T00:00:00.000 | {
"year": 2021,
"sha1": "74f6f1f49dad2838afc88ebf020e584688f22950",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0247716&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ffd698cd657b883d673b42c7e56812e6d71aaef2",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
133387845 | pes2o/s2orc | v3-fos-license | Nonconventional Wastewater Treatment for the Degradation of Fuel Oxygenated (MTBE, ETBE, and TAME)
Catalytic wet air oxidation (CWAO) is a nonconventional wastewater treatment in which oxygen is pressurized inside a reactor in order to degrade organic compounds dissolved in water, using a solid catalyst in the presence of an activated O2 species, usually at temperatures of 125–250°C and pressures of 10–50 bar. CWAO can reduce the operating costs of conventional treatment through the use of an ideal catalyst that allows reaction conditions at temperatures and pressures as mild as possible, while providing high catalytic activity and long-term stability of the heterogeneous catalysts. Oxygenated fuels are gasoline additives in reformulated gasoline and oxyfuels. Initially, they provided an alternative solution to environmental problems caused by fossil fuel use, such as greenhouse gas emissions, while also serving as octane enhancers. The oxygenated fuels most frequently used are methyl tert-butyl ether (MTBE), ethyl tert-butyl ether (ETBE), and tert-amyl methyl ether (TAME). However, oxygenated fuel hydrocarbons have an environmental impact related to widespread contamination of groundwater and other natural waters. Our research group carried out a wide study in order to evaluate several catalysts (Ru, Au, Cu, and Ag supported on Al2O3, Al2O3-CeO2, and TiO2-CeO2) and to identify the best ones for the efficiency of the oxidation process.
Introduction 1.1 Oxygenated fuels
Oxygenated fuels are oxygen-rich compounds such as alcoholic and ether fuels that act as gasoline additives in reformulated gasoline and oxyfuels.Oxygenates can be blended into gasoline in two forms: alcohols (such as methanol or ethanol) or ethers.They have potential to provide an alternative solution of environmental problems caused by fossil fuel use.The oxygenated ether fuels used more frequently are methyl tert-butyl ether (MTBE), ethyl tert-butyl ether (ETBE), and tert-amyl methyl ether (TAME), although the first fuel oxygenate used in reformulation was MTBE.The use of MTBE as an octane enhancer in the United States began in 1979 [1,2].
MTBE is a compound (chemical formula C 5 H 12 O) that is synthetized by the chemical reaction of methanol and isobutylene, and it is almost exclusively used as a fuel additive in motor gasoline.Although ETBE and TAME have only secondary importance in production industry, recently they have become more prevalent, and they can be used instead of MTBE.In France, MTBE has been partially replaced by the ETBE since 1990 or as part of a "binary or ternary mixture" with MTBE [3][4][5].
ETBE (chemical formula C 6 H 14 O) is considered an attractive octane enhancer as it presents a lower reid vapor pressure (RVP) mixture and lower water solubility than MTBE and ethanol.The azeotropic mixture of ETBE with ethanol takes down the volatility of ethanol making it suitable as an additive for automatic gasoline [6,7].
Finally, TAME (chemical formula C 6 H 14 O) was considered as an oxygenated fuel until the 1990s despite its octane rating lower than other oxygenated fuels; besides it is very soluble with other ethers, and it is highly soluble in water (12 g/l) [8,9].
Importance of oxygenated fuels in petroleum industry
The leading produced fuel in the world is gasoline.Because of its widespread use and the fact that it is composed of that fraction of crude oil with lower boiling points, gasoline is the single largest source of volatile hydrocarbons to the environment.Motor gasoline comes in various blends with properties that affect engine performance.All motor gasolines are made of relatively volatile components of crude oil.Other fuels include distillate fuel oil (diesel fuel and heating oil), jet fuel, residual fuel oil, kerosene, aviation gasoline, and petroleum coke.In the petroleum refining process, heat distillation is used first to separate different hydrocarbon components.The lighter products are liquefied petroleum gases and gasoline, whereas the heavier products include heavy gas oils.Liquefied petroleum gases include ethane, ethylene, propane, propylene, n-butane, butylenes, and isobutane.Internal combustion engines of high compression ratio require gasoline with octane ratings that are sufficiently high to ensure efficient combustion [2,10,11].
Gasolines need additives that increase their octane rating so they can decrease their self-knock capacity, increasing their resistance to compression, and finally improve the quality of gasoline.An economical way of achieving these properties has been the use of anti-knock additives, such as tetraethyl and tetramethyl lead at concentrations up to 0.84 g/l.With the phasing out of lead from gasoline because it was increasingly recognized that lead is toxic and non-biodegradable, oxygenated fuel become a better alternative for gasoline additive, instead of lead.Oxygenated fuels act as octane enhancers, bringing the additional benefit of making gasoline burn almost completely.Actually, using oxygenated fuels in internal combustion engine leads to a reduction of greenhouse gas (GHG) emissions, compared to gasoline because they burn cleaner than regular gasoline and produce lesser carbon monoxide (CO) and nitrogen oxides (NOx) and they reduce emissions of unburned hydrocarbons.Furthermore, using oxygenated fuels in an internal combustion engine (ICE) provides an alternative to conventional fuels that can solve many environmental problems [6,12,13].
In the United States, air quality regulations placed on automobile exhaust gases have forced dramatic changes in gasoline formulations.By 1990, the Clean Air Act Amendments (CAAA) required additives, such as MTBE at 15% and ethanol, to be blended in gasoline in some metropolitan areas, heavily polluted by carbon monoxide, and to reduce carbon monoxide and ozone concentrations [14].
Environmental impact of oxygenated fuels
Despite providing better conditions in terms of fuel quality, an environmental impact from oxygenated fuel hydrocarbons related to widespread contamination of groundwater and other natural waters exists.The distribution and storage of crude oil and refined products result in releases of significant amounts of hydrocarbons to the atmosphere, surface waters, soils, and groundwater.Groundwater contamination by crude oil, and other petroleum-based liquids, is a particularly widespread problem.In Mexico, the agency in charge of producing and distributing fuels derived from petroleum distillation, such as gasoline, diesel, fuel oil, diesel, and LP gas, is Petróleos Mexicanos (PEMEX).The retail distribution of gasoline and diesel is carried out by service stations (gas stations).One of the environmental risks that involves the handling of these stations is spills or leaks of fuels, which cause the contamination of the sites where the storage tanks are located [15].
Unfortunately, these oxygenates have high water solubility and high volatility, causing a high concentration of oxygenated fuel in the environment, air, and water.Another important problem happens when oxygenated fuel is accumulated in the groundwater due to it not absorbing appreciably to soil and undergoing only in slow biodegradation compared to the benzene, toluene, ethylbenzene, and the xylenes (BTEX) in gasoline.The relatively recalcitrant nature of oxygenated fuel to microbial attack makes them persistent, due to them being refractory to the biological treatments.Because of the chemical structure of these, oxygenated fuel hinders their natural biodegradation, which contains a combination of two biorecalcitrant organic functional groups: the ether bond and tertiary carbon atom.These are the reasons why water supplies close to the production sites of MTBE, ETBE, and TAME or near underground petroleum storage tanks and fuelling stations are often contaminated by large amounts of these compounds [16,17].
There have been extensive occurrences of groundwater contamination by MTBE in the United States because of its prevailing use.In a sampling study of 1208 domestic wells in the United States, MTBE was the most frequently detected fuel oxygenate and the eighth most commonly detected VOC.Perhaps the most publicized case of MTBE contamination of groundwater is the one involving public water supply wells in Santa Monica, California.In August 1995, the city of Santa Monica discovered MTBE in wells used for drinking water supply through routine analytical testing of well water [18,19].
MTBE has been detected in snow, storm water, surface water (streams, rivers, and reservoirs), groundwater, and drinking water, based on limited surveillance operations conducted in the United States.MTBE concentrations found in storm water ranged from 0.02 to 8.7 μg/l, with a median value of less than 1.0 μg/l.In streams, rivers, and reservoirs, the detection range was 0.2-30 μg/l, and the range of median values in several studies was from 0.24 to 7.75 μg/l [5,19].
In fact, the US Environmental Protection Agency (USEPA) included MTBE in its Contaminant Candidate List.MTBE in drinking water is carcinogenic for humans and animals.USEPA established a drinking water health advisory of 20-40 μg/l MTBE in December 1997, because it is hazardous to human health (US Environmental Protection Agency, 1997).
Although alternative ether oxygenates are detected less frequently than MTBE, these alternative oxygenates show future groundwater contaminations similar to MTBE if they are not under control.
The toxicokinetic data on MTBE in people come mainly from controlled studies of healthy adult volunteers and in a population exposed to oxygenated gasoline.MTBE quickly passes into circulation after inhalation exposure.In healthy volunteers exposed to inhalation, MTBE kinetics was linear up to concentrations of 268 mg/m 3 (75 ppm).It was measured in the blood and urine of people exposed to tertiary butyl alcohol, metabolic MTBE.The maximum blood concentrations of tertiary butyl alcohol were 17.2-1144 μg/m 3 and 7.8-925 μg/m 3 , respectively, in people exposed between 5.0 and 178.5 mg/m 3 (1.4-50ppm) of MTBE.Based on a singlebehavior model, rapid (36-90 min) and slow (19 h) components of MTBE half-life were identified (41).Following the introduction of two separate fuel programs in the United States, which require the use of gasoline oxygenation products, consumers in some areas have complained of acute health disorders, such as headaches, irritation of the eyes and nose, cough, nausea, dizziness, and disorientation.The acute experimental toxicity (CL 50 ) of MTBE in fish, amphibians, and crustaceans is greater than 100 mg/l.
WAO consists of an oxidation in aqueous medium at high temperatures using pure oxygen or high pressure air as an oxidant to maintain the liquid phase.The pressures used and reported in the literature range from 20 to 200 bar and temperature between 150 and 350°C, making this process highly expensive for industrial application.The use of catalysts allows reducing the temperature and pressure conditions for oxidation and even increasing the selectivity toward CO 2 .That is why we employ catalytic wet air oxidation, instead of WAO.CWAO of MTBE and oxygenated fuels of gasoline as ETBE and TAME is a nonconventional treatment for degradation of organic compounds in aqueous medium.Our research group developed a wide study in order to evaluate several catalysts and to know what the best are for the efficiency of oxidation process and the total mineralization of pollutants into CO 2 and H 2 O.
Synthesis, characterization, and catalytic activity of noble (Ru, Au, Ag) and based (Cu) metal nanoparticles supported applied in the CWAO of fuel oxygenated 2.1 Synthesis of Al 2 O 3 and Al 2 O 3 -CeO 2 by wet impregnation and precursor calcination
The synthesis methods occupied for the production of supported catalysts include different techniques or procedures based on a phenomenon of precipitation, chemical adsorption, hydrolysis-polymerization, etc.These methods can synthesize supported catalysts in a single step, or in two steps, that is, both the precursor salt of the support and the active phase are added in the reaction mixture in a single step; otherwise, in sequential or two steps, first the support is synthesized, usually an oxide, and then the active phase, usually a metal, is prepared by some other specific method, expecting all the metal to be added and adsorbed on the support, without metal loss and with a high metal dispersion [20,21].
These methods determine important properties such as homogeneous metal dispersion, high specific surface area, adequate acidity/basicity ratio, metal-support interaction, and generation of structural defects, for example, oxygen vacancies and reducibility; an improvement in the catalytic performance is concluded owing to the development of these properties in the synthesized catalysts [22,23].So in this, study we evaluated different synthesis methods for the catalysts tested in CWAO from oxygenated fuels.
The γ-alumina was obtained by the calcination of boehmite Catapal-B (AlO(OH)). In this process, an amount of boehmite is deposited in a fixed-bed quartz reactor through which a continuous air flow of 1 cm3/s is passed, and the calcination is then carried out at a temperature of 650°C for 4 h.
Wet impregnation method was used to prepare the Al 2 O 3 -CeO 2 support.Ceria is incorporated into the boehmite (AIO(OH)) with an aqueous solution of Ce(NO 3 ) 3 .6H 2 O (necessary amount of salt to obtain 1, 3, 5, 7.5, and 10% weight) in 100 ml of distilled water.The precursor solution of ceria is previously deposited in a ball flask, the boehmite is added to this solution and left to stir for 3 h in a rotary evaporator, and then the solution is dried with constant agitation at 60°C to evaporate the water excess.After impregnation, the obtained solid sample was dried at 120°C for about 16 h and calcined at a temperature of 650°C in air flow of 1 cm 3 /s for 4 h.The CeO 2 support was obtained commercially.
Synthesis of noble and base metal catalysts by wet impregnation
The solid catalysts reported in the literature that are used in the oxidation of water pollutants can be classified into four groups: supported metal oxides, unsupported metal oxides, supported metals, and mixtures of noble metals and metal oxides.The type of supported metal is composed mainly of noble and base metals.These are also very important to influence catalyst activity.Noble metals such as Ag, Au, Ru, Pd, Rh, and Pt are very active elements for oxidation reactions; they reveal high activities and excellence stability; however, their high cost and limited availability can decrease their applicability.Base catalysts such as Ni and Cu are more interesting systems, and a lot of research is being done to improve their stability because by having a lower cost, compared to noble metals, they are an economical option; they are also active, but less stable, and suffer from carbon deposit and metal leaching [24,25].
Cu catalysts supported on Al 2 O 3 were synthesized by wet impregnation in a single step.A calculated amount of copper nitrate to obtain a concentration by weight of 5, 10 and 15wt% in copper plus an adequate amount of boehmite Catapal-B were dissolved in 100 ml of water; then the solution was adjusted to a pH of 1, with the addition of a drop of HNO 3 , and stirred for 4 h, regulating the temperature from 70 to 90°C.After impregnation, the obtained solid sample was dried at 120°C for about 12 h and calcined at a temperature of 400°C in airflow of 1 cm 3 /s for 4 h.Cu (5wt%)/Al 2 O 3 , Cu (10wt%)/Al 2 O 3 , and Cu (15wt%)/Al 2 O 3 are the monometallic Cu catalysts supported in alumina, synthesized by wet impregnation method in a single step, which later we will name as Cu 5 AlIH, Cu 10 AlIH, and Cu 15 AlIH.
Copper catalysts supported on Al 2 O 3 were also synthesized by sol-gel in a single step.An aqueous solution of 10 ml of aluminum trisecbutoxide ([C 2 H 5 CH(CH 3 ) O] 3 Al), 97% aldrich, d = 0.96 g/mol with 4 g of urea and copper nitrate adequate amount in grams for the percentages of 5, 10, and 15wt% in 1-butanol was progressively added, between 70 and 90°C to a mixture of water and butanol, under constant stirring.After 24 h reflux at 70°C, the resulting pseudo-gel was dried in in a rotating evaporator at 120°C for 12 h and then calcined at 400°C for 4 h.It is worth mentioning that a catalyst was prepared with a pyrrolidine additive instead of urea, exclusively with the same quantities of reagents as the 15wt% in Cu.The synthesized monometallic catalysts will be named as Cu 5 AlSG, Cu 10 AlSG, Cu 15 AlSG, and Cu 15 AlSGp.
Finally, the monometallic Cu/Al 2 O 3 catalysts were synthesized by wet impregnation with urea and with a concentration by weight of 5, 10, and 15wt% of the metal.A calculated amount of boehmite Catapal-B was dissolved in 300 ml of deionized water; then the solution was adjusted to a pH of 3 with the addition of 1 ml of HNO 3 and stirred for 2 h.After that, 200 ml of a solution of cupric nitrate [Cu(NO 3 ) 2 ½H 2 O] and urea is added dropwise to the solution of boehmite Catapal-B, regulating the temperature from 70 to 90°C.After impregnation, the obtained solid sample was washed three times with hot water and dried at 120°C, and finally it was calcined at a temperature of 400°C for 4 h.It should be noted that a catalyst was prepared with a pyrrolidine additive instead of urea, exclusively with the same amounts of reagents as the 15wt% in Cu.The synthesized monometallic catalysts will be named as Cu 5 AlIHU, Cu 10 AlIHU, Cu 15 AlIHU, and Cu 15 AlIHp.
Ru-supported catalysts were prepared by the wet impregnation method on the Al2O3 and Al2O3-CeO2 supports, adding the appropriate amounts of an aqueous solution containing RuCl3·xH2O to obtain a nominal concentration of 2 wt% of Ru, together with 100 ml of 0.1 M hydrochloric acid. First, the Al2O3 or Al2O3-CeO2 (1.0, 3.0, 5.0, 7.5, and 10 wt% of Ce) support was wetted with distilled water in a beaker in order to obtain high dispersion and to maximize the mass transfer of the added metal salt (RuCl3·xH2O) onto the surface and pores of the catalyst. The resulting solution is stirred for 1 h and then heated at 60°C. The samples were dried at 120°C for 24 h and then calcined under air flow (60 ml/min) at 650°C for 4 h, with a heating rate of 2°C/min. Finally, the catalysts were reduced under H2 (60 ml/min) at 400°C for 5 h, with a heating rate of 2°C/min. The synthesized monometallic catalysts will be named RuAlIH, RuAlCe1IH, RuAlCe3IH, RuAlCe5IH, RuAlCe7.5IH, and RuAlCe10IH.
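A back-of-the-envelope helper for the impregnation recipes above is sketched below (Python, not part of the original work). It estimates the mass of precursor salt needed for a nominal metal (or oxide) loading, under the simplifying assumptions that the anions and water are lost on calcination and that the support mass does not change when boehmite is converted to alumina; the numbers are therefore indicative only, and the RuCl3 hydration water (the unspecified x) is ignored.

```python
# Molar masses (g/mol); RuCl3 is taken as anhydrous because the hydration number x is unspecified.
M_RU, M_RUCL3 = 101.07, 207.43
M_CEO2, M_CE_NITRATE_HEXA = 172.11, 434.23   # CeO2 and Ce(NO3)3.6H2O

def salt_mass_for_metal_loading(support_g, target_wt_frac, m_metal, m_salt):
    """Grams of precursor salt so the metal is target_wt_frac of (support + metal).

    Assumes only the metal remains on the support after calcination/reduction.
    """
    metal_g = target_wt_frac * support_g / (1.0 - target_wt_frac)
    return metal_g * (m_salt / m_metal)

def salt_mass_for_oxide_loading(support_g, target_wt_frac, m_oxide, m_salt):
    """Same idea when the loading is quoted as the oxide (e.g. CeO2 on Al2O3)."""
    oxide_g = target_wt_frac * support_g / (1.0 - target_wt_frac)
    return oxide_g * (m_salt / m_oxide)

print(f"{salt_mass_for_metal_loading(5.0, 0.02, M_RU, M_RUCL3):.3f} g RuCl3 for 2 wt% Ru on 5 g of support")
print(f"{salt_mass_for_oxide_loading(10.0, 0.05, M_CEO2, M_CE_NITRATE_HEXA):.2f} g Ce(NO3)3.6H2O for 5 wt% CeO2 on 10 g of Al2O3")
```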
Synthesis of noble and base metal catalysts by deposition-precipitation
Deposition of gold into the modified supports was carried out by the method of deposition-precipitation using urea according to the procedure described below.Support powder (Al 2 O 3 , CeO 2 , Al 2 O 3 -CeO 2 (1wt%), Al 2 O 3 -CeO 2 (5wt%), Al 2 O 3 -CeO 2 (10wt%) was first dispersed in distilled water.The temperature of the suspension was kept constant at 80°C and agitated with a magnetic stirrer.Secondly, the requisite quantity of chloroauric acid (HAuCl 4 ) solution was added to the suspension, and the temperature was let to stabilize.Thirdly, 2.33 g of urea was added into the reactor vessel, and the suspension was stirred continuously for 16 h.The deposition was followed by centrifugation of the catalyst suspension in 50 ml tubes.The centrifugation was conducted three times.Separated water was decanted away, and the tube was refilled with distilled water after the first and the second centrifugations.Posterior the following separation and washing, the solid was collected and moved to a rotary evaporator and dried at 60°C in a water bath under vacuum.Final drying was done in an oven at 120°C overnight.All catalysts were calcined in air flow by heating them from room temperature up to 300°C for 4 h.The synthesized monometallic catalysts will be named as AuAlDPU, AuCeDPU, AuAlCe 1 DPU, AuAlCe 5 DPU, and AuAlCe 10 DPU.
The supported Ag nanoparticles were synthesized by DP with NaOH.The procedure was the same as the described for the gold synthesis by DP with urea, only that, instead of urea, NaOH was occupied, regulating solution's pH to 9. The synthesized monometallic catalysts will be named as AgCeDPNa, AgAlDPNa, AgAlCe 1 DPNa, AgAlCe 3 DPNa, AgAlCe 5 DPNa, AgAlCe 7.5 DPNa, and AgAlCe 10 DPNa.All the catalysts prepared are mentioned in Table 1.
Characterization of noble (Ru, Au, Ag) and base (Cu) metal nanoparticles supported
Figure 1 shows the adsorption isotherms of the synthesized materials of RuAlIH and RuAlCe 1 IH.It was observed that both isotherms are of type IV, which were associated with capillary condensation in mesoporous catalysts, where the hysteresis loops indicated that the pores are well distributed.
For the catalysts of RuAlIH and RuAlCe 1 IH (the other TPR analyzes the rest of the catalysts not shown), Figure 2 which displayed a main peak of 36-52°C was observed which indicates that the reduction is carried out in that first peak, and it was attributed to the oxidation change of Ru from +2 to 0 (RuO) since it was the species that was reduced first.The second signal observed at 135-142°C was attributed to ruthenium oxide (RuO 2 ), with an oxidation state of +4 which passes from +4 to +2 and subsequently to 0. On the other hand, the two peaks clearly observed in indicated that the ions of Ru existed in two different states to be reduced with hydrogen, meaning that at the end of the reduction, only the states +2 and 0 remain.Figure 3 corresponds to the diffraction patterns of the catalysts containing Au.It showed only signals corresponding to Al 2 O 3 and CeO 2 , and only a decrease of the alumina signal was observed when the content of Al 2 O 3 -CeO 2 increases by 10%.The corresponding Au signals were not shown in this diffractogram due to the weight % in which the catalysts were prepared, and in XRD only the metal was observed at concentrations higher than 2 and sometimes 3%.
In Figure 4, the H 2 -TPR profiles of the Au-supported catalysts revealed that the first reduction peaks (around 50°C) appearing for all Au-supported catalysts corresponded to the highly dispersed Au peaks on the catalyst surface.This signal increased to values higher than 50°C in the case of the Au catalyst deposited in Ce which indicates a difference in the size of the particles (observed by TEM).The second peak (around 100°C) was attributed to a second oxidation state of Au that interacts with Ce.This signal increased with the Ce content.This can be supported since in the AuAlDPU catalyst, this signal did not appear; however, it appeared in the AuCeDPU catalyst.
Figure 5 shows the XRD for the copper catalysts prepared by wet impregnation method, in which γ-Al 2 O 3 phase was seen as well as the intense signals that indicated the presence of CuO, and the boehmite, indicating that the metal was correctly dispersed in the three synthesized catalysts.
Reaction conditions
The activity tests of the catalysts synthesized in this study were carried out in a 300 ml Parr batch reactor under the conditions of 100°C, 10 bar, and 1000 ppm of fuel oxygenate. In the standard procedure for a CWAO experiment, 250 ml of fuel oxygenate solution was poured and 0.25 g of catalyst was placed in the 300 ml reactor. When the selected temperature was reached, stirring was started at a maximum speed of 1000 rpm. This time was taken as the zero reaction time, and the reaction duration was 60 min. These conditions were the same for all synthesized materials. The liquid samples were periodically removed from the reactor, filtered to remove any catalyst particles, and finally analyzed by gas chromatography and total organic carbon (TOC).
With the following equations, the conversion values for total organic carbon and FO were determined at different times with intervals of 30 min up to 180 min of reaction: X_FO = (C_0 - C_60)/C_0 × 100 and X_TOC = (TOC_0 - TOC_60)/TOC_0 × 100, where TOC_0 is TOC at t = 0 (ppm), C_0 is the FO concentration at t = 0 (ppm), C_60 is the FO concentration at t = 1 h of reaction (ppm), and TOC_60 is TOC at t = 1 h of reaction (ppm).
The initial rate (r_i) was calculated from the FO conversion as a function of time, using the following equation: r_i = (dX/dt)_0 × [contaminant]_i / m_cat, where (dX/dt)_0 is the initial slope of the conversion curve, [contaminant]_i is the initial FO concentration, and m_cat is the catalyst mass (g_cat/l).
So the selectivity was calculated according to the following equation:
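The selectivity equation did not survive extraction here, so the sketch below (Python, illustrative only) assumes the usual definition, the ratio of TOC conversion to oxygenate conversion; the conversion and initial-rate expressions follow the definitions given above, and the 1 g/L catalyst concentration corresponds to the 0.25 g of catalyst in 250 ml used in the experiments (the initial-slope value in the example is hypothetical).

```python
def conversion(c0, ct):
    """Fractional conversion (of FO or of TOC) between t = 0 and time t."""
    return (c0 - ct) / c0

def initial_rate(initial_slope_per_min, c0_ppm, cat_g_per_l):
    """Initial rate from the initial slope of the conversion-vs-time curve,
    normalised by the catalyst concentration (ppm / (min * g_cat/L))."""
    return initial_slope_per_min * c0_ppm / cat_g_per_l

def selectivity_to_co2(x_toc, x_fo):
    """Assumed definition: share of the converted oxygenate fully mineralised to CO2."""
    return x_toc / x_fo

# Example with the AuAlCe5DPU / MTBE figures quoted in the text
x_mtbe = conversion(1000.0, 1000.0 * (1 - 0.73))    # 73 % MTBE conversion
x_toc = 0.72                                        # 72 % TOC degradation
print(f"X_MTBE = {x_mtbe:.2f}, S_CO2 = {selectivity_to_co2(x_toc, x_mtbe):.2f}")
print(f"r_i = {initial_rate(0.02, 1000.0, 1.0):.1f} ppm/(min*g_cat/L)")  # hypothetical slope of 0.02/min
```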
Degradation of MTBE by catalytic wet air oxidation over noble and base metals
This research studied the degradation of the fuel oxygenated MTBE through CWAO occupying Ru, Au, and Ag as the catalysts, which will be responsible for mineralizing the pollutant.
Figure 6 shows the results of the catalytic activity for CWAO of MTBE with the sets of Ru, Au, and Ag catalysts. The best activity for the set of ruthenium-supported catalysts was obtained with the one named RuAlCe1IH, since it presented a 68% conversion of MTBE and a 63% degradation of TOC; the most favorable results for this particular catalyst were attributed to a particle size of 9 nm measured by TEM and to the contribution of ceria, due to its oxygen storage capacity. Ceria is known for its capacity to exchange oxygen through its oxygen vacancies, which promotes the increase of selectivity in all cases for the Ru catalysts synthesized from a chlorinated salt, by the formation of a species of the type Ce4+ - O2- - M+, contributing to improve the reducibility of ruthenium.
With respect to the Au-supported catalysts, AuAlCe5DPU was the one that stood out for its catalytic performance in comparison to its counterparts. AuAlCe5DPU reached a maximum of 73% MTBE conversion and a TOC degradation of 72%, indicating that this catalyst was efficient both in achieving a good conversion and in transforming the pollutant into CO2. This behavior was explained by it presenting the best distribution of particles on the surface of the catalyst, according to the TEM analysis, and confirmed by TPR. The largest amount of active particles for this catalyst was below 2 nm. It was observed that the other catalysts had a similar activity, attributable to particle size distributions ranging between 2 and 10 nm. According to the performed analysis, well-dispersed Au nanoparticles and the oxidation state of Au play an important role in this type of oxidation-reduction reaction. An excess of CeO2 does not allow a good selectivity toward CO2, since it interferes with the exchange of O2 at the time of oxidation, giving an excess of oxygen that causes the metal particle to change its oxidation state on the surface of the catalyst; this confirms the theory of Imamura et al. [26] that a balance of metal particles in the oxidized and reduced states is needed to obtain satisfactory results in terms of activity and selectivity. This theory holds for molecules that are strongly adsorbed on the surface of the catalyst, such as acetic acid, and according to this study, the principle can also be applied to MTBE.
The results of the catalytic activity for CWAO of MTBE for the set of silver catalysts indicated that the best conversion corresponded to the AgCeDPNa catalyst with 66%, due to the presence of CeO2. In this case, we can only mention an effect of CeO2, because the particle sizes obtained by HRTEM and TPD-CO revealed different distributions but did not significantly impact the catalytic activity. This effect has been explained by several researchers as the formation of an M-O-Ce bridge, where M is the silver metal. The AgCeDPNa catalyst showed the best behavior toward mineralization to CO2 due to the redox properties of this support. However, it was observed that the catalyst with 5% of CeO2 had a TOC value very close to that of the AgCeDPNa catalyst. In this case, the relationship between the particle size, the activity, and the selectivity toward CO2 could not be distinguished because, as the HRTEM histograms showed, the range of sizes was very broad in all cases; nonetheless, the effect of CeO2 appeared in both the activity and the selectivity. Imamura proposed a theory stating that a balance of metal particles in oxidized and reduced states is needed to obtain satisfactory results in terms of activity and selectivity, which is observed here by H2-TPR in the case of the AgCeDPNa catalyst; the Ag particles remain oxidized even with the passage of H2, which makes them more selective toward CO2. It is not possible in this case to determine the proportion of Ag+/AgO particles, because this can only be done by XPS, a very expensive technique not available in this case; however, it can be concluded that, in terms of activity, the optimal catalyst is the one that contains only ceria.
Degradation of ETBE by catalytic wet air oxidation over noble and base metals
We also analyzed the degradation process of the ETBE molecule through CWAO using Cu synthesized by three different synthesis methods, with the main characteristic of being carried out in a single step.The aim of the application of these methods is to avoid the leaching of the metal, which has occurred in other previous experiments by different investigators, after having passed a certain time of the reaction.
The analysis results of the ETBE-treated solutions by CWAO are presented in Table 2; for each of the three synthesis methods of the Cu catalysts, Cu 10 AlSG and Cu 10 AlIHU catalysts were more active, obtaining 88 and 89% ETBE conversion, respectively, after 1 h of oxidation.But the highest values for TOC degradation were obtained with the catalysts prepared by sol-gel method, as Cu 5 AlSG reached 84% and Cu 10 AlSG 89%.This last result confirmed that Cu catalysts synthesized by sol-gel are more effective catalysts for the mineralization process, which allows to degrade the organic matter, coming from the contaminant present, by almost 90% until obtaining CO 2 in the treated solutions.In addition, we can affirm that the optimum percentage of Cu was 10%, as well as the commercial catalyst used for this type of reactions and reported in the literature.
Another finding of this study with copper nanoparticles came from measuring the copper concentration by atomic absorption in the ETBE-treated solutions: no copper was detected for the catalysts prepared by the sol-gel method, in contrast to the catalysts prepared by wet impregnation and wet impregnation with urea. Therefore, the copper can be encapsulated in the alumina by the sol-gel method, thus avoiding contamination by the metal and leaching, and in turn improving the catalytic performance of the metal.
Degradation of TAME by catalytic wet air oxidation over noble and base metals
Another series of experiments was conducted under the same conditions described below over Cu synthetized in three different methods and Au-supported catalyst but in this experiments with TAME as a target molecule by CWAO.
Table 3 presents the analysis results of the treated solutions of TAME by CWAO; for each of the three synthesis methods of the Cu catalysts, the catalysts of Cu 15 AlSG and Cu 10 AlIHU were more active, obtaining 78% of TAME conversion, for both catalysts after 1 h of reaction.But the highest values for TOC degradation were obtained with the catalysts prepared by sol-gel method; Cu 15 AlSG reached 78% and Cu 10 AlIHU 75%.These results sustained that the Cu catalysts synthesized by sol-gel were more effective catalysts for TAME mineralization process, which allows them to degrade the organic matter, coming from the existing contaminant, by almost 80% in the treated solutions.
The results of the catalytic activity for TAME CWAO in Au-supported catalysts are shown in Figure 7.It was observed that the best activity happened with the catalyst at AuAlCe 10 DPU with 80% conversion of TAME, although all the remaining catalysts showed good activity except for the catalyst with AuAlCe 1 DPU; this could be explained by the fact that this molecule may not be very sensitive to particle size due to its structure.
Figure 7 also shows the abatement of TOC of the supported Au catalysts; as can be seen the best carbon transformation toward CO 2 was obtained for the AuAlCe 3 DPU catalyst, with a 77% conversion, and for the AuCeDPU catalyst with 80%, although all the remaining catalysts showed a remarkable performance, without exceeding these, except for AuAlCe 1 DPU.
Table 4 shows the selectivity to CO 2 , and we can say that the supported Au catalysts containing Ce 5 and 10% were the least selective to CO 2 .This is because there is a poisoning by CeO 2 that affects the selectivity when it is in excess due to the interaction of M-Ce-O; this case shows that the optimal percentage of CeO 2 for the TAME is 3%.It is important to note that the AuAlCe 3 DPU catalyst is equally active and selective to the AuCeDPU catalyst, so the alumina-ceria support with low concentrations of CeO 2 is presented as an alternative for the wet oxidation process of TAME.
Conclusions
This study concludes that the catalytic activity for MTBE oxidation of the Ru, Au, and Ag catalysts, supported on Al2O3, CeO2, and Al2O3-CeO2 and synthesized by wet impregnation methods and DP with NaOH and urea in two steps, is classified as follows: AuAlCe5DPU > RuAlCe1IH > AgCeDPNa. In addition, the catalytic activity for the oxidation of the target molecule ETBE on Cu catalysts supported on Al2O3, synthesized by three different methods in a single step, is classified as follows: Cu10AlSG > Cu10AlIH > Cu15AlIHU. The catalytic activity for TAME oxidation using copper catalysts supported on Al2O3, synthesized by three different methods in a single step, and Au supported on
Figure 5. X-ray diffraction patterns for the Cu-supported catalysts synthesized by wet impregnation in a single step.
Figure 7. TAME conversion and TOC degradation % at 100°C and 10 bar over Au-supported catalysts.
Table 2. ETBE conversion, TOC and SCO2 at 100°C and 10 bar of pressure during 1 h of reaction with [ETBE]0 = 1000 ppm.
Table 3. TAME conversion, TOC degradation, and SCO2 at 100°C and 10 bar of pressure during 1 h of reaction with [TAME]0 = 1000 ppm. | 2019-04-26T13:58:42.596Z | 2019-02-08T00:00:00.000 | {
"year": 2019,
"sha1": "e8b6a34aa17bcc1e60e3b65861ea49e5a377fc50",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/65548",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7104e7a61189ac78547dc28883777deb81428b58",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
32567397 | pes2o/s2orc | v3-fos-license | Detection of Aeromonas hydrophila Using Fiber Optic Microchannel Sensor
This research focuses on the detection of Aeromonas hydrophila using a fiber optic microchannel biosensor. The microchannel was fabricated by a photolithography method. Fiber optic was chosen as the signal-transmitting medium, and the light absorption characteristics of different microorganisms were investigated for possible detection. Experimental results showed that Aeromonas hydrophila can be detected in the region of the UV-Vis spectrum between 352 nm and 354 nm, which was comparable to the measurement provided by a UV spectrophotometer and also to the theoretical calculation by the Beer-Lambert absorption law. The entire detection can be done in less than 10 minutes using a total volume of only 3 μL. This result promises good potential for this fiber optic microchannel sensor as a reliable, portable, and disposable sensor.
Introduction
Aeromonas hydrophila is a known free-living pathogen that not only causes infections in humans and animals but also affects the economics of the fisheries sector. It is a Gram-negative, aerobic and facultatively anaerobic, oxidase-positive, motile bacterium mostly found in aquatic environments, especially fresh streams and ponds. This bacterium contributes to infectious conditions such as dermal ulceration, tail rot, inflammation, exophthalmia, scale protrusion, and others in fish species [1], especially Cyprinus carpio and Carassius auratus. This has caused high mortality and economic losses in fish farming [2]. These bacteria can also cause disease in humans through freshwater fish hemorrhagic and zoonotic diseases as well as food-borne infections. The microorganism may require highly appropriate antimicrobial treatment to control potentially serious consequences [3] due to its resistance towards common antimicrobials (ampicillin, cephalosporin, and cloxacillin).
Pathogen early detecting tool would be extremely useful to recognize A. hydrophila at an early stage to prevent diseases and infections to animals and human and also minimize the economic impact it brings to fish farming industry.Detection of pathogen demands a highly efficient, sensitive, and rapid tool due to its pathogenicity and fast infection rate.In the past years, several methods have been developed for detection of A. hydrophila such as identification based on molecular detection (e.g., polymerase chain reaction [PCR] method) and culture based detection method, which can be laborious and time-consuming [4][5][6].Numerous probes like DNA microarray [7], immunological tool (based on monoclonal antibody [Mab]) [8], and electronic nose (based on changes of volatile patterns produced by A. hydrophila) [9] were also constructed in recent years.Even though these unique systems were relatively sensitive and accurate, a need for a direct and rapid detection tool with lower detection limit is still essential to compete with increasing rate of infections.
The advancement in molecular biology, microelectronics, and optical and computer technologies has led to the development of miniaturized biorecognition system that can deliver a rapid result with simple methods and observations [10].This miniaturized device can be used as an in-line direct detection and also has in vivo measurement applications [11].Emerging method using optical fiber arrays with multiwavelength ultraviolet to visible (UV-vis) spectra provides a powerful and low cost substrate for creating high-density sensing systems to address a variety of biological problems [12][13][14][15].An optical fiber in sensor technology, especially in detection of microparticles, delivers a very high sensitivity either with applications of light-emitting diodes (LED) or with laser diode (LD) as the excitation [16].
Microfluidic system is a system that processes or manipulates small (10 −9 to 10 −18 litres) amounts of fluids, using channels dimensions of tens to hundreds of micrometres [17].The microfluidic system offers so many advantages: the ability to use very small quantities of samples and reagents and high resolution and sensitivity; low cost; less times for analysis; and small footprints for the analytical device [18].For certain applications, only small amounts of sample analyte might be usable.The analyte has to be exploited as much as possible.Several attempts have been made to fabricate focusing system on different materials.Numerous microlens materials and corresponding technologies have been demonstrated, like polymer [19][20][21][22], silica [23,24], diamond [25], and sol-gel [26,27].The main challenges faced by many experiments are the fabrication involving high-priced charges and complex processes and equipment [28].
A fiber optic biosensor can be an efficient method for reliable detection of pathogens in the effort to reduce their fatal effects. Several studies have successfully employed biosensors to detect E. coli [29][30][31][32], including the development of a fluorescence-based fiber optic biosensor hybridized with DNA sequences to detect oligonucleotides as an indication of E. coli infection. For instance, Leung et al. [33] demonstrated the detection of Helicobacter pylori in the near-infrared region (785 nm) via an oligonucleotide cross-linked to a fluorescent fiber optic sensor. In this research, a direct analytical tool for the detection and identification of A. hydrophila using a UV-Vis-spectra-based fiber optic biosensor is proposed. Optimization work on this detecting tool is performed in order to produce sensitive and rapid detection without the need for a focusing system. The detection of the microorganism is determined based on its absorbance spectrum and region.
Preparation of Microorganism.
A. hydrophila was used to test the suitability and efficiency of the fiber optic biosensor.Strains of A. hydrophila were obtained from Science and Technology Research Institute for Defence, Malaysia Ministry of Defence, and Department of Fisheries, Ministry of Agriculture and Agro-Based Industry, Malaysia (Penang Branch).The strain was grown on Tryptic Soy Agar (TSA, Oxide) and incubated overnight.The sample was harvested and washed using sterilized deionized water and the growth curves were measured at 610 nm by using UV spectrophotometer.The measured growth curves of these microorganisms at particular wavelengths (Table 1) were used to select their physiological stages for subsequent optical measurement.
Design and Fabrication of the Fiber Optic Microchannel.
The microchannel was designed and fabricated by a photolithography method as shown in Figure 1. First, the mask for the chamber was designed using AutoCAD 2011 (Figure 2) and printed on a high-resolution transparency sheet. The width of the fiber grooves was made to fit the diameter of the fiber (125 μm). The sizes of the microchannel and the reservoir diameter are 0.1 cm × 5 cm and 0.18 cm, respectively. The detecting reservoir was designed to collect 3 μL of sample volume. Next, the glass substrate was cleaned using methanol, propanol, and isopropyl alcohol to remove impurities, contamination, and particulate matter from the surface. After rinsing, the substrate was immediately placed in the oven at 200 °C and held there for a fortnight. A small amount of photoresist Ordyl Alpha 940 was placed onto the substrate. The designed mask was aligned accurately on the photoresist-coated substrate and exposed under high-intensity UV light for about 20 seconds. The substrate was soft baked for about half an hour at 100 °C on a hotplate. After that, the underlying material was removed. The substrate was immersed in a developer solution for a few seconds and then washed with deionized water. Then, the substrate with the designed pattern was hard baked for one hour at 100 °C to solidify and harden the resist on the surface. Next, using a microscope, the fiber optic was placed and aligned into the fiber grooves made on the substrate. The fiber optic was glued permanently into the grooves with fast-cure epoxy in order to avoid any mechanical movement that would lead to misalignment. The fiber optic was used as a medium to transmit and receive the light signal for detection and identification of the particular microorganisms. Fiber optic has an excellent capability for delivering and receiving light signals, as the attenuation loss is low during signal transmission. Next, the inlet and outlet tubing were fixed onto the substrate. Finally, the microchannel was enclosed with a transparent lid/glass cover slide.
Absorbance Measurement by Fabricated Fiber Optic
Microchannel. As illustrated in Figure 3, the microchannel was connected to the spectroscopy system by a bare fiber adapter. The spectroscopy system comprised a light source with a wide spectral range (200 to 900 nm) and an integrated JAZ photodetector (JAZ, Ocean Optics, USA) in a single unit, which counts the photons from the light transmitted through the fiber optic. The light from the source, driven at a potential of 5 V, was transmitted to the microchannel through the fiber optic. It took 10 minutes to stabilize the spectrum before the absorbance was acquired and collected by the fiber optic. The absorbance of distilled water was used as the reference point for the microchannel reading. The sample was then injected into the microchannel through the inlet tubing using a syringe pump. The absorbance spectra (AU) of the sample as a function of time, over the continuous wavelength range from 200 nm to 900 nm, were recorded for different physiological stages: the lag phase at 3 × 10^7 cells mL^-1, the exponential phase at 5 × 10^7 cells mL^-1, and the stationary phase at 7 × 10^7 cells mL^-1. When microbes pass across the light signal, the target microbe is illuminated and excited at a different wavelength compared to other microbes. The most absorbing region was chosen as the optical detection region of the particular microorganism. Continuous spectra of the sample absorbance were collected and analyzed.
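As a rough illustration of how an absorbance spectrum is obtained from photon counts relative to the distilled-water reference described above, the Python sketch below computes A = -log10(I_sample / I_reference) over the scanned range; the wavelength grid and the count arrays are invented placeholders, not values from the JAZ instrument:

import numpy as np

# Assumed wavelength grid covering the 200-900 nm range scanned in the experiment.
wavelengths = np.linspace(200, 900, 701)

# Placeholder photon counts: reference (distilled water) and sample spectra.
counts_reference = np.full(wavelengths.size, 5.0e4)
counts_sample = counts_reference * (1.0 - 0.9 * np.exp(-((wavelengths - 353.0) / 15.0) ** 2))

# Absorbance relative to the reference: A = -log10(I_sample / I_reference).
absorbance = -np.log10(counts_sample / counts_reference)

# Report the peak, analogous to the 352-354 nm maxima discussed in the text.
peak = wavelengths[np.argmax(absorbance)]
print(f"Peak absorbance {absorbance.max():.2f} AU at {peak:.0f} nm")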
Validation Measurement.
A UV spectrophotometer (Genesys 10 UV, Thermo Electron Corporation) was used to validate the max/peak absorbance region obtained from the fiber optic microchannel in order to compare and justify the experimental results. The UV spectrophotometer measurement of the samples was performed by taking the absorbance values at different intervals of growth time. The wavelength selected as the set point for the UV spectrophotometer was based on the highest absorbance region (wavelength) of the sample during the microchannel measurements. The selected wavelengths based on the microchannel measurement for A. hydrophila were 352 nm, 353 nm, and 354 nm. The data collected with the UV spectrophotometer and in the experimental work were analyzed using two statistical measures: firstly, the coefficient of determination (R^2), which indicates how well the regression line approximates the real data points; then the standard deviation, calculated to evaluate the accuracy of the data with respect to the expected value.
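A minimal sketch of the two validation statistics named above (coefficient of determination and standard deviation), assuming small placeholder arrays of absorbance readings rather than the actual Table 2 data:

import numpy as np

# Placeholder absorbance readings (AU) at one wavelength over several growth times.
microchannel = np.array([0.42, 0.78, 1.10, 1.38, 1.54])
spectrophotometer = np.array([0.40, 0.75, 1.05, 1.41, 1.50])

# Coefficient of determination R^2, treating the microchannel values as the fit.
ss_res = np.sum((spectrophotometer - microchannel) ** 2)
ss_tot = np.sum((spectrophotometer - spectrophotometer.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# Sample standard deviations, used to judge the spread of each data set.
print(f"R^2 = {r_squared:.4f}")
print(f"std (microchannel) = {microchannel.std(ddof=1):.3f}")
print(f"std (spectrophotometer) = {spectrophotometer.std(ddof=1):.3f}")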
Further validation of the UV spectrophotometer measurement was done by relating the concentration and the absorbance of the corresponding samples using the Beer-Lambert absorption law [37] as follows: A = εlc, (1) where A is the absorbance, ε is the molar absorptivity, l is the path length of the sample, and c is the concentration of the sample. Equation (1) states the relationship between absorption and the concentration of the microorganisms. The absorptivity coefficient, ε, is a constant that measures the degree to which particles absorb light at a particular wavelength [38]. It is important to note that the linearity of the Beer-Lambert law is limited by chemical and instrumental factors such as deviations in absorptivity coefficients at high concentrations, scattering of light due to particulates in the sample, fluorescence of the sample, changes in refractive index at high analyte concentration, shifts in chemical equilibria as a function of concentration, stray light, and nonmonochromatic radiation.
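A worked numerical sketch of Eq. (1); the path length and concentration below are assumed round numbers, and the absorptivity value is simply the one reported later in this paper, used here for illustration only:

# Beer-Lambert law: A = epsilon * l * c
epsilon = 1.25e-8   # absorptivity, mL cm^-1 cell^-1 (value reported in this work)
l = 1.0             # path length in cm (assumed)
c = 5.0e7           # cell concentration in cells/mL (assumed, exponential phase)

A = epsilon * l * c
print(f"Predicted absorbance A = {A:.3f} AU")

# Inverting the law to estimate concentration from a measured absorbance.
A_measured = 1.54
c_estimated = A_measured / (epsilon * l)
print(f"Estimated concentration = {c_estimated:.2e} cells/mL")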
Transmission Measurement by Fiber Optic Microchannel.
Transmittance is the amount of light transmitted expressed as a fraction of the amount of light striking an object. The transmittance of light at a specific wavelength through the fiber optic can be explained by the Beer-Lambert law of transmittance, which describes the ratio of the intensity of the radiation coming out of the sample to the intensity of the incident radiation. The results obtained from the transmission measurement were used to assess the efficiency of the microchannel for conducting the overall experiments. After about 10 minutes of signal stabilization, the maximum transmittance of the microchannel was measured at approximately 85%-92% intensity (Figure 4). Hence, the efficiency of the microchannel for reading absorbance was 0.85 with an uncertainty of 0.15. This finding, while preliminary, showed that enough light was introduced from the light source into the optical fiber and collected by the detecting device. Possible alternatives to improve the efficiency of the microchannel include, among others, the application of higher-output-power light sources and the use of optical fibers with higher numerical aperture and larger core diameter. These solutions may lead to a much higher light-coupling coefficient from the light source into the optical fiber and from the optical fiber to the detector. However, the work and results were limited by the unavailability of equipment.
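The efficiency figure quoted above follows directly from the transmittance ratio; a short sketch with assumed intensity values:

import math

# Assumed incident and transmitted intensities (arbitrary units).
I_incident = 1.00e5
I_transmitted = 0.85e5

T = I_transmitted / I_incident        # transmittance, here ~0.85 (85%)
A_channel = -math.log10(T)            # baseline absorbance contributed by the channel

print(f"Transmittance T = {T:.2f}")
print(f"Channel baseline absorbance = {A_channel:.3f} AU")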
Absorbance Spectra of A. hydrophila for Various Physiological Phases.
In order to set the reference point for the absorbance spectra, distilled water was introduced into microchannel by using syringe pump.The system was then allowed to stabilize for 10 minutes to acquire the absorbance unit.The absorbance unit (AU) of UV-Vis spectrum of clear and contaminant-free (particular matter-free) distilled water was 0.1 (Figure 5).Distilled water scatter but does not absorb ultraviolet and visible region but absorbs IR radiation fairly well [39,40].Then, Aeromonas hydrophila sample was injected to the microchannel and the continuous absorbance spectra of samples were measured as function of wavelength for different physiological stages of microorganism.The absorbance as a function of wavelength was shown in Figure 6 which falls within the region of ultraviolet and visible range.This is good since this will help to reduce interference of absorbance from distilled water.The peak absorbance spectra for all the three physiological phases remained within region of 352 nm to 354 nm.Lag phase of A. hydrophila recorded the peak absorbance of 1.15 at the region of 354 nm and exponential phase gave 1.54 peak absorbance at the region of 352 nm.Subsequently, A. hydrophila at stationary phase absorbed most with 1.64 absorbance at the 353 nm region.The absorbance unit was increasing as the physiological phases of sample change.In short, the entire A. hydrophila can be detected at 352 nm to 354 nm although there would be difference in absorbance by using the designed fiber optic microchannel.
From the observation, sample absorbed most at stationary phase followed by exponential phase and then lag phase at approximately similar region of wavelength.Different optical properties of each microorganism can be used to identify the sample at different stages, shapes, sizes, and chemical compositions [41].In the lag phase, cells adapt to growth conditions and undergo changes in their chemical composition.The exponential phase is a period characterized by cell doubling where cells increase exponentially.As the stationary phase results from a situation in which growth rate and death rate are equal due to depletion of essential nutrients, thus the growth eventually stops before it decreases.Thus, measureable properties of cells such as cell population, DNA, chemical properties, energy, and photopigments will be changing according to cell growth rate.These differences were able to be detected with optical measurement and differentiated by absorbance unit for each phase.When the cell population increased, the absorbance unit also increased as in stationary and exponential phases but still can be found in approximately similar region.A slight difference in detection region of wavelength for each phase might be due to changes of cell population and energy level of absorption.Furthermore, microchannel results can be affected by external noise or microbending of fiber optic during experiment setup.
Validation of Experimental Findings.
In this section, the validation of the microchannel experimental results was done by comparing with UV spectrophotometer reading and also with theoretical calculation using Beer-Lambert Law of Absorption.
First validation was done with respect to UV spectrophotometer reading.The absorbance values of the microbial growth after 12 hours were measured at the selected wavelengths (chosen based on the max/peak absorbance region recorded by the microchannel system earlier).The measured graph using spectrophotometer was summarized in Figures 7 and 8. Based on the experimental observation there was significant positive correlation between UV spectrophotometer readings and experimental findings using fiber optic microchannel.The validation measurement for detection region of A. hydrophila was promising as UV spectrophotometer is able to produce favorable growth curve shape with maximum absorbance unit when compared with a typical growth curve shape.
A crude simulated growth curve (Figure 8) was generated based on a best-fit trend-line method, together with the measurement of the coefficient of determination (R^2) to indicate how well the data approximate the real UV spectrophotometer data points. Results showed that the data taken at a wavelength of 352 nm displayed the most acceptable reading of 1.54 AU and a coefficient of determination (R^2) of 0.9688. Hence it is concluded that the higher R^2 value at 352 nm means the UV spectrophotometer data fitted the data collected by the microchannel better than at the other wavelengths. Further analysis of the standard deviation revealed the accuracy of the data with respect to the expected value, as shown in Table 2. A low standard deviation was found at the wavelength of 352 nm compared to the other wavelength regions. Hence, based on the R^2 and standard deviation values, the absorbance readings are shown to be closer to the microchannel data at 352 nm, and it can be deduced that A. hydrophila can be detected most reliably in the region of 352 nm. A summary of the various sensor technologies that achieved high accuracy is shown in Table 3. However, none of these works tested Aeromonas hydrophila. The closest work to ours was done by Richards and Watson [42], who detected A. hydrophila at the wavelength of 364 nm, which is slightly different from our findings in this work.
Next, the experimental results were compared with the theoretical values calculated using the Beer-Lambert absorption law. The absorptivity coefficient, ε, is a constant that is intrinsic to the solute itself and is a measure of the degree to which particles in the solute absorb light at a particular wavelength. Initially, the absorption coefficients (ε) at different wavelengths were calculated. The choice of wavelength depends fundamentally on the values at which the spectra show the greatest difference and can be easily selected from the maxima of the curve. This calculation was performed by constructing calibration curves (Figure 9(a)) at different concentration levels (0-6 × 10^7 cells/mL). The absorption coefficients were calculated from the slope of the concentration of each fraction against the absorbance measured at different wavelengths. The value obtained for the absorption coefficient was 1.25 × 10^-8 mL cm^-1 cell^-1. With the known absorption coefficients for the different wavelengths, the absorbance can be calculated for each concentration. Figure 9(b) shows the absorbance measured experimentally with the fiber optic and also calculated by the Beer-Lambert law for the exponential phase, as a means of comparison. By calculation, the peak absorbance was found to be at 400 nm with an absorbance value of 1.2, while the fiber optic microchannel gave an absorbance of 1.54 at 352 nm.
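The calibration step described above amounts to a straight-line fit of absorbance against cell concentration, whose slope (per unit path length) gives the absorption coefficient; a sketch with invented calibration points standing in for Figure 9(a):

import numpy as np

# Hypothetical calibration data: concentration (cells/mL) vs. absorbance (AU).
concentration = np.array([0.0, 1e7, 2e7, 3e7, 4e7, 5e7, 6e7])
absorbance = np.array([0.00, 0.12, 0.26, 0.38, 0.49, 0.63, 0.75])

slope, intercept = np.polyfit(concentration, absorbance, 1)
path_length = 1.0                      # cm, assumed
epsilon = slope / path_length          # absorption coefficient, mL cm^-1 cell^-1

# Predict the absorbance expected at the exponential-phase concentration (5e7 cells/mL).
A_predicted = epsilon * path_length * 5.0e7
print(f"epsilon = {epsilon:.2e} mL cm^-1 cell^-1, predicted A at 5e7 cells/mL = {A_predicted:.2f}")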
These deviations from the Beer-Lambert Law can be classified into real deviations, chemical deviation, and instruments deviation.Since the analyte used was microorganism, thus the deviations are mostly from instruments and real deviations.Beer-Lambert Law is capable of describing absorption behavior of solutions containing relatively low amounts of solutes dissolved in it.At high analyte concentrations, it interacts with solvents and the analyte begins to behave differently due to interactions with the solvent (including hydrogen bonding interactions).This can cause different charge distribution on the neighbouring species in the solution.It can result in a shift in the absorption wavelength of the analyte.Furthermore, high analyte concentrations can also alter the refractive index of the solution that could affect the absorbance obtained [43].Deviations can also be attributed to instruments since polychromatic source of light was used in this work.The difference in the results shown in Figure 9(b) suggests different values of molar absorptivities for experimental and calculated ones that led to deviations in absorbance values obtained.Stray radiation can also be a potential source of deviations due to reflection and scattering by the surfaces of lenses, mirrors, gratings, and others.Table 4 illustrates further the comparison between numbers of cells corresponding to the absorbance measured at 352 nm for various systems.The number of cells based on the absorbance calculated by microchannel differed slightly for all phases (∼5%) when compared to the UV spectrophotometer readings.Again, the difference between the Beer-Lambert and microchannel values was larger (∼84%).The above results compliment the absorbance results as discussed in Figures 7 and 8.The ability to detect low number of sample at 10 7 is quite good as well.For instances, at lag phases (Table 4), the number of cells obtained using microchannel is very close to the reading in UV spectrophotometer indirectly suggesting that this microchannel fiber optic detection is a reliable detection system for up until 10 7 cells/mL (the tested range of limit in this work).Although this detection limit is incomparable to other detection methods like PCR (DNA based) and electronic nose (volatile compounds based) that gave a detection limit of 2.5 CFU/mL [44] and 10 3 CFU/mL [45], respectively, however, this method offers advantages of being simple, easy, less laborious, and cheaper compared to the above-mentioned methods.These preliminary findings suggest further investigation should be conducted for lower detection limit in order to test and explore its capability.
The difference in absorbance between experiments and UV-Vis spectrophotometer and theoretical values might be due to the influence of external noise during experimental work.The uncertainty or disturbance in the values of absorbance may be due to the presence of impurities in the sample injected into the channel or other contaminants.Samples of biological molecules should be pure in order to quantitatively use UV absorption spectroscopy [46,47].Any contaminating nucleic acids in a protein sample will increase the apparent absorbance, likewise for contaminating proteins in a nucleic acid sample [48].
The optical fibers on microchannel were designed to be alignment-free.However, there were several factors that might affect the light transmission in optical system.The travelling light through fiber optical could lose power over distance.This loss of power also known as attenuation is expressed in decibels (dB) or rate of loss per unit distance.Attenuation is due to absorption by the core and cladding which might be caused by the presence of impurities and disclosing of light from the cladding.At some point, the power level may become too weak for the receiver to distinguish between the optical signal and the background noise.Consequently, it can cause signal reduction that leads to inefficiency in the system.Apart from that, macrobending of fiber optic during experiments especially to connect to the JAZ spectroscopy could induce the signal power loss.Further losses can also occur at the splice locations, at fiber optic connectors due to poor cleave and also presence of contaminants on the connectors [49].As a result, all these factors including length of fiber optic used, macrobending that occurred during connection, and link loss mechanism caused the loss of signal power to achieve higher light transmission.
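Since the power loss just discussed is expressed in decibels, a one-line calculation suffices; the input and output powers below are assumptions for illustration only:

import math

P_in = 100.0    # optical power entering the link, in microwatts (assumed)
P_out = 72.0    # optical power reaching the detector, in microwatts (assumed)

loss_dB = 10.0 * math.log10(P_in / P_out)   # total link loss in dB
print(f"Link loss = {loss_dB:.2f} dB")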
Generally, in biological samples, the primary photoacceptors of UV light are nucleic acids and proteins, while visible light is absorbed by cytochromes [50]. Every sample has its own particular absorbance spectrum between the UV and visible regions because it depends on the protein and DNA content of the sample [48]. Since different samples vary in their energy level, DNA, and proteins, specific primary photoacceptors are needed for every sample to absorb most strongly at specific spectral regions of light [51]. Besides, the characteristics and structure of each microorganism also play an important role in identifying the detection region for each sample. For instance, A. hydrophila is a Gram-negative prokaryotic cell, a straight rod with rounded ends. A. hydrophila has a cell wall and a slime-layer structure in a matrix that embeds the cells. This layer or capsule, also called the glycocalyx, is a thin layer of tangled polysaccharide fibers. The cell wall and outer membrane of A. hydrophila are made of thicker capsules or layers, and the bacterium lacks a nucleus. Hence, the peak absorbance range of A. hydrophila falls in the UV spectral region, whereas a eukaryotic microorganism contains a nucleus, which determines the particular spectral region of the DNA peak absorption. In brief, the microorganism at any phase can be detected in a similar region of wavelength, with differences in absorbance.
Conclusion
The detection and identification of the microorganism can be done quite accurately using the absorbance spectral region. From the entire experimental work, A. hydrophila was detected at 350 nm to 354 nm. Besides, the total volume of sample needed for detection was approximately 6 μL, or 10^2 cells/mL, in less than 10 minutes. These results were comparable to theoretical values and also to spectrophotometer measurements. The microchannel has the potential to deliver efficient, precise, and sensitive results. In a nutshell, the microchannel has proven its functionality as a fiber optic biosensor that can be used for the rapid and accurate detection and identification of A. hydrophila, which would be of benefit in a variety of fields. Moreover, the microchannel can be reused for other samples, is easy to move, and is simple to fabricate. Further work needs to be done to integrate the recognition elements into a lab-on-chip for the specific analyte of interest.
Figure 3: Experimental setup for fiber optic microchannel with JAZ spectroscopy.
Figure 6: Absorbance spectra with error bar for A. hydrophila for their physiological stages by fiber optic microchannel.
Figure 8: Simulation of the A. hydrophila growth curve from microchannel data based on a best-fit trend-line method. The coefficient of determination (R^2) was evaluated to indicate how well the fitted curve approximates the real microchannel data points and the other regions obtained from the UV spectrophotometer.
Figure 9: (a) Calculated absorbance unit by Beer-Lambert Law for A. hydrophila based on cell concentration.(b) Measured and calculated absorbance unit for exponential phase of A. hydrophila.
Table 1: Selected sample phases based on the growth curve obtained by spectrophotometer for absorbance measurements.
Table 2: Comparison of absorbance units for the microchannel and UV spectrophotometer for A. hydrophila, with the standard deviation of each set of data values.
Table 3: Coefficient of determination, R^2, obtained for various samples in previous studies.
Table 4: Comparison between the numbers of cells corresponding to the absorbance measured at 352 nm for various systems. | 2018-04-03T02:11:30.344Z | 2017-01-04T00:00:00.000 | {
"year": 2017,
"sha1": "a1d24e17f7059969ec95b41a768d65bb8fc9ee1a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/js/2017/8365189.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a1d24e17f7059969ec95b41a768d65bb8fc9ee1a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Computer Science"
]
} |
56269285 | pes2o/s2orc | v3-fos-license | The textuality of O Macaco Brasileiro in the foundation of the brazilian journalistic discourse (1821-1822)
foundation and the operation of the journalistic discourse in Brazil and the meaning of nation, freedom and independence during the years 1821 and 1822. This research is theoretically based on the Discourse Analysis (Pecheux, 1969, 1975; Orlandi, 1996, 1999) producing interpretation moves that will make it possible to understand part of the functioning of an epoch, as well as of a social practice that produces founding principles. We realize that it was not the arrival of the Portuguese Court to Brazil that produced a Brazilian journalistic discourse, but the presence of a Brazilian press. It was from 1821, with the bill that abolished the previous censorship, that there was a displacement from the journalism determined by the Court to another discursivity. This happens in the textuality of O Macaco Brasileiro. As it installs a new discursivity, it materializes a new Brazilian journalist subject position which corresponds to the foundation of the Brazilian journalistic discourse.
Introduction 1
The year 1808 opened with a series of changes in the colony. The first one was the arrival of the Portuguese royal court in Brazil and the installation of the royal press. With the royal family and the power transferred from Lisbon to Rio de Janeiro, the first Brazilian periodicals started to circulate in the new metropolis: Correio Braziliense, written and printed in London and Gazeta do Rio de Janeiro, written and printed in Rio de Janeiro. Hence, the Brazilian press was launched.
According to Sodré (1999), the installation of the typography in Brazil happened by chance when one of the members of the crown court, Antônio de Araújo, future Count of Barca, put the equipment in the hold of the ship he used to escape from Portugal, and, as he arrived in Brazil, had it installed in his own house. In this way, the coming of the royal family brought to the Portuguese colony many cultural advances, such as the printing of books and serial publications, usually with novels in chapters. As there were few literate people in the colony, the launching of the periodicals contributed to the institutionalization of the Portuguese language, which was imposed through the writing as a way of domination.
The transference of the Portuguese royal family to Brazil dislocates around 15,000 Portuguese people to the new site of the Portuguese crown. This happening will change the relationship between the languages spoken in Rio de Janeiro. Besides that, king D. João VI of Portugal created the Brazilian press and founded the Biblioteca Nacional/National Library, institution which will be fundamental in the cultural and intellectual Brazilian lives so far. The result is an effect of oneness of the Portuguese language in Brazil. Portuguese is the language of the king, whose seat is in Rio de Janeiro, then the capital of the Portuguese Kingdom 2 .
Different from the first years of the royal family in Brazil, the Brazilian press had a period of great proliferation of newspapers in Rio de Janeiro as from the 1815, during the pre-independence period (1821). According to Lustosa (2000), many of them had short life, circulating only with a few issues.
They appeared once or twice a week and there were few copies. The difficulties in communication impaired the disclosure around the provinces. Many would be read only by the public in the cities where they were published. They were distributed only to the subscribers, who never outnumbered two hundred. Only much later the sunday sale, with newsboys shouting the name of the newspaper and the main headlines around the city, would be launched (LUSTOSA, 2000, p. 28).
According to the historiographical discourse, the manifestation of the opinions and political proposals of the editors, which generated many disagreements between them and their readers in those times of political transition, can be found in the daily newspapers of the first half of the nineteenth century. Pêcheux (1988) states that the discourse is produced by socio-historical affiliations of meanings, in a space that urges the interpretation movement as an effect of such affiliation networks, and that it always causes a displacement. Thus, […] the discourse determines a possibility of a destructing-restructuring of those networks and trajectories: every discourse is the potential sign of a movement in the socio-historical affiliations of identification (PÊCHEUX, 1988, p. 56).
The Portuguese colony went through many transformations in the period between 1821 and 1822. The press played an important role in such changes by disclosing the political ideas of the time.
2 Translator's Note: to keep the reading fluency, all the citations in this article, originally taken from Portuguese language texts, will henceforth be translated into English. Lustosa (2000, p. 25-26) says: Brazilian press was born compromised with a revolutionary process, in a moment when we, all of a sudden, let aside the thought that we were Portuguese and assumed our Brazilian nationality.
[...] The same journalists who, before December 1821, were celebrating the Lusitanian nation, preaching for conciliation, were, a few days after, doing their best in the defense of the separation between the Brazilian and Portuguese interests.
According to Antônio Cândido (1981), the coming of the royal family to Brazil established the beginning of a period of lights, expanding the interests for arts and literature, shaping the new aristocracy of the colony. Such changes motivated the intellectuals to conceive associations such as the Sociedades Literárias (Literary Societies) and the Academia dos Renascidos (Academy of the Reborn), which became part of the social life.
In that decisive moment, an 'intellectual life' in its proper sense was configured for the first time in Brazil. [...] The unusualness and the difficulty of instruction, the scarceness of books, the sudden relevance given to the intellectuals, gave them an unexpected distinction. [...] We must add to those factors the associative trend that linked the intellectuals, closing them in a system of solidarity and mutual recognition of the cultural-political societies, honoring them as an exception. The participation in the social life, preconized or favored by the educated guidelines, impeded divorce and segregation, giving them the power of intervening in the public life. It gave them a certain aspect of service, and from the public's part, it contributed to give them an aura of sympathy and prestige. Such aspect, mainly concerning the public speaker, the journalist and the jurist, has also concerned the writer, rather accepted in Brazil even when his/her work was not read (CÂNDIDO, 1981, p. 235-236 ).
The subjects are always interpellated by ideology, and through it the subjects produce their speech by means of structures of operation that produce evidences of meanings, which are affected by language within history, what makes us always already subjects, […] it is the interpretation gesture that links the subject to the language, the history, the meanings. [...] there is no subject without ideology. Ideology and unconsciousness are materially connected (ORLANDI, 2005, p. 45). In this way, the meaning of the Brazilian press could be displaced to the Brazilian journalism, observing that there was a newborn discursive process involving a new subject position in the 'press' that did not follow geographic criteria (produced in Brazil), but symbolic criteria (produced by Brazil). This happened mainly due to the Decree of March, 2 nd 1821, which offered the production conditions for the foundation of a discursivity in the newspapers, which corresponded to the foundation of the Brazilian journalistic discourse.
That decree was also a result of the insurrections of February, 26 th of that year, when Pedro, D. João VI's son, promised the rebels, among other rights, freedom of press. That promise was actualized on March, 9 th , with the promulgation of the bases for the Constituent, recognizing the freedom of thought as 'one of the most valuable rights of man'. Due to that decree, all citizens could manifest their opinions on any matter without previous censorship, as long as they responded for any ill-usage of that freedom. Before that declaration, all the writings have to go through the court's clearance and the censors in order to be examined and approved.
The decree abolishing the previous censorship and regulating the freedom of press, signed by D. João VI a day before the Portuguese court leaves Brazil, in April 1821, is a historical/political fact that may be understood as a discursive event in which a new discursivity is configured for the newborn press in the country. In Discourse Analysis, Pêcheux states that a discursive event is understood as "[…] a discontinuous and exterior historical element; it is as a meeting point of a present event and a memory" (PÊCHEUX, 1990b, p. 17).
We can follow one of the effects of that discursive event in the table at As a matter of fact, that decree just transferred the responsibility of the censorship, that is, the printers began to be held responsible for the articles published in their newspapers. In other words, although not previously, the censorship still occurred and began to exist not only to the authors, but to the typographers when the article was not signed. This event is understood as a discursive one, because it allowed a configuration of a 'be able to say' (poder dizer) (LAGAZZI, 1998): there is a legal responsibility for what is said a posteriori. This means that, although the articles could be censored afterwards, -the decree prohibited explicitly writings against religion, moral and good manners 4 , the Constitution, the emperor and the public tranquility -the newspapers 'could' say/publish anything. This dislocation of the censorship makes much difference and, from our point of view, it is one of the production conditions of a Brazilian discourse that starts to be materialized, also due to the proliferation of newspapers in that period.
Analysis
By using Persius's phrase 5 , '...Ah, si fas dicere! Sed fas' as epigraph, the newspaper O Macaco Brasileiro tried to demonstrate the political moment in Brazil through criticism. This phrase is taken from the dialogue between the Roman Persius and the Greek Romulus, in Persius's work Satĭra 6 . According to Donnini (1957), Persius's work approaches the moral decadence of the Roman Empire and the ethical questions in the lyrical, literary and dramatic works.
The satire, according to literature researchers, as a literary genre, has elements of satura, a Latin lyrical genre, characterized by a mixture of several themes, by dialogues and by improvisation, which disappeared in the II century B.C. Satura is one of the most ancient ways of dramatic representation in Rome. With the time, the satire became a composition of various leitmotifs, in form of prose and verse. In poetry, it criticizes sarcastically and acidly the social habits, aiming at causing or preventing a political, social or moral change. The satire becomes satiric humor because it presents similarities with the reality. Silva (2009) states that 4 It is important to notice that this formulation 'against the moral and the good manners' is very frequent in legal texts without, however, being able to define what it is about, although all of them, apparently, share a common meaning. 5 Roman stoic poet, born in Volterra, Etruria (34-62 A.D.). He studied stoicism with Cornutus since he was 16 and had connections with Seneca and Lucanus. Stoicism exhorted the cosmopolitism, considering that the man should be a citizen of the world. The stoics considered that the moral questions were more important than the theoretical questions (OLIVIERI, 2011). 6 Persius wrote six satires composed by 650 verses in form of dialogues and epistles. The work remained unfinished and was published after his death by Cornutus.
the literary genre constitutes itself as moralizing with the aim of reforming the world, and restoring the common sense as a way of improving the social being.
The satire is also conceived to make people laugh through mockery, irony or incited anger, unveiling what is hidden behind the mask of hypocrisy. The seriousness is observed from the complex symbolism of the mask: then the use of parody, caricature, grimaces and apishness, ingredients of the grotesque (SILVA, 2009). Alves (2010) says that the poetical satire practiced in Rome by Persius and Horace was moralizing and semi-philosophical.
It contained reforming intentions, because the concept of satire is linked to the feeling of indignation and to the will to moralize the customs.
Taking this into consideration, discursively we realize that the epigraph in O Macaco Brasileiro is related to the 'be able to say'. Brazilian people did not have voice at the time; they could not speak, because they did not have the same rights as the Portuguese. Therefore, as it uses the statement '... Ah, si fas dicere! Sed fas' -'It is true! You have the right to speak!... So, use it!', the newspaper provokes the Brazilian reader. The awakening happens in several ways, all of them configured by affiliations to a memory of another time-space: ancient Rome. How is this time-space mobilized? For example, side by side, the title of the paper and its epigraph brings to the surface a connection between the satire and the irony: the first, due to the discursive memory instituted by Persius's citation; the second, due to the designation O Macaco Brasileiro (The Brazilian Monkey), which works at the same time with the common place of the look of the Otherthe foreigner (including, discursively, the Portuguese) and the figure of speech of the metaphor, displacing it: it is not a matter of imitating, but of saying something in an apish-like manner.
Another example is the direct mobilization of Persius's criticism to what caused the decadence of the Roman Empire and to a so-called lack of political ethics. There is, therefore, an allusion to a denouncing of the political situation of the time. Yet another mobilization place is the recovering of the Latin language in the epigraph. It reminds us of the Latin as mother-tongue, surpassing, in a satirical way, the Portuguese language from Portugal.
The presence of the epigraph written in Latin is, discursively, the material possibility of the authorship in relation to the language spoken in Brazil, to be precise, it is a new discursivity to the extent it brings a new sense of Brazilianness in a different formulation from the materiality of Portugal's language. This authorship constructs itself, then, when the relationship with Portugal is fading out, and also by referring to a culture which is universally considered civilized, withholder of an exemplary knowledge. And such authorship will take place through the satire and irony that consubstantiate in the already mentioned relationship between the title and the epigraph.
The name of the newspaper carries meanings that had already been formulated since Brazil was discovered. For the colonizers, the Brazilian people, mainly indigenous people's descendants, people who were born in the colony and the ones that did not attend the Portuguese schools, were regarded as animals. O Macaco, with its arts and apishness, represented the people, therefore the ones with no rights. O Macaco is the representation of the subjected Brazilian person, 'tied to the whipping post'. As Orlandi (2003b, p. 20) observes on the conversion discourse 7 , […] to subject the savages is to civilize them instead of exterminating them. To convert is to subject pagans in order to prevent, before all, the anthropophagy, but also to prevent the lack of political authority; the lack of religion, the mental roughness, the activism to the forest. However, as the word 'Brazilian' comes side by side with the animal's name, the metaphor slips. Firstly, because it is a nation's name: the Brazilian nation, inside a Portuguese state: the Portuguese Empire. Secondly, because, as it is next to the epigraph, it throws out the whipping post, and talks unrestrainedly.
O Macaco Brasileiro is written in metaphors, using the first person in dialogue form between the monkey and the readers. It was through metaphor that it produced a difference in the written tradition of the newspapers of the time. Discursively, the metaphor is understood as an effect of a word by another word.
[It is] a word, a proposition that does not have a meaning of its own, connected to its literality. On the contrary, its meaning constitutes itself in each discursive formation, in the relationship that words, expressions or propositions maintain with the other words, expressions or propositions of the same discursive formation (PÊCHEUX, 2009, p. 161).
So, although we consider that the metaphor is constitutive of the language, that its function is metaphoric 'par excellence', we must observe the explicit use of what has been called, in the tradition of the language studies, a figure of speech: the metaphor. In this sense, it is important to observe that, in the first edition of the newspaper, the editor/writer depicts himself as a monkey tied to the whipping post, which learns to speak. The monkey is, in all issues, criticizing and proposing to the new Brazilian citizen a new place for the speech, displacing him/her from the already existing politics of the court, that is, neither aligned to the Freemasonry nor to the Portuguese court. In this way, he/she is portrayed as a 'no Portuguese' Brazilian.
My friends, 'I'm a smart old monkey, expert by nature and by experience, tied to the whipping post' for so many years, and going from hand to hand, much was there to be learned on my expense, by imitation, or doing what I saw others doing, I handled little books and heard little things, nothing has escaped me, until now I have not gotten away for mischievous, 'but as I could not talk, I endured the scolding without a word, just shrieking'. I endured as long as I could, but one day, after I saw a nice lunch on the table, and we indolent are careless: I pretended to be a hungry cat and ate their business.
[...] At that time I was already swallowing the portion: I wanted to speak, but I got vexed; I wanted to be a parrot; then, I would be contented to be a parakeet of the Organ Mountain Range… So Minerva felt sorry for me, as I had so nice desires: and the love I have, always had for the bookshops, and gave me the deed to speak, so I could defend myself. [...] 'I shall speak my mind, but there must be someone to hear me, because I am an expert in charming with my grimaces and apish-like manners' (O MACACO BRASILEIRO, n. 1, 1822, our emphasis).
We can notice certain regularity between the name assigned to the newspaper and the metaphoric effect 8 produced by the textualization.
According to Guimarães (2003, p. 54): The designation is the signification of a name in the relation with other names and with the world historically delimited by the name.
[…] The designation is not something abstract, but linguistic and historical. When a name designates, it works as element of the social relations it helps to build, and of which it becomes a part.
[…] To name something is to give it historical existence.
It is important to emphasize that the title O Macaco Brasileiro is connected to the place of Brazilianness that is being inaugurated. From Guimarães's (2002) concept of designation and from 8 According to Pêcheux (1969), a metaphoric effect is a semantic phenomenon produced by a contextual substitution, which causes a displacement of meaning. (PÊCHEUX, 1990) the standpoint of the semantics of enunciation, we already have that degree of historical density on a name. From the discursive perspective, what can be noted is that, besides being in that position of historical constitution, of historical density, it is also in a place of foundation, because that designation proposes, through the metaphor, a sense that does not exist newspapers yet, the sense/sensation of being a Brazilian person and, at the same time, to have a voice, to be able to speak. That sense is materialized in an inaugural form, which is what we can consider as originating from the journalistic chronicle. Therefore, the self-denomination 'macaco' has all the historical weight of not speaking in a first moment, and beginning to speak in a second moment, inaugurating there an identity of what it is to be a Brazilian individual. In other words, the Brazilian person was prevented from speaking before and, by behaving as an ape, appears as a speaking individual. Then, the important thing is to speak, to exist, independently of the position assumed in relation to the emancipation from Portugal or to the government regime.
We can think about this event of the language from the semantics of the enunciation, in which the event determines the time (past or future) of a statement, that is, it projects new meanings and another memory in a new discursivity.
The temporality of the event constitutes its present aspect which, later, will open room for other meanings, and a past that is not a remembrance or personal memories of former facts. The past is, in the event, the remembrance of statements, that is, it is as part of a new temporalization, such as the latency of future. It is to that extent that the event is the difference in its own order: the event is always a new temporalization, a new space of sociability of times, without which there are no meanings, there is no language event, there is no enunciation (GUIMARÃES, 2002, p. 12).
From the concept of the enunciative event, we propose the thinking on the discourse to understand that inaugural happening. If we agree that the discursive event is an encounter of a present happening with a memory (PÊCHEUX, 1990), we have, in that period before the Independence, the memory of a Brazilian individual who does not speak because he/she is an animal, and, therefore, has nothing to say, has no culture, is illiterate, did not study in Europe, does not attend the court, that is, one that can be considered an animal.
It is the connection between that memory and the speaking monkey, which speaks through metaphors, that produces the discursive event. We understand that such present constitutes a new discursive formation, that is, when the monkey gets to express itself, to voice something, it produces a new formulation due to the different way of speaking, in other words, due to the metaphors constituting the chronicle.
It is exactly in the relation with the memory of the Brazilian monkey that the newspaper appears as a discursive event, because it breaks with what the memory brings just because of its existence. We realize, then, the memory and the present connecting to each other through a work that is concomitantly political and linguistic. That is the proper place of the discourse.
O Macaco Brasileiro launches a place for the speech because it does not appear in the scene talking from a political position already established by the other newspapers in that beginning of the nineteenth century. If O Macaco assumed the already instituted discourses, it would be necessarily just repeating other papers instead of positioning itself. It would not be founding a discursivity: it would be, in that case, founding an argument, a subject position within an already existent discursivity. Therefore, by positioning itself out of that place already delimited for the paper writer, O Macaco establishes a place of authorship. This discursivity that today sounds as irony, at that time sounded strange: it was a place of foundation. In D.A., a new discursivity was instituted, the Brazilian journalistic discourse.
In the first edition of O Macaco Brasileiro, we can see the marks of the impossibility of enunciating, such that it (the monkey) can only shriek: 'I'm a smart old monkey, expert by nature and by experience, tied to the whipping post for so many years, and going from hand to hand, much was there to be learned on my expense'. We realize, in that way, that there is a relationship between the barbarian and the civilized, that is, the monkey is from the human lineage, but it is not civilized, therefore not allowed to speak. The Brazilians could just imitate, in particular, the Portuguese. Those were then the production conditions of that subject position in that political moment, proposing a new form of constitution for the Brazilian identity. This is what calls the subject to resist, when he/she does not accept, does not submit him/herself passively to the power.
The resistance is the struggle of the subject for the right to position him/herself, to not accept coercion, it is the battle for a 'place where the subject may find the power to say with or without the support of the hierarchy (LAGAZZI, 1988, p. 97).
O Macaco Brasileiro contests, makes fun of the subjected Brazilian, provokes through the satire. It is the predecessor of a provocative discursivity, making a political criticism through debauch, through laughter.
This very curious newspaper was written in an extremely particular Portuguese, which reminds us the modernist text of Macunaíma, by Mário de Andrade. Its symbolic character, a crook, smart, astute character, the classical representation of the monkey, appears in each side of the issues as the protagonist in adventures told by itself, with a good humored critical sense of the reality (LUSTOSA, 2000, p. 37).
O Macaco Brasileiro had only sixteen issues 9 published between June and August 1822. Despite the short time it was published, it generated debates with other newspapers which circulated in the court in that period, particularly the periodical O Papagaio 10 , which criticizes its style, affirming that the language employed in O Macaco hurts the image of the Brazilian people, who were always compared to monkeys by the Portuguese.
It is true that they have good intentions, but it is a pity that they write in a way that nobody understands! What will the enemies of Brazil say?, who would repute our inhabitants as educated and as monkeys before such confused language they use?! (O PAPAGAIO, n. 7, June 1822).
The chronicle, which starts to be formulated in the beginning of the nineteenth century, settles later as a place of contradiction in journalism. There we can observe this place being explored by the paper's authors, to the extent in which there is an inversion of roles, that is, the monkey becomes invisible through the language of the intellectuals, and that fact is anxiously perceived by the O Papagaio.
In the issue number 6, the paper compares the political emancipation to the grown up son that leaves home. We can think this enunciate as an effect of a rhetoric metaphor. According to Joanilho (2005, p. 70), […] the effect of rhetoric metaphor produces historical singularity in the discursive event. The metaphoric aspect occurs in the event as a space of redistribution of the senses, in which the rhetoric memory operates and makes meaning. In that way, we can observe the slippery senses in the statement which, as it compares Brazil to a grown up child, reaffirms the need for emancipation, independent from the will of the court. We can also think about the meaning of the statements: 'We are now going to choose the foundation stones that will form the big political and civil building of our home' and '[…] but many more we will have to search for, to engrave, to perfect and to prepare: we have a great treasure, that is the almost empty and clean land; there is nothing better than that!'.
On the phrase '[…] to choose the foundation stones, to search for, to engrave, perfect and prepare the almost empty and clean land', the newspaper brings meanings that allude to the construction of that nation that is still to be shaped.
Great and rich Brazilian family, the day is coming when you will be what Nature demonstrates: an Empire. 'Brazil is like a son who, reaching the age of becoming a State, departs from Father's home', marries and makes a couple: although that fact goes against your father's interest, 'the time will cure the question and he will congratulate you'. Old age is coming and the children will support their elderly and ill father. ' [...] We are now going to choose the foundation stones that will form the great political and civil building of our home'; we already have some materials at our door, 'but many more we will have to search for, to engrave, to perfect and to prepare: we have a great treasure that is the almost empty and clean land; there is nothing better than that!' Courage, co-citizens, our brothers from Portugal will measure us from top to toe. We have no one to fear but ourselves; and we have only ourselves to expect from (O MACACO BRASILEIRO, n. 6, 1822, our emphasis).
In the passage below, it is possible to observe the statements with deictic marks: When Brazil still wanted to be represented 'there', it was with sacrifices, now it wants it 'here', then mutual relations will be made: not with molds engraved anywhere (O MACACO BRASILEIRO, n. 3, 1822).
From the point of view of the state, the 'there and here' is still formulated as a Portuguese state (there), because the nation was still Portuguese. The crown court of Lisbon did not accept that Brazil had its own Constitution, so it was necessary to make agreements to constitute an independent state, a Brazilian nation.
Mr. Missionaire comes in saying that 'it is convenient for his liberality the great interval of the Ocean, and the impossibility to unite what Nature has separated'; and, after saying that America, sooner or later, will be, and has the right to be, independent, concludes that such an idea now, without knowing what freedom and Constitution mean, would be to throw itself in an abyss of incalculable disorders and problems. Well, then, Mr. Lisbon Preacher, I will let you know that, if the ones overseas were not so stupid, as they are, to think and say unrestrainedly and impudently that Brazil does not know what Constitution is, the subsequent arrangements would not have been accelerated, although we still want to contract from here, and in our favor, as it was a little unbalanced. ' [...] When Brazil still wanted to be represented there, it was with sacrifices, now it wants it here, then mutual relations will be made: not with molds engraved anywhere'. There are doctors who, in order to show their aptitudes, aggravate the illnesses, exaggerating about the patient's condition to disguise the insults of the medicine, 'but we do not just want to get better, we want to become healthy', in perfect condition and not with defects and other problems. (O MACACO BRASILEIRO, n. 3, 1822, our emphasis).
With this textuality, O Macaco inaugurates a new discursivity, that of the journalistic chronicle. From our point of view, this foundation rests on the discursive event of the 1821 Decree: the paper materializes the discursive event. It inscribes itself in a Brazilian journalist subject position. It is the mark of that subject position that will later unfold, in several other forms, into the Brazilian journalistic discourse.
When it uses the chronicle genre, at first it does not seem that the periodical is doing politics, because it appears to be 'joking'. It is interesting that, to assume that place, it must do so through apish-like attitudes, or it will not be recognized as Brazilian. The reason is that a Brazilian person is no other than the one 'between the tables, stealing food'. The paper therefore had to occupy that in-between place to make that memory work and be recognized in any way as Brazilian. That is why, at that moment, many people consider it merely odd, not ironic. The newspaper uses the image of the Brazilian that exists in memory as a stereotype, and it is funny exactly because it unveils such stereotype, showing that the irony lies in treating the Brazilian people this way. Hence, it brings the discovery; it publishes the way the Brazilian people are seen, adding that now the Brazilians speak. That is why it seems ironic, because it speaks with the 'language' of the intellectuals. Therefore, the paper produces an insult, a confrontation, a provocation, which results in a new Brazilian subject position as it occupies that in-between place, as it assumes the form of an 'I', and as it produces a be-able-to-speak possibility for the monkey as well.
Final considerations
From the analysis, what calls our attention is that, until 1821, the newspapers circulating in the court produced a discursivity 'without mistakes', in the sense that the discursive formation was unique and connected to the court. It was a homogeneous writing, previously censored by the court, so articles which did not fit that discursive formation were not even published. Pêcheux (2009, p. 147) defines discursive formation as "[…] something which, in a given ideological formation, that is, from a given position in a given conjuncture, determines what can and must be said". Until 1821, the articles published in those periodicals were the ones the court authorized, therefore the ones aligned with the mainstream discursive formation, which was the crown court's discourse. The use of the expression 'without mistakes' refers to the fact that there was no contradiction, or very little contradiction, in those papers. From the historical and discursive event brought by the Decree of March onwards, the discursive formation of the court begins to work in a tense and contradictory relationship with other discursive formations. Therefore, this contradiction, which was previously obliterated and did not appear in the papers, now becomes visible in the discursivity of the periodicals published later.
We propose that the Decree of March 1821, as a historical fact, is a discursive event, and that it constructed the production conditions for instituting journalism in and from Brazil.
For Discourse Analysis, the discursive event is the rupture, the gap that leads us to other meanings; it is what 'escapes', what destabilizes the meanings. Hence, it is what allows the disclosing of meaning and makes possible a new reading gesture in a "[…] logically stabilized world" (PÊCHEUX, 1990, p. 31). We once more mention Orlandi (2003b, p. 15), when she affirms that "[…] giving meaning is to construct limits, it is to develop domains, to discover places of signification, to make moves of interpretation possible". It is in this sense that we understand the law of censorship as a possibility of rupture with the stabilized order, causing a displacement from the journalism determined by the court to another discursivity, that of Brazilian journalism, since, even under censorship, other forms of saying became possible. Moreover, that freedom of speech incited demand, hence the significant increase in newspapers which began to circulate in the country after the decree. That apparent 'lack of censorship' allowed other meanings to circulate, even if censored or administrated. That freedom also allowed new moves on the authors' part, who became (legally) responsible for their statements. A Brazilian authorship was also being constituted during that event.
In this way, from our standpoint, the newspapers which were launched after the 1821 Decree founded a new discursivity, in which the meaning of journalism in/from Brazil breaks out and is recognized, thus legitimating the Brazilian journalist subject position as a function of its authorship.
As seen above, Orlandi (2005) states that authorship is a specific function of the subject who, through the socio-historical movement, promotes an assumption of authorship that becomes evident in the way he/she constitutes him/herself and shows him/herself as an author.
As an author, the subject, at the same time as he/she recognizes an exteriority to which he/she must refer, also refers to his/her interiority, thus constituting his/her identity as an author. Working the articulation interiority/exteriority, he/she 'learns' to assume the role of the author and everything it implies. I called that process an assumption of authorship. According to it, the author is the subject who, having dominated certain discursive mechanisms, represents, through language, that role in the order in which it is inscribed, in the position in which it is constituted, assuming the responsibility for what is said, how it is said, etc. (ORLANDI, 2005, p. 76). We understand that the Decree of March was only possible because there was an emancipation process already going on even before King D. João VI returned to Portugal; that is, the relationship with Independence was already being constituted, and the decree was part of those production conditions that reaffirmed and sustained such process.
It was in the textualization of O Macaco Brasileiro that we could understand the foundation of the Brazilian journalistic discourse and, in the contradiction, understand the decree as a discursive event. This newspaper materializes the discursive event that was the Decree of March 2nd, as it mobilizes the memory of a Brazilian who has no right to speak or does not speak because it is an animal, but which is speaking, in the metaphor of the animal itself, bringing incongruence as the possibility of saying. It breaks with that memory, launching a position of saying different from the other newspapers. It is in this new discursivity, materialized in the journalistic chronicle, that the Brazilian journalist subject position is established. In other words, it is in O Macaco Brasileiro that the founding discursive process of a new subject position, which corresponds to the foundation of the Brazilian journalistic discourse, becomes evident. | 2018-12-17T22:34:44.890Z | 2012-08-17T00:00:00.000 | {
"year": 2012,
"sha1": "c17ef39dd83e09ac1ecdcf17954ef873e2949b78",
"oa_license": "CCBY",
"oa_url": "http://periodicos.uem.br/ojs/index.php/ActaSciLangCult/article/download/17874/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e03d2308180f9c0e88c8d307e4722f025a696f3d",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology"
]
} |
216361174 | pes2o/s2orc | v3-fos-license | Taxing Peer-to-Peer Rental Services in Indonesia
This descriptive analysis examines the difficulties of taxing income from online peer-to-peer rental services (e.g., Airbnb, Airy Rooms) in Indonesia. We collect and review data from scholarly and professional literature and documents. Our discussion reveals that Indonesia's self-assessment system whereby taxpayers calculate, report, and pay their tax obligations is problematic because digital peer-to-peer businesses lack clear regulation. Indonesia needs to revamp its tax laws to suit digital transactions in the sharing economy. The article provides insights into steps that need to be taken.
Background
In the past decade, the "sharing economy" has grown through digital mediation, i.e., acquiring or exchanging goods, services, knowledge, and experiences with others online. From a business standpoint, a key element of the sharing economy is replacing ownership with access (PWC, 2015). Online peer-to-peer platforms allow private individuals to collaborate in using inventory. Consumers have enthusiastically adopted the services of such platforms as Airbnb, Gojek, RedDoorz, and Airy. Exponential growth of peer-to-peer platforms is made possible by technological innovation and supply-side flexibility. Technological innovation streamlines market entry for suppliers, feeds consumer access to information, and reduces supply-side overhead. This study examines the difficulties the Indonesian government faces in taxing online peer-to-peer rentals, focusing on Airbnb, a $30 billion pioneer in the sharing economy that has served 50 million guests since its inception in 2008.
Research Objectives
This research explains how Indonesian tax law fails to encompass peer-to-peer rental services in a sharing economy.
Research Urgency
This research is meaningful for several parties, notably: a. Government: it informs the Indonesian government about impediments to taxing online peer-to-peer rental services and how to overcome them.
b. Taxpayers
It discusses the taxes arising from peer-to-peer rental services.
Digital Market
Characteristics of digital markets defined in the OECD model include:
a. Direct network effects: Utility from consuming a good or service from digital markets often depends on the number of other end-users consuming it.
b. Indirect network effects: These arise in multi-sided markets when one set of end-users (users of the social network) benefits from interacting with another set (advertisers on the social network). Online platforms offering rental accommodations, transportation, or peer-to-peer e-commerce are examples.
c. Economies of scale: Making goods and services available digitally may entail relatively high fixed costs and far lower variable costs. Developing software, for instance, requires considerable investment in infrastructure and labor, but once finalized a program can be maintained, sold, or distributed at low marginal cost.
d. Switching costs and lock-in effects: End-users might own devices with operating systems that are incompatible with those of networks they wish to access. If so, customers may be reluctant to buy devices with compatible operating systems because of psychological or monetary switching costs.
e. Complementarity: Many goods and services traded in digital markets are complements, and customers increase utility by consuming two or more complementary goods together.
Sharing Economy
Digitalization facilitates the sharing economy, or collaborative consumption. The sharing economy is not new, but technology has lowered transaction costs, made more information available, and enhanced transaction reliability and security. Numerous applications involving diverse business models now offer services or products such as cars, spare rooms, food, clothes, and private jets. Most digital participants in the sharing economy take part not to earn a living but to cultivate relationships, advance a cause, or make ends meet. Earning supplementary income is a net benefit that often requires little quantitative cost-benefit analysis. Non-professional providers generally share their personal resources at lower prices than professionals, thereby reducing overall prices, including those of professionals.
Methodology
We employ descriptive research by reviewing and assessing secondary data collected from scholarly and professional literature and documents.
Results and Discussion
Our research reveals that Indonesia lacks statutes for taxing online peer-to-peer rental services. Current regulations related to digital transactions fall under SE-62/PJ/2013 on the Affirmation of the Taxation Provisions on e-Commerce Transactions. Therein the Indonesian government acknowledges that e-commerce differs from brick-and-mortar transactions only as a mode of trading. It acknowledges no other general difference between them.
SE-62/PJ/2013 merely sets a model for e-commerce transactions. That model features an online marketplace, classified ads, daily deals, and online retailing. It defines terminology for e-commerce transactions, flowcharts, business processes, and aspects of taxation. The model for e-commerce transactions in SE-62/PJ/2013 differs from the model that features peer-to-peer transactions between individuals. Therefore, SE-62/PJ/2013 cannot be the legal basis for taxing peer-to-peer transactions. Indonesia's is a self-assessment system of taxation. Government trusts each taxpayer to calculate, report, and pay obligations owed. Like all self-assessment systems, Indonesia's relies on taxpayers' ability to understand their obligations and on their conscientiousness in meeting them. Digital peer-to-peer rental services challenge Indonesia's mechanism for taxing digital transactions.
Regulations should define taxable transactions clearly so all parties know when taxes apply and tax administrators can monitor compliance. Regulations should be simple to understand, honor, and monitor. Tax burdens should apply equitably to similar types of transactions whether conventional or digital.
A. Challenges in Taxing Peer-to-peer Rental Services
Taxing peer-to-peer rental transactions involves challenges to be met and consequences to be avoided.
a. Tax art. 21/23 makes it impossible to impose withholding on homeowners offering Airbnb-style rentals if they are not resident taxpayers in Indonesia.
b. Self-assessment makes it difficult to identify renter-homeowners and the taxes owed on income from peer-to-peer digital transactions.
c. Indonesia's Directorate General of Taxation needs to create systems to facilitate taxing peer-to-peer rental transactions.
d. Taxation should not kill growth in peer-to-peer rental businesses.
e. Compliance burdens for taxpayers should not encourage tax avoidance.
f. Digital transactions thrive on the difficulty of identifying sellers, customers, and amounts paid. Difficulty accessing such information is exacerbated if customers and homeowners offering peer-to-peer rentals are in different jurisdictions. In short, the challenge is to ensure equitable, accurate, and visible taxation of online peer-to-peer rental services without destroying them as an economic activity or imposing overwhelming compliance burdens.
B. Addressing Challenges in Taxing Peer-to-peer Rental Services
Challenges for tax administrators raised by online peer-to-peer rentals include collecting information about participants and amounts paid and resolving jurisdictional questions about where tax liability falls. Options for meeting these challenges include taxpayer education and gathering information directly from online platforms. Both approaches are discussed below.
a. Improving taxpayer education and self-reporting
Providers of services such as Airbnb seldom have contracts or sustained business relationships with clients, so collecting taxes on their rental income depends on taxpayers declaring it on their tax returns. Whether they do so is a question of conscience (providers know their transactions are largely undetectable) and of education: they might not realize that income from a sideline or online business is taxable.
b. Educating taxpayers about obligations arising from the platform economy
Through its website (www.pajak.go.id), Indonesia's Directorate General of Taxes can educate taxpayers about taxes owed on income from online peer-to-peer rental services.
C. Taxing Income from Peer-to-peer Rental Services According to Indonesian Tax Regulations
The Government of Indonesia does not yet have a special regulation for taxing peer-to-peer rental services. Peer-to-peer accommodation services such as Airbnb and Airy Rooms resemble, in their business activities, the rental of a house, hotel, or inn. Accordingly, under Indonesia's existing tax laws, income received from peer-to-peer rental services such as Airbnb falls within the category of income from the lease of land and/or buildings. The tax treatment of land and/or building leases is reviewed below from the perspective of both the recipient of rental income and the tenant, with reference to the applicable taxation provisions, namely Act no. The types of rental provided by Airbnb vary (houses, apartments, rooms, condominiums, hotels, villas and so on), making it difficult to determine the type of tax on lodging services offered via Airbnb. Accommodation services through Airbnb fall within the category of activities relating to land and/or building rentals; under the provisions of Article 4 paragraph (2) of the 2009 law on Regional Taxes and Levies, a lodging service facility, including other related services provided for a fee, covers motels, guesthouses, tourism huts, tourist homes, boarding houses, lodging houses and the like, as well as boarding houses with more than 10 (ten) rooms.
Based on the above description, the tax treatment of accommodation services via Airbnb is as follows: a. Object of Final Income Tax, Article 4 paragraph (2)
Conclusion
The imposition of tax on peer-to-peer rental services, especially on lodging services offered via Airbnb, is a challenge for the Government of Indonesia, in this case the Directorate General of Taxation, given that the tax potential of this type of digital economy transaction is growing. The lodging offered via Airbnb comes in various types, so to impose tax the type of lodging owned by the provider must be identified. The government therefore has a role to play in educating providers about the tax aspects of lodging offered via peer-to-peer rental services. | 2020-04-09T09:25:33.123Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "5edd9886d9a30d6df4c218663db3a770c9b46a4c",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125938323.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d1ab2b64fbc1cd9ec960caa74ba9db13b1a3bbc9",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
259370625 | pes2o/s2orc | v3-fos-license | Token-Level Self-Evolution Training for Sequence-to-Sequence Learning
Adaptive training approaches, widely used in sequence-to-sequence models, commonly reweigh the losses of different target tokens based on priors, e.g. word frequency. However, most of them do not consider the variation of learning difficulty across training steps, and overly emphasize the learning of difficult one-hot labels, making the learning deterministic and sub-optimal. In response, we present Token-Level Self-Evolution Training (SE), a simple and effective dynamic training method to fully and wisely exploit the knowledge from data. SE focuses on dynamically learning the under-explored tokens for each forward pass and adaptively regularizes the training by introducing a novel token-specific label smoothing approach. Empirically, SE yields consistent and significant improvements in three tasks, i.e. machine translation, summarization, and grammatical error correction. Encouragingly, we achieve an average improvement of +0.93 BLEU on three machine translation tasks. Analyses confirm that, besides improving lexical accuracy, SE enhances generation diversity and model generalization.
Introduction
Sequence-to-sequence learning (Seq2Seq) with neural networks (Sutskever et al., 2014) has advanced the state-of-the-art in various NLP tasks, e.g. translation (Bahdanau et al., 2015;Vaswani et al., 2017), summarization (Cheng and Lapata, 2016), and grammatical error correction (Yuan and Briscoe, 2016). Generally, Seq2Seq models are trained with the cross-entropy loss, which equally weighs the training losses of different target tokens.
However, due to the token imbalance nature (Piantadosi, 2014) and the fact that different tokens contribute differently to the sentence meaning (Church and Hanks, 1990), several works have been developed to reweigh the token-level training loss according to explicit (e.g. frequency) or implicit (uncertainty estimated by off-the-shelf language models) priors (Gu et al., 2020; Xu et al., 2021; Zhang et al., 2022a). For example, Gu et al. (2020) proposed two heuristic criteria based on word frequency to encourage the model to learn from larger-weight low-frequency tokens. Zhang et al. (2022a) introduce a target-context-aware metric based on an additional target-side language model to adjust the weight of each target token.
[Figure 1: An example to illustrate the changing token difficulties in different training steps in WMT'14 En-De. The token "abschließen"/"Sache" is hard/easy to learn at 50K steps while the trend is totally reversed at 100K.]
Despite some success, there are still limitations in these adaptive training approaches. First, most of them predetermine the difficult tokens and fix such priors to guide the training. However, in our preliminary study, we find the hard-to-learn tokens are dynamically changing during training, rather than statically fixed. As shown in Figure 1, as training progresses, although the sentence-level loss is converging nicely, the difficult token changes from "abschließen" to "Sache" in terms of the token-level loss. Second, these adaptive training methods overly emphasize fitting the difficult tokens' one-hot labels by reweighing the loss, which empirically may cause overfitting and limit generalization (Norouzi et al., 2016; Szegedy et al., 2016; Xiao et al., 2019; Miao et al., 2021). Also, a more recent study (Zhai et al., 2023) provides theoretical evidence that reweighting is not that effective at improving generalization.
Correspondingly, we design a simple and effective Token-Level Self-Evolution Training (SE) strategy to encourage Seq2Seq models to learn from difficult words that are dynamically selected by the model itself. Specifically, SE contains two stages: ❶ self-questioning and ❷ self-evolution training. In the first stage, the Seq2Seq model dynamically selects the hard-to-learn tokens based on the token-level losses; in the second stage, we encourage the model to learn from them, where, rather than adopting reweighing, we introduce a novel token-specific label smoothing approach to generate easily digestible soft labels that consider both the ground truth and the model's prediction.
Experiments across tasks, language pairs, data scales, and model sizes show that SE consistently and significantly outperforms both the vanilla Seq2Seq model and the re-implemented advanced baselines. Analyses confirm that, besides improved lexical accuracy, SE produces diverse and human-like generations with better model generalization.
Methodology
Preliminary. Sequence-to-sequence (Seq2Seq) learning aims to minimize the cross-entropy (CE) loss, i.e. the negative log-likelihood of each target word in y = {y_1, ..., y_N}, conditioned on the source x, where the optimization treats all tokens equally:
L_CE = - Σ_{i=1}^{N} log p(y_i | y_{<i}, x).
However, due to the different learning difficulties of each token, it is sub-optimal to treat all tokens equally (Gu et al., 2020). To address this limitation, a series of token-level adaptive training objectives were adopted to re-weight the losses of different target tokens (Xu et al., 2021; Zhang et al., 2022a). The common goal of these methods is to facilitate model training by fully exploiting the informative but under-explored tokens.
However, our preliminary study shows that the hard tokens change dynamically (see Figure 1) across training steps (or model structures), thus it is sub-optimal to employ static token priors (e.g. frequency) during training. Also, recent studies (Zhai et al., 2023) in the ML community theoretically show that reweighting is not that effective at improving generalization. Based on the above evidence, we present the self-evolution learning (SE) mechanism to encourage the model to adaptively and wisely learn from the informative yet under-explored tokens dynamically determined by the model itself (Stage ❶ in §2.1), with an easy-to-learn label distribution (Stage ❷ in §2.1). A similar work to ours is Hahn and Choi (2019). However, their method mainly considers the situation where the predicted answer is incorrect but close to the golden answer, while our method focuses on all dynamically hard tokens.
2.1 Token-Level Self-Evolution Learning
❶ Self-questioning Stage. The goal is to select the hard-to-learn tokens that are questioned by the Seq2Seq model itself during training dynamics. Previously, these difficult tokens were predetermined by external models or specific statistical metrics. However, inspired by the finding of the dynamic change of difficult tokens during training, as shown in Figure 1, and the finding that the trained model contains useful information (Li and Lu, 2021), e.g. synonyms, we propose to straightforwardly leverage the behavior of the model to dynamically select target tokens. In practice, we first calculate the token-level CE loss, denoted as {l_1, l_2, ..., l_n}, for each token in each forward pass. Then we set a loss threshold Γ and select the tokens whose losses exceed Γ as the target tokens, i.e., D = {t_i | l_i > Γ} where i ∈ N = {1, 2, ..., n}.
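As an illustration, the self-questioning stage described above can be sketched in a few lines of PyTorch-style code; the function name, tensor shapes, and the concrete threshold value are our own assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def select_hard_tokens(logits, targets, gamma=2.5, pad_id=1):
    """Self-questioning stage (sketch): pick tokens whose per-token CE loss exceeds gamma.

    logits:  [batch, seq_len, vocab] decoder outputs of one forward pass
    targets: [batch, seq_len] gold token ids
    Returns the per-token losses l_i and a boolean mask of the hard-to-learn set D.
    """
    # Token-level cross-entropy with no reduction, so every position keeps its own loss l_i.
    token_loss = F.cross_entropy(
        logits.transpose(1, 2),   # [batch, vocab, seq_len], the layout cross_entropy expects
        targets,
        reduction="none",
        ignore_index=pad_id,
    )                             # [batch, seq_len]
    # D = {t_i | l_i > gamma}; padding positions are never selected.
    hard_mask = (token_loss > gamma) & (targets != pad_id)
    return token_loss, hard_mask
```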
❷ Self-evolution Training Stage. After selecting the difficult tokens, we encourage the model to carefully learn from them. Given the theoretical shortcomings (Zhai et al., 2023) and the overfitting or overconfidence problems (Miao et al., 2021) potentially caused by reweighting and deliberately learning from difficult tokens, we propose to strengthen the learning of these tokens with a newly designed Token-specific Label Smoothing (TLS) approach. Specifically, motivated by the effect of label smoothing (LS) regularization (Szegedy et al., 2016), we combine the ground truth p_i and the model's prediction p̂_i to form a new soft label p̃_i for the i-th token. Then we use p̃ to guide the difficult tokens D, while keeping the label-smoothed CE loss for the other tokens. It is worth noting that we also apply the traditional label smoothing technique to p̂_i to activate the information in the predicted distribution. Analogous to human learning, it is often easier for humans to grasp new things described by their familiar knowledge (Reder et al., 2016).
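Under the same caveats as the previous sketch, the token-specific label smoothing step could look roughly like the following; the mixing weight alpha and the smoothing factor eps are illustrative hyper-parameters, since the exact combination used by the authors is not spelled out here.

```python
import torch
import torch.nn.functional as F

def token_specific_labels(logits, targets, hard_mask, alpha=0.5, eps=0.1):
    """Self-evolution training stage (sketch): soft labels for the hard-to-learn tokens.

    For tokens in D we mix the label-smoothed one-hot ground truth p_i with the
    label-smoothed model prediction p_hat_i; all other tokens keep ordinary
    label smoothing. Returns a per-token soft cross-entropy loss.
    """
    vocab = logits.size(-1)
    with torch.no_grad():
        pred = F.softmax(logits, dim=-1)                      # model's own prediction p_hat_i
    one_hot = F.one_hot(targets, num_classes=vocab).float()   # ground truth p_i

    def smooth(p):
        return (1.0 - eps) * p + eps / vocab                  # standard label smoothing

    soft_gt, soft_pred = smooth(one_hot), smooth(pred)
    mixed = alpha * soft_gt + (1.0 - alpha) * soft_pred       # soft label p_tilde_i for hard tokens

    # Hard tokens get the mixed distribution, the rest keep the label-smoothed ground truth.
    target_dist = torch.where(hard_mask.unsqueeze(-1), mixed, soft_gt)

    log_probs = F.log_softmax(logits, dim=-1)
    return -(target_dist * log_probs).sum(dim=-1)             # [batch, seq_len]
```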
Evaluation
Machine Translation on three widely-used benchmarks (Ding et al., 2021c): small-scale WMT16 English-Romanian (En-Ro; 0.6M), medium-scale WMT14 English-German (En-De; 4.5M), and large-scale WMT14 English-French (En-Fr; 36.0M). We implement the baselines and our approach under Transformer-base settings. We follow the previous adaptive training approach (Gu et al., 2020) to pretrain with the cross-entropy loss for N steps, and further finetune for the same number of steps with different adaptive training objectives, including Freq-Exponential (Gu et al., 2020), Freq-Chi-Square (Gu et al., 2020), D2GPo (Li et al., 2020), BMI-adaptive (Xu et al., 2021), MixCrossEntropy (Li and Lu, 2021), CBMI-adaptive (Zhang et al., 2022a), and SPL (Wan et al., 2020). For N, we adopt 100K for the larger datasets, e.g. En-De and En-Fr, and 30K for the small dataset, i.e. En-Ro. We empirically adopt 32K tokens per batch for the large datasets; the learning rate warms up to 1e-7 over 10K steps and then decays over 90K steps, while for the small dataset En-Ro the learning rate warms up to 1e-7 over 4K steps and then decays over 26K steps. All the experiments are conducted on 4 NVIDIA Tesla A100 GPUs. SacreBLEU (Post, 2018) was used for evaluation. Besides translation, we also follow previous works (Zhong et al., 2022; Zhang et al., 2022b) to validate the universality of our method on more sequence-to-sequence learning tasks, e.g., summarization and grammatical error correction. Text Summarization on the XSUM corpus (0.2M): we follow fairseq (Ott et al., 2019) to preprocess the data and train the models, then finetune them for the same number of steps; we evaluate with ROUGE (Lin, 2004), i.e. R-1, R-2, and R-L. Grammatical Error Correction on CoNLL14 (1.4M): we follow Chollampatt and Ng (2018) to preprocess the data and train the models, then finetune them for the same number of steps; the MaxMatch (M²) scores (Dahlmeier and Ng, 2012) were used for evaluation with precision, recall, and F_0.5 values.
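For concreteness, corpus-level SacreBLEU can be computed along the following lines; the hypothesis and reference lists are placeholders, and this is only meant to illustrate the metric call, not the paper's exact evaluation pipeline.

```python
import sacrebleu

# Detokenized system outputs and references (placeholder sentences).
hyps = ["The cat sat on the mat .", "He went to the market ."]
refs = ["The cat is sitting on the mat .", "He went to the market ."]

# sacrebleu takes a list of hypotheses and a list of reference streams.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"SacreBLEU: {bleu.score:.2f}")
```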
SE brings gains across language pairs and scales.
Results on machine translation across different data sizes, ranging from 0.6M to 36M, in Table 1 show that our SE-equipped Transformer "+ Self-Evolution (ours)" 1) considerably improves the performance by an average of +0.92 BLEU points; 2) outperforms the previous competitive method "+ CBMI-adaptive" by up to +0.47 BLEU points on the large dataset WMT14 En-Fr. These results demonstrate the effectiveness and universality of our SE. Table 4 shows that our method can achieve +0.4 and +1.2 improvements in BLEU and COMET respectively, which proves that our SE also works on extremely large datasets.
[Table 3: ratio (%) of validation tokens in different cross-entropy loss scales. Loss scale 0-1 / 1-2 / 2-3 / >3: Transformer 63.3 / 10.5 / 6.7 / 19.5; + SE 65.6 / 9.5 / 5.8 / 19.1.]
Analysis
We provide some insights to better understand the effectiveness of our approach. The ablation of important modules and parameters is in Appendix A.
SE learns better token representations. To verify whether our method helps learn better token representations, we conduct an analysis on WMT14 En-De from the learning-loss and fine-grained-generation perspectives, respectively. First, we count the token ratios distributed in different cross-entropy loss scales in Table 3, following Zan et al. (2022a). Cross-entropy is a good indicator to quantify the distance between the predicted distribution and the ground truth on the valid dataset, and a lower value means a more similar distribution. As shown, our method improves the low-loss token ratio by +2.3%, indicating SE helps the model learn better token representations by reducing token uncertainty. In addition, we follow previous work to break the translation down into different granularities and measure the fine-grained performance. In particular, we calculate the F-measure of words in different frequency buckets and the BLEU scores of buckets of different lengths in Figure 2. We see SE achieves better performance in all frequency and sentence buckets, demonstrating that our method can improve the performance at different granularities.
SE encourages diverse generations. Lacking generation diversity is a notorious problem for Seq2Seq learning tasks (Sun et al., 2020; Lin et al., 2022). Benefiting from better exploring the model's prediction with corrected soft labels, SE is expected to improve generation diversity. We follow previous work to examine this by analyzing the performance on an additional multiple-reference test of WMT'14 En-De (Ott et al., 2018). We choose additional references for each of the 500 test sentences taken from the original test set. Table 5 shows SE consistently outperforms the baseline, with the average improvement being 0.9/1.0 BLEU, which indicates that our SE can effectively generate diverse results. SE also improves performance in out-of-domain scenarios. In particular, we evaluate WMT14 En-De models over four out-of-domain test sets (Müller et al., 2020) in Table 6 and find that SE improves the translation by an average of +0.9 BLEU points, showing a better lexical generalization ability.
SE encourages human-like generations. We design two types of evaluation on WMT14 En-Fr: 1) AUTOMATIC EVALUATION with COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020), which have a high-level correlation with human judgments; 2) HUMAN EVALUATION with three near-native French annotators who hold the DALF C2 certificate. Specifically, for human evaluation, we randomly sample 50 sentences from the test set to evaluate translation adequacy and fluency, scoring 1∼5. For adequacy, 1 represents irrelevant to the source while 5 means semantically equal. For fluency, 1 means unintelligible while 5 means fluent and native. Table 7 shows the automatic and human evaluation results, where we find that our SE indeed achieves human-like translation.
Conclusion
In this paper, we propose a self-evolution learning mechanism to improve Seq2Seq learning by dynamically exploiting the informative-yet-underexplored tokens. SE follows two stages, i.e. self-questioning and self-evolution training, and can be used to evolve any pretrained model with a simple recipe: continue training with SE. We empirically demonstrated the effectiveness and universality of SE on a series of widely-used benchmarks, covering low, medium, high, and extremely-high data volumes.
In the future, besides generation tasks, we would like to verify the effectiveness of SE on language understanding tasks (Wu et al., 2020; Zhong et al., 2023). Also, it will be interesting to design SE-inspired instruction tuning or prompting strategies like Lu et al. (2023) to enhance the performance of large language models, e.g. ChatGPT, which after all have already been fully validated on lots of conditional generation tasks (Hendy et al., 2023; Peng et al., 2023; Wu et al., 2023).
Limitations
Our work has several potential limitations. First, we determine the threshold Γ by manual selection, which may limit the performance of Seq2Seq models; our work would be more effective and elegant if the threshold were selected dynamically. Second, besides the improvements on the three widely used tasks, we believe that there are other abilities of Seq2Seq models, such as code generation, that could be improved by our method but are not fully explored in this work.
Ethics Statement
We take ethical considerations very seriously and strictly adhere to the ACL Ethics Policy. This paper focuses on effective training for sequence-tosequence learning. The datasets used in this paper are publicly available and have been widely adopted by researchers. We ensure that the findings and conclusions of this paper are reported accurately and objectively.
Ablation Study
Metric. In this work, we use a loss-based metric to dynamically select the hard-to-learn tokens. To validate the effectiveness of the metric, we use a simple adaptive training method ("+ ADD") that adds 1 to the loss weighting term of the hard-to-learn tokens. The results on WMT16 En-Ro are shown in Table 9: the simple ADD method achieves a +0.3 BLEU improvement over the baseline model, which proves that our proposed self-questioning stage indeed mines informative difficult tokens. Also, we can observe that learning these dynamic difficult tokens with our SE framework ("+ SE") outperforms "+ ADD" by +0.6 BLEU points, demonstrating the superiority of our token-specific label smoothing approach. Learning objective. As stated in §2.1, our learning objective is the combination of the ground truth and the model's prediction. To validate the effectiveness of the predicted distribution, we conduct ablation experiments on WMT16 En-Ro and WMT14 En-De. The results in Table 10 show that adding the predicted distribution consistently improves the model's performance, which proves the effectiveness of the predicted distribution.
The last section of the paper.
A2. Did you discuss any potential risks of your work?
Left blank.
A3. Do the abstract and introduction summarize the paper's main claims?
The abstract and the introduction section.
A4. Have you used AI writing assistants when working on this paper?
Left blank.
B Did you use or create scientific artifacts?
Left blank.
B1. Did you cite the creators of artifacts you used? No response.
B2. Did you discuss the license or terms for use and / or distribution of any artifacts? No response.
B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? No response.
B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No response.
B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? No response.
B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
No response.
C Did you run computational experiments?
Left blank.
C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? No response.
The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance. | 2023-07-09T13:13:27.145Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "51f931afb9437c9512ae0058041e260d6f9ea27c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "51f931afb9437c9512ae0058041e260d6f9ea27c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
243813835 | pes2o/s2orc | v3-fos-license | The impacts of the lack of ergonomic vision in mining equipment design projects on the health of workers in the maintenance sector
This study aims to analyze the impacts of the lack of an ergonomic view in equipment design projects on the health of workers from the maintenance sector, in the mining industry context. To understand the operators' health, maintenance activities of three types of equipment were analyzed: pump, crusher, and sieve. The methodological strategy of Ergonomic Workplace Analysis (EWA) was used. Changes in these activities are required, since some postures adopted for their performance were considered severe and high-risk. Bearing weight above the shoulders and uncomfortable positions are conditions that result from failures in the equipment designs. To alleviate the problem, it is suggested to adopt innovative tools or to create new supportive devices to improve the working conditions of these maintenance professionals.
Introduction
The mining sector deals with several types of equipment whose designs come from a time when ergonomics was not a major concern (Jasiulewicz- Kaczmarek & Drozymer, 2013). The lack of ergonomic perspective in the design of the pieces of equipment, without the participation of the "conception actors" (Daniellou, 2007), provides working conditions that are harmful to the health of the maintenance teams. Thus, operators are exposed to noise, vibration, heat and muscle fatigue that generate serious ergonomic problems (Leite et al., 2003). Among them, Work-Related Musculoskeletal Disorders (WMSDs) and Repetitive Strain Injuries (RSI) are stand out (Bulduk et al., 2017;Nadri et al., 2015). Once conceived without an ergonomic perspective, the equipment maintenance acquire a decisive status in the operators' illness process.
Searches carried out in academic databases did not identify in the literature studies that relate the lack of an ergonomic perspective in the equipment design and the illness problems of maintenance teams, especially addressing the mining sector.
The studies found address this relationship in other industrial contexts (Jasiulewicz-Kaczmarek & Drozymer, 2013), but not focusing on mining equipment as this article does. On the other hand, studies focused on mining address the maintenance management and reliability analysis of these pieces of equipment (Fore & Msipha, 2012; Bascur & Kennedy, 2002), or the use of standards that address ergonomic issues of industrial equipment operators (McPhee, 2004), without mentioning this relationship. It is highlighted that, with automation, machine learning, and other technologies, many pieces of equipment have become autonomous, and man-machine interaction has become more evident in their maintenance. These changes lead to the need to consider the perception of the "design actors" during the design phase of the equipment, so that ergonomic teams can find measures that eliminate, or at least minimize, the negative ergonomic impacts on operators during the execution of their maintenance tasks.
Ergonomics applied to equipment design (Ishwarya & Rajkumara, 2020;Carballeda, 1997) aims to find a way to work focused on maintaining the operators' health and safety (Daniellou, 2004;Dejours & Deranty, 2010). Daniellou (2004) states that ergonomics does not see man as an adjustable variable. It is the equipment that must adjust, complement man's strengths and abilities, minimizing the negative effects of work on their health (Daniellou, 2007).
To enable the equipment design, Daniellou (2007) proposes the perspective of Future Activity Analysis (FAA). The FAA seeks to solve the ergonomist's difficulty in assessing risk situations, identifying existing situations (reference situations) that may clarify the conditions of future activity (Béguin, 2007;Daniellou & Garrigou, 1992). The FAA in equipment projects contributes positively to risk management and mitigates negative ergonomic impacts on the operators' health.
The lack of an ergonomic perspective in equipment design (Jasiulewicz-Kaczmarek & Drozymer, 2013) causes work-related illnesses to get worse and worse (Bulduk et al., 2017; Nadri et al., 2015). This worsening can be observed in maintenance activity in the mining industry, which includes a set of complex and risky tasks (Amalberti, 1996). This activity involves different equipment requiring task specialization to be carried out. Design flaws lead to operating modes in dissonance with the activity prescription, generating ergonomic risks due to the worker's exposure to postures and movements that push the human body beyond its limits (Daniellou et al., 1989).
Even with the increasing mechanization and automation of these complex activities, they still comprise a set of manual tasks (Huysamen et al., 2018; Nadri et al., 2015) that intensify the damage to workers' health. WMSDs, for example, were responsible for removing 349,050 individuals from the workforce worldwide in 2016, according to data released in 2018 by the Bureau of Labor Statistics. Besides WMSDs, other losses are present, such as exhaustion, work overload and absence from work (Paula et al., 2016; Nadri et al., 2015), herniated discs, back pain (Bulduk et al., 2017), varicose veins, among others.
In this context, this study aims to analyze the impacts of the lack of an ergonomic view during equipment design on workers' health from the maintenance sector in a mining industry. The search for understanding this issue is justified by the need to investigate the causes of sick absence from work due to WMSD arose from this lack of an ergonomic view during the design of these pieces of equipment.
For the development of this research, the Ergonomic Workplace Analysis (EWA) was used. EWA allows us to understand the work activity and the causes and progression of WMSDs (Sharan, 2012). Ergonomics focused on activity analysis seeks to improve working methods to promote changes in the conditions under which activities are performed (Laville, 2007; Montmollin, 2007; Guérin et al., 2001). The mobilization of the operators' knowledge in the work analysis, added to the ergonomist's knowledge, broadens the power to act on the work. Every intervention at work is a situation of reciprocal learning between the ergonomist and the operator, in which both build new knowledge (Lacomblez & Teiger, 2007).
The activities observation was guided by 11 out of the 37 design guidelines proposed by Mulder et al. (2012). As a supportive tool, the Rapid Upper Limb Assessment (RULA) was used to assess the risk of WMSD regarding postures and movements. Thus, it is expected that, from the observation of equipment design failures through these guidelines, it might be possible to identify the ergonomic impacts suffered by maintenance teams in the mining sector. The development of working conditions that focus on operators' health and safety and on the minimization of occupational diseases (Więcek-Janka, 2011; Cruz, 2010; Braga, 2007) in the studied context is also expected.
Studies show that in the impossibility of changing equipment designs, companies can use technological devices to mitigate this problem. The equipment design failures analysis can facilitate the development of these devices, contributing to the mitigation of maintenance impacts on operators' ergonomic issues. Among these devices, the use of robots to aid the execution of activities stands out (Huysamen et al., 2018), collaborative interaction activity between humans and robots (Figueredo et al., 2020;kim et al., 2020), wearable devices, sensors, and robotic technologies, including exoskeletons (De Looze et al., 2016), the use of virtual reality and augmented reality (Qian et al., 2018), and gamification (Nor et al., 2020;Olszewski et al., 2018). Therefore, the current research is set as a step towards the implementation of maintenance 4.0 in the studied context.
Methodology
EWA seeks to understand the tasks' performance to provide improvements in working methods and thus promote changes in the conditions to perform activities (Guérin, 2001). EWA seeks to assess the factors that can directly or indirectly affect the emerging of musculoskeletal diseases such as RSI and WMSD (Sharan, 2012).
Based on the concepts adopted to carry out an EWA, this study was divided into i) task analysis, ii) activity analysis and iii) recommendations (validations). The first steps correspond to data collection, namely: demand analysis, analysis of prescribed work and activity, respectively. The final steps focused on generating a diagnosis of failures in the equipment design that caused ergonomic problems. Finally, technological trends were presented to mitigate ergonomic problems arising from equipment design failures.
The task analysis stage began by collecting statistical data regarding sickness absence due to WMSD between the years 2018 and 2020. The data were obtained from the occupational medicine sector of the studied company. These data were compared to those from the Brazilian Department of Labor for the year 2018.
Also at this stage, to analyze the prescribed work, data from three pieces of equipment were analyzed: pumps, crushers and sieves, in order to relate the reasons for the absence from work with the deficiencies of projects in the mining equipment.
These pieces of equipment were chosen due to their importance for the production process. For data collection, meetings were held with the maintenance sectors of these three pieces of equipment to describe the steps to execute the prescribed tasks. As a baseline to perform the initial stages of the research, for both individual interviews and written questionnaires (Appendix A and B), the questions presented in Table 1 were used. In addition, a form was used to assess their perception regarding 11 out of the 37 guidelines for equipment design projects proposed by Mulder et al. (2012). With this, we understood the interviewees' profile, the absence reasons and their perception of the maintained equipment. Table 1. Information collected in the interviews and questionnaires.
Item | Description
Personal information | Age, weight, gender, education and years with the company
Assessment of prescribed and performed activities | Do you notice any ergonomic problems in the equipment design? If so, what are the equipment failures? What kind of discomfort do they cause while performing the activity? Does this lack of an ergonomic view impact the execution time of the activities? What would be the consequences of these design flaws on your health?
Source: Authors.
Sixty operators were interviewed (30 pump operators, 10 crusher operators and 20 sieve operators), 30 of whom belong to the 258 employees who were absent from work during the analyzed period. These interviews were conducted in the company's training room and each one lasted about 30 minutes.
The interviews were recorded and transcribed for analysis. The data were arranged in tables aimed at identifying the problems faced by the operators, their perception of risk and the respective defense strategies used to prevent accidents, injuries and other work-related illnesses. The activities were recorded by taking notes during the maintenance work, randomly documenting professionals who were either carrying out maintenance activities or developing preparation activities.
For the activity analysis stage, through monitoring and observation of maintenance activities, with photographic reports and notes, it was possible to obtain parameters from the operators' perspective in relation to the premises supported by Mulder et al. (2012), regarding equipment projects. We also analyzed: 1) postures and movements, 2) duration of activity cycles and 3) risk and severity of accidents.
In the recommendation stage, it was made the correlation between the identified ergonomic problems and the failures in the equipment design that lead to safety and health problems for operators. This stage consists in answering the research problem through the study objective, which is to analyze the impacts of the lack of an ergonomic view during the equipment design on workers' health from the maintenance sector in a mining industry.
At the end of this stage, the study addresses necessary ergonomic recommendations and possible technological devices to be implemented following innovation trends applied as best practices worldwide.
Statistical analysis on absences from work for musculoskeletal problems
In 2018, the Brazilian Department of Labor presented statistical data regarding absences from work that lasted more than 15 days, due either to accidents or to illness. Out of a total of 196,754 benefits granted for accidents and occupational illness, 63.45% are related to problems concerning the musculoskeletal system.
In the company studied, the figure came close to this value, at 61.98% in 2020. Thus, very similar behavior can be seen among the main reasons for absence from work. It is observed that both the company and the Brazilian Department of Labor present musculoskeletal system injuries as one of the main sources of work absences. Table 2 presents data from the occupational medicine department of the studied company regarding the reasons for absence from work due to musculoskeletal injuries. Between the years 2018 and 2020, it was possible to notice an increase in the annual average, peaking in 2020, with 61.98% of sickness absences related to the musculoskeletal system. Back pain injuries remained constant throughout the analyzed period, taking third place, while the problem "Other acquired member deformities" occupied first place in the years 2019 and 2020.
Prescribed task analysis
Understanding the maintenance dynamics of the studied pieces of equipment and their characteristics can help to understand how their design influences the ergonomic problems presented in this work.
➢ Centrifugal pumps are used to transport fluids by converting the kinetic energy of rotation to the hydrodynamic energy of the fluid flow. In their maintenance, due to their size, the connections with the discharge pipes are above the shoulder line. This implies the need to remain for a long time with arms raised, bearing the weight of the tools used in the disassembly of the piece, causing discomfort in the musculoskeletal system.
➢ Crushers are machines used to reduce the size of rocks and stones. For their maintenance, due to their dimensions, it is necessary to handle heavy parts and tools, which promote great efforts in the musculoskeletal system of maintenance teams.
➢ Sieves are equipment used in the process of classifying materials by their particle size. In their maintenance, it is necessary to get into the sieve, which promotes the need to perform tasks in a crouched position, causing discomfort in the musculoskeletal system.
In order to evaluate the maintenance task, the perception of 60 operators from the company regarding the assumptions for designing equipment projects adopted by Mulder et al. (2012) was analyzed. Regarding the demographic data of the interviewed sample, it was possible to observe that the interviewees are exclusively men, 40% are aged between 31 and 40 years.
Also, 40% of the interviewees have between 15 and 30 years in the company, 46.7% are between 1.70 and 1.75 m in height and 40% weigh between 75 and 85 kg. Regarding education, 60% of them have completed high school and 53.3% were absent for more than 10 days. Table 3 presents the results obtained through the assessment made by the interviewees regarding the maintainability, reliability and supportability of the pieces of equipment, according to the initial stages of the proposed EWA. Note. * Negative opinion averages (N). Source: Authors.
Given the results of the interviews, it is observed that the majority of the interviewees confirm the lack of ergonomic vision in the project design, making their activity of keeping the equipment working harmful to their health. The data obtained show that the Reliability criterion is the one that best represents the absence of ergonomic vision during equipment design. This criterion has a rejection of 80.0% for crushers, 93.3% for pumps and 73.3% for sieves. Among the pieces of equipment, activities on crushers were considered the most fragile in terms of ergonomics, with 75.0% rejection among operators.
Still within the initial stages of the proposed EWA, Table 4 presents the main difficulties of the maintenance task which pose risks to operators' health. These difficulties were raised using the RULA posture assessment tool. RULA was developed to assess workers' exposure to ergonomic risk factors associated with upper-limb WMSD. As an ergonomic assessment tool, RULA considers biomechanical and postural load, task requirements and work demands on the neck, trunk and upper extremities (Joshi & Deshpande, 2021). According to Table 4, in the interviewees' opinion, bearing tools above the shoulders was the most relevant difficulty for pumps, reported by 22% of the interviewees. For crushers and sieves, uncomfortable working positions represent the greatest difficulty, with 24% and 28% respectively. Understanding these problems provides a greater comprehension of the possible failures arising from the design of these pieces of equipment that cause ergonomic losses for the operators.
Maintenance Activity Analysis
To assess the maintenance activity, the EWA protocol was adapted using the RULA tool. This protocol consists of data collection through a checklist questionnaire and systematic observation at the place where the activities are performed. For this analysis, three items were evaluated: A) postures and movements; B) duration of activity repetition cycles; and C) accident risk and severity. The verification sheets address the criteria evaluated in relation to the three items, which received scores ranging from 1 to 5, with 5 being the most critical grade for ergonomic problems and 1 the lightest.
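As a rough illustration of how such checklist scores can be aggregated once collected, the short sketch below flags every item whose score reaches the "needs change" range; the data structure, the example values and the threshold of 4 are our own illustrative assumptions, loosely based on the results reported in the following subsections.

```python
# Illustrative aggregation of EWA checklist scores (1 = lightest, 5 = most critical).
# The scores below are placeholder values, not the study's measured data.
scores = {
    "pump":    {"back": 4, "hip_legs": 4, "cycle_duration": 3, "risk": 4, "severity": 4},
    "crusher": {"back": 4, "hip_legs": 4, "cycle_duration": 3, "risk": 4, "severity": 4},
    "sieve":   {"back": 3, "hip_legs": 3, "cycle_duration": 2, "risk": 4, "severity": 4},
}

NEEDS_CHANGE = 4  # assumed threshold: items scored 4 or 5 call for changes in the activity

for equipment, items in scores.items():
    flagged = [name for name, score in items.items() if score >= NEEDS_CHANGE]
    print(f"{equipment}: change required for {', '.join(flagged) if flagged else 'no items'}")
```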
Posture and movements
Working postures refer to the positions of the neck, arms, back, hips and legs while working. Work movements are the body movements required to work. Postures and movements were evaluated during maintenance activities for the three pieces of equipment. This evaluation aims to show whether there is a relationship between the lack of ergonomic vision during equipment design and the postures and work movements to be performed. Within the initial stages of the proposed EWA, Table 5 presents the matrix of movement and posture classification for workplaces.
Analyzing Table 5, it was observed that the Back and Hip-Legs items received a score of 4, both for the pumps and for the crusher. This score points to the need to make changes in the activity. These equipment maintenance conditions create moments of tension and effort, which can contribute to injuries and illnesses for these operators.
Duration of activity repetition cycles
The work cycle is evaluated only in those activities where the task is continuously repeated; it is determined by the average duration of a repetitive work cycle, measured from the beginning to the end of the cycle. Table 6 presents a classification regarding the duration of the repetitive cycle of the analyzed activities. In this regard, the assessment was rated between 2 and 3, pointing to the need for changes. It is noteworthy that repetitiveness under time pressure is the biggest determinant of illness at work (Fernandes et al., 2010).
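For illustration only, the sketch below computes the average duration of a repetitive work cycle from observed start and end times, as described above; the timestamps are hypothetical and the helper function is introduced here, not part of the EWA protocol.

```python
# Minimal sketch: average duration of a repetitive work cycle from observed
# (start, end) timestamps. The data below are hypothetical examples.
from datetime import datetime

def average_cycle_seconds(cycles: list[tuple[datetime, datetime]]) -> float:
    """Average duration, in seconds, of the observed work cycles."""
    durations = [(end - start).total_seconds() for start, end in cycles]
    return sum(durations) / len(durations)

observed = [
    (datetime(2021, 5, 3, 8, 0, 0), datetime(2021, 5, 3, 8, 4, 30)),
    (datetime(2021, 5, 3, 8, 5, 0), datetime(2021, 5, 3, 8, 9, 45)),
]
print(f"Average cycle duration: {average_cycle_seconds(observed):.0f} s")
```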
Accident risks and severity
Accident risks are related to the frequency of occurrence of the event. Severity is characterized by the magnitude of the effect: if the accident occurs, the severity is measured in terms of the operator's time of absence. Following the steps of the proposed EWA, Table 7 presents the scores for the two criteria, risk and severity, each rated on four levels:
Risk "small": the worker can avoid accidents by employing normal safety procedures; no more than one accident occurs every five years. Severity "light": does not cause more than one day of absence from work.
Risk "medium": the worker avoids the accident by following special instructions and being more careful and vigilant than usual; about one accident might occur per year. Severity "small": causes less than a week of absence from work.
Risk "severe" (X for pumps, crushers and sieves): the worker avoids the accident by being extremely careful and following safety regulations exactly; the risk is apparent, and an accident might occur every three months. Severity "serious" (X for pumps, crushers and sieves): causes a month of absence from work.
Risk "very severe": the worker avoids the accident by being extremely careful and following safety regulations exactly; the risk is apparent, and an accident might occur every three months. Severity "very serious": causes at least six months of absence from work or permanent disability.
Note. Adapted from Ahonen et al. (1989). (X) score given by the respondents during the evaluation process. Source: Authors.
In terms of risk and severity, the assessment was considered severe and serious, respectively. An assessment in this high range points to the need for change. With the consolidation of the evaluated data, the examiner's observation confirmed that problems related to the musculoskeletal system are the most evident ones. It was also observed that the activities pose risks to the health and integrity of the teams involved.
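As a small illustration of the severity scale in Table 7, the sketch below classifies an accident by the operator's expected time of absence. The function and the exact day boundaries are assumptions introduced here where the table leaves ranges implicit (between one month and six months).

```python
# Minimal sketch (illustrative only): classifying accident severity by the
# operator's time of absence, following the categories of Table 7. Boundaries
# not stated explicitly in the table are assumed.

def severity_category(absence_days: int) -> str:
    if absence_days <= 1:
        return "light"         # no more than one day of absence
    if absence_days < 7:
        return "small"         # less than a week of absence
    if absence_days < 180:
        return "serious"       # around a month of absence (upper bound assumed)
    return "very serious"      # at least six months, or permanent disability

print(severity_category(30))  # -> "serious", the level marked for all three pieces of equipment
```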
Failures Diagnosis in the Equipment Design that Cause Ergonomic Problems
As proposed by the EWA, through observation of the activity and application of the RULA posture assessment tool, the data were organized to present the difficulties in performing the tasks on the three pieces of equipment (Table 8). Table 8. Perception of tasks found in maintenance activities on the researched equipment.
Difficulties in the activity, by piece of equipment:

Bear tools away from the body
Pumps: due to the position of the suction reel (between the pump casing and the slurry tank), it is necessary to keep the arms extended during its disassembly.
Crushers: due to the several interferences existing in the hydraulic system of the crusher shaft drive, it is not possible to use the tools close to the body during the entire maintenance period.
Sieves: the drive shaft makes it impossible for the arms to be close to the fastening screws, making it necessary to keep the arms extended while removing the bearing.

Bear tools above the shoulders
Pumps: the casing dimensions (with sizes greater than 14x12") mean that parts and attachments are located above shoulder height.
Crushers: during crusher crankcase maintenance, many of the parts and fixtures are assembled above the shoulder line.
Sieves: in casing repair operations, to reach the upper sides it is necessary to hold the sander and electrode holder above the shoulder line.

Bear the weight of tools and components
Pumps: to disassemble the pumps it is necessary to use tools such as pneumatic screwdrivers that weigh 25 kg on average.
Crushers: to disassemble the crushers it is necessary to use tools such as pneumatic screwdrivers that weigh 25 kg on average, as well as 36" wrenches and 500 kg hoists, which weigh 7 kg and 10 kg respectively.
Sieves: in the reforms and repairs of the internal structure of the sieves, since the operator must be inside it, below the level of fixation of the screens, it is not possible to use overhead cranes; the tools and parts used in this operation are therefore held by human strength.

Difficult accessibility to maintenance points
Pumps: to disassemble the pump suction it is necessary to stand between the slurry tank, the suction valve, the flanges and the pump casing, leaving limited space to move.
Crushers: as the crusher hydraulic system contains many oil and water pipes, it is impossible to maintain an upright posture to reach some parts; the operator, "working sideways", exposes his spine to mechanical stress.
Sieves: for repairs to the main pipe and channels it is necessary to get inside the sieve; the operator works lying down and is limited in his movements.

Uncomfortable working position
Pumps: to disassemble parts such as the casing, rotor, volute wear plate and pipes, it is necessary to remain standing or crouched during these activities.
Crushers: to disassemble parts such as the hydraulic system, coating and axle, among others, it is necessary to remain standing or crouched during these activities.
Sieves: in the maintenance of the internal components of the sieves, such as the main pipe, channel, fastening rulers and screens, it is necessary to get inside the sieve; the working position is lying down or crouched.

Moving heavy components and oxyfuel cutting assemblies with human force
Replaced parts and oxyfuel cut sets are transported using human force, pushing a platform cart for more than 100 m.
Source: Authors.
According to the data collected from observing maintenance activities on the three studied pieces of equipment (Table 8), it is possible to see how the bodies of the maintenance teams interact with the equipment being maintained.
This makes it possible to identify postures that cause effort or discomfort to the musculoskeletal system. Among the inappropriate postures observed are crouching; standing for a long time; keeping the arms raised above shoulder level; and using human force to transport parts, among others. These postures stem from failures in the design of this equipment, which did not take the future activity into account in order to anticipate such situations and outline mitigation measures.
The search for solutions that provide more safety for workers in the mining industry, from the point of view of ergonomic maintenance, presupposes the mobilization of professional design skills, which may involve technical areas (engineering, design, information technology, etc.) as well as the organizational structure. In this case study, it can be seen that the lack of "design actors", as Daniellou (2007) calls them, in the designs of the equipment involved in the study leads to risky situations for the safety and health of the workers involved in their maintenance. Reports of the difficulties encountered by these professionals during the execution of their activities demonstrate their dissatisfaction. Reports such as "at the end of the day my legs are sore" (Operator 01)¹, "it is strange how, after a while with the arms up, the equipment part seems to get heavier" (Operator 02)¹, "after being crouched for a long time, when I get up I get dizzy" (Operator 03)¹ and "the jolt that this platform cart causes in the arm, if we get distracted it may hurt us" (Operator 04)¹ were recurrent during the interviews.
The findings show that, even with studies aimed at improving the ergonomic conditions of maintenance activities, there is still a gap in studies addressing mining equipment. These pieces of equipment were conceived in the 20th century and have proven difficult to modify. The use of human force as a form of traction for the vehicles that transport parts and oxyfuel cut assemblies, for example, is still part of the routine of these professionals. Thus, they need to adapt to these conditions, creating their own strategies or being affected by the occupational diseases to which they are exposed.
Technological Trends to Mitigate Ergonomic Problems Arising from Equipment Design Failures
The study shows that, due to the physical characteristics of the pieces of equipment, which are large and heavy, great mechanical effort is required during maintenance activities. As stated in the study, these pieces of equipment, conceived in the 20th century, have shown little design evolution. The evolutions they received were mainly restricted to the use of more durable materials or improvements to promote greater operational performance. However, no changes were observed that address this lack of an ergonomic point of view in the equipment design, as advocated by the concept of design actors (Daniellou, 2007). Thus, what is verified here is the pressing need to "remedy" the problem: working reactively, creating devices that help operators eliminate situations that are harmful to their health, or at least reduce their exposure to these agents. Given the difficulties found by operators, the use of new technologies is suggested as a mitigating action. With technological advancements in the human-robot relationship, which aim to safeguard the worker's well-being while optimizing the system's productivity and performance, the exoskeleton stands out as a successful innovation (Figueredo et al., 2020; Kim et al., 2020; Spada et al., 2017).
Physical assistance projects for activities with robots, cobots or wearable assistants, such as exoskeletons or assistive technology devices for physical disabilities, are already a reality (Giovanelli & Vareille, 2018). According to Viteckova et al. (2013), commercially available exoskeletons have predominantly been used for rehabilitation purposes. These devices are made to support and assist physically weak, injured or disabled people with prescribed exercises and activities. De Looze et al. (2016) highlight the importance of demonstrating the effectiveness and safety of these technologies in order to support their uptake by industry.
Fixed- or mobile-base manipulators, already widespread in the automotive sector, can also be mentioned as a way of solving the anti-ergonomic weight-bearing conditions to which these operators are subjected (Katz et al., 2006). The application of Automatically Guided Vehicles (AGV) to transport parts is also a reality in other industrial sectors and can be a solution for eliminating human force in the activity of transporting tools and parts (Das, 2016). There are also mechanical weight-cancelling devices that, through reactions by springs or counterweights, are capable of eliminating weight bearing by humans (Yamada et al., 2011).
Therefore, the technologies presented can be a way to remedy the ergonomic deficiencies found in the designs of the equipment studied in this article, contributing to improving mining operators' health. According to the final stage of the proposed EWA, and given the trends mentioned for mitigating the reported problems, the study points to some technologies capable of mitigating the negative effects on operators' health during the maintenance activity (Table 9). Table 9. Trend suggestions as a mitigation solution for ergonomic risks.
Use of human force to move components: supportive vehicles (Das, 2016).
It can be seen from Table 9 that these technologies can be applied with the intention of improving ergonomic conditions in the maintenance of the three pieces of equipment in the study. Exoskeletons, manipulator arms and weight cancellers can all help to address weight-bearing problems and uncomfortable positions for the human body. Finally, robots can replace humans in hard-to-reach places, and supportive vehicles eliminate the use of human force to transport parts and tools.
Conclusions
This research aimed to assess the impacts of the lack of an ergonomic perspective in equipment design on maintenance operators' health, in a mining industry context. For Bolis et al. (2020), one of the main goals of ergonomics is to transform work processes through a deeper understanding and involvement of workers.
To gather data about ergonomic factors of the maintenance activity, a survey questionnaire was conducted among workers in the mining maintenance sector. This approach allowed workers to participate in the improvement, as they are the main focus of this study.
During the data analysis phase, there was a convergence between data from the Brazilian Department of Labor and the company studied. According to data from the Brazilian Department of Labor, out of 196,754 hours granted due to accidents and illness at work, 63.45% refer to problems related to the musculoskeletal system. In the company studied, the figure was close to this value, at 61.98%.
According to the results of the prescribed task analysis, bearing tools above the shoulders was considered the greatest difficulty by 22% of pump operators. For crushers and sieves, the biggest difficulty was the uncomfortable working position, reported by 24% and 28%, respectively.
In the activity analysis phase using the RULA tool, the Back and Hip-Legs items were found to require changes (both received a score of 4 for the pumps and the crusher). According to the operators' evaluation, 63.7% agree that there is a problem in keeping the maintenance capability, another 82.2% reject the reliability premise, and 61.7% see a lack of ergonomic view in the supportability-for-maintenance item.
These huge pieces of equipment were conceived in an age when they were designed to be robust and to last for many years. Given the difficulty of modifying the designs of the studied pieces of equipment, the use of technological solutions such as exoskeletons, collaborative robots, weight cancellers and autonomous supportive vehicles for transporting parts and tools is suggested. These solutions seek to reduce the negative impacts of equipment design failures on operators' health. They are also able to support weight or promote rest for the operators' musculoskeletal system in crouched tasks. The use of virtual reality also represents a way to train operators in the prescribed tasks.
Even with all the technological advancements and the coming of Industry 4.0, it is clear that the maintenance sector has cutting-edge technology while still relying on equipment from the age of the first industrial revolution. Tools such as hammers, levers and wrenches, among many other manual implements, still contrast with the technological advancements present in industrial maintenance 4.0. Added to this, the lack of space for the operator to position himself during maintenance, the lack of ergonomic positions to eliminate conditions harmful to health, and the weight of parts and components lead to a worrying situation in this sector.
The research gave rise to new theoretical discussions, such as mining equipment obsolescence from the point of view of Daniellou's (2007) design actors. It also allowed an initial proposal on how companies can improve, in practice, the synergy between ergonomics and equipment design deficiencies through the insertion of risk mitigation devices. Hence, the current research offers a diagnosis of the situation and potential for further studies, project reviews or the creation of supportive devices to improve ergonomic requirements.
As the pieces of equipment studied in the research are used in all mining companies, the research was limited to observing a single company, comparing its reality with data from the Brazilian Department of Labor and Employment.
Despite this, it can be observed that this problem is experienced by most companies in this sector. Another limitation is that the technologies presented here are not yet a reality in this sector and therefore require empirical validation.
For future research, based both on this article and on the literature used, greater involvement of ergonomics in the industrial equipment design stages is suggested. In this context, a new industry emerges, one that creates devices that facilitate maintenance and provide safety, ergonomics and productivity to operations. Studying equipment design failures and proposing supportive solutions through devices or accessories is presented as a promising gap for further studies. The study of these solutions can improve these professionals' health as well as the operational performance of these industrial processes.
These research lines are important for society, since the company data corroborate data from the Brazilian Department of Labor and Employment. Furthermore, all mining companies use the same pieces of equipment. Future studies with more companies from the same field will be able to verify the impacts of the lack of ergonomic aspects in mining equipment designs and whether the same problems mentioned here prevail.
"year": 2021,
"sha1": "36c1f850e51eab4e0aa0d689ce32caf89c55426e",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/21933/19596",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8487e226fa1903b7882422b2888c269440ec1a1d",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.