| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
119315424 | pes2o/s2orc | v3-fos-license | On Div-Curl for Higher Order
We present new examples of complexes of differential operators of order $k$ (any given positive integer) that satisfy div-curl and/or $L^1$-duality estimates.
Introduction
In 2004 Stein and the first named author discovered a connection [LS] between the celebrated Gagliardo-Nirenberg inequality [G]-[N] for functions

(1) $\|f\|_{L^r(\mathbb{R}^n)} \le C\,\|\nabla f\|_{L^1(\mathbb{R}^n)}$, $r = n/(n-1)$,

and a recent estimate of Bourgain and Brezis [BB2] for divergence-free vector fields, as proved by Van Schaftingen [VS1]:

(2) $\|Z\|_{L^r(\mathbb{R}^n)} \le C\,\|\operatorname{Curl} Z\|_{L^1(\mathbb{R}^n)}$, $r = n/(n-1)$, $\operatorname{div} Z = 0$.

Such a connection is provided by the exterior derivative operator (3), acting on differential forms on $\mathbb{R}^n$ with (say) smooth and compactly supported coefficients. It was proved in [LS] that the inequality

(4) $\|u\|_{L^r(\mathbb{R}^n)} \le C\,\big(\|du\|_{L^1(\mathbb{R}^n)} + \|d^*u\|_{L^1(\mathbb{R}^n)}\big)$, $r = n/(n-1)$,

holds for any form $u$ of degree $q$ other than $q = 1$ (unless $d^*u = 0$) and $q = n-1$ (unless $du = 0$). Note that (1) is the case $q = 0$, whereas (2) is the case $q = 1$ specialized to $d^*u = 0$.
Since those earlier results, div/curl-type phenomena have been studied both in the Euclidean and non-Euclidean settings [Am], [BV], [HP1], [HP2], [M], [MM], [Mi], [VS4], [CV], [Y]. In [VS2] and the recent works [BB3], [VS3], [VS5], differential conditions of higher order have been considered for the first time in such context. (By contrast, the exterior derivative in (3) is defined in terms of differential conditions of order 1.) The goal of the present paper is to produce a new class of differential operators of order $k$ (where $k$ is any given positive integer) that satisfy an appropriate analogue of (4) and contain the operators introduced in [BB3], [VS2] and [VS3]; since the conditions play an important role in the proof of (4), the new operators should satisfy (5) as well. We achieve this goal in a number of ways, beginning with:

Theorem 1.1. If $u \in C^\infty_q(\mathbb{R}^n)$ has compact support, then

(6) $\|u\|_{W^{k-1,r}} \le C\,\big(\|Tu\|_{L^1} + \|T^*u\|_{L^1}\big)$, $r = n/(n-1)$,

whenever $q$ is neither 1 (unless $T^*u = 0$) nor $n-1$ (unless $Tu = 0$), where

(7) $Tu := \displaystyle\sum_{|L|=q+1}\ \sum_{|I|=q}\ \sum_{j=1}^{n} \epsilon^{jI}_{L}\, \frac{\partial^k u_I}{\partial x_j^k}\, dx_L.$

Here and in the sequel, $W^{a,p}(\mathbb{R}^n)$ denotes the Sobolev space consisting of $a$-times differentiable functions in the Lebesgue class $L^p(\mathbb{R}^n)$ (and $W^{a,p}_q(\mathbb{R}^n)$ will denote the space of $q$-forms with coefficients in $W^{a,p}(\mathbb{R}^n)$), while $\epsilon^{AB}_{C} \in \{-1, 0, +1\}$ is the sign of the permutation that carries the ordered set $AB = \{a_1, \ldots, a_\ell, b_1, \ldots, b_q\}$ to the label $C = (c_1, \ldots, c_{\ell+q})$, if these have identical content, and is otherwise zero. Note that when $k = 1$ then $T = d$ and inequality (6) is indeed (4).
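To spell out the final remark, here is the $k = 1$ computation (a short verification using the sign convention for $\epsilon^{AB}_{C}$ just defined):

$$Tu \;=\; \sum_{|L|=q+1}\;\sum_{|I|=q}\;\sum_{j=1}^{n} \epsilon^{jI}_{L}\,\frac{\partial^{k} u_I}{\partial x_j^{k}}\, dx_L \;\overset{k=1}{=}\; \sum_{|I|=q}\sum_{j=1}^{n} \frac{\partial u_I}{\partial x_j}\, dx_j \wedge dx_I \;=\; du,$$

since for each fixed $j$ and $I$ the only non-zero sign $\epsilon^{jI}_{L}$ occurs for the label $L$ with $\{L\} = \{j\} \cup \{I\}$, and then $\epsilon^{jI}_{L}\, dx_L = dx_j \wedge dx_I$.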
Another such complex, again involving a differential condition of order $k \ge 1$, is obtained by embedding $\mathbb{R}^n$ isometrically in a larger space $\mathbb{R}^N$. (The choice of "inflated" dimension $N$ will be discussed later.) The resulting operators act on "hybrid $\mathbb{R}^n$-to-$\mathbb{R}^N$" spaces of forms whose coefficients are trivial extensions to $\mathbb{R}^N$ of functions defined on $\mathbb{R}^n$; to distinguish such spaces from the classical Sobolev spaces $W^{a,p}_q(\mathbb{R}^N)$ (to which they are by necessity transversal) we will use the notation $\widetilde W^{a,p}_q(\mathbb{R}^N)$, $0 \le q \le N$, to indicate a dense subspace of smooth "compactly supported" forms. These operators, which we denote $T_{1,\aleph}$, map $q$-forms to $(q+1)$-forms in these hybrid spaces. The label $\aleph$ refers to a choice of an ordering for the set of all $k$-th order derivatives in $\mathbb{R}^n$, and so in practice we define a finite family $\{T_{1,\aleph}\}_\aleph$ of such complexes. (We use the subscript "1" in $T_{1,\aleph}$ to specify that $T_{1,\aleph}$ maps $q$-forms to $(q+1)$-forms, a distinction that will be needed later on.) The explicit definition of $T_{1,\aleph}$ will be given in the next section; what matters here is that these operators satisfy a more general version of (6), in the sense that inequality (8) below implies (6) but the converse is not true, and (8) holds whenever $q$ is neither 1 (unless $T^*_{1,\aleph} U = 0$) nor $N-1$ (unless $T_{1,\aleph} U = 0$). Theorem 1.1 recaptures an $L^1$-duality estimate of Bourgain and Brezis in which the constant $C$ only depends on the dimension $n$ of the space and on the order $k$.
On the other hand, Theorem 1.2 was motivated by a recent result of van Schaftingen ([VS3]): given $k \ge 1$ and $n \ge 2$, if the stated $k$-th order differential constraint holds in the sense of distributions, then the corresponding $L^1$-duality estimate holds, where the constant $C$ only depends on the dimension $n$ of the space and on the order $k$.
Here $S(n, k)$ denotes the set of $k$-multi-indices in $\mathbb{R}^n$. A key ingredient in the proof of Theorems 1.1 and 1.2 is the fact that the Hodge Laplacians for these operators satisfy a uniform Legendre-Hadamard condition, which in turn yields elliptic estimates.
Rather surprisingly, it turns out that in fact there is a larger class of such operators, mapping $q$-forms to $(q+\ell)$-forms: the label $\ell$ now runs over all the elements of what we call the set of admissible degree increments, which is a subset of $\{1, \ldots, k\}$ determined by $n$ (the dimension of the source space) and $k$ (the order of differentiation). For any $n \ge 2$ and $k \ge 2$, the set of admissible degree increments contains at least two distinct elements: $\ell = 1$ (discussed earlier) and also $\ell = k$. Each admissible degree increment in turn determines an "inflated dimension" $N$ (in particular $N$ will change with $\ell$). However the situation for $\ell \ne 1$ differs from the case $\ell = 1$ in two important respects: the crucial condition (5) will hold only for odd $\ell$, and if $\ell \ne 1$ the Hodge Laplacian for $T_{\ell,\aleph}$ will fail to be uniformly elliptic (even for $\ell$ odd); as a result there is no analog of (6). Instead, we show that for any admissible degree increment (thus also for $\ell \ne 1$), the operators $T_{\ell,\aleph}$ satisfy $L^1$-duality estimates that are similar in spirit, and indeed are equivalent, to (13); see Theorem 2.3 for the precise statement.
A further class of operators, which contains our very first example $T$ (see (7)), can be defined in terms of $T_{\ell,\aleph}$ and of the aforementioned embedding $\mathbb{R}^n \to \mathbb{R}^N$. Such operators map $q$-forms on $\mathbb{R}^n$ to $(q+\ell)$-forms on $\mathbb{R}^n$ and satisfy div-curl and/or $L^1$-duality estimates that are stated solely in terms of the source space $\mathbb{R}^n$ rather than the "hybrid $\mathbb{R}^n$-to-$\mathbb{R}^N$" spaces $\widetilde L^p_q(\mathbb{R}^N)$ and $\widetilde W^{a,p}_q(\mathbb{R}^N)$; see Theorem 2.4 and (77). (Of course, if $\ell \ne 1$ such operators are non-trivial only for $n \ge \ell$.) We need to explain the reason for our choice to keep track, through the label $\aleph$, of the orderings of $S(n, k)$: this has to do with the notion of invariance. One would like to know whether the identity (15) holds for any $F \in C^\infty_q(\mathbb{R}^N)$ and for some non-trivial class of diffeomorphisms $\Psi : \mathbb{R}^N \to \mathbb{R}^N$ of class $C^{k+1}$: it is in this context that the choice of $\aleph$ may be relevant. In the case $k = 1$ our construction gives $N = n$ with $\aleph$ spanning the set of all permutations of $\{1, \ldots, n\}$, and since $k$ is 1 there is only the admissible degree increment $\ell = 1$. In particular, for $k = 1$ one has $T_{1,\aleph_0} = T = d$ for exactly one permutation $\aleph_0$ (the identity), which therefore determines an invariant operator. On the other hand it is easy to check that for any $\aleph \ne \aleph_0$ the operators $T_{1,\aleph}$ fail to be invariant.
No such phenomenon exists for $k \ge 2$: there is no choice of $\aleph$ (nor $\ell$) that makes $T_{\ell,\aleph}$ invariant, and (15) fails even in the case when $\Psi$ originates from a rotation of $\mathbb{R}^n$. It can be verified that $T_{\ell,\aleph}$, too, is not invariant, because if $k \ge 2$ the identity

(16) $T_{\ell,\aleph}\, \psi^* = \psi^*\, T_{\ell,\aleph}$

fails for any $\ell$ and for any $\aleph$, already for $\psi$ a rotation of $\mathbb{R}^n$.
Finally, we point out that our results can be rephrased in terms of the canceling and cocanceling conditions of [VS4]: within that framework, our results provide new classes of differential operators of arbitrary order that are canceling and/or cocanceling, with the size of the admissible degree increments acting as an indicator of the canceling property. See the remarks in Section 4. This paper is organized as follows: in Section 2 we introduce the notion of admissible degree increment, we describe the "hybrid $\mathbb{R}^n$-to-$\mathbb{R}^N$" Sobolev spaces $\widetilde W^{a,p}_q(\mathbb{R}^N)$ in terms of the embedding, and we define the operators $T_{\ell,\aleph}$ on the hybrid spaces, together with their counterparts on $\mathbb{R}^n$, and discuss their basic properties (adjoints; uniform ellipticity). The $L^1$-duality estimates for the two families are stated in Theorems 2.3 and 2.4, and the precise statements of (8) and of (6) are given in Theorem 2.8 and in (77). All the proofs are deferred to Section 3. Section 4 contains some remarks and a few questions.
1.1. Notation. As customary, we let $\Lambda^q(\mathbb{R}^n)$ denote the space of $q$-forms, where $I(n, q)$ denotes the set of $q$-labels for $\mathbb{R}^n$:

(18) $I(n, q) = \{\, I = (i_1, \ldots, i_q) \mid i_t \in \{1, \ldots, n\},\ i_t < i_{t+1} \,\}$

and $dx_I = dx_{i_1} \wedge \cdots \wedge dx_{i_q}$. When $q = n$ the expression above is the volume form and we use the notation $dV$. We will regard the label set $I(n, q)$ as canonically ordered (alphabetical ordering). Letting $i : \mathbb{R}^n \to \mathbb{R}^N$ denote the isometric embedding mentioned above and defined in (26), the "hybrid $\mathbb{R}^n$-to-$\mathbb{R}^N$" subspace of $\Lambda^q(\mathbb{R}^N)$ (consisting of those $q$-forms whose coefficients are trivial extensions to $\mathbb{R}^N$ of functions defined on $\mathbb{R}^n$) is more precisely described in (19) below; as a result the reverse composition $\pi \circ i$ is the identity on $\mathbb{R}^n$. We will denote the Hodge-star operators for each of $\Lambda^q(\mathbb{R}^n)$ and $\Lambda^q(\mathbb{R}^N)$ respectively by $*_n$ and $*_N$.
Statements
2.1. Admissible degree increments. Given three integers: i. $n \ge 2$ (the dimension of the source space), ii. $k \ge 1$ (the order of the differential condition), and iii. $\ell \ge 1$ (the candidate degree increment), we say that $\ell$ is an admissible degree increment for the pair $(n, k)$ if and only if the polynomial equation (23) has a solution $N$ satisfying the two conditions in (24). Note that the pair $(n, 1)$ (that is, $k = 1$) has exactly one admissible degree increment, namely $\ell = 1$, and in this case equation (23) has the unique solution $N = n$. On the other hand, for $k \ge 2$ any pair $(n, k)$ will have at least two admissible degree increments ($\ell = 1, k$) and possibly more; for instance, the pair $(n, k) = (2, 9)$ has (exactly) four admissible degree increments, namely $\ell = 1, 2, 3, 9$; similarly, the pair $(n, k) := (2, 29)$ has (at least) $\ell = 1, 2, 29$. For any admissible degree increment, we consider the embedding (26), where $N = N(n, k, \ell)$ is as in (23) and (24). We let $i$ also denote the embedding of $k$-multi-indices $i : S(n, k) \to S(N, k)$ that is canonically induced by (26), namely

(27) $i(\alpha_1, \ldots, \alpha_n) := (\alpha_1, \ldots, \alpha_n, 0, \ldots, 0) \in S(N, k)$,

and adopt the notation (28). We have (11), and so there are $m!$-many distinct orderings of $iS(n, k)$. By the definition of $N$ the set of labels $I(N, \ell)$ also has cardinality $m$, and we will think of each ordering of $iS(n, k)$ as a one-to-one correspondence (30).

2.2. Hybrid function spaces. Given an integer $a \ge 0$ and given $p, p' \ge 1$ such that $1/p + 1/p' = 1$, we first set (for $q = 0$) the scalar case, where $i$ is as in (26) and

(32) $\pi(z_1, \ldots, z_n, \ldots, z_N) = (x_1, \ldots, x_n) := (z_1, \ldots, z_n)$

satisfies (20), which in turn grants, for any $1 \le s \le a$ and for any $\lambda \in S(N, s) \setminus iS(n, s)$, the vanishing of the corresponding derivatives, so that these spaces are more precisely described as in (34). As customary, these definitions are extended to forms for any $I \in I(N, q)$. We observe for future reference that identity (20) grants, for any $\beta \in S(n, s)$ and for any $F \in \widetilde W^{a,p}(\mathbb{R}^N)$, a commutation identity that will be used later.
Lemma 2.1. For any $0 \le q \le N$, for any $p \ge 1$ and for any integer $a \ge 1$ the following properties hold [A], provided the following conditions are satisfied: ii. For any $0 \le s < \infty$ and for each $\beta \in S(n, s)$ we have There exists a locally convex topology on the vector space $\widetilde C^{\infty,c}_q(\mathbb{R}^N)$ with respect to which a linear functional $L$ is continuous if, and only if, it is identified (in the usual fashion, see e.g. [A]) with a closed subspace of the Cartesian product $\binom{n-1+j}{j}$ and from this it follows that for any $F \in \widetilde W^{a,p}_q(\mathbb{R}^N)$ and [A]. Note that the spaces $\widetilde L^r_q(\mathbb{R}^N)$ and $L^r_q(\mathbb{R}^N)$ are transversal; the same is true for $\widetilde W^{a,p}_q(\mathbb{R}^N)$ and $W^{a,p}_q(\mathbb{R}^N)$ and for the respective dual spaces.
2.3. Operators and their adjoints. For $\aleph$ as in (30) and for any admissible degree increment $\ell$, we define a $k$th-order differential condition via the action (43), where $N$ is as in (23) and (24) and $q \in \{0, 1, \ldots, N\}$. Here $i\alpha$ is as in (28) and $\aleph$ is the correspondence (30). This action produces a differential operator $T_{\ell,\aleph}$ that maps $q$-forms to $(q+\ell)$-forms in the hybrid spaces. It follows from (35) that the action (43) also determines a second operator; identity (20) grants that the pullback by $\pi$ maps the corresponding spaces into one another (see (32)). On the other hand, it is immediate to check the corresponding mapping property directly. On account of these observations we see that the action (43) produces a third operator $T_{\ell,\aleph}$ that maps $q$-forms on $\mathbb{R}^n$ to $(q+\ell)$-forms on $\mathbb{R}^n$. Note that $T_{\ell,\aleph}$ acts non-trivially only when (48) holds. Condition (48) may be viewed in two different ways: as a constraint on the size of the degree increment $\ell$ relative to the pair $(n, k)$ (however, note that (48) is satisfied by $\ell = 1$ for any pair $(n, k)$) or as a constraint on the size of $n$ relative to $k$ (and in this case, imposing the constraint $n \ge k$ ensures that (48) holds for all admissible degree increments). In the following, $\langle \cdot, \cdot \rangle$ denotes the duality pairing in $W^{p,2}_q(\mathbb{R}^N)$ (resp. $W^{p,2}_q(\mathbb{R}^n)$).
Proposition 2.2. Let ℓ be an admissible degree increment for (n, k).
in the sense of distributions, then the stated estimate holds. For any $\ell \le p \le N$ and for any $G \in L^1$, the companion estimate holds as well. The constant $C$ depends only on $n$ and $k$.
For any $0 \le q \le n - \ell$ and for any $f \in L^1$ such that $T_{\ell,\aleph} f = 0$ in the sense of distributions, the stated estimate holds. For any $\ell \le p \le n$ and for any $g \in L^1$, the companion estimate holds as well. The constant $C$ depends only on $n$ and $k$.
A similar computation shows that the same is true for T ℓ,ℵ , so in the sequel we will often pay special attention to the admissible degree increment ℓ = 1.
Lemma 2.7. We have that (67) holds.

Theorem 2.8. Suppose that $F \in \widetilde L^1_{q+1}(\mathbb{R}^N)$ and $G \in \widetilde L^1_{q-1}(\mathbb{R}^N)$ satisfy the hypotheses of Theorem 2.3. Let $X$ be the solution of the Hodge system for $T_{1,\aleph}$ with data $(F, G)$; then the stated estimate holds whenever $q$ is neither 1 (unless $G = 0$) nor $N - 1$ (unless $F = 0$).
We remark in closing that for $\ell \ge 2$ there is no analog of (67): where the brackets $\{\,\}$ indicate that the (ordered) label $J$ is being regarded as an (unordered) set $\{J\}$, it can be proved that the coordinate-based representation of the Hodge Laplacian $\Delta_{\ell,\aleph}$ does depend on the choice of the representation $\aleph$, and it is no longer true that $C^{MI}_{\aleph(i\alpha)\aleph(i\beta)} = 0$ whenever $\alpha \ne \beta$, even for odd $\ell$.
Proofs
Proof of Lemma 2.1. Conclusions i. and ii. are an immediate consequence of the (classical) theory for $\mathbb{R}^n$ combined with the readily verified identities in (79). To prove the density of $\widetilde C^{\infty,c}_q(\mathbb{R}^N)$, let $F \in \widetilde L^r_q(\mathbb{R}^N)$ be given. By the definition of $\widetilde L^r_q(\mathbb{R}^N)$, for any $I \in I(N, q)$ we have that $F_I \circ i \in L^r(\mathbb{R}^n)$, and so there is a sequence $\{f_{j,I}\}_j \subset C^\infty_c(\mathbb{R}^n)$ approximating it; set $F_{j,I} := f_{j,I} \circ \pi$. Then, using (20), we see that $F_{j,I} \circ i \circ \pi = F_{j,I}$ and $F_{j,I} \circ i = f_{j,I} \in C^\infty_c(\mathbb{R}^n)$ hold for any $I \in I(N, q)$, and from these it follows that the approximating forms lie in $\widetilde C^{\infty,c}_q(\mathbb{R}^N)$. Moreover, on account of (79) and (81), there is $C = C(r, N)$ controlling the approximation error, as desired. The conclusions concerning the Sobolev spaces follow from the theory for $W^{a,p}_q(\mathbb{R}^n)$ via (34).
Proof of Proposition 2.2. Let $F \in \widetilde C^{\infty,c}_q(\mathbb{R}^N)$ and $G \in \widetilde C^{\infty,c}_{q+\ell}(\mathbb{R}^N)$ be given. Integrating both sides of the resulting pointwise identity and then further integrating the right-hand side by parts $k$ times we find (82). On the other hand, a computation that requires manipulating the coefficients $\epsilon$ yields a companion identity. Identity (50) is now obtained by integrating the two sides of the identity above and comparing with (82), after having adjusted the multiplicative constants as in (49). Note that by (39), the same argument also shows that $\langle D^\lambda T_{\ell,\aleph} F, D^\lambda G \rangle = \langle D^\lambda F, D^\lambda T^*_{\ell,\aleph} G \rangle$. The proofs of (52) and of (54) follow in a similar fashion.
Suppose the data satisfy the hypotheses of Theorem 2.3. Fix an arbitrary $I_0 \in I(N, q)$, and choose (any) $L_0 \in I(N, q+\ell)$ so that $\{I_0\} \subset \{L_0\}$. (The hypothesis $q \le N - \ell$ grants $q + \ell \le N$, and so at least one such $L_0$ must exist.) With $I_0$ and $L_0$ fixed as above, define $h^{L_0} = (h^{L_0}_\alpha)_{\alpha \in S(n,k)}$ and $g^{L_0}$ accordingly. We claim that $g^{L_0}$ satisfies condition (12), where the last identity is due to the hypothesis (55). It thus follows from Theorem 1.4 that the corresponding estimate holds, where $\alpha_0 \in S(n, k)$ is uniquely determined by $I_0$ and $L_0$. On the other hand, it is immediate to verify the identification with the $I_0$-component. Since $I_0 \in I(N, q)$ had been fixed arbitrarily, we have proved that (83) holds for any $I \in I(N, q)$, for any $0 \le q \le N - \ell$ and for any admissible degree increment $\ell$. Inequality (56) follows from (83) and the coordinate-based representation for $\langle \cdot, \cdot \rangle_L$, see (80). (We remark that in the special case $q = 0$, the proof follows along these very same lines by defining $g_\alpha := F \circ i$ for each $\alpha \in S(n, k)$.) In order to prove (58), it suffices to apply (56) to $F := *_N G$ and $H := *_N K$ (with $q := N - p$).
Theorem 2.3 ⇒ Theorem 1.4. Let $\ell$ be any admissible degree increment for $(n, k)$ and let $\aleph$ be any one-to-one correspondence $iS(n, k) \to I(N, \ell)$. Suppose that $g$ and $h$ satisfy the hypotheses of Theorem 1.4; without loss of generality we may assume that $g_\alpha, h_\alpha \in C^\infty_0(\mathbb{R}^n)$, $\alpha \in S(n, k)$. Since $\pi \circ i$ is the identity on $\mathbb{R}^n$, combining (35) and (84) we find that the relevant components of $T_{\ell,\aleph}$ applied to the lifted data vanish, where the last identity is due to the hypothesis (12). Now observe that if $G \in \Lambda^q(\mathbb{R}^N)$ then $G = 0 \iff G_I \circ i = 0$ for each $I \in I(N, q)$.
Theorem 2.3 ⇒ Theorem 2.4. Let $\ell$ be an admissible degree increment such that $n \ge \ell$, let $\aleph$ be any one-to-one correspondence $iS(n, k) \to I(N, \ell)$ and let $0 \le q \le n - \ell$ be given. Suppose that $f$ satisfies the hypotheses of Theorem 2.4; without loss of generality we may assume that $f \in C^{\infty,c}_q(\mathbb{R}^n)$. By the definition of $T_{\ell,\aleph}$ (see (47)), and applying (35), we obtain an identity whose last step is due to the hypothesis (59). Fix $I_0 \in I(n, q)$ and choose (any) $L_0 \in I(n, q + \ell)$ so that $\{I_0\} \subset \{L_0\}$. (The hypothesis $q \le n - \ell$ grants $q + \ell \le n$, so at least one such $L_0$ must exist.) Note that since $\ell \le n \le N$ we have $I(n, \ell) \subseteq I(N, \ell)$, so with $I_0$ and $L_0$ fixed as above, we may define $F^{L_0}$ componentwise by $\epsilon^{JI}_{L_0}\, f_I \circ \pi$ for $J \in I(n, \ell)$, and by $0$ for $J \in I(N, \ell) \setminus I(n, \ell)$.
Applying (51) with $q := \ell$ we obtain (ignoring the factor $(-1)^{k+q(N-\ell-q)}$) an identity which, by the definition of $F^{L_0}$, is further simplified. Note that on account of (20) and of (35) we have the vanishing recorded in (19). Thus $T^*_{\ell,\aleph} F^{L_0} = 0$, and by Theorem 2.3 we conclude the corresponding bound. Let $h$ be given (without loss of generality we may assume that $h \in C^{\infty,c}_q(\mathbb{R}^n)$) and define $\hat H$ accordingly, where $\delta_{J_0 J}$ is the Kronecker symbol. Then, by the definition of $F^{L_0}$, the required identities hold. Thus, applying (87) to $\hat H$ we conclude that the estimate is true for any $I_0 \in I(n, q)$, for any $0 \le q \le n - \ell$ and for any $h \in C^{\infty,c}_q(\mathbb{R}^n)$, and this in turn implies (60). In order to prove (62), it suffices to apply (60) to $f := *_n g$ (with $q := n - p$).
Suppose first that $\alpha \ne \beta$. In this case we claim that $C^{MI}_{\aleph(i\alpha)\aleph(i\beta)} = 0$. The proof of this claim rests on the following Remark 3.1. We postpone the proof of Remark 3.1 and continue with the proof of Lemma 2.5; to this end we claim that if $\alpha \ne \beta$ then (92) holds $\iff A^{MI}_{\aleph(i\alpha)\aleph(i\beta)} = 0$. Indeed, since $\alpha$, $\beta$, $I$ and $M$ are fixed, the summation that defines (90) consists of at most one term; that is, (90) and (91) become single-term identities, and since $\alpha$, $I$ and $M$ are fixed, each of these summations consists of at most one term, that is, for at most one choice of each of $L_0 \in I(N, q+1)$ and $K_0 \in I(N, q-1)$.
In particular we see that the claim follows. On the other hand, for $I = M$ we have the companion identity, and combining (96) and (100) we obtain (67). The proofs of (69)-(71) are obtained in a similar fashion; in this case (49) grants the required normalization.

Proof of Remark 3.1. If $\alpha \ne \beta$ and the three conditions in (92) hold, then it follows at once that the first two conditions in (93) hold, and the third condition in (93) is true as well.
Theorem 2.8 ⇒ Theorem 2.3 ($\ell = 1$; $1 \le q, p \le N - 1$). We show that (56) holds for any $q$ in the range $1 \le q \le N - 1$. Suppose that $T_{1,\aleph} F = 0$, $F \in \widetilde L^1_q(\mathbb{R}^N)$, and let $H \in (\widetilde L^\infty_q \cap \widetilde W^{1,n}_q)(\mathbb{R}^N)$. Without loss of generality we may assume $H, F \in \widetilde C^{\infty,c}_q(\mathbb{R}^N)$. Let $X \in \widetilde C^{\infty,c}_{q-1}(\mathbb{R}^N)$ be the solution of (101) with data $F$. Then, by Hölder's inequality (42), the first bound follows. Now observe that if we integrate the expression $\langle T^*_{1,\aleph} H, \zeta \rangle_L$ by parts $(k-1)$ times and then apply Hölder's inequality we obtain

$|\langle T^*_{1,\aleph} H, \zeta \rangle_L| \le \|\nabla H\|_{L^n_q}\, \|\zeta\|_{W^{k-1,r}_{q-1}}$

and this leads to the conclusion of the proof of (56).

2. If $q \ge N - \ell + 1$ or $p \le \ell - 1$ then one of the two compatibility conditions (55) and (57) holds trivially, and in this case the conclusions of Theorems 2.3 and 2.8 are easily seen to be false; if $k = 1$ and $T_{1,\aleph_0} = d$ (exterior derivative), substitute inequalities hold provided the "defective" data belongs to a suitable (proper) subspace of $L^1$, namely the real Hardy space $H^1(\mathbb{R}^n)$, see [LS]. We do not know whether substitute inequalities hold when $k \ge 2$.
3. In the context of [VS4] our results say the following: The class T ℓ,ℵ has similar properties with V = C ∞ q (R n ) and E = C ∞ q±ℓ (R n ).
In particular, T 1,ℵ and T 1,ℵ as well as their adjoints, are new examples of canceling operators of arbitrary order k. | 2015-09-29T15:04:44.000Z | 2014-01-05T00:00:00.000 | {
"year": 2014,
"sha1": "37b6e39add70d1f5359807a1902f6832d851ad12",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "37b6e39add70d1f5359807a1902f6832d851ad12",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
20646073 | pes2o/s2orc | v3-fos-license | Pre-processing for noise detection in gene expression classification data
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy biological data. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. This evaluation analyzes the effectiveness of the investigated techniques in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
Introduction
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy examples. This kind of data may originate, for example, from errors during data collection, such as contamination of laboratory samples. Gene expression data are examples of biological data that suffer from this problem. Although many Machine Learning (ML) algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis.
Noise can be defined as an example apparently inconsistent with the remaining examples in a data set. The presence of noise in a data set can decrease the predictive performance of ML algorithms by increasing the model complexity and the time necessary for its induction. Data sets with noisy instances are common in real world problems, where the data collection process can produce noisy data.
Data are usually collected from measurements related to a given domain. This process may result in several problems, such as measurement errors and incomplete, corrupted, wrong or distorted examples. Therefore, noise detection is a critical issue, especially in domains demanding security and reliability. The presence of noise can lead to situations that degrade the system performance or the security and trustworthiness of the involved information. A wide variety of noise detection applications can be found in different domains, such as fraud detection, loan application processing, intrusion detection, analysis of network performance and bottlenecks, detection of novelties in images, pharmaceutical research, and others 17 .
Different types of noise can be found in data sets, especially in those representing real problems (see Figure 1). In order to illustrate these different types, the instances of a given data set can be divided into five groups, among them:
Mislabeled cases: instances incorrectly classified in the data set generation process. These cases are noisy instances;
Redundant data: instances that form clusters in the data set and can be represented by others. At least one of these patterns should be maintained so that the representativeness of the cluster is conserved;
Outliers: instances too distinct when compared to the other examples of the data set. These instances can be either noisy or very particular cases and their influence in the hypothesis induction should be minimized.
Gene expression data are, in general, represented by complex, high dimensional data sets, which are susceptible to noise. In fact, biological and real world data sets, of which gene expression data sets are a part, present a large number of noisy cases.
When using gene expression data sets, some aspects may influence the performance achieved by ML algorithms. Due to the imprecise nature of biological experiments, redundant and noisy examples can be found at a high rate. Noisy patterns can corrupt the generated classifier and should therefore be removed 21 . Redundant and similar examples can be eliminated without harming the concept induction and may even improve it.
In order to deal with noisy data, several approaches and algorithms for noise detection can be found in the literature. This paper focuses on the investigation of distance-based noise detection techniques, adopted in a pre-processing phase. This phase aims to identify possible noisy examples and remove them. In this work, three ML algorithms are trained with the original data sets and with different sets of pre-processed data produced by the application of noise detection techniques. By evaluating the difference in performance between classifiers generated over original (without pre-processing) and pre-processed data, the effectiveness of distance-based techniques in recognizing noisy cases can be estimated.
There are other works 18, 24 that look for noise in gene expression data sets but, unlike this work, the experiments reported in these papers eliminate only genes. In the experiments performed here, we use noise detection techniques mainly to detect mislabeled tissues.
Details of the noise detection techniques used are presented in Section 2. The methodology employed in the experiments, the data sets used and ML algorithms adopted are described in Section 3. The results obtained are presented and discussed in Section 4. Finally, Section 5 has the main conclusions from this work.
Noise Detection
Different pre-processing techniques have been proposed in the literature for noise detection and removal. Statistical models were the earliest approaches used in this task, and some of them were applicable only to one-dimensional data sets 17 . In these approaches, noise detection is dealt with by techniques based on data distribution models 3 . The main problem of these methods is the assumption that the data distribution is known in advance, which is not true for most real world problems.
Clustering techniques 8,16 are also applied to noise detection tasks. In this approach, small groups of data, dispersed among the existent examples, are regarded as possible noise. A third approach employs ML classification algorithms, which are used to detect and remove noisy examples 34,19 . The work presented here follows a fourth approach, in which noise detection problems are investigated by distance-based techniques 20,30,5,32 . These techniques are named distance-based because they use the distance between an example and its nearest neighbors.
Distance-based techniques are simple to implement and do not make assumptions about the data distribution. However, they require a large amount of memory space and computational time, resulting in a complexity directly proportional to the data dimensionality and number of examples 17 . The most popular distance-based technique referred to in the literature is the k-nearest neighbor (k-NN) algorithm, which is the simplest algorithm belonging to the class of instance-based supervised ML techniques 25 . Distance-based techniques use similarity measures to calculate the distance between instances from a data set and use this information to identify possible noisy data. One of the main questions regarding distance-based techniques relates to the similarity measure used in the calculation of distances.
For high dimensional data sets, the commonly used Euclidean metric is not adequate 1 , since data is commonly sparse. The HVDM (Heterogeneous Value Difference Metric) is shown in 36 to be suitable for dealing with high dimensional data and was therefore used in this paper. This metric is based on the distribution of the attributes in a data set, regarding their output values, and not only on punctual values, as is the case for the Euclidean distance and other similar distance metrics. Equation 1 presents the HVDM metric. In Equation 2, VDM_a(x_a, z_a) is the VDM (Value Difference Metric) distance 29 , adequate for nominal attributes, and σ_a is the standard deviation of attribute a in the data set. Since the data sets employed in this paper do not present nominal attributes, the second row of Equation 2 is not used in this work.
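For reference, the standard HVDM definition of Wilson and Martinez 36, consistent with the description above (with the VDM case written as the second row, the one unused in this work), reads:

$$\mathrm{HVDM}(x, z) = \sqrt{\sum_{a=1}^{m} d_a(x_a, z_a)^2}, \qquad d_a(x_a, z_a) = \begin{cases} \dfrac{|x_a - z_a|}{4\,\sigma_a} & \text{if attribute } a \text{ is numeric,} \\[6pt] \mathrm{VDM}_a(x_a, z_a) & \text{if attribute } a \text{ is nominal,} \end{cases}$$

where $m$ is the number of attributes and $\sigma_a$ is the standard deviation of attribute $a$.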
The k-nearest neighbor (k-NN) algorithm was used for finding the neighbors of a given instance. This algorithm classifies an instance according to the class of the majority of its k nearest neighbors. The value of the k parameter, which represents the number of nearest neighbors of the instance, influences the performance of the k-NN algorithm. Typically, it is an odd and small integer, such as 1, 3 or 5.
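The following is a minimal Python sketch of this classification rule with the HVDM restricted to numeric attributes; all names here (hvdm_numeric, knn_predict) are ours, not from the paper or from the code of 35:

```python
import numpy as np
from collections import Counter

def hvdm_numeric(x, z, sigma):
    # HVDM over purely numeric attributes: per-attribute differences
    # normalized by 4 standard deviations, combined Euclidean-style.
    return np.sqrt(np.sum((np.abs(x - z) / (4.0 * sigma)) ** 2))

def knn_predict(X_train, y_train, x, k=3):
    """Classify x as the majority class among its k nearest neighbors."""
    sigma = X_train.std(axis=0) + 1e-12          # guard against zero deviation
    dists = [hvdm_numeric(xi, x, sigma) for xi in X_train]
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```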
The techniques evaluated in this paper are the noise detection filters Edited Nearest Neighbor (ENN), Repeated ENN (RENN) and AllkNN, all based on the k-NN algorithm.
In order to explain the techniques evaluated, let T be the original training set and S be a subset of T, obtained by the application of any of the distance-based techniques evaluated. Now, suppose that T has n instances x 1 , ..., x n . Each instance x of T (and also of S) has k nearest neighbors.
The ENN algorithm was proposed in 37 . Initially, S = T, and an instance is considered noise and then removed from the data set if its class is different from the class of the majority of its k nearest neighbors. This procedure removes mislabeled data and borderlines. In the RENN technique, the ENN algorithm is repeatedly applied to the data set until all of its instances have the majority of their neighbors with the same class. Finally, the AllkNN algorithm was proposed in Tomek 31 and is also an extension of the ENN algorithm. This algorithm proceeds as follows: for i = (1, . . . , k), mark as incorrect (possible noise) any instance incorrectly classified by its i nearest neighbors. After the analysis of all instances in the data set, it removes the marked instances.
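As a rough illustration (not the actual implementation from 35), the three filters can be sketched on top of the knn_predict function above, each following the description just given:

```python
def enn_filter(X, y, k=3):
    """ENN: keep an instance only if its k nearest neighbors agree with its label."""
    keep = []
    for i in range(len(X)):
        X_rest, y_rest = np.delete(X, i, axis=0), np.delete(y, i)
        if knn_predict(X_rest, y_rest, X[i], k) == y[i]:
            keep.append(i)
    return np.array(keep, dtype=int)

def renn_filter(X, y, k=3):
    """RENN: apply ENN repeatedly until no instance is removed."""
    idx = np.arange(len(X))
    while True:
        keep = enn_filter(X[idx], y[idx], k)
        if len(keep) == len(idx):
            return idx
        idx = idx[keep]

def allknn_filter(X, y, k=3):
    """AllkNN: mark instances misclassified by any i-NN rule (i = 1..k), then drop them."""
    marked = set()
    for i in range(1, k + 1):
        kept = set(enn_filter(X, y, i).tolist())
        marked |= set(range(len(X))) - kept
    return np.array(sorted(set(range(len(X))) - marked), dtype=int)
```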
Despite the large number of existent techniques used in noise detection problems, it is possible to find also recent studies that use hybrid systems, as well as ensembles of classifiers, to improve system performance and reduce deficiencies of the applied algorithms. Hybridization is used variously to overcome deficiencies with one particular classification algorithm, exploiting the advantages of multiple approaches while overcoming their weaknesses 17 .
Experiments
The experiments performed employed the 10-fold cross validation methodology 25 . All selected data sets were presented to the noise detection techniques investigated. Next, their pre-processed versions, resulting from the application of each noise detection technique, were presented to the three ML algorithms employed. The original version of each data set used in the experiments was also presented directly to the ML algorithms, aiming to compare the performance obtained by ML algorithms with the original data sets and with their pre-processed versions. The error rate obtained by the ML algorithms was calculated by the average of the individual errors obtained for each test partition. Each noise detection technique was applied 10 times, one for each training partition of the data set produced by the 10-fold cross validation methodology.
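Schematically, the evaluation loop looks as follows. This is a hedged sketch: scikit-learn's CART tree stands in for C4.5 (they are different tree inducers), and the filter is applied to each training partition only, as described above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier  # CART stand-in for C4.5

def evaluate_with_filter(X, y, noise_filter, k=3, n_splits=10, seed=0):
    """Mean test error over the folds, training on the filtered training fold."""
    errors = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for tr, te in cv.split(X, y):
        keep = noise_filter(X[tr], y[tr], k)     # pre-process the training fold only
        clf = DecisionTreeClassifier().fit(X[tr][keep], y[tr][keep])
        errors.append(1.0 - clf.score(X[te], y[te]))
    return float(np.mean(errors))
```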
The experiments were run on a 3.0 GHz Intel Pentium 4 dual processor PC with 1.0 GB of RAM. For the noise detection techniques evaluated, the code provided by 35 was used. The values of the k parameter, which defines the number of nearest neighbors, were set to 1, 3 or 9, following a geometric progression that includes the number three, which is the default value of the mentioned code.
The ML algorithms investigated were C4.5, used for the induction of decision trees; RIPPER, which produces a set of rules from a data set; and Support Vector Machines (SVMs), which look for representative examples to improve the generalization of the decision border.
The C4.5 algorithm 27 uses a greedy approach to progressively grow a decision tree whose leaf nodes represent classes. C4.5 deals with noisy data by using a pruning procedure. In this procedure, branches of the trained tree that present, according to some criterion, low expressive power are pruned. This procedure aims to simplify the built tree and to reduce its classification error rate.
The RIPPER algorithm (Repeated Incremental Pruning to Produce Error Reduction) 6 is a rule induction algorithm proposed to obtain low classification error rates even in the presence of noise and high dimensional data. Rule induction algorithms are more flexible than decision tree algorithms, like C4.5, since new rules can be added or modified as new data are included 17 .
SVMs are learning algorithms based on statistical learning theory, through the principle of Structural Risk Minimization (SRM) 33 . SVMs accomplish a non-linear data analysis in a high dimensional space where a maximum margin hyperplane can be built, allowing the separation of the positive and negative classes. They present high generalization ability, are robust to high dimensional data and have been successfully applied to the solution of several classification problems 28, 9 . In the experiments reported in this paper, we used data sets obtained from gene expression analysis, particularly tissue classification. Gene expression analysis problems are, in general, represented by complex and high dimensional data sets, which are very susceptible to noise. Table 1 shows the format of the gene expression data sets used in the experiments. It shows that each data set can be represented by a table where each row has the identification of a particular tissue, the expression levels of different genes for this tissue and the label associated with the tissue.
The main features of the gene expression data sets used in the experiments are described in Table 2. This table presents, for each data set, its total number of instances, number of attributes (data dimensionality) and existing classes.
Most of the data sets used in the experiments reported in this paper are related to the problem of cancer tissue classification. The development of efficient data analysis tools to support experts may allow better and earlier diagnosis of cancer, leading to more effective patient treatment and increased survival rates. Several research groups are currently working with gene expression analysis of tumor tissues.
The ExpGen data set 4 contains expression level measurements from 2467 genes obtained from 79 different laboratory experiments for gene functional classification. This application consists of categorizing a gene into a given class that represents its function in the cellular environment. From these experiments, the data set is composed of only 207 genes, which could be categorized into five classes during the laboratory experiments performed.
The Golub data set 15 has gene expression levels from patients with acute leukemia. The gene expression data were obtained from 72 microarray images, and measure expression levels of 6817 human genes. The disease was categorized into two different types, Acute Lymphoid Leukemia (ALL) and Acute Myeloid Leukemia (AML). The same pre-processing performed in 11 was applied to the Golub data set to simplify its data.
The Leukemia data set is known in the literature as St. Jude Leukemia 38 . It is composed of six different types of pediatric acute lymphoid leukemia and another group with examples that could not be categorized as one of the previous six types. The original data set has 12558 genes, so a pre-processed version found at http://sdmc.lit.org.sg/GEDatasets and described in 38 was used, reducing the number of genes to 271.
The Lung data set has examples related to lung cancer, where, for each patient, the label can be normal tissue or one of three different types of lung cancer. The three types of lung cancer analyzed are adenocarcinomas (ADs), squamous cell carcinomas (SQs) and carcinoids (COID). This data set has 197 instances, with 1000 attributes each, and was presented in 26 .
The last data set analyzed, the Colon data set, is described in Alon et al. 2 , and includes patients with and without colon cancer. The data set presents gene expression data obtained from 62 microarray images, which measure expression levels of 6500 human genes. Pre-processing techniques reduced the number of input attributes to 2000.
For the SVM training, the SVMTorch II 7 software was employed. For C4.5, training was carried out with the software provided by Quinlan 27 , and for the RIPPER algorithm, the Weka simulator from Waikato University 13 was adopted. The parameter values for the three algorithms were the default values suggested in the tools employed, which were kept the same for all experiments. Scripts in the Perl programming language were also developed to convert the data sets to the different formats demanded by Wilson's 35 code, SVMTorch II, the Weka simulator and the C4.5 algorithm.
To evaluate the results obtained in the experiments, Friedman's statistical test 14 and Dunn's multiple comparisons post-hoc test 12 were employed, according to the methodology described in 10 . Friedman's test was adopted since it is recommended for the comparison of different ML algorithms applied to multiple data sets, and has the advantage of not assuming that the measurements follow a normal distribution.
The null hypothesis assumes that all analyzed algorithms are equivalent if their respective mean ranks are the same. If the null hypothesis is rejected, and therefore the analyzed algorithms are statistically different, a post-hoc test may be applied to detect which of the algorithms differ. Dunn's statistical post-hoc test was applied, since it is recommended for situations where all analyzed algorithms are compared to a control algorithm, the strategy employed in the experiments performed in this paper.
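In Python, the Friedman step can be reproduced with scipy (the numbers below are placeholders, not results from the paper; Dunn's post-hoc test is available separately, e.g. in the scikit-posthocs package):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# One row per data set, one column per pipeline (hypothetical error rates):
# columns = original, ENN, RENN, AllkNN.
errors = np.array([
    [0.12, 0.10, 0.11, 0.10],   # ExpGen
    [0.08, 0.07, 0.07, 0.08],   # Golub
    [0.15, 0.12, 0.13, 0.12],   # Leukemia
    [0.09, 0.08, 0.08, 0.09],   # Lung
    [0.22, 0.19, 0.20, 0.18],   # Colon
])
stat, p = friedmanchisquare(*errors.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.3f}")
# Reject the null hypothesis at the 95% confidence level when p < 0.05,
# then compare each pipeline against the control with Dunn's test.
```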
Experimental Results
In the pre-processing, the number of removed instances was different for each data set analyzed. However, it was between 20 and 30% of the total number of instances, except for the Colon data set (original and simplified versions), which presented reductions between 30 and 40%.
The time spent in the pre-processing phase was measured to show how the application of the investigated noise detection techniques can affect the overall processing time. It is important to mention that the pre-processing phase is applied only once for each data set analyzed, generating a pre-processed data set that can be used several times by different ML algorithms. The time consumed was always less than one minute. Another observation is related to data set complexity: more time was spent in the pre-processing of more complex data sets.
In order to measure the effectiveness of the noise detection techniques employed, the performance of the three ML algorithms concerning accuracy, complexity and processing time necessary to build the induced hypothesis was evaluated with the original and the pre-processed data. For all experiments, the statistical tests were applied with a 95% confidence level.
For SVMs, in general, the error rates of the classifiers generated after the application of noise detection techniques, for all evaluated k values, were the same as those obtained for the original data sets. The same was true for the Colon data set, but only for some values of k. The pre-processed data sets Leukemia and ExpGen had only some similar results, but none better than those obtained for the original data sets, while the Golub data set presented the worst results in all cases. The obtained results can be seen in Table 3, where the best results are highlighted in bold and error rates similar to the best ones for each data set are shown in italics. Standard deviation rates are reported in parentheses.
The analysis of the C4.5 classification error rates, which can be seen in Table 4, shows that the pre-processed data sets Leukemia, Lung and Golub presented similar or better results than those obtained for the original data sets. The ExpGen data set presented only a few similar error rates compared to those obtained for the original data set. The pre-processed data set Colon provided only worse results.
According to Table 5, the RIPPER algorithm presented similar error performance for the original and pre-processed data using the Leukemia, ExpGen and Colon data sets. For the last two data sets, some results were improved by the pre-processing. The remaining pre-processed data sets, Lung and Golub, presented more improvements in ML accuracy after the pre-processing phase. For these two data sets, error rates were lower after pre-processing for the majority of the experiments carried out.
In the complexity analysis of the SVMs, the number of Support Vectors (SVs), the data points that determine the decision border induced by SVMs, was considered. A smaller number of SVs indicates a lower complexity of the induced model.
For the C4.5 algorithm, complexity was determined by the mean size of the induced decision tree. Reduced decision trees are easier to analyze and thus result in comprehensiveness improvements for the model.
The complexity of the RIPPER algorithm was measured by the number of rules produced during the training phase. The smaller the number of rules produced, the lower the complexity of the generated model.
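For concreteness, the three complexity measures can be read off trained models along these lines (a sketch using scikit-learn stand-ins: SVC for SVMTorch II and a CART tree for C4.5; RIPPER is not in scikit-learn, so its rule count would come from whatever rule learner is used):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic high-dimensional stand-in data, just to make the sketch runnable.
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

svm = SVC().fit(X, y)
n_sv = svm.support_vectors_.shape[0]       # SVM complexity: number of SVs

tree = DecisionTreeClassifier().fit(X, y)
n_nodes = tree.tree_.node_count            # C4.5-style complexity: tree size

print(f"support vectors: {n_sv}, tree nodes: {n_nodes}")
```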
For all three ML algorithms investigated, the complexity was reduced when the pre-processed data sets were used, as presented in Tables 6, 7 and 8, respectively, for the SVM, C4.5 and RIPPER algorithms. In these tables, the best results are highlighted in bold and complexities similar to the best ones, for each data set, are shown in italics. Standard deviation rates are reported in parentheses.
According to Tables 6, 7 and 8, most of the complexities were reduced after pre-processing, except for the Golub data set with the RIPPER algorithm, for which not all complexities were reduced.
For the SVMs, the smaller the pre-processed data set produced by the noise detection techniques, the lower the number of SVs obtained and, consequently, the complexity of the model. For the C4.5 algorithm, the model complexity decreased until a lower bound, beyond which further reduction of the pre-processed data set would not reduce the complexity.
For the RIPPER algorithm, the final models were also simplified, but with a smaller reduction in complexity. The complexity obtained using the original data for the Golub data set was maintained in its pre-processed versions.
The time taken by the SVM, C4.5 and RIPPER algorithms to induce hypotheses using the pre-processed data sets was always reduced when compared to the time obtained with the original data sets, taking at most 1 second. For SVMs, the processing time was only slightly reduced in comparison to the time obtained for the original data sets.
The analysis of the results presented in this paper shows that the three noise detection techniques evaluated presented similar results in terms of amount of noise removed (data set reduction), time taken and effect on the performance of the ML algorithms. A possible explanation is that they are all noise filtering techniques based on the k-NN algorithm. Besides, they are related: AllkNN is an ENN extension, while RENN is the ENN algorithm applied multiple times. For the gene expression data sets analyzed in this paper, the differences between these algorithms may not result in significant differences in the performance of the ML algorithms. Most of the experiments presented satisfactory results, with lower error rates and better performance compared to those obtained in the analysis of the original data sets, which demonstrates that the noise detection techniques improved the performance of the ML algorithms evaluated. The C4.5 and RIPPER algorithms benefited from the application of noise detection techniques for most of the data sets investigated, with reduced complexity of the induced models. For the SVMs, the new results were slightly better, with lower complexity.
Furthermore, the gain in comprehensiveness and the reduction in time spent during the training process are additional advantages, since the complexities of all data sets were reduced after pre-processing (the noise detection and removal phase).
Therefore, the application of noise detection techniques in a pre-processing phase has the advantage of reducing the complexity of the classifiers induced by ML algorithms, as well as reducing the time spent in classifier training, producing, in most experiments, classification error results better than or similar to those obtained for the original data sets. This indicates that the distance-based noise detection techniques kept the most expressive patterns of the data sets and allowed the ML algorithms to induce simpler classifiers, as shown by the reduced complexity and lower classification error rates obtained.
Conclusions
This paper investigated the application of distance-based noise detection techniques to different gene expression classification problems. We did not find in the literature a single approach or algorithm able to detect noise without classification accuracy reduction that was tested on several data sets. We were also not able to find noise detection experiments using gene expression data sets that detect tissues that are probably noisy. The closest works we found in gene expression analysis were those from 18,24 . However, these works detect and eliminate only genes, not tissues. The data sets employed here are related to both gene classification and tissue classification.
In the experiments performed here, three ML algorithms were trained over the original and pre-processed data sets. They were employed to evaluate the power of these techniques in maintaining the most informative patterns. The results observed indicate that the noise detection techniques employed were effective in the noise detection process. These experiments showed that the incorporation of noise detection and elimination resulted in simplifications of the ML classifiers and in reductions of their classification error rates, especially for the C4.5 and RIPPER algorithms. Another advantage for these two algorithms was an increase in comprehensiveness.
We are now investigating new distance-based techniques for noise detection and developing ensembles of noise detection techniques, aiming to further improve the gains obtained by the identification and removal of noisy data. Preliminary results, presented in Libralon 23 , suggest that ensembles of distance-based techniques can be a good alternative for noise detection in gene expression data sets. | 2017-07-31T21:41:12.647Z | 2009-03-01T00:00:00.000 | {
"year": 2009,
"sha1": "6a1c88afc321854fb8d7058c14c9ba6d7051562b",
"oa_license": "CCBY",
"oa_url": "https://journal-bcs.springeropen.com/track/pdf/10.1007/BF03192573",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c833ec6e32abecf68fb3aeeeb823cd00ae71e48a",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
29972616 | pes2o/s2orc | v3-fos-license | Direct sampling method for anomaly imaging from S-parameter
In this paper, we develop a fast imaging technique for small anomalies located in homogeneous media from S-parameter data measured at dipole antennas. Based on the representation of S-parameters in the presence of an anomaly, we design a direct sampling method (DSM) for imaging an anomaly and establish a relationship between the indicator function of DSM and an infinite series of Bessel functions of integer order. Simulation results using synthetic data at the frequency f = 1 GHz are presented to support the identified structure of the indicator function.
Introduction
In this study, we consider an inverse scattering problem that determines the locations of small anomalies in a homogeneous background using S-parameter measurements. This study has been motivated by microwave tomography for small-target imaging, such as in the case of tumors during the early stages of breast cancer. Because of the intrinsic ill-posedness and nonlinearity of inverse scattering problems, this problem is very hard to solve; however, it is still an interesting research topic because of its relevance to human life. Many researchers have focused on various imaging techniques, mostly based on Newton-type iteration [1, Table II]. However, the success of Newton-type techniques is highly dependent on the initial guess, which must be close to the unknown targets. Furthermore, Newton-type techniques have various limitations such as large computational costs, the local minimizer problem, difficulty in imaging multiple anomalies, and the need to select an appropriate regularization. For this reason, developing a fast imaging technique for obtaining a good initial guess is highly desirable. Recently, various non-iterative techniques have been investigated, e.g., the MUltiple SIgnal Classification (MUSIC) algorithm, the linear sampling method, the topological derivative strategy, and Kirchhoff/subspace migrations. A brief description of such techniques can be found in [2,3,4,5,6].
Direct sampling method (DSM) is another non-iterative technique for imaging unknown targets. Unlike the non-iterative techniques mentioned above, DSM requires only one or a small number of incident directions [7,8,9]. Furthermore, it is a considerably effective and stable algorithm. In a recent study [10], the MUSIC algorithm was designed for imaging small and extended anomalies; however, DSM has not yet been designed and used to identify unknown anomalies from measured S-parameter data.
To address this issue, we design a DSM from S-parameter data collected by a small number of dipole antennas to identify the outline shape of an anomaly with conductivity and relative permittivity different from those of the background medium and with a diameter significantly smaller than the wavelength. To investigate the feasibility of the designed DSM, we establish a relationship between the indicator function of DSM and an infinite series of Bessel functions of integer order. Subsequently, we present simulation results that confirm the established relationship using synthetic data generated by the CST STUDIO SUITE.
The remainder of this paper is organized as follows. In Section 2, we briefly introduce the DSM for imaging anomalies from S-parameter data. Subsequently, in Section 3, we present simulation results for synthetic data generated at the frequency f = 1 GHz, followed by a brief conclusion in Section 4.
Preliminaries
In this section, we briefly survey the three-dimensional forward problem, in which an anomaly D with a smooth boundary ∂D is surrounded by N different dipole antennas. For simplicity, we assume that D is a small ball with radius ρ located at r_D, i.e., D = r_D + ρB, where B denotes a simply connected domain. We denote r_TX as the location of the transmitter and r^(n)_RX, n = 1, 2, ..., N, as the locations of the receivers. Throughout this paper, every material and anomaly is taken to be non-magnetic and is classified on the basis of the value of its relative dielectric permittivity and electrical conductivity at a given angular frequency ω = 2πf. To reflect this, we set the magnetic permeability to be constant at every location, µ(r) ≡ µ = 4π · 10⁻⁷ H/m, and we denote ε_B and σ_B as the background relative permittivity and conductivity, respectively. By analogy, ε_D and σ_D are those of D. Then, we introduce the piecewise constant relative permittivity ε(r) and conductivity σ(r), respectively. Using these, we can define the background wavenumber k, where λ = 2π/k denotes the wavelength and ρ < λ/2. Let E_inc(r_TX, r) be the incident electric field in the homogeneous medium due to a point current density at r_TX; based on Maxwell's equations, E_inc(r_TX, r) satisfies the corresponding field equation. Analogously, let E_tot(r, r^(n)_RX) be the total field in the presence of D measured at r, with a transmission condition on the boundary ∂D and the open boundary condition. Let S(n) be the S-parameter, defined as the ratio of the reflected waves at the n-th receiver r^(n)_RX to the incident waves at the transmitter r_TX. Herein, S_scat(n) denotes the scattered-field S-parameter, which is obtained by subtracting the incident-field S-parameter from the total-field one. Based on [11], S_scat(n), due to the existence of an anomaly D, can be represented as in (1). This representation plays a key role in the DSM designed in the next section.
Indicator function of direct sampling method: introduction and analysis
In this section, we design an imaging algorithm based on the DSM, which uses the collected S-parameters S_scat(n) with S = {S_scat(n) : n = 1, 2, ..., N}. Because we assumed that D is a small ball with ρ < λ/2, using the Born approximation, S_scat(n) of (1) can be approximated accordingly. Based on this approximation, the imaging algorithm based on the DSM can be introduced as follows: for a search point r ∈ Ω, where Ω is the search domain, the indicator function F_DSM(r) of the DSM is defined as a normalized quantity over the data. Then, F_DSM(r) has a peak magnitude of 1 at r = r_D and a small magnitude at r ≠ r_D, so that the shape of the anomaly D can be easily identified. Following [8,9], the structure of F_DSM(r) can be represented via Bessel functions, where J_m is the Bessel function of the first kind of order m. However, this does not explain the complete phenomena illustrated in the simulation results of the next section; thus, further analysis is required. Through careful analysis, we can identify the structure of the indicator function as in (5), where Z* = Z ∪ {−∞, ∞} and J_m denotes the Bessel function of integer order m of the first kind.
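To indicate where the $J_0$ term and the deteriorating terms in (5) come from, assume (as in the simulations of Section 4) that the receivers sit at equally spaced angles $\theta_n = 2\pi n / N$; the Jacobi-Anger expansion together with the discrete orthogonality of the exponentials gives

$$e^{ix\cos\theta} = \sum_{m=-\infty}^{\infty} i^m J_m(x)\, e^{im\theta}, \qquad \frac{1}{N}\sum_{n=1}^{N} e^{im\theta_n} = \begin{cases} 1 & \text{if } m \equiv 0 \pmod{N}, \\ 0 & \text{otherwise,} \end{cases}$$

so the receiver sum retains $J_0(k|r - r_D|)$ plus terms involving $J_{\pm N}, J_{\pm 2N}, \ldots$, which shrink as $N$ grows; this is consistent with properties (P2)-(P4) below.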
Proof. Because |r^(n)_RX| = R and θ_n · (r − r_D) = |r − r_D| cos(θ_n − φ_D), and the following Jacobi-Anger expansion holds uniformly, we arrive at a series representation of the indicator; using this, we apply Hölder's inequality to obtain (5). This completes the proof.

(P1). Because J_0(0) = 1 and J_m(0) = 0 for all m = 1, 2, ..., we can observe that F_DSM(r) ≈ 1 at r = r_D ∈ D. This is the theoretical reason why the location of D can be imaged using the DSM.
(P2). The imaging performance is highly dependent on the values of k and N; i.e., to accurately detect the location of D, the value of N must be sufficiently large. This is the theoretical reason for increasing the total number of antennas to guarantee good imaging results.
(P3). If the value of N is not sufficiently large, the right-hand side of (5) will deteriorate the imaging performance by generating large numbers of artifacts.
(P4). If N is sufficiently large, the effect of the deteriorating term becomes negligible and F_DSM(r) reduces to the leading Bessel-function term. This result is the same as the one derived in [9].
(P5). If the radius of D is larger than λ, then it is impossible to apply the Born approximation (1). This means that the designed DSM cannot be applied to the imaging of extended targets.
Remark 3.2 (Imaging of multiple anomalies). If there exist multiple small anomalies D_l, l = 1, 2, ..., L, whose radii, permittivities, and conductivities are ρ_l, ε_l, and σ_l, respectively, then F_DSM(r) admits an analogous representation. Based on this structure, we can observe that the imaging performance of F_DSM(r) is highly dependent on the values of the permittivities, conductivities, and sizes of the anomalies, and on the total number of dipole antennas N. This means that if the permittivity, conductivity, or size of one anomaly is significantly larger than those of the others, the shape of that anomaly can be identified via the map of F_DSM(r); otherwise, it will be difficult to identify the shape of each anomaly via the map of F_DSM(r).
Simulation results
In this section, simulation results are presented to demonstrate the effectiveness of the DSM and to support the mathematical structure derived in Theorem 3.1. For this purpose, N = 16 dipole antennas were used with an applied frequency of f = 1 GHz. For the transmitter and receivers, we set r_TX = 0.09 m (cos(3π/2), sin(3π/2)) and placed the receivers r_RX^(n) on the circle of the same radius; hence, R = |r_RX^(n)| = 0.09 m. The S-parameters S_scat(n) for n = 1, 2, · · · , N were generated using the CST STUDIO SUITE. The relative permittivity and conductivity of the background were set to ε_B = 20 and σ_B = 0.2 S/m, respectively, the search domain Ω was set to be the interior of a circle with radius 0.085 m centered at the origin, i.e., Ω = {r : |r| ≤ 0.085 m}, and the step size of r was set to be of the order of 0.002 m.
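For concreteness, the configuration above can be instantiated as follows; the uniform angular spacing of the receivers is our assumption, since their exact placement did not survive extraction.

```python
import numpy as np

# Hypothetical reconstruction of the measurement setup described above.
N, R = 16, 0.09                             # receivers and circle radius [m]
rx_angles = 2.0 * np.pi * np.arange(N) / N  # uniform spacing is our assumption
r_rx = R * np.stack([np.cos(rx_angles), np.sin(rx_angles)], axis=1)
r_tx = R * np.array([np.cos(1.5 * np.pi), np.sin(1.5 * np.pi)])

# Search domain Omega: interior of the circle |r| <= 0.085 m, step ~0.002 m.
xs = np.arange(-0.085, 0.085 + 1e-9, 0.002)
X, Y = np.meshgrid(xs, xs)
grid = np.stack([X.ravel(), Y.ravel()], axis=1)
grid = grid[np.linalg.norm(grid, axis=1) <= 0.085]

# With measured data s_scat (here, from CST STUDIO SUITE), the map would be:
# f_map = dsm_indicator(s_scat, rx_angles, k.real, grid)
```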
Example 4.1 (Imaging of a small anomaly). In this example, we consider the imaging of a small anomaly. For this, we placed an anomaly at (0.01 m, 0.03 m) with a radius, relative permittivity, and conductivity of ρ = 0.01 m, ε_D = 55, and σ_D = 1.2 S/m, respectively. Figure 1 shows the test configuration with the anomaly and the map of F_DSM(r) with the identified location of D. Based on these results, we detected almost the exact location of the anomaly by finding the point r at which F_DSM(r) ≈ 1. Furthermore, because of the presence of the infinite series of Bessel functions in (5), the appearance of artifacts was found to be quite different from the usual form shown in [7,9].

Example 4.2 (Imaging of an extended anomaly). To examine (P5) of Remark 3.1, we consider the imaging of an extended anomaly. For this, we placed an anomaly at (0.01 m, 0.02 m) with a radius, relative permittivity, and conductivity of ρ = 0.05 m, ε_D = 15, and σ_D = 0.5 S/m, respectively. Figure 2 shows the test configuration with the anomaly and a map of F_DSM(r). In contrast to the imaging of the small anomaly in Example 4.1, it was impossible to recognize the shape of the anomaly. This result shows the limitation of the DSM and that an improvement is necessary.
Conclusion
We designed and employed the DSM for the fast imaging of small anomalies from S-parameter values. By considering the relationship between the indicator function and an infinite series of Bessel functions of integer order, certain properties of the DSM were examined. Based on the simulation results with synthetic data, we concluded that the DSM is an effective algorithm for detecting small anomalies. Thus, we anticipate its development for use in real-world applications such as breast cancer detection in biomedical imaging. | 2018-01-08T15:41:31.000Z | 2018-01-08T00:00:00.000 | {
"year": 2018,
"sha1": "eef50ac518db0ed0ca2b3963c24df5e6ebc8b4f4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.02511",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "eef50ac518db0ed0ca2b3963c24df5e6ebc8b4f4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
129962975 | pes2o/s2orc | v3-fos-license | The Advanced Hydraulic City Structure of the Royal City of Angkor Thom and Vicinity Revealed through a High-Resolution Red Relief Image Map
Numerical topographic data acquired through airborne laser scanning (LiDAR) performed at the Angkor Archaeological Park in Cambodia in April 2012 has revealed a large number of heretofore obscured water channels and ponds (Evans et al., 2013). Using this data, a high-resolution red relief image map (RRIM) was created of areas inside and outside the moated royal capital of Angkor Thom, built during the latter half of the 12th century. The land around Angkor Thom is extensively covered by tropical jungle, which has relatively well preserved the original urban structures and middle/post-Angkorian period modifications and renovations by shielding them from human-induced surface alteration, except for tourism-related infrastructure and renovations from the 20th century onward. The RRIM provided a new method of visualizing localized, minute topographical changes in regions with large undulations over a wide area. It proved effective in mapping, on a single wide-area map, the numerous buried remains that exist as slight height differences or minute undulations measuring less than 1 meter in height, and provides a unique aerial view of their widespread distribution. Based on the RRIM map, past archaeological studies were referenced to reconstruct the layout of the water channel network system. Past studies revealed that a large number of ponds had been dug inside Angkor Thom. The RRIM expanded the investigation and revealed the existence of many ponds outside the royal capital, indicating that a residential community had flourished outside the moat-surrounded capital city. This paper discusses the functional aspects of the water channel network and ponds, which utilized the gentle gradient of the natural land to overcome climate-induced environmental changes characterized by an extreme divide between the rainy and dry seasons.
Introduction
The royal capital of Angkor Thom, built by Jayavarman VII in the latter half of the 12th century, lies in the tropical forest of the Angkor Archaeological Park. The royal city, surrounded by a 3 km-square enclosure and moat, was developed as the center of the Angkor Empire's expansive territory. Included within its walls are Bayon temple, situated in the center of the city, and numerous stone temples, royal palace ruins and a royal plaza, including buildings built in periods before and after the initial construction of the royal capital.
The Khmer Archaeology LiDAR Consortium, composed of a team of international researchers, utilized airborne laser scanning (LiDAR) technology in April 2012 to survey an area of 370 km2 that included the Angkor Archaeological Park, the Koh Ker site, and part of Mt. Kulen. As a result, a large number of hydraulic structures, including water channels and ponds not previously identified as land features, were found inside and outside the dense forest of Angkor Thom (Evans et al., 2013). The authors used a topographical data processing technology called the Red Relief Image Map (RRIM) to map the topographical features of the Angkor area so that the features could be more easily recognized visually. A map was produced that allowed minute land features to be identified where they could not be identified from conventional contour maps, color contour maps, and shaded reliefs. The map also provided an aerial view of minute land features over the entire survey area. Capturing subtle undulations in the present landscape, created by remains that lay buried over a wide area, proved to be extremely effective in assessing the shape, layout and interrelationship of hydraulic structures and other such structural elements. This paper discusses the hydraulic structures in and around Angkor Thom which have come to light through the RRIM interpretation.
Method of Creating a RRIM
From the LiDAR digital elevation model (DEM), a three-dimensional visualization method known as the Red Relief Image Map (RRIM) was produced, which effectively represents 3D topographic information without requiring any additional devices or stereoscopic ability from its audience (Chiba et al., 2008). The RRIM was originally developed for the visualization of DEMs produced by LiDAR, but it is also suitable for a wide variety of other high-resolution three-dimensional data such as SRTM, GTOPO30 and ETOPO2. This method can visualize the topographic slope, concavities and convexities at the same time. The RRIM combines an image that expresses slope by different intensities of red and an image that expresses relative height by brightness; areas are redder as the gradient becomes steeper, areas near ridges are brighter the closer they are to the ridge, and valleys are darker the deeper they become. Height and depth, or ridges and valleys, are obtained by subtracting underground openness from aboveground openness and dividing the remainder by 2. Aboveground and underground openness are parameters proposed by Yokoyama (2002), which take into consideration the fractal nature of the terrain and adjust images to match the upper size limit of the terrain that is to be mapped. One of the advantages of the RRIM is that it can clearly visualize localized, subtle topographical changes even in regions with large undulations over a wide area. In other words, the RRIM represents not only large-scale land features but also fine structures in a wide variety of topographic situations.
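As a concrete illustration of this recipe, the following Python sketch computes a simplified RRIM from a regular-grid DEM. It approximates openness along the eight principal azimuths out to a fixed radius, which is a simplification of Yokoyama's (2002) definition, and all function and parameter names are ours.

```python
import numpy as np

def rrim(dem, cellsize, max_dist=10):
    """Simplified Red Relief Image Map from a regular-grid DEM.

    Red saturation encodes slope; brightness encodes the ridge/valley
    index I = (aboveground openness - underground openness) / 2.
    """
    gy, gx = np.gradient(dem, cellsize)
    slope = np.arctan(np.hypot(gx, gy))           # [rad]

    def openness(z):
        # pi/2 minus the mean of maximum elevation angles over 8 azimuths.
        dirs = [(0, 1), (0, -1), (1, 0), (-1, 0),
                (1, 1), (1, -1), (-1, 1), (-1, -1)]
        acc = np.zeros(z.shape, dtype=float)
        for di, dj in dirs:
            best = np.full(z.shape, -np.inf)
            for d in range(1, max_dist + 1):
                # np.roll wraps at the borders; acceptable for a sketch.
                shifted = np.roll(z, (-d * di, -d * dj), axis=(0, 1))
                dist = d * cellsize * np.hypot(di, dj)
                best = np.maximum(best, np.arctan((shifted - z) / dist))
            acc += best
        return np.pi / 2 - acc / len(dirs)

    ridge_valley = (openness(dem) - openness(-dem)) / 2  # + ridge, - valley

    # Compose: hue fixed at red, saturation ~ slope, value ~ ridge/valley.
    sat = slope / (slope.max() + 1e-12)
    val = np.clip(0.5 + ridge_valley / np.pi, 0.0, 1.0)
    return np.stack([val, val * (1 - sat), val * (1 - sat)], axis=-1)
```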
From the high-resolution RRIM, vestiges of a large number of ponds, linear water channels and mounds were identified inside and outside Angkor Thom (Figure 1). Included among them were many remains that could not have been identified from conventional contour maps, color contour maps and shaded reliefs. The RRIM was particularly effective in identifying the accurate shapes of small ponds, narrow water channels and levees. As Angkor Thom is situated on land that gradually slopes downward from the northeast to the southwest, the detailed landform of the entire area could not be expressed evenly on a multicolor contour/shaded relief map, but the RRIM was highly useful in detecting and analyzing the functions of buried features that exist as height differences of several meters or subtle undulations measuring less than 1 meter in height throughout the entire city (Figure 2).
The vestiges of ponds and water channels that were confirmed have remained in relatively good condition in the dense forest (Figure 3). The fact that they were hidden in the forest protected them from human-induced topographical alterations caused by paddy field reclamation and village formation. In contrast, vestiges in older rice paddies and farmland have mostly been lost due to disturbance of the ground surface and leveling of the terrain. In this study, a 61.8 km2 central area of Angkor, where large numbers of remains have been found, was mapped (Figure 1, Figure 3 and Figure 4). Of this area, dense forest covers 24.7 km2 and obscures many ponds, water channels and mounds.
Summary of Past Studies on the Hydraulic Structure of Angkor Thom
Aside from the principal stone structural remains at the central area of Angkor Thom, numerous ancillary remains and hydraulic facilities are scattered in the moated city. The study of such remains, including water channels, was commenced by Marchal in 1917, and was thereafter continued by Trouvé, Goloubew, and Glaize. In more recent years, Gaucher has greatly advanced the study. Marchal conducted a study of the Buddhist terraces in Angkor Thom and of Prasat Chrung, of which there are four located in the four corners of the enclosure of the royal city (Marchal, 1918, 1925). Trouvé conducted a study of the water channel network around Angkor Thom (Trouvé, 1933), which was later complemented by a study by Groslier that accompanied an aerial survey and revealed the relationship of the water channel network with water channels and baray reservoirs in the vicinity of the royal city (Groslier, 1958). Goloubew conducted excavation surveys in various areas within Angkor Thom and uncovered the existence of a number of water channels (Goloubew, 1933, 1934, 1936, 1937). He also shed light on the existence of another moat inside the enclosure, and suggested the possibility that it surrounded a city preceding the construction of the royal city whose ruins exist today. Glaize conducted an excavation survey of the Buddhist terrace that was reported by Marchal and uncovered its details (Glaize, 1937). Gaucher commenced a study of Angkor Thom a half century later, and provided a comprehensive study that made full use of topographical surveys, subsurface explorations, and archaeological excavations (Gaucher, 1997, 2002, 2003, 2004a, 2004b).
The present landform inside the royal capital displays not only the initial structure at the time of the construction of the royal capital in the latter half of the 12th century, but also a multilayered structure of various works from the periods before and after. As Groslier put it, the Angkor Monuments are a palimpsest, and require exhaustive archaeological excavation surveys to elucidate. However, even though the monuments pose difficult conditions, the hydraulic structure of Angkor Thom is gradually being revealed through the abovementioned past studies.
Hydraulic Structure in Angkor Thom
Angkor Thom is a square plot of land measuring approximately 3 km on each side, with sides aligned to the cardinal directions, and is surrounded by a moat that is roughly 110 m wide, with an enclosing wall on the inside of the moat. The moat is divided into five sections by the approaches that extend from each of the five gates. No water channels have been found that interconnect the sections.
Based on past studies, the moat water is drawn inside the enclosing wall through tunnels located 80 m south of the northern end of the east side of the enclosure. No other external water intake structure has been found, and precipitation and underground water are the only other sources of water for the interior of the moated city. The tunnels are supported by a corbel arch structure. There are four tunnels that lie side-by-side, each measuring approximately 68 m long (Goloubew, 1936). Through these tunnels, water impounded in the outer moat was drawn into the internal linear structure. The internal linear structure is located 80-100 m inside the enclosure. Excavation surveys have found that it is bounded by a tiered embankment made of sandstone upper tiers and laterite lower tiers. Measured at the topmost tier of the embankment, the moat is 24-28 m wide and the embankment is 2.2 m high. As with the moat on the outside of the enclosure, the internal linear structure is also interrupted by the five large approaches extending from the Bayon and the Elephant Terrace, and past excavation surveys have revealed that its sections are connected by culverts below the approaches at least on the inside of the East Gate, the Gate of Victory and the South Gate (Goloubew, 1936). However, in the excavation of the northwest corner of the internal linear structure, a structure that blocks this linear structure was found. A similar structure was found in the southwest corner. It was presumed that the west side of the internal linear structure does not connect with the Beng Thom reservoir in the interior of the city. Thus, any judgment about whether the internal linear structure was fully continuous must await further investigation.
Parts of water channel-like structures have been confirmed in excavation and boring surveys conducted on the sides of the five main approaches (Goloubew, 1933, 1934; Gaucher, 2004b). However, different widths and structural materials have been found on the left and right sides of each approach. Furthermore, structures that divide these water channel-like structures were found in several locations. It was also confirmed that they do not connect with the internal linear structure. When considering these findings, it seems reasonable to propose that the structures were not part of a continuous water channel, but rather narrow reservoirs or temporary holding features.
In addition to the channel-like structures along the main approaches and the internal linear structure, the possibility had already been pointed out that a water channel network existed throughout the moated city (Groslier, 1958). However, attempts to reveal such a structure were long abandoned because the dense forest prevented aerial surveys and surface surveys were difficult to conduct. Against this backdrop, the existence of a water channel network displaying an irregular grid pattern was revealed by Gaucher. According to excavation surveys, the grid-pattern water channels and the internal linear structure are interconnected (Gaucher, 2004b), and thus the water channels probably ensured a continuous flow of water. At the same time, however, the water channel-like structures along the main approaches have also been found divided into smaller parts. Therefore, any discussion of the control of flowing water at the intersections of the water channel network and of the continuity of the water channels requires further archaeological studies.
It is proposed that the flow of water through the water channel network in the moated city ultimately converged and pooled in the Beng Thom reservoir in the southwest corner of the enclosure. However, the connection of the internal linear structure and grid-pattern water channels with Beng Thom is still uncertain. Beng Thom measures 193 m in the north-south direction and 380 m in the east-west direction. At present, it is only approximately 1.5 m in depth, and the results of a boring survey conducted in 2007 revealed that the bottom sediment is around 50 cm thick (Mori, 2012). Thus, it is assumed that its water storage capacity was limited.
The impounded water in Beng Thom was drained to the moat through five tunnels that pass through the western end of the south side of the enclosure. They have the same corbel arch structure as the water intake tunnels in the northeast corner and are approximately 58 m in length (Marchal, 1918). The bottom level of the southern end of the west side of the internal linear structure is 2.3 m lower than that of the moat, and the tunnels display a gentle downward slope toward the city, so it is proposed that water was not drained into the moat unless the water level in the internal linear structure reached a certain level.
Reexamination of the Grid Pattern Structure
Data accumulated through past excavation surveys have gradually clarified the layout of the water channels in the moated city, and the new topographical data has verified their overall layout structure. When including the water channel-like structures along the main approaches, the irregular grid pattern of channels, and the internal linear structure, the water channels extend a total length of 95 km within the royal city.
Figure 3 shows a relief of the present land features that represent vestiges of water channels and embankments based on the RRIM. This figure was created by manual vectorization of the outline of each artificial structure: "depressed linear structure", "earthwork", "pond", "recent channel" and "recent rice field pattern". The grid pattern in the moated city is formed by a series of depressions and embankments, and it is believed that the depressed lines of the grid were water channels. The majority of the linear traces of depressions were 5-10 m in width, but those at the grids labeled W2S, W4S, W5S, W4N, and W5N were slightly wider at 13-15 m and not uniform.
Figure 4 shows a conjectured layout of the water channels and embankments inside and outside of Angkor Thom based on Figure 3. The new map corresponds in large part to the previous report by Gaucher, and validates that Gaucher's work was based on a reasonably accurate survey. Even so, there are some elements that differ from his result. For example, in the northern half of the southeast quadrant, Gaucher identified water channels that run in the east-west direction in the same manner as in the southwest quadrant (Figure 5: S1E, S3E, S5E, S7E), but no vestiges of such structures are evident in the RRIM. Furthermore, Gaucher identifies a water channel that extends in the north-south direction at the eastern edge of the southeast quadrant (Figure 5: E5S). As there are clear vestiges in the northeast quadrant, the water channel could have extended from there, but no vestiges are apparent in the RRIM. It is also difficult to state with certainty that anthropogenic structures necessarily existed at W2N, N2W and N4E. Moreover, Gaucher interpreted W5N in the northwest quadrant as two rows of linear structures, but in the RRIM any difference from the other linear structures is vague.
Ponds in the Moated City
Among the hydraulic features in the moated city, the numerous ponds are important elements in addition to the water channels. Gaucher counted a total of 2895 ponds (Gaucher, 2004b). They include Beng Thom and other large ponds, but most are less than 1000 m2. The ponds generally have a ground plan that is rectangular and elongated in the east-west direction, with the long side measuring 20-40 m and the short side measuring around 20 m. In terms of depth, many are around 1-2 m as present land features (Figure 6, Table 1). Ponds in the southeast quadrant are relatively deeper than those in the other areas. However, future surveys are essential to elucidate the depth of the initial pond bottoms located beneath the sedimentary soil.
The ponds are deformed due to long years of erosion and soil and sand deposition. The new topographical map has confirmed the outlines of roughly 2000 ponds in the royal city (Table 1). This figure differs from Gaucher's figure, but the new figure counts adjacent ponds that have become a single pool as one, and does not count ponds near the main approaches extending to the south and north from the Bayon whose traces are unclear. Figure 7 shows how difficult it is to identify the initial shape of the ponds by comparing, for example, the area around the four water channels N0E, N1E, E3N, and E4N according to Gaucher's distribution map and the RRIM.
Ponds are particularly densely distributed in the southeast area of the royal city. In the three areas surrounded by water channels E3S, E4S, S12E, and S15E, measuring 350 m east to west and 260 m north to south, there are 51 ponds in a total area of 91,000 m2, such that approximately 28%, or 25,350 m2, of the site is covered by ponds (Figure 8). The ponds are frequently located so as to form a continuous row in the east-west direction. They probably took on this elongated shape due to later additions that overlapped with existing ponds. When roughly dividing the royal city into three segments in the east-west direction, the vestiges of water channels and ponds are clear on the west and east sides, but unclear in the central area, which is approximately 500-800 m wide (Figure 6). This is possibly because the hydraulic structures that initially existed became dysfunctional when the Angkor Empire declined and Angkor Thom was abandoned, and surface water during the rainy seasons became a braided stream that flowed through the central area and destroyed the ponds and water channels.
Remains around the Moated City of Angkor Thom
Topographical data based on LiDAR revealed with great clarity that water channels and ponds were distributed throughout the area outside the moat of Angkor Thom, in a similar manner as within the royal city (Evans et al., 2013). Trouvé (1933) documented how the east side of the moat of Angkor Thom and East Baray are connected by a water channel, and also recorded numerous remains of linear embankments and water channels in the vicinity of Angkor Thom. Most of the remains identified by Trouvé have been verified by the new topographical data, which is a testament to Trouvé's careful work.
Groslier identified the existence of a number of water channels that connect to the moat around Angkor Thom through aerial photos and field exploration (Groslier, 1958). The recently acquired topographical data reveals, however, that some of the structures identified by Groslier were presumed restorations based on assumptions. Furthermore, in recent years, archaeological maps of the Angkor area have been updated by Dumarçay & Pottier (1993), JICA (1998), Pottier (1999) and Evans et al. (2007), and a history of studies of hydraulic structures in the Angkor area has been compiled by Fletcher et al. (2008), but there have been no recent studies that focus on the water structure in the vicinity of Angkor Thom. The new topographical data sheds light on the distribution of water channels, embankments and numerous ponds outside Angkor Thom, which implies that the residential community extended beyond the moated city. These structures remain in good condition particularly in the area between Angkor Thom and East Baray, which escaped man-induced intervention owing to the dense forest.
Land Allotment Features inside and outside the Moated City
Particularly conspicuous among the features in the dense forest between Angkor Thom and East Baray is the embankment that extends to the east from near the southern end of the east side of the moat around Angkor Thom (Trouvé, 1933). This embankment is 4-5 m high, and extends 1.2 km to the east before it turns at a right angle to the north and continues 350 m to the southwest corner of the enclosure at Ta Prohm. Because of this structure, an area is formed between Angkor Thom and East Baray that is bounded by the water channels that extend east and west and connect with the northeast corner of Angkor Thom, the enclosure of Ta Prohm, the west side of East Baray and the east side of Angkor Thom. A closed area subordinate to Angkor Thom was thus created.
Linear structural remains inside the moated city and in this subordinate closed area do not display a continuous arrangement. Inside the moated city, a dense arrangement of water channels runs east and west, and the areas bounded by these water channels are divided into short strips that are wide in the east-west direction, but in the subordinate closed area there are no such longitudinal divisions. Thus, it seems the water channels were installed according to different systems of land allotment inside and outside the moated city, which also points to the possibility that they were installed at different times.
Furthermore, low embankments that are around 3 m wide and form small square areas, which were not found inside the moated city, were found in large numbers on the east side of Angkor Thom. It is thought that the embankments were dirt walls that surrounded and separated individual houses.
Ponds in the Subordinate Closed Area to Angkor Thom
Ponds in the subordinate closed area have a rectangular plan that matches the cardinal directions like those inside the city, and are of around the same sizes. However, when compared according to their present landform, the ponds in the subordinate closed area are characteristically shallower than the ponds in the city. Many of them are shallower than 1.5 m (Figure 6, Table 1). It is not certain whether the original depths of the ponds were the same inside and outside the city; either the thickness of the sedimentary soil differs, or the ponds outside the city were initially shallower than those inside.
Additionally, they are less densely distributed than the ponds inside the city. There are no ponds that overlap with existing ponds. Although some differences in the water environment, such as the groundwater level and the accessibility to the rivers, are presumed between the inside and outside of the city, these shallower and less densely distributed ponds suggest the possibility that the subordinate closed area was home to a residential community for a shorter period than the interior of the moated city.
Discussion: Water Management System during the Rainy and Dry Seasons
The area around the royal capital of Angkor Thom is characterized by clear differences between the rainy and dry seasons. The dry season lasts from November to April and is a period of little precipitation accompanied by temperatures that reach 40˚C. For this reason, there is no doubt that the system for securing and managing water to support the lives of the people who lived in the royal capital was the most important infrastructural element (Acker, 1998; Fletcher et al., 2008; Groslier, 1979; Kummu, 2003, 2009; Moore, 1995; van Liere, 1980). The rainy season, on the other hand, delivers an abundance of water, requiring water to be stored and the excess drained from the flat land inside and outside the royal capital. Based on this background knowledge, the findings that have been acquired about the hydraulic structure of the royal capital of Angkor Thom at the time it was built will be organized and discussed with reference to the new topographical data.
The hydraulic structure of the royal city is based on a mechanism in which water flows by gravity from the outer moat into the internal linear structure through the tunnels that pass through the northern end of the east side of the enclosure, then into the Beng Thom reservoir, and finally drains out to the moat outside the enclosure through the tunnel outlets at the western end of the south side of the enclosure. The northern end of the east side and the western end of the south side are approximately 4 km apart in straight-line distance and differ by less than 4 m in elevation (Figure 2). Thus, the natural gradient of the terrain is used to distribute water.
The royal city had a large number of ponds. In the Customs of Cambodia, written by Chou Ta-Kuan, a member of a Chinese mission that visited the royal capital of Yasodarapura (present day Angkor Thom) from 1296 to 1297, the section on "bathing" contains a passage that mentions "a custom of bathing several times a day," and describes how "each house necessarily had a pond, or if not, two to three homes shared a pond." Furthermore, as it is written that "women inside the city go outside the city once in every three to four, or five to six, days to bathe in the river," women apparently frequented the Siem Reap River that flows to the east of Angkor Thom. From these passages, it can be assumed that many ponds were needed by each house to pool water for household uses, including bathing, and that the ponds identified here correspond to these household ponds. This means that if the number of ponds is 2000, as identified by the RRIM, and assuming that a maximum of three houses may have shared a pond, some 6000 houses may have been concentrated within the royal city. By estimating the number of members per household, it is possible to obtain a rough estimate of the city's population.
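The paper stops at the number of houses; purely to illustrate the final step, the calculation can be completed as below. The household size of five is our assumption, not a figure from the source, so the result is only an order-of-magnitude sketch.

```python
ponds = 2000               # ponds outlined in the RRIM (from the text above)
houses = 3 * ponds         # Chou Ta-Kuan: at most three houses shared a pond
members_per_house = 5      # assumed average household size (not in the paper)
population = houses * members_per_house
print(houses, population)  # 6000 houses, ~30000 residents (illustrative only)
```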
The ponds probably sufficed for domestic water, but obtaining drinking water would have been problematic. Today, all communities in the rural areas of Cambodia have a pond, and water is drawn from a nearby well using a jug. Deep wells have increased in number in recent years, but most are hand-dug shallow wells that draw free groundwater from a shallow aquifer. Pond water may be contaminated with coliform bacteria, but by penetrating through several meters of ground, it is naturally filtered so that the water that seeps into the well is generally fit for human consumption. Furthermore, it is thought that by storing the water in a biscuit-fired jug placed in the shade, the water is cooled by evaporative heat loss and becomes even more suitable for drinking. The water environment of these people has possibly remained unchanged since the Angkor Empire period. Thus, it is assumed that people in antiquity also used wells in a similar way to draw free groundwater from a shallow aquifer, even though old wells have not yet been found in this area.
The functions of the hydraulic facilities on the relatively flat land of Angkor Thom will now be examined. During the rainy season, even small depressions fill beyond capacity with rainwater. To lower the water level, it is highly likely that the numerous water channels and ponds provided drainage. If the water channel network was managed and the system functioned to ultimately draw water to Beng Thom via the internal linear structure, it would have been possible to drain water from the city in a relatively short period of time.
During the dry season, it is most important to secure water in the wells. As a drop in the water level would lead to well depletion, it is necessary to maintain the groundwater level at a certain height (shallow relative to the ground surface). Boring surveys have revealed that the strata inside the royal capital are composed of repeated layers of high-permeability sand and clay (Iwasaki et al., 1996). As the rainy season proceeds into the dry season, the groundwater level gradually begins to drop. By drawing water from the internal linear structure through the water channel network, water is allowed to penetrate all areas of the royal city via small and large water channels, with water also flowing into the ponds. As the land gradient is extremely gradual, some of the water accumulates in the water channels, and excess water seeps into the ground. The water that flows into the ponds and the water that seeps into the ground push up the free groundwater level, and thereby help maintain the water level in the wells.
In these ways, the water channels and ponds are thought to have served differing purposes depending on the seasons.During the rainy season, they were used to promptly drain water, and during the dry season, to maintain the groundwater level.
Summary
This paper discussed the hydraulic structure inside and outside Angkor Thom by using the topographical data obtained through a LiDAR survey and expressing it as a high-resolution red relief image map (RRIM). The region is widely covered by a dense forest, but precisely owing to this forest, many areas have escaped human-induced landform alterations, which has helped to conserve the structural remains. The topographical map created based on the LiDAR survey provided a large amount of both quantitative and qualitative information that could be used within the world geodetic system. Furthermore, in addition to providing an aerial view of the wide distribution of numerous remains that exist as subtle topographical undulations which could not be identified by conventional contour maps, color contour maps and shaded reliefs, the RRIM identified previously obscured water channel structures and artificial ponds inside and outside of Angkor Thom and provided a reference for comprehensively analyzing their sizes and layout.
Inside Angkor Thom, the distribution of the water channel structures identified by Gaucher was reconfirmed by the new topographical data, but there were differences in the interpretation of some of the water channels and ponds. The initial shapes of many of the ponds cannot be distinguished from the present topographical data due to deformation caused by erosion and sedimentation, and many of the later additional ponds overlapped with adjacent existing ponds.
Around Angkor Thom, it was found that linear water channels were installed according to the cardinal directions in the same manner as those inside, and that there were numerous ponds of similar sizes to those in the royal city. The distribution of ponds indicates that the residential community extended outside the city. The ponds outside the city were shallower than those inside, probably because of differences in the groundwater level and accessibility to the river, or because the residential community outside the city was abandoned at an early stage, allowing soil and sand to accumulate over a long period of time. Also, while many ponds inside the city appear as though they were dug later in close proximity to each other, no such vestiges have been found outside. This also suggests the possibility that the area outside the city was used as a residential community for a shorter period of time.
Past Khmer engineers built an advanced water channel system in Angkor Thom by utilizing the gentle gradient of 0.1% in the northeast to southwest direction. To promptly drain water during the rainy season, and to maintain the groundwater level during the dry season, they installed a water channel network throughout the royal city as well as outside it, and dug numerous ponds. The hydraulic structure of Angkor Thom was achieved through their comprehensive knowledge and clever utilization of the landform, based on their previous experience in urban planning.
Figure 1 .
Figure 1. High-resolution RRIM of the central area of the Angkor Archaeological Park.
Figure 2 .
Figure 2. Comparison of color shaded relief map (left) and RRIM (right) for the Angkor Thom and east area, and section through northeast to southwest of Angkor Thom (bottom).
Figure 3 .
Figure 3. Map of the present state of the central area of the Angkor Archaeological Park. The distribution of monuments in Angkor Thom is based on Gaucher (2004b). Depressed linear structures, earthwork, ponds and recent rice field patterns are identified in the RRIM map. The distribution of dense forest and vegetation is shown based on satellite photos. While many depressed linear structures and ponds can be confirmed in the dense forest, no such vestiges were found in wet-rice cultivation fields. The distinction between new and old water channels (depressed linear structures and recent channels) includes channels that are difficult to decipher at present. In particular, it should be noted that there are cases where a new water channel was built using a water channel that existed during the Angkor period.
Figure 4 .
Figure 4. Reconstruction of water channels and earthworks in the central area of the Angkor Archaeological Park.
Figure 5 .
Figure 5. Layout of water channels in Angkor Thom. The coding of the grid pattern of the water channel network is according to Gaucher (2004b), but codes in parentheses indicate grids that could not be verified in the new topographical data.
Figure 6 .
Figure 6. Distribution of ponds by different depths for Angkor Thom and the east area.
Figure 7 .
Figure 7. Distribution of ponds in a specific area of Angkor Thom surrounded by water channels. The location of this area is shown in Figure 5. In this area, it is particularly difficult to distinguish the shape of each pond, as it is thought that ponds were dug overlapping each other, and also due to erosion and sedimentation. (Above left: Gaucher, 2004a, Figure 17; above right: shaded relief and contour lines (1 m) based on LiDAR data; below left: high-resolution RRIM; below right: reconstruction of ponds from the high-resolution RRIM).
Figure 8 .
Figure 8. High-resolution RRIM (right) and reconstruction of ponds (left) of three sections in the southeast quadrant of Angkor Thom surrounded by S12E, S15E, E3S, and E4S. The location of this area is shown in Figure 5. The area has a particularly dense distribution of ponds; in fact, 28% of the area is ponds. Table 1. Number of ponds by depth in each area inside and outside Angkor Thom. | 2019-04-25T13:06:18.129Z | 2016-01-05T00:00:00.000 | {
"year": 2016,
"sha1": "729f6b61aab0f021aa16c15601c1681fdc362b70",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=62543",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "729f6b61aab0f021aa16c15601c1681fdc362b70",
"s2fieldsofstudy": [
"Environmental Science",
"History",
"Geography"
],
"extfieldsofstudy": [
"Geography"
]
} |
229181767 | pes2o/s2orc | v3-fos-license | Clinician perspectives on what constitutes good practice in community services for people with complex emotional needs: A qualitative thematic meta-synthesis
Introduction The need to improve the quality of community mental health services for people with Complex Emotional Needs (CEN) (who may have a diagnosis of ‘personality disorder’) is recognised internationally and has become a renewed policy priority in England. Such improvement requires positive engagement from clinicians across the service system, and their perspectives on achieving good practice need to be understood. Aim To synthesise qualitative evidence on clinician perspectives on what constitutes good practice, and what helps or prevents it being achieved, in community mental health services for people with CEN. Methods Six bibliographic databases were searched for studies published since 2003 and supplementary citation tracking was conducted. Studies that used any recognised qualitative method and reported clinician experiences and perspectives on community-based mental health services for adults with CEN were eligible for this review, including generic and specialist settings. Meta-synthesis was used to generate and synthesise over-arching themes across included studies. Results Twenty-nine papers were eligible for inclusion, most with samples given a ‘personality disorder’ diagnosis. Six over-arching themes were identified: 1. The use and misuse of diagnosis; 2. The patient journey into services: nowhere to go; 3. Therapeutic relationships: connection and distance; 4. The nature of treatment: not doing too much or too little; 5. Managing safety issues and crises: being measured and proactive; 6. Clinician and wider service needs: whose needs are they anyway? The overall quality of the evidence was moderate. Discussion Through summarising the literature on clinician perspectives on good practice for people with CEN, over-arching priorities were identified on which there appears to be substantial consensus. In their focus on needs such as for a long-term perspective on treatment journeys, high quality and consistent therapeutic relationships, and a balanced approach to safety, clinician priorities are mainly congruent with those found in studies on service user views. They also identify clinician needs that should be met for good care to be provided, including for supervision, joint working and organisational support.
Introduction
The global prevalence of "personality disorder" in the community is estimated to be around 7.8% [1]. This increases to between 40 and 92% among people who use community secondary mental health care services in Europe [2]. High rates of comorbidity with other mental health conditions have been identified [3,4] and people with comorbid conditions appear to have particularly high inpatient and involuntary service use and poor outcomes [5,6]. High rates of comorbid physical conditions have also been found [7][8][9] and evidence suggests shorter life expectancies [10]. Impacts on quality of life are comparable to serious somatic illness [11] and a substantial economic cost has been found for health and social care services and society more generally [12,13]. Our team, which includes people with relevant lived experience and clinicians, has debated terminology for this review in light of rapidly evolving debates about the term 'personality disorder' (especially 'borderline personality disorder'). While some service users report that they find it helpful in clarifying the nature of their difficulties, and it has a role in ensuring consistency in research, very serious critiques have been made of this diagnosis as stigmatising, potentially misogynistic, and associated with a lack of hope and of progress in delivering effective care [14][15][16][17][18]. Many service users find it unhelpful and do not identify with it. For this reason, in this paper and our companion papers on this topic, we have chosen to use the term complex emotional needs (CEN) as a working description of the cluster of needs seen among people who may have a "personality disorder" diagnosis, who use services for 'personality disorder' or CEN, or who appear to have similar needs (e.g., related to repeated self-harm). It is not our intention that complex emotional needs becomes a substitute diagnosis, but rather a description of a broad group of service users. We advocate co-produced work to develop new ways of describing and assessing their difficulties that are clear, consistent and acceptable. While we use the term CEN in our summary of themes from the papers, as the tables of supporting materials indicate, most of the papers themselves use the term "personality disorder".
In the UK, care provided for people with CEN has recurrently been described as of very variable and often poor quality [19]. In 2003, new policy guidance was published aimed at greatly increasing the provision of specialist services and improving training and support in generic services [20,21]. The number of mental health Trusts providing dedicated services increased fivefold over the following decade, but a national survey in 2015 found persisting deficits in access to specialist therapies and to a full spectrum of biopsychosocial interventions, and it remained unclear whether the overall quality of care had improved [19]. Improving care for people with CEN has since become a renewed priority in England [22][23][24]. The need to improve quality of care, reduce stigma and deliver effective treatments for CEN is recognised internationally [25], with the formulation of policies and guidelines in various countries aimed at improving care [26,27].
Policy focus on improving CEN care has been accompanied by growing evidence that there are effective psychological treatment options for CEN [28][29][30][31][32][33], but that the translation of policy and evidence into service provision has been slow [34]. Service users and clinicians have been found to agree that access to specialist services and psychological interventions, interventions to reduce stigma in services, specialist consultation services for generic mental health staff, and positive risk management are priorities, but these do not appear to be widely reflected in service provision [35].
As well as a lack of resources, clinician-related barriers to service improvement have repeatedly been found. Stigma related to the "personality disorder" diagnosis has recurrently been identified among clinicians: feeling powerless to be helpful, perceived un-treatability, preconceptions about patients and poor understanding of CEN have been identified as contributors to this stigma [34,36]. Unmet training needs and the lack of a clear framework are reported to contribute to negative experiences of working with people with CEN [37]. These do not, however, appear to be inevitable consequences of working with people with CEN: in a relatively well-resourced specialist "personality disorder" service setting, Crawford et al. [38] reported relatively low levels of clinician burnout and good satisfaction among staff working with people with CEN. Thus, understanding the perspectives, experiences and attitudes of clinicians, and the conditions that allow them to work effectively and without excessive burnout with people with CEN, is a crucial element in informing next steps for improving service provision.
The aim of this review was to synthesise existing qualitative evidence on clinician perspectives on what constitutes good practice in community mental health settings for people with CEN, and how this could be achieved. Objectives included conducting a systematic search of the literature, conducting a meta-synthesis of qualitative data, and assessing the quality of the evidence. This review is part of a broader programme of work conducted by the NIHR Mental Health Policy Research Unit to inform the development of NHS England specialist pathways and to strengthen the evidence base for service development in this field nationally and internationally. Other reviews include a synthesis of qualitive literature on service user perspectives on good practice [17], systematic reviews on treatment effectiveness and cost-effectiveness [39] and a study of service typologies.
Information sources and search strategy
The review team developed the protocol in line with PRISMA guidelines [40] and guidance on qualitative meta-syntheses [41] in collaboration with a project-specific working group of lived-experience researchers and subject experts. The protocol was registered prospectively on PROSPERO (CRD42019145615), as was the protocol for the wider programme of work (CRD42019131834).
One search strategy was developed for all the reviews in the programme (see S1 Appendix). Search terms were built around key words and subject headings relevant to CEN and related needs, community mental health services, and eligible study designs including qualitative, quantitative and guidelines. Comprehensive searches were conducted of six bibliographic databases, including MEDLINE. No limits were placed on the language or country, and a limit of 2003 or later was placed on the date to capture perspectives of greater contemporary relevance by only including research since the release of "Personality Disorder: No Longer a Diagnosis of Exclusion" and the National Institute of Mental Health in England policy implementation guidance [20,21]. Citations retrieved during searches were collated in Endnote [42], reference management software, and duplicates were removed. Titles and abstracts were double screened by two NIHR Mental Health Policy Research Unit researchers for all the reviews together, and full-text screening was performed on potentially eligible papers for this review. Supplementary searching included a call for evidence publicised via the study team's networks, relevant professional associations and social media, forward and backward citation tracing of included articles, and the reference lists of other relevant systematic reviews found in an additional systematic review search of EMBASE and MEDLINE (January 2003-November 2019). Grey literature was identified through web searches and the above bibliographic database search. All included studies and 20% of those excluded were double screened, and discussion with senior reviewers achieved consensus.
Eligibility criteria
Studies using recognised qualitative data collection and analysis methods to explore clinician perspectives on good practice in community mental health services for people with CEN were included. For the purposes of this paper, we have defined good practice as that which is likely to contribute to or be associated with improved service user outcomes, experiences and satisfaction with services. Studies were eligible if they reported the relevant perspectives of any mental health professional with experience of working with people with CEN. Our main sample was of publications which used the term 'personality disorder' to describe the difficulties discussed by clinicians. However, we were aware that other investigators may also have wished to avoid the term 'personality disorder' or may have collected data from people who did not identify with this diagnosis but were experiencing comparable long-term difficulties. As more fully detailed in S1 Appendix, we also ran searches with other terms which might be used to describe such difficulties (for example, recurrent self-harm, complex trauma, emotion dysregulation). As described below, this however yielded few papers. Eligible settings were community-based mental health services, i.e. any non-residential mental health services that provided care for people living in the community with CEN, whether exclusively or not. This included mental health care in primary care settings, generic community mental health teams (e.g., mainstream multidisciplinary teams providing services for a range of needs in the local population), and specialist/dedicated services exclusively for people with CEN [19]. Residential, forensic, or crisis services, or specialist services for different conditions were excluded. Papers were excluded if the service target population were primarily below the age of 16, unless focussing on transition into adult services. Initially, peer-reviewed and grey literature were eligible, except for case studies, dissertations and theses. Due to the broad scope in topics covered, a pragmatic decision was made ad-hoc to exclude papers not in English and not peer-reviewed (See S2 Appendix for full eligibility criteria). Most of the papers used "personality disorder" to describe the sample, but here we use the term CEN as an overall term for reasons discussed in the introduction.
Quality assessment and analysis
Study characteristics were extracted into a Microsoft Excel form. The Critical Appraisal Skills Programme (CASP) Qualitative Checklist [43] was used to perform quality assessments. Study quality was not used to determine eligibility but is reported below. Text from the results sections of included articles was entered verbatim into the coding software NVivo for thematic meta-synthesis [41] and linked to individual study characteristics such as types of clinicians, services, and interventions. For stage one, articles were coded line-by-line by one of two researchers and 20% of papers were double coded to produce an initial framework. A preliminary thematic framework emerged from further discussion between the two researchers at stage two, developed as codes were merged and grouped hierarchically. At stage three, analytic themes were developed and finalised iteratively through wider collaboration with the team of reviewers and experts by experience and occupation. The analysis process included considering whether there were sub-group differences related to major study characteristics such as country of publication.
Results
A total of 29 papers (drawing on 27 unique datasets) were eligible for inclusion [38,44-71] (Fig 1), representing perspectives from at least 550 clinicians. Clinicians represented a variety of professions, including (but not limited to) psychologists (8 papers), social workers (7), psychiatric nurses (12), occupational therapists (4), psychiatrists (12), family doctors (known as General Practitioners or 'GPs' in the UK; 3) and counsellors (2). Other papers defined their clinicians more broadly as those who provide the service of interest, and some samples also included service managers, commissioners, administrators and referrers. Twelve studies were conducted in generic community mental health settings, four in primary care, seven in specialist/dedicated services for people with CEN and three in specific DBT teams, with a further three studies including clinicians from across a range of community settings. The majority of included papers came from England (13), followed by Australia (5) and North America (3). Data collection methods were sometimes mixed and consisted primarily of interviews (22), focus groups (7), and open-text responses/surveys (4). While we use the term "CEN" in our summary, service users in most included studies were identified as having "personality disorder" or "borderline personality disorder", as summarised in Table 1 below.
Quality appraisal indicated that the majority of studies appropriately used qualitative methodology (n = 28), employed an appropriate research design (n = 28), and described clear findings (n = 28). Most studies also presented clear aims (n = 27) and used appropriate data collection methods (n = 26). However, a number of papers did not provide enough information to determine whether the data analysis was sufficiently rigorous (n = 6), whether the recruitment strategy was appropriate (n = 11), or whether ethical issues had been sufficiently considered (n = 12). Only 5 papers in total adequately considered the relationship between researcher and participants. See S1 Table for full appraisal ratings.
Six overarching themes were identified through meta-synthesis: 1. Stigma and the use and misuse of diagnosis; 2. The patient journey through services: nowhere to go; 3. Therapeutic relationships: connection and distance; 4. Dialectics: not doing too much or too little; 5. Managing safety issues and crises: being measured and proactive; and 6. Clinician and wider service needs (including clinician support, interagency working and the wider system, and establishing new services, interventions and skills): whose needs are they anyway? These themes are Bosanac, 2015. [44] Mentalization-based intervention to recurrent acute presentations and self-harm in a community mental health service setting.
Stigma and the use and misuse of diagnosis
Our main aim was to synthesise evidence on clinician views of good practice, but it was clear that underlying beliefs about the nature of such difficulties and the appropriate use of diagnosis influenced clinicians' perspectives on care. A few studies reported that some clinicians found conceptualising and diagnosing difficulties as "personality disorder" helpful. They saw it as offering a 'common language' and a useful way to understand service users' difficulties, while also helping to ensure that service users were seen as having genuine needs. However, across a number of studies, clinicians questioned the use, meaning and validity of this diagnosis. They saw it as associated with stigma, discrimination and exclusion from services, felt it could be difficult to 'shake off', and felt it risked becoming "the person's entirety" [48].
Patients with a psychosis were seen as not accountable and in need of support. Borderline patients, however, were considered theatrical, posing, and in need of punishment.
Psychologist describing a crisis intervention team (Koekkoek et al., 2009) [54]

Accounts of the use of "personality disorder" diagnoses in non-specialist primary and secondary care services suggested they were made at times on the basis of "gut instinct" [55] or "gut feeling" [71], or because other diagnoses did not 'fit'. An investigation of clinician views in generic community and voluntary sector services found that some perceived "personality disorder" as essentially "a form of social deviance or cultural rule-breaking" [65], while others felt that the label was an unhelpful medicalisation of legitimate feelings of distress, especially among women. In this study, as in several studies examining perspectives of specialist clinicians, a majority of clinicians saw trauma and adversity as major causes of "personality disorder". As a result of concerns about diagnosis, clinicians were reported in several studies to be reluctant to use this label and to avoid discussing it with service users. Some opted for alternative diagnoses (e.g., complex post-traumatic stress disorder) or employed what they considered to be 'euphemisms' such as "difficulty managing emotions" [67]. Other specialist clinicians reported that they preferred a focus on narrative descriptions of presenting difficulties rather than relying on a "personality disorder" diagnosis.
The patient journey through services: Nowhere to go
Access to services for people with CEN was reported in several studies to be a persistent difficulty, with GPs in one study [50] reporting longer waiting times than for any other group of mental health service users. Referrals for specialist support were impeded by factors such as a lack of local services, lack of awareness of services, frequent changes to services, and poorly established referral pathways. This was felt to risk disengagement, escalation of distress, or missing windows of opportunity to provide effective support. Thresholds for acceptance by specialist services were reported in some studies to be inconsistent and influenced by subjective judgements regarding, for example, 'severity', 'stuck-ness' or 'motivation to engage'. Many service users were excluded from specialist support due to being perceived as a risk to others (e.g., through having a forensic history), having substance misuse problems, exhibiting behaviour considered too 'problematic' or 'chaotic', or being seen as 'non-psychologically minded'.
Referrers such as GPs in several studies also reported difficulties getting service users accepted by generic, mainstream community mental health teams or psychological treatment services. However, in other studies, clinicians working in these generic teams saw their eligibility criteria as over-inclusive, with one study describing them as a "dumping ground" for anyone who did not 'fit' elsewhere [63]. Stepped care pathways could also contribute to difficulties accessing appropriate treatment. For example, clinicians in the UK reported being encouraged to refer initially to primary care Improving Access to Psychological Therapies (IAPT) or mainstream secondary care services, rather than to specialist teams. However, knowledge and capacity for treating CEN were often seen as lacking in these generic services, with people with CEN not prioritised and clinicians feeling they did not have the skills to deliver the expected care. Some referrers described 'embellishing' referral information to meet thresholds for specialist support. In other cases, GPs, as well as assessors in secondary care services, 'downplayed' service users' difficulties or risk levels and emphasised 'more agreeable' traits to meet thresholds for primary care support, such as IAPT services. Service users could end up being passed back and forth in "a tennis ball effect" [55], with a high but inefficient use of services.
You know if you mention 'PD' there will be nowhere at all for them to go so I'm usually very careful not to put it down in their notes. I usually say depressed or a bit anxious. Something that won't make them think the patient is risky. It's about knowing the hoops that you've got to jump through.
GP (French et al., 2019) [50]
The referral process was reported to be facilitated by good working relationships and communication between receiving clinicians and referrers, outreach by specialist services to raise visibility and explain service models, and acceptance of self-referrals, which some felt could be empowering and inclusive. Some referrers valued holistic, in-depth assessments and formulations from specialist clinicians, particularly non-medical, non-psychiatric or psychodynamic formulations, even if service users were not ultimately taken on, as these could inform treatment plans and facilitate therapeutic relationships.
Therapeutic relationships: Connection and distance
Strong, trusting relationships between clinicians and service users were seen as key to treatment success across many studies, but clinicians' experiences of such relationships varied greatly both between and within studies. In several studies, clinicians were keen to emphasise the positives of working with people with CEN, describing them as 'relatable', 'honest' and 'creative', and seeing the role of the clinician as being to "harness that" [38]. However, negative feelings and a sense of burnout were also frequently described, with clinicians viewing (or reporting that other clinicians viewed) service users as 'demanding', 'challenging', 'risky', 'dependent', 'self-destructive', 'manipulative', 'non-compliant', 'untreatable', and likely to 'push boundaries'. Service users' difficulties were seen as enduring but urgent, and clinicians could feel overwhelmed by "a bottomless pool of need" [71], especially as comorbid diagnoses and wider social issues with housing, employment, finances and social networks were often also present. Clinicians described feeling both idealised by service users and as though nothing they did was good enough. While establishing an authentic connection with service users was seen as vital, clinicians admitted to fears of being "sucked dry" and "emotionally swamped" [64], experiencing feelings of vulnerability and of being dangerously on the edge of losing their sense of self.
Participants spoke repeatedly about the need to maintain a psychological distance from clients in order to prevent themselves from becoming overwhelmed or burned out.
(Langley & Klopper, 2005) [56]
In a few studies, however, clinicians reported they felt able to make use of their unsettling feelings to connect with service users' own feelings. Although there were exceptions, negative attitudes and experiences appeared particularly prevalent in mainstream primary and secondary care services. This was attributed to poor understanding of CEN in these settings, to staff being overburdened but inadequately supported, and to observing poor outcomes, leading to frustration, hopelessness, and sometimes feelings of aggression and blame towards service users. Suggestions to combat negative attitudes included better supervision and training by specialists to improve understanding, compassion, and perceptions of treatment effectiveness, along with more support from services for clinicians to engage with supervision and training.
Overall, the impression across studies was that clinicians described the need to be authentic, non-judgemental, empathic, collaborative, hopeful, motivating, consistent and dependable to build trust with service users, whom they understood often to have had histories of abuse or abandonment by key attachment figures. The importance of 'knowing' service users, holding them in mind, and acknowledging the reality of their experiences was emphasised. When relationships went well, clinicians described successfully negotiating connection and distance in the therapeutic relationship: being open, warm and available, but also retaining boundaries, structure and a degree of emotional detachment. Clinicians spoke of a need to create a sense of shared responsibility for progress with service users, and of the value of adopting a curious, non-expert stance to help develop a safe space where strong emotions could be processed, tolerated and "radically accepted" [48].
Clinicians who reported more positive relationships tended to be those who felt better supported, for example describing better team working, supervision, and informal support from their colleagues, as well as longer-term treatment frameworks, which allowed time for relationships to develop. Such support appeared to be much more available in more specialised services.
Dialectics: Not doing too much or too little
Clinicians' beliefs regarding appropriate duration of treatment, and how best to negotiate not doing 'too much' or 'too little', were complex. There was consensus across studies that people with CEN had long-term needs, but in a few studies, clinicians voiced concerns that open-ended, long-term support could be too demanding for service users to engage with, too resource-intensive, or could result in 'dependency' and a lack of delivery of interventions with clear therapeutic content, particularly in generic secondary care services. Clinicians felt that it was important to be realistic about what they could achieve and to avoid setting expectations that they could 'fix' everything. At the same time, in several studies, clinicians emphasised that not offering sufficient long-term support could result in unrealistic expectations for recovery, disappointment and undertreatment. Several studies reported a perceived lack of well-developed, longer-term support programmes at a medium level of intensity.
The requirements of the system do not always fit with the needs of the people who are using the service: The expectation is that you will recover... you will get out of the service... we can only work with you for a certain amount of time... It just doesn't work as simply as that.
Mainstream secondary care clinician (Priest et al., 2011) [63]

Across studies, clinicians described a need for balance between recognising the limits of what could be achieved, managing the expectations of both clinicians and service users, and maintaining hope. In one study, clinicians saw a tendency in mainstream settings for clinicians to "do completely nothing" [54] in therapeutic encounters with people with CEN, or alternatively to display 'false optimism' or 'therapeutic nihilism', rapidly discharging service users due to underlying feelings of powerlessness and demoralisation. Paradoxically, however, such undertreatment then had the effect of increasing the very 'dependency' clinicians feared, as service users had to keep 'coming back for more'.
Premature discharge was identified as common and was put down to clinicians seeking to 'escape' from work they found challenging, to service recovery models conflicting with service users' needs, and to pressures to move people on. Yet, there was consensus across several studies that discharge could be particularly challenging for people with CEN and needed to be managed sensitively, especially because of associated safety issues (e.g., due to service users feeling abandoned by clinicians). Views diverged, however, about the best way to approach discharge. For example, in one study evaluating specialist services for CEN [47], some clinicians feared that open-ended service use without a clear plan for discharge could reduce service users' motivation to develop coping skills, affect the service's capacity to take on new referrals, and encourage 'dependency'. These clinicians felt having discharge or self-sufficiency as a time-specific goal from the beginning of care was helpful. However, other clinicians in the same study favoured offering continuing support at a lower level of intensity (for example through peer support), rather than absolute discharge following a period of intensive treatment, with clear provisions for re-engaging with services if required.
Short-term therapy, such as that offered by IAPT in the UK, tended to be seen as insufficiently flexible and intensive for people with CEN. In one study, primary care clinicians described a sense that they were "short-changing" service users [64]. In a few studies, clinicians also expressed fears that short-term support could potentially be harmful or experienced by service users as 'abandoning' and 'retraumatising'. However, in a small number of studies clinicians did argue that short-term support had value, either at specific points in service users' treatment journeys, or for those with less severe difficulties.
Clinicians in multiple studies also underlined the need to deliver both psychotherapeutic interventions and pragmatic social support to meet the varied and fluctuating needs of this population. Pragmatic support, which was reportedly offered more often in specialist services, could include vocational, educational, social, substance misuse, or parenting support, as well as skills to promote independence.
Intervention models. Specific treatment models that clinicians reported as having therapeutic benefits included Dialectical Behaviour Therapy (DBT) [51,55,60,61], Mentalisation-Based Therapy (MBT) [44,55], Cognitive Analytic Therapy (CAT) [68] and psychodynamic formulations [58]. However, in several studies, clinicians also emphasised that 'one size does not fit all', that diverse, flexible treatment options were needed within mental health services and in primary care, and that more formulation-driven treatments could be more beneficial than those based on diagnosis or driven by manuals.
There was a consensus across studies that a variety of approaches could be taken to some core therapeutic tasks, making a range of interventions similarly effective in achieving good outcomes. Clinicians tended to see difficulties with managing emotions as central in CEN, and prioritised interventions that promoted development of skills relating to emotion regulation, distress tolerance, or developing a capacity for thinking and feeling rather than doing. Similarly, models that helped service users to practice their interpersonal skills (e.g., via groups, peer support, or therapeutic communities) were seen as valuable in several studies. DBT was the specific therapeutic intervention most often discussed in studies and clinicians identified several benefits from this. As well as helping service users develop better relationships and emotion regulation, clinicians felt it was based on a clear model and manual, and that it promoted hope, decreased medication use, encouraged service users to take responsibility for treatment, and helped encourage compassion, understanding and team working on the part of clinicians. Clinicians in some studies did, however, also report that delivering DBT placed considerable demands on them and their services, including the need for intensive training, implementation of a complex model allowing relatively little flexibility, and being contactable outside of working hours.
Formats like groups, peer support, and therapeutic communities were also valued for broadening the range of available options, promoting collaborative, user-led models of care, and empowering service users to have ownership over their treatment in a more democratic way. Finally, support for family and friends was identified in several studies as important, but as an area where even well-resourced specialist services often fell short, despite the perception that people with CEN often experience difficulties with relationships.
Managing safety issues and crises: Being measured and proactive
Managing safety issues was considered vital across all treatment settings. The nature of deliberate self-harm and other safety issues in the context of CEN was seen as differing from acute presentations in other mental health conditions because of its chronic, recurrent and to some extent predictable nature. As such, clinicians felt it could be prepared for proactively, through open dialogue with service users to agree parameters within which clinicians would respond.
In a small number of studies, clinicians suggested that 'rescuing' or stepping in too quickly at times of crisis could be detrimental or disempowering for service users. However, there was a competing need not to become neglectful, with a lack of consensus regarding how available clinicians should make themselves. Views about out of hours service provision varied. In one study of community-based mental health services implementing DBT, some clinicians described 24/7 availability or an 'on call' system as a 'step backwards' and ineffective. But in other studies, clinicians argued that this was important, and that greater availability of support in fact usually reduced the need for it. Some clinicians felt that people with CEN were seen as 'bad' for posing a safety risk, in contrast to those with other diagnoses, such as psychosis, who were seen as 'mad'.
Practice in mainstream services was described in some studies as risk-averse and reactive, sometimes creating a vicious cycle wherein service users felt they had to present in crisis to get more input. Clinicians used to dealing with crises in the context of conditions such as depression or psychosis were reported to struggle to manage the specific dynamics of safety concerns for people with CEN. Specialist services were seen as adopting more proactive approaches, negotiating plans for managing safety issues in collaboration with service users, moving away from action-reaction or fearful responses from clinicians, and fostering ownership of the management of safety issues among service users.
Clinician and wider service needs: Whose needs are they anyway?
Clinician needs. A recurring challenge across studies was for clinicians to reconcile their own needs with those of service users and wider services. This dilemma was particularly acute where clinicians lacked organisational support or adequate supervision. When synthesising studies, it was difficult at times to disentangle whose needs were in reality met by particular practices. For example, when clinicians described a need to reduce service users' alleged 'dependency' and promote 'self-sufficiency', this seemed in part connected to clinicians' own feelings of being overwhelmed, as well as to wider service pressures to conserve resources. One study of clinicians working in "personality disorder" services [38] suggested that service users' perceived difficulties (e.g., with reflection) could be 'mirrored' further up the organisation. This study also reported that service leads across several teams appeared to be 'charismatic' but also 'autocratic', seeking to 'quell dissent' among clinicians by adopting firm, unequivocal stances. In other studies, it was clear that services, rather than service users, were at times experienced by clinicians as 'difficult to engage with'.
It's not the patients that make you frustrated nowadays, it's the organization around that is troublesome.
DBT Therapist (Perseius et al., 2003) [61]

The importance of clinicians feeling supported in their work was a common theme across studies. Working effectively with people with CEN without becoming burnt out was seen as achievable, but the organisational support needed to do so was often missing, with the low priority and investment accorded to treatment of people with CEN affecting both service users and clinicians. Clinicians valued both supportive relationships with colleagues and formal supervision in a variety of formats, including individual and whole-team supervision and input from external experts. The importance of addressing clinicians' own emotional needs, engaging in reflective practice and enabling clinicians to process their own vulnerabilities and 'destructive' emotions was emphasised, but provision was frequently described as inadequate.
Good team-working and sharing responsibilities for treatment and decisions regarding safety also helped clinicians to feel supported. This appeared to be reported most often regarding specialist teams, especially those using DBT and CAT models, and in therapeutic communities, and least frequently in primary care settings, where "you're kind of left on your own with somebody" [64]. There could also be challenges where only one or two clinicians in a team were trained in a particular therapeutic intervention or skill set. While clinicians saw value in including a range of clinicians with diverse backgrounds and approaches, they also felt this could encourage "splitting", making it more difficult to develop a shared language or model of understanding across team members.
Having divided caseloads (i.e., caseloads not consisting solely of people with CEN) was considered by some to be beneficial for the integration of CEN work into generic teams and for staff wellbeing. However, having competing clinical priorities could impede therapeutic work, and the 'psychological shift' between various roles was experienced by some as challenging. One study noted that specialist services tended to promote broad, combined roles where all clinicians contributed to delivering the therapeutic model, but this required significant training. Specialist services sometimes had 'flat hierarchies', which could be empowering but also frustrating for clinicians when responsibility was equal but authority or pay, for example, was not.
Interagency working and the wider system. Effective inter-team and inter-agency working was considered important for management of the resource-intensive, multi-agency, and often out-of-hours service use by people with CEN. However, reports of inadequate communication between services were common at all levels of care. Challenges included high staff turnover, staff cutbacks due to reduced budgets, time constraints, and disagreements or competing priorities between clinicians, with poor interagency working leaving clinicians feeling more anxious and less contained. Pre-existing personal or good professional relationships [63] and clearly assigned responsibilities [47] (taking into account service user preferences regarding clinicians and services where possible) facilitated interagency working.
Clinicians in mainstream services reported in several studies that they valued support from specialist services, such as in hub and spoke models, where specialist staff provide expert assessments, case consultation, supervision, and staff training to mainstream services [47,57,58,71]. This model was perceived as making efficient use of specialist staff, allowing them to support not only those on their small caseloads receiving intensive therapy but also a much wider group beyond the dedicated services. However, reservations about such models were described in a few studies, including that input from specialists could undermine professional roles in mainstream services, might be ineffective on an ad hoc rather than sustained basis, and risked specialist clinicians carrying unsustainable workloads. There were also some tensions identified between mainstream and specialist services, where mainstream services were seen as having to 'firefight', whereas specialist services were perceived to have greater freedom to 'select' service users, refuse certain responsibilities, and prioritise time for reflection.
Establishing new services, interventions and skills. Finally, a number of studies were conducted in the context of establishing a new service or intervention programme, and thus themes emerged relating to good practice in initial implementation. Factors that were considered helpful for developing new services or interventions included: managerial support, recruitment of appropriate staff, leadership that embraced uncertainty and allowed clinicians freedom to innovate, team building, cross-agency and whole-team training, and having realistic plans, timescales and budgets. Ongoing sustainability of new services was facilitated by integrating them into existing service systems, effective interagency working, and measuring and demonstrating good outcomes. Clinicians trained in new models described feeling like 'beginners' despite their clinical expertise and being required to make significant time commitments for implementation and ongoing practice and learning. There was widespread recognition of the need for ongoing support and training beyond the initial phase to support knowledge retention and ensure programme sustainability. Some questioned the suitability of mental health service settings for delivering such services, given service users' previous unsatisfactory or traumatic experiences in those settings. However, acquiring alternative premises was often challenging.
Discussion
Several overall proposals for good practice can be drawn from this synthesis of clinicians' perspectives on treating people with CEN effectively and respectfully while at the same time supporting the clinicians working with them. Areas of consensus between the findings of eligible studies included the need for high quality, holistic assessments and care plans encompassing physical, psychological and social needs; easily navigable referral systems enabling good continuity of care; and the need for a proactive, collaborative approach to safety management. Therapeutic relationships were seen as key and as a major common factor in the success of different approaches, and clinicians in participating studies believed that they could be improved through greater therapeutic optimism, overcoming pejorative attitudes, developing partnerships between service users and clinicians through shared responsibility and decision making, radical acceptance and a non-expert stance, and sustainable models for service user involvement in care.
Some dilemmas and variations in opinion were also identified, especially regarding the balance between doing 'too much' or 'too little.' Potential positive and negative consequences were identified both for open-ended long-term input and for time-limited input, as well as for 24-hour availability of clinicians in specialist services. Those who advocated long-term support may be more in tune with service users, who were often reported to see periods of treatment as too short and continuing support between periods of intensive therapy as lacking [17]. Whether or not services were time-limited, there was agreement that careful collaborative discharge planning was required to mitigate some of the frequently experienced challenges and help service users work towards self-sufficiency.
Many of these findings align with those identified in our accompanying meta-synthesis of the perspectives and experiences of service users with CEN [17]. For example, service users also appear to prioritise individualised care, preferring clinicians to focus on individual needs and aspirations rather than diagnosis or intervention fidelity. Clinicians were called upon in papers on service user perspectives to sustain hope and provide encouragement while at the same time maintaining realistic expectations and not invalidating service user distress. The centrality of the therapeutic relationship is a further point of consensus. While both clinicians and service users emphasised the need to offer a variety of treatment options to meet service users' heterogeneous needs, service users also prioritised structure, stability and a long-term perspective in their care. These are not inconsistent demands as options can be flexible and varied, yet their delivery can remain structured and consistent on an individual level. Whilst discontinuity of care and difficulties accessing services are often reported elsewhere for other diagnostic groups as well, we suggest that clinician reports in our synthesis reinforce views from service users [17] and policy makers [23] that this group is especially poorly served in terms of a service system designed to be accessible and meet a range of needs.
Concerns around the usefulness and impact of using "personality disorder" labels were also similar to those reported from studies of service user perspectives. However, the included papers on clinician perspectives were less likely to reflect recent calls by service user advocates and some clinicians, supported by patient testimonials and growing evidence, to give trauma a central role in the assessment and treatment of CEN, a call also reinforced by feminist critiques of "personality disorder" as a mechanism for pathologising natural responses to oppression, abuse and structural inequalities [18]. This omission may in part reflect the fact that most studies were conducted before the rise of the 'Trauma not PD' movement [72,73]. We suggest that, alongside the priorities identified above, incorporating trauma-informed approaches to care and preventing re-traumatisation within mental health settings should be seen as key elements in good practice if a shared agenda for service improvement is to be agreed on by service users and clinicians [74].
Exploring clinician perspectives is particularly valuable for identifying ways of promoting positive change and for removing clinician-related barriers to this. This review echoes much other literature in identifying pejorative clinician attitudes and behaviours as an important obstacle to delivering care that is even adequate, especially in non-specialist settings. Developing and evaluating ways to challenge and change such behaviours is thus a pressing need. This review also identifies the need to extend more support to clinicians working with people with CEN; across several studies, clinicians reported on the significant emotional toll of their work, which could potentially fuel negative behaviours and a lack of therapeutic optimism. Several of our themes related to the need for clinicians to strike a balance: between connection and distance, between doing too much and too little in terms of treatment provision, and between service user empowerment and independence on the one hand and service pressures towards risk-aversion on the other. Needs of different stakeholders also require balancing: for example, do some clinicians warn against long-term input for the benefit of service users (to promote independence), for the benefit of themselves (to avoid challenging work), or for the benefit of services (to meet capacity constraints)? This balancing act, together with caseload and referral pressures, may well contribute to the emotional toll of working with people with CEN. However, clinicians, especially in specialist services, also described many ways of alleviating this burden, including through supervision, reflective practice and informal support between colleagues. The burdens associated with difficult therapeutic decisions, especially regarding safety, were clearly alleviated by being shared, both with colleagues and with service users. As such, multidisciplinary co-produced formulations, maintaining the centrality of the therapeutic relationship, and 'holding in mind' the service user could provide some guiding principles for clinicians when navigating these complex balances and would be a useful focus for further research.
Constraints on good practice relating to the wider service system were recurrently described, including exclusive thresholds and referral pathways, inflexibility of services to meet diverse and long-term needs and manage co-occurring conditions, and lack of time for reflection and training. Lack of recognition of the needs of people with CEN, and lack of resourcing to meet these needs, were widely reported and are likely to contribute to these constraints. These deficits may also reflect a lack of evidence and strategic thinking on how to optimise service design so as to produce coherent pathways allowing smooth transitions between accessible services corresponding to service users' needs, and delivery of a full range of evidence-based psychosocial interventions in all relevant settings. This will require design of the system so that relevant evidence-based interventions can be delivered in primary care, generic secondary care and specialised services, with smooth transitions and collaborative working between all sectors, including support for primary care from specialised CEN services. The major focus of research on CEN has been on the effectiveness and cost-effectiveness of relatively short-term psychological therapies: co-produced research taking a whole-system perspective on how to design systems of care that meet the varying needs of diverse service users at different stages in their pathways through services now appears to be an important need.
Limitations
We aimed to include papers regarding management in the community of people with a range of "personality disorder" diagnoses, or who might have related difficulties, such as recurrent self-harm, but not have received such a diagnosis. However, in practice most studies focused on people who had received a diagnosis of "borderline personality disorder". As such, our findings relate mainly to this group, with some heterogeneity in the ways in which study samples were identified. Our search criteria were broad, encompassing qualitative literature using all methods on all aspects of community care for all personality diagnoses: we therefore made a pragmatic decision to exclude papers that had not been peer-reviewed or were not in English, as well as dissertations and theses. This may have resulted in substantial contributions being missed. There was a good variety of professional backgrounds and levels of care across included papers, but little literature about voluntary organisations and other community services outside the secondary mental health care system. This may reflect limitations of the search strategy, but probably also indicates a scarcity of research in these areas. This may mean that the voices of staff who support individuals who have disengaged or been excluded from the mainstream mental health system are not included.
As this is a meta-synthesis identifying and cross-validating over-arching themes across many studies, a level of nuance and specificity will inevitably have been lost, with findings pooled from a variety of contexts, dates and countries. The two researchers who worked most closely on the synthesis (JT and BLT) both have clinical experience of providing mental health care, while three other authors (JR, TJ, EB) bring relevant lived experience of service use; the results presented here and their interpretation may well be shaped by perceptions born of these experiences. Efforts were made to counter this by adopting an inductive approach to analysis, double coding a portion of papers, discussing themes together and iteratively, and through the collaboration of the review team and experts by experience and occupation.
Conclusion
Clinicians' experiences of and perspectives on good practice in providing community care for people with CEN offer valuable insights into how to better meet the needs of this population and of the clinicians supporting them, and are largely in harmony with the perspectives of service users [17]. Further research now needs to focus on how to implement these principles of good practice across the service system to improve outcomes and experiences for both service users and clinicians. Previous research has tended to focus on individual psychological interventions: a focus on designing a whole system of care that can meet the longer-term needs of people with CEN in a sustainable way is now desirable. Development and evaluation of fidelity measures that reflect agreed good practice [75,76], and of approaches to support services in achieving and maintaining high fidelity, is a potential approach to meeting this need. The apparent congruence on many values and principles between service users and clinicians suggests that a co-produced approach to future research, service development and policy formulation is likely to be fruitful. Finally, an overarching emerging issue deserving further research and policy development is that of equity: clinicians echo service users in arguing that people with CEN tend to be a marginalised group, often not prioritised for resources and attracting negative attitudes and behaviour. Change is not likely to be achieved unless the needs of people with CEN are placed on an equal footing with the needs of people with other long-term physical and mental health conditions.
Lived experience commentaries
In line with service user critiques and our own lived experience, this meta-synthesis provides further evidence that for many people with CEN, current mental health services are simply not fit for purpose. From clinician burnout and pejorative attitudes, to a clinical victim-blaming culture when a service cannot meet service users' needs, the signs of a system at breaking point are undeniable.
Since clinicians themselves seem to recognise the wider social context, i.e., that trauma and adversity are major contributors to the distress experienced by people with CEN, the question arises: why do most services still regard the medical model as the panacea? It appears that we need major systemic change, and services should truly embrace inclusive, co-designed approaches that value lived experience and also support user-led models of care.
Clinicians' concerns around diagnostic utility are noted and shared. However, 'dancing around the diagnosis' due to fears of stigma and exclusion, no matter how well intentioned, may actually be counterproductive and inadvertently further perpetuate the stigma. It only underscores the urgent need to address this controversial terminology.
Despite the awareness of a gender bias that results in women with CEN being disproportionately more likely to receive a "borderline personality disorder" label than men, there is no mention of the overlap with Autism Spectrum Conditions (ASC) [77] and the fact that women are conversely under-diagnosed with ASC [78]. This can have serious implications for potentially misdiagnosed service users, who may end up trapped on unsuitable treatment pathways, and therefore constitutes a significant gap in the evidence base warranting investment in further research.
While we support inter- and multi-agency working in principle, stakeholders need to be mindful of its potential pitfalls. For example, as if pathologising legitimate feelings of distress wasn't problematic enough, collaborating with law enforcement (e.g., through the "Serenity Integrated Mentoring" programme, a widely criticised intervention implemented in England, which integrates police officers in community mental health teams and routinely denies so-called 'High Intensity Users' access to crisis care [79,80]) can go as far as risking the criminalisation of CEN [81]. Such misconceived interventions can not only permanently destroy service users' trust in mental health services, but can also have devastating effects on their life chances, negating any attempt at meaningful recovery.
Overall, it is encouraging that there are clinicians who share our views after all, and the answer to "Whose needs are they anyway?" should be a resounding "Everyone's!" After all, service users don't benefit from working with stressed and burnt-out clinicians, either; therefore, the desire to improve staff training and support is mutual. Unfortunately, the prevailing systemic flaws are not conducive to either individual practitioner or service improvement. Likewise, influencing those clinicians who are steadfast in holding onto stigmatising views of people with CEN is going to be a major challenge that must be addressed with co-production throughout service development and delivery.
Eva Broeckelmann and Jessica Russell
Broken Mirrors
Whilst reading this review, I was struck by the allegory of a mirror. The focus is on clinicians, but its sister paper with a service user focus [17] reflects the same issues. The mirror allegory goes beyond similar themes being reflected. The opinions of each side are fragmented, like a broken mirror. The broken fragments of each side appear as perfect replicas of the other, yet can only see each other in reverse, appearing as polar opposites. The data here is restricted to what is within the literature, with both papers dutifully reporting this. This data is limited in providing an understanding of why, despite appearing to want the same thing, there is such a relational divide between service user and service provider.
The roles of people working within the Lived Experience Professions (i.e., peer support workers, service user consultants, lived experience researchers) could be described as roles that bridge the two polarised worlds, communicating sameness and difference between them. Literature exploring how this could relate to developing relational bridges within the field of trauma/complex emotional needs/"personality disorder" is not included, potentially because it does not exist or exists in a format that does not fit within the search criteria. This highlights the importance of being able to value experiential data as a valid consideration within research, in order to lessen the phenomenon of studies giving a perfect view of one small fragment of the broken mirror whilst disregarding the rest. Services benefit more from a full view of the broken mirror, even if the individual shards are more blurred than one perfect piece.
This gave me pause for thought when the researchers described their experiences of working in services as a potential limitation of the review. Once they have acknowledged their own perspective, understanding the line between this and the data, their 'limitation' is in fact a strength, and this knowledge needs to be recognised, valued and encouraged more. The literature we use to inform and shape policy is not practised under lab conditions, but in the messy world where broken mirrors exist.
"year": 2022,
"sha1": "8cc9394e8c73d60fb14b88ba041a9b99df8eeaf9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0267787&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "674e7196795d02606a41d93e8bb369779e3c4eef",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Mergers and Acquisitions in Banking: A Framework for Effective IT Integration
This study aims to identify key issues in the information technology (IT) integration in mergers and acquisitions (M&A) in banking and to propose an approach to increase the efficiency of such integration. The study produces a first-cut IT integration framework based on a literature review of the key factors in IT integration in banking and selected popular IT governance models, such as Control Objectives for Information and related Technology (COBIT), and then refines it with the help of IT practitioners based on their involvement in a number of high-profile banking M&A cases. The proposed framework is thus underpinned by the latest theoretical thinking in the relevant subject field and builds on relevant practical experience. Senior-level IT practitioners in banking organisations can employ the framework to inform and guide the execution and post-mortem review of their M&A integration projects.
Aims and Objectives
The study aims to expand the knowledge base on IT integration best practices as applied to M&A practice in banking. To achieve that, a number of large-scale, high-profile banking M&A cases were investigated by interviewing key senior-level technology officers directly involved in the delivery of those projects. Some of the results of the study were published in another work, Kovela and Skok (2012), which among other things confirmed that IT is a critical resource and enabler in the business model of modern banking, and that there is therefore a clear link between the business strategy driving the merger and the prioritisation of the IT integration tasks, which is key to the efficiency of M&A transactions. The current paper builds on and complements these findings by proposing a framework for those embarking on the process. The objectives here are:
- To establish whether the adoption of an IT integration framework is necessary for efficient integration in M&A in banking.
- To survey the landscape of existing IT governance frameworks and identify those that could potentially be used as a basis for the above M&A IT integration framework.
- To create an M&A IT integration framework and then benchmark and adjust it against the practice-based summative profile of the banking M&A IT integration process.
According to Deloitte (2014), "(in 2014)… conditions improved versus the preceding two years-more deals were transacted at higher multiples", with "total deal value of $14.4 billion in 2013, up from $13 billion in 2012". This, together with the fact that the government- and self-initiated restructuring that swept the banking industry after the crisis of 2008 (Deloitte, 2014) is still ongoing, means that more M&A IT integration projects, with their corresponding problems, can be expected.
There are several IT integration-related factors to consider. Firstly, M&As fail to increase shareholder value in more than half of the cases (58.3% between 1992 and 2006 according to Cools et al., 2007) and in banking specifically that can often be directly attributed to IT integration-related problems (Williams, 2005; Williams et al., 2007; Kovela & Skok, 2012; Deloitte, 2014; The Mergermarket Group, 2014), e.g. the cases of Santander's takeover of Alliance and Leicester (Williams, 2010) and the integration of the RBS Group's IT infrastructure (King, 2012 and Boyce, 2012). Secondly, no appropriate common-reference M&A IT integration framework for banking institutions exists in the public domain, which means banks at large will be "reinventing the wheel" every time an M&A IT integration is undertaken and often making the same mistakes. Finally, a number of empirical studies (Haspeslagh, 1991; Pliskin et al., 1993; Davis, 2000; King et al., 2004; Carretta, Farina and Schwizer, 2007) suggest that despite the differences between individual IT integrations a degree of commonality still exists at an industry, business, or business function level, so useful generalisations could still be made with sufficient research data on hand.
Consequently, the authors suggest that openly available M&A IT integration guidance for banking could help establish a common knowledge base and create the preconditions for improving success rates in such projects, and thus benefit both individual businesses and the industry at large. The question therefore arises: is there any approach, framework or guidance readily available for that purpose?
M&A IT Integration in the Context of IT Governance
Stylianou et al. (1996) made one of the first attempts to classify the factors playing a role in M&A information system (IS) integration and to establish how important they were to the overall success of the transaction. A year later, Giacomazzi et al. (1997) proposed a model for IS integration, linking variables such as corporate objectives and the setup of the merging parties with requirements for the target IS organisation, and outlining a range of IS integration scenarios, i.e. total, partial, or no integration. Subsequently, Wijnhoven et al. (2006) addressed the post-merger IT integration problem as one of IT alignment (Henderson & Venkatraman, 1992), whereby the strategy for integrating information technology would be inseparable from the overall business integration strategy and so should be handled in strict alignment with the business goals and objectives of the merger. A study by Baker and Niederman (2014) has recently developed the argument by proposing a framework for aligning business and IS strategies throughout the M&A execution cycle. Also, Alaranta and Mathiassen (2014) examined the risk management perspective of post-merger IS integration and proposed a framework to "prepare for, analyse, and mitigate risks" in order to facilitate such transactions. Finally, and perhaps closest to our needs, Maire and Collerette (2011) presented more practical guidance for post-merger integration specifically in the financial services scenario, although this too fell short of providing enough technology-level detail on the implementation of the integration process.
There is a clearly visible evolution of thought in the above studies, where the initial recognition of specific features and attributes impacting on the success of the IS integration process is followed by a gradual understanding that these often span beyond the IT function itself and are deeply ingrained in the organisational setup and strategy. Finally, the conclusion is drawn that the IT function is as inseparable from the organisation as the IT integration process is from the overall post-merger integration of the business, and thus has to be treated within the broader context of organisational and IT governance. Furthermore, there are indications that, growing from the recognition of the ubiquity and critical importance of IT, there is a recent trend on the part of large firms to elevate control over IT performance to Board level. For instance, Tyco International, FedEx and JP Morgan Chase are among many Fortune 1000 firms to have recently established corporate-level IT governance councils (Symons, 2005).
The above leads one to assume some attempts would have been made to explicitly apply an IT governance approach to facilitate the M&A IT integration by now. Specifically, knowing from the outset that a robust IT governance framework that already had or could "adopt" an M&A IT integration component did exist would save time re-inventing it. Still, Alaranta (2005) notes that such works are sparse and inconclusive, which supports calls for the matter to be addressed in this study.
Identifying a Good IT Governance Framework for M&A IT Integration
The known IT governance frameworks can be broadly split into two categories: generic models and specialised ones. Even so, the task of choosing an IT governance framework for a particular set of processes is still far from trivial due to the peculiarities of each firm's organisational structure and strategy. In our attempts to identify a suitable candidate, we focused on frameworks that were generic and broad enough to fulfil the IT governance needs of the enterprise, yet provided enough detail to allow for practical implementation of the IT integration process. Of these, ISO/IEC 38500:2008 and COBIT come closest. Brief descriptions of the two follow below.
ISO/IEC 38500:2008
According to ISO/IEC (2008), ISO/IEC 38500:2008 is "a high level, principles based advisory standard". In addition to providing broad guidance on the role of a governing body, it encourages organisations to use appropriate standards to underpin their governance of IT and provides a framework of principles for directors to use when evaluating, directing and monitoring the use of IT in their organisations. The standard defines the following principles for "good governance" of IT:
- Establish clearly understood responsibilities for IT;
- Plan IT to best support the organisation;
- Acquire IT validly;
- Ensure that IT performs well;
- Ensure IT conforms with formal rules;
- Ensure IT respects human factors.
Complying with the above principles would assist directors in balancing risks and encouraging opportunities arising from the use of IT, as well as assure conformance with legislation and contractual obligations (ITGI, 2009).
COBIT
COBIT is positioned as a high-level framework that is business requirements-driven, covers the full range of IT activities, and concentrates on "governing and managing enterprise IT" (ISACA, 2012). COBIT's main principles are:
- Meeting Stakeholder Needs;
- Covering the Enterprise End-to-end;
- Applying a Single, Integrated Framework;
- Enabling a Holistic Approach;
- Separating Governance From Management.
Performance measurement goals and metrics are defined at the following levels:
- Enterprise goals and metrics, which define what the overall business goals are and how to measure them;
- IT goals and metrics, which define how IT will support the enterprise goals and how to measure this;
- Process goals and metrics, which define what the IT processes must deliver to support the IT goals and how to measure this;
- Activity goals and metrics, which establish what needs to happen inside a process to achieve the required performance and how to measure it.
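To make the cascade concrete, the following minimal sketch (our own illustration; COBIT itself prescribes no code, and the example goals are hypothetical M&A-integration ones) models the four measurement levels as linked records, so that any activity metric can be traced upward to the enterprise goal it ultimately supports.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    level: str                       # "enterprise" | "it" | "process" | "activity"
    statement: str                   # what must be achieved
    metric: str                      # how achievement is measured
    supports: "Goal | None" = None   # the next goal up the cascade

    def trace(self):
        """Walk from this goal up to the enterprise goal it supports."""
        node, path = self, []
        while node:
            path.append(f"[{node.level}] {node.statement} (metric: {node.metric})")
            node = node.supports
        return path

# Hypothetical example of the cascade in an M&A IT integration setting.
enterprise = Goal("enterprise", "Realise merger synergies", "Integration savings vs. plan")
it_goal = Goal("it", "Single consolidated banking platform", "% of applications migrated", enterprise)
process = Goal("process", "Migrate customer data without loss", "Reconciliation error rate", it_goal)
activity = Goal("activity", "Nightly batch reconciliation of accounts", "Batch completion within window", process)

for line in activity.trace():
    print(line)
```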
The framework has evolved significantly over the time this study was conducted, going from version 4.1, originally published in 2007 (ITGI, 2007), to version 5 in 2012 (ISACA, 2012). There have been a number of changes, the most relevant one being a change in the definition of the Domains of COBIT. Version 5 has separated goals and metrics from the planning and delivery process and widened the scope of the domains by bringing stakeholders and good practices into the mix; this is reflected by the new name of Enabler Dimensions (see Table 1).
Is COBIT Sufficient as a Practical M&A IT Integration Framework for Banking?
Considering the robustness of COBIT and the proliferation of M&As in regulated and IT-dominant environments such as banking, it would seem natural to see COBIT being actively applied. Surprisingly, there is little evidence of the framework being applied either in pre-merger planning or post-merger integration (Alaranta, 2005). The question, therefore, is whether this is caused by general inertia in new theoretical concepts achieving traction in the "real world", or whether it is due to some limitations in the theoretical approach itself.
There are several considerations regarding the "inertia element". Firstly, since the banking industry is so dependent on IT, relying on proven methods rather than pioneering cutting-edge yet potentially imperfect approaches seems entirely justified. Secondly, there is a plethora of IT governance and project management frameworks around (ITIL, CMMI, PRINCE2, etc.) that perhaps lack all-round coverage, but are well known and are thus cheaper to use. Thirdly, the real incentive for exploring the benefits of COBIT only came about in 2002 with the introduction of the Sarbanes-Oxley Act, at which point regulatory compliance and transparency became imperative. Thus it is only recently that COBIT has been showing signs of becoming a de facto regulatory-compliant IT governance framework standard.
Considerations regarding the "theory limitations" part are two-fold. On the one hand, COBIT is a well-thought-through, integrated collection of controls spanning all the project planning and delivery phases in the traditional project management sense; on the other hand, the framework does not address the key technology-implementation-level issues in the M&A IT integration process, i.e. the consolidation of multiple IT infrastructures into a single one and the types of documents that should accompany the process. COBIT on its own therefore cannot serve as a one-stop solution for our task; to accomplish that, it needs some form of implementation-level IT platform integration plan attached to it. It is also worth noting that the version 4.1 Domains of COBIT would be the authors' preferred option for constructing such a plan: the definitions used there are robust enough for our IT integration process structuring needs and are quite straightforward to use, whilst the version 5 COBIT Generic Enablers have a wider scope and are therefore more difficult to apply whilst adding no extra value to the issue at hand. Therefore, to produce a first-cut M&A IT integration framework, it would be feasible to use COBIT as an overarching enterprise IT management and governance framework, with the version 4.1 Domains as a basis for structuring the M&A IT integration process, complemented by a detailed implementation-level banking IT platform integration plan.
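As a rough sketch of what structuring the integration process around the version 4.1 Domains might look like, the fragment below groups illustrative integration workstreams under the four domains (Plan and Organise; Acquire and Implement; Deliver and Support; Monitor and Evaluate). The workstreams are hypothetical examples of our own, not content taken from COBIT or from the cases studied.

```python
# Illustrative mapping of M&A IT integration workstreams onto the four
# COBIT 4.1 Domains; the workstreams themselves are hypothetical examples.
INTEGRATION_PLAN = {
    "Plan and Organise": [
        "Align the integration roadmap with the merger business case",
        "Inventory both banks' applications, infrastructure and vendor contracts",
    ],
    "Acquire and Implement": [
        "Select the target platform per application area (keep / migrate / retire)",
        "Consolidate data repositories and software licences",
    ],
    "Deliver and Support": [
        "Merge service desks and support rotas",
        "Harmonise security policies and access management",
    ],
    "Monitor and Evaluate": [
        "Track synergy savings against plan",
        "Run post-migration reconciliation and compliance reporting",
    ],
}

for domain, tasks in INTEGRATION_PLAN.items():
    print(domain)
    for task in tasks:
        print(f"  - {task}")
```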
Method and Techniques
The authors adopt grounded theory as the study method. In line with its definition as the "systematic generation of theory from data that contains both inductive and deductive thinking… deriving conceptual profile of the phenomena by employing a systematic set of procedures" (Glaser and Strauss, 1999), we employ techniques such as literature review, interviews and subsequent analysis, with the aim of producing a new process model in the form of an IT integration framework for banking institutions engaging in M&A IT integration projects.
Data Collection, Selection and Analysis Methods
The secondary research data collection covered over sixty peer-reviewed journal publications and a similar number of industry publications (e.g. white papers, company reports, etc.). This allowed a theory-based version of the M&A IT integration framework to be constructed (see Appendix A). The principles guiding the design of the prospective framework were:
- Firm alignment between the business and IT goals - IT strategic and tactical plans; programme, project, and service portfolios; the Four Domains of COBIT (ITGI, 2007) as the initial model for defining the integration delivery areas;
- Aggressive exploitation of technology consolidation-related savings - business processes, applications, infrastructure, data, vendor contracts, software licensing;
- Synergy through application rationalisation - consolidated application platform and data repository, optimal functionality range, support for the new business model;
- Synergy through process consolidation - software procurement, service desk operation, security policies;
- Synergy through common organisation culture and staff retention - clear staff communications strategy, staff retention incentives aligned with integration needs, cultural integration as a management team priority.
The primary research exercise was in the form of semi-structured two-hour-long interviews (see Appendix B) with four senior-executive-rank officials who personally oversaw eight large-scale post-merger IT integration projects from their London headquarters between 1995 and 2010. The banking institutions involved in the above projects were Royal Bank of Scotland, Lloyds, Citigroup, UBS, Nomura, and Deutsche Bank. Major limitations of the approach were recognised to be:
- An imperfect sample, reducing the ability to generalise the findings;
- An imperfect interview structure / questions, failing to address the complexity of the subject in its entirety;
- Personal bias of the respondents due to their role on the acquiring / acquired side.
To address the above issues the following measures were taken:
- The case selection process strived to produce a sample which was balanced and diverse enough to serve as a fair representation of the IT integration practices in contemporary banking M&As. Table 2 below summarises the case selection aims and criteria;
- Prior to conducting the interviews, the question list was critically reviewed by one of the would-be respondents and adjusted according to the corrections proposed. Professional advice was taken to eliminate possible bias of the interviewer and balance the focus evenly between the aspects of the study;
- To ensure an unbiased view of the relevant issues and success, an even "acquirer / acquired" split in the interviewees' roles was sought. In three of the eight cases studied, the interviewees were the acquirers, whilst in the other five, the interviewees were on the acquired side.
The interview analysis that followed was a cross-case analysis of four interviews covering eight M&A cases, using the constant comparison method (Patton, 1990) to group answers to the questions below:
- Is the IT integration framework a "must have" or a "nice to have" in a banking M&A?
-What are the issues and workarounds in employing such a framework?
-Is the proposed theory-based framework applicable and usable immediately?
-If changes to the framework proposed are required, what are they?
The Research Process
The literature review phase produced a sense of the importance of the IT integration framework for efficient M&A integration in banking and what the optimal theory-based version of such a framework should look like. The interviews, on the other hand, validated the above findings and supplied insight into the issues and workarounds associated with putting this theory into practice in a commercial environment. The validation was based on real-life practices that have been in place in the banking industry for the past two decades. The results of both exercises were combined to create an "optimal" banking M&A IT integration framework.
Summary of the Findings
For confidentiality purposes, interviewees are referred to as C, H, J and T respectively.
The section below presents the summary of findings divided into three areas: a) the perceived need for an M&A IT integration framework in banking, b) issues and workarounds in employing such a framework and c) the proposed revised framework.
An IT Integration Framework - a "Must Have" or a "Nice to Have"?
Table 3 lists the respondents' comments, which show that despite differences in perceptions of how an IT integration framework would apply to small and big projects and how formal it should be, everybody agrees that having one is necessary.
- "The mergers we are talking about are big and you need a level of formality… everybody would see the need of using a framework"
- Respondent T - "We have got the policy; it is there… We've got a process for big projects, and a process for small projects… so even if it is small we've still got one… because of the standards, everything has to comply at some level"
- Respondent J - "Firms like us absolutely strive to put some governance in place… but in this industry, formal governance frameworks are worth nothing anyway, unless you're doing something massive… maybe if you're doing a two / three / four hundred million program to rationalise two enormous data centres between say Deutsche Bank and Banker's Trust with multi-hundreds of million dollar spends and potentially huge savings, you would need to invest in that sort of thing"
- Respondent C - "The size of the IT integration project does not affect the applicability of formal IT integration frameworks… It just means that the error (and the tolerance) in absolute pounds is obviously much larger. So it is more about the impact; you need the right governance"
Issues and Workarounds in Employing an IT Integration Framework
With regard to successfully employing an IT integration framework, the respondents named two areas that tend to generate the majority of issues:
- Imperfections of the framework in use (overly generic / prescriptive, contains gaps, etc.);
- Overly loose interpretation and poor execution of the framework guidelines.
Our research indicates that the first of the above can be successfully addressed by gradually accumulating experience as key staff and organisations engage in an increasing number of projects and use the knowledge acquired to improve the frameworks in use. The other area is more difficult to handle, since it must deal with elements such as motivation and the personal agendas of key staff involved. Positive results can still be achieved by employing organisational measures, such as structuring the enterprise governance so that the IT integration process is managed outside the IT function. As one of the respondents put it, "there was an independent judicial, which sat externally organisationally, but it was part of the overall program. So they had complete transparency, which is a pain in the back when you're trying to run something… but it was good". Proper enforcement of the guidelines in place would also alleviate some of the issues. Quoting another respondent, "when you are changing things… it is not the time to be making shortcuts, because we really need that company to be connected and that to happen and lets relax a few things - so the answer there is NO… go and do your due diligence!". Thus, one would conclude that putting the rules in place, sticking to them consistently and improving them over time as one does more work is the way forward.
Is the Proposed Theory-Based Framework Applicable and Usable Immediately?
All the respondents agreed that the proposed IT integration framework was a good match for guiding the generic M&A IT integration process, but needed some minor adjustments. Some of the most notable comments were:
- "One methodology never fits all anyway, but if I had this in my hand it would have been a very useful checklist or reference basis and the framework" (Respondent J);
- "The point with this framework is that these aren't the specific issues but these give a general impression of things you'd have to look at… I would say maybe some sequencing of things, but you probably captured the essential items, you've used the words that people use for running the project, and every project is different and every deal is different, but logically there is a flow" (Respondent T);
- "It needs a few minor adjustments, because the key thing to me is - you need absolute clarity on the customer impact" (Respondent C).
When asked whether the proposed framework was an immediate improvement over the company's existing IT integration process, the respondents stated that:
- "If you were starting from scratch, that is a very useful checklist to start with, if you were looking for something that is generalizable, small, and M&A" (Respondent C);
- "It would potentially be an improvement, because it would be just a little bit more structure to the thought process… we did get blind-sided by a couple of things, and this checklist might have stopped that from happening" (Respondent J);
-"Every deal is different, so you'll not write The Perfect IT Integration Process for all cases. What you'll write is the things that people will need to consider and then depending on the deal, and then that the deal happens, then that will be appropriate… it is a fair one" (Respondent T).
All in all, the proposed framework was considered a "compact and structured generic reminder for M&A IT integration" -generally applicable, well organised, compact, and particularly useful for new M&A IT integration planning. However, some specific changes were deemed necessary.
Summary of the Changes Suggested
A number of suggestions were made to improve the framework proposed (see Appendix C). These revolved around several themes listed below:
- Business strategy must come before everything else, as an inability to clearly state what the business priorities and requirements are is guaranteed to stifle the IT integration process;
- Putting an appropriate management team in place as early as possible is extremely important, as it will …
When asked about possible further enhancements to the framework, the responses were "needs adjustment for a particular case" (Respondent T) and "every deal is different and every project is different; so it lists all the vital items here but no one size fits all, so have to adjust accordingly every time" (Respondent H). Based on the fact that the proposed M&A IT integration framework had already been revised to incorporate and properly sequence all the essential elements of the generic M&A IT integration process, the enhancements mentioned would likely bias the framework in favour of a particular (class of) M&A deal(s) and thus make it less universally applicable. We therefore conclude that the research has hereby reached its practical limit.
Conclusions
There has been a lot of M&A activity in the banking industry in the last decade and it continues to increase. Paired with the fact that modern banking is underpinned, enabled and facilitated by IT, this means that the efficiency of M&A IT integration is one of the most serious challenges the industry is facing today. To help improve the efficiency of IT integration in banking M&As, the authors have created a framework that is generic and robust enough to cover all the aspects of consolidating two IT platforms coming together, yet detailed enough to allow practical implementation in a commercial setting.
Literature sources and empirical evidence indicate that since the IT function is deeply ingrained within the organisational setup in banking, IT integration should be addressed in the overall context of IT governance. Therefore a suitable IT governance framework should be used as the foundation of the M&A IT integration framework being created. The authors have surveyed the landscape of IT governance frameworks in use today and established that COBIT was currently the best fit for the purpose, as it provided the overarching enterprise IT management and governance context for the IT integration process. Thus the COBIT 4.1 Domains were used as a basis for structuring the overall IT integration process, and a practice-based implementation-level IT integration plan was added to complement that structure. The banking M&A IT integration framework subsequently created was validated with a balanced selection of leading industry practitioners based on their involvement in a diverse selection of high-profile banking M&A projects. The framework would thus be particularly useful for those at the planning stage of a new banking M&A IT integration project or those conducting a post-mortem review.
Appendix B. Interview questions (excerpt)
… How did the above constraints affect the planning process?
8. Other interviewees have indicated that often the IT integration is actually a 2-project process:
• 1st project being a quick bolting together of the systems of the two firms to provide business-critical functionality and basic connectivity;
• 2nd project being a much slower and more thorough consolidation of the IT assets, which may take years after the merger has been officially declared a success.
Do you support this view?
9. Which of the following could be used to describe the degree of cooperation at the top management level in the IT integration process?
a) CEO / CIO actively cooperating and using the same performance assessment metrics
b) CEO / CIO actively cooperating, but using different performance assessment metrics
Appendix C. Corrections suggested for the theory-based framework
A résumé of the suggestions to the originally proposed framework is presented below:
Respondent C:
- "It is very technically focused, and you need to make sure that you've got business, you need to somehow make the linkages to the business strategy and the business stakeholders, and ultimately it is traced through to the customer"
Respondent J:
- "Before you even start - assess who the key players are and what would the management team look like. I would look into that before you advocate doing the deal and then how do you implement the organisational change, and that organisational change at the senior level needs to start from the top down"
- "At some point you need to consciously take the decision as to your strategy for your implementation - "do we deliberately concentrate all our resources on delivering tactical fixes over the next year just to get the thing working, and then get strategic later on, or do we get strategic now?" - you've got to define the strategy… and then that strategy needs to be agreed at very senior level… And then you have to choose the operating model"
- "What you haven't mentioned is the organisation - …if one bank is working with a more integrated model and the other bank is quite segregated… the hierarchy, the logical reporting lines, and the accountability… if you want to get to a certain level of best practice in your target organisation, you've got a lot of educational and organisational change to do - you compare across organisations, how roles and responsibilities are segregated, and how they map out"
- "You should add IPR in the due diligence / applications section"
- "Common integrated solution / applications - the system selection can sometimes be influenced by the skills that you have and the amount of documentation and tacit knowledge in your team"
- "Common integrated solution / infrastructure - data and IPR is big here, especially if the data centre will be located in a different jurisdiction"
Respondent T:
- "Clearly Stage 0 would be to know what your business strategy is for this target. What you have at Stage 0, without a doubt, is business - get your acquiring business to describe their strategy, because that will steer everything thereafter"
- "Due diligence / infrastructure - add to that a "contingency infrastructure" in c). And I would also be asking about the issues they've got there"
- "Define the systems integration strategy - put that at the top, point 0.5 perhaps - it is about not making the mistake of diving into the detail without understanding whether it is important to the business"
- "Define the information architecture of the combined unit - you might not be able to do that pre-deal. You might get it, you might not… let's say it is not all set in stone by that moment just yet.
More realistically, you might just be able to sum up with a crude estimate of how much certain different configurations might cost"
- "Choose operating model - again, you probably are not going to be able to choose the operating model pre-deal, it is not going to be decided by then"
- "Common integrated solution - costs and budgets must always go first… staff should come straight after costs and budgets. In your cost and budget you've got your synergies… and a load of that synergy is going to be staff… You are also going to have some sort of a retention arrangement for some key staff… implementing staffing and retention plans should follow that too"
- "Common integrated solution / regulatory approval - if in regards to the IT side, you are unlikely to have an IT regulatory approval requirement; that is outside of IT"
- "Common integrated solution / commercial and contractual elements - needs to come much earlier on; often you are supposed to have either obtained it from the vendors or at least be properly in the process of having asked for it before you define the information architecture... this should go before the new 1.3 "Define the information architecture""
- "Establish basic connectivity and consolidate key aspects of infrastructure - put basic connectivity in the beginning, as discussed before"
- "Assess and control integration risks - you need to write "Information security" alone. It is the new safe as far as a bank is concerned, or like a lock on the door these days"
Respondent H:
- "This (framework, ed.) is quite detailed, so what this basically says is that you should do this stuff, so that you're informed to do this, and the question is whether you do this or you actually do that or that you take a macro decision based on my picture (the vertical model diagram, ed.)"
- "You don't really have anything here about cost, so you don't have anything that is informing your decision about the total cost of ownership (TCO, ed.) of the platform that you are going to choose… the integration budget… and your TCO going forward. So I have always thought that you should have something here that actually informs that"
- "The regulatory approval and client consents are important specifically in relation to the negotiation of contracts"
- "Assess and control integration risks - information security, access rights, all those things are painful… this must be included separately in the "1.14 Assess and control integration risks" section"
Copyrights
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). | 2018-08-09T17:24:56.980Z | 2015-02-27T00:00:00.000 | {
"year": 2015,
"sha1": "35b08bc07ffae3d271f49575b70d4a7ad8841776",
"oa_license": "CCBY",
"oa_url": "http://www.ccsenet.org/journal/index.php/ijbm/article/download/43823/24893",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "35b08bc07ffae3d271f49575b70d4a7ad8841776",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
118537613 | pes2o/s2orc | v3-fos-license | Using non-positive maps to characterize entanglement witnesses
In this paper we present a new method for entanglement witnesses construction. We show that to construct such an object we can deal with maps which are not positive on the whole domain, but only on a certain sub-domain. In our approach crucial role play such maps which are surjective between sets $\mathcal{P}_{k}^d$ of $k \leq d$ rank projectors and the set $\mathcal{P}_1^d$ of rank one projectors acting in the $d$ dimensional space. We argue that our method can be used to check whether a given observable is an entanglement witness. In the second part of this paper we show that inverse reduction map satisfies this requirement and using it we can obtain a bunch of new entanglement witnesses.
I. INTRODUCTION
It is well known that quantum entanglement is the most important resource in the field of quantum information theory. It is worth mentioning such significant achievements as quantum cryptography [1], quantum teleportation [2], quantum dense coding [3], quantum error correction codes, and many other important applications of this phenomenon. It is therefore obvious that knowing when we are dealing with entangled states, together with their classification, plays a crucial role. However, one of the biggest problems in the field remains open: up to now we do not have satisfactory criteria to decide whether a given quantum state is separable or entangled. A full answer is delivered by the famous Peres-Horodecki criterion [5,6], based on the idea of partial transposition, which gives a necessary and sufficient criterion for separability for bipartite 2 ⊗ 2 and 2 ⊗ 3 systems; unfortunately, for higher dimensions this criterion is not conclusive. The problem is even more complicated if we lift it to the multipartite case, but of course there are several approaches to the detection of entanglement (separability) in general [9-11]. Despite these difficulties, one of the most general methods to decide whether a composite quantum state is entangled is based on the concept of an entanglement witness, first introduced in [7] on the basis of the famous Hahn-Banach theorem. This approach allows us to detect entanglement without full knowledge of the quantum state. Most importantly, every entangled state has a corresponding entanglement witness, which makes the method in a sense universal. Exploring the theory of entanglement witnesses from a mathematical point of view, there is a well-known connection between them and the theory of positive maps [8], which allows us to understand much more deeply the structure of the set of quantum states.
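To make the partial-transposition criterion just mentioned concrete, here is a minimal numerical sketch: for a 2 ⊗ 2 system the partial transpose of a separable state stays positive, while the maximally entangled state acquires a negative eigenvalue.

```python
import numpy as np

def partial_transpose(rho, d1, d2):
    # Transpose the second subsystem: (rho^{T_B})_{(i,j),(k,l)} = rho_{(i,l),(k,j)}
    r = rho.reshape(d1, d2, d1, d2)
    return r.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

d = 2
psi = np.zeros(d * d)
psi[0] = psi[3] = 1 / np.sqrt(2)          # |psi+> = (|00> + |11>)/sqrt(2)
rho_entangled = np.outer(psi, psi)
rho_separable = np.kron(np.diag([1.0, 0.0]), np.diag([0.0, 1.0]))  # |0><0| (x) |1><1|

for name, rho in [("entangled", rho_entangled), ("separable", rho_separable)]:
    ev_min = np.linalg.eigvalsh(partial_transpose(rho, d, d)).min()
    print(f"{name}: min eigenvalue of partial transpose = {ev_min:+.2f}")
# entangled -> -0.50 (NPT, hence entangled); separable -> +0.00 (PPT)
```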
Let us say here a few more words about the notation used in this manuscript. In this section and in our further considerations, by B(C^d) (respectively B(H)) we denote the algebra of all bounded linear operators on C^d (respectively on H). Using this notation, let us define the following set:

S(H) = {ρ ∈ B(H) : ρ ≥ 0, Tr ρ = 1},

which is the set of all states on the space H. Suppose now that we are dealing with two finite-dimensional Hilbert spaces H, K. A state of the bipartite composite system ρ ∈ S(H ⊗ K) is said to be separable if it can be written as ρ = Σ_i p_i ρ_i ⊗ σ_i, where ρ_i, σ_i are states on H and K respectively, and the p_i are positive numbers satisfying Σ_i p_i = 1. Otherwise we say that the state ρ is entangled. Now we are ready to present the definition of an entanglement witness and the basic ideas connected with these objects [5], [7]:

Definition 1. A hermitian operator W ∈ B(H ⊗ K) is called an entanglement witness if (i) there exists an entangled state ρ such that Tr(Wρ) < 0, and (ii) Tr(Wσ) ≥ 0 for every separable state σ.

There is a well-known theorem [5] which states that for every entangled state ρ there exists a corresponding entanglement witness W such that Tr(Wρ) < 0; the reader will notice that this condition is equivalent to the first condition of the above definition. From Definition 1 we see that any entanglement witness corresponds to some hermitian operator which, thanks to the Jamiołkowski isomorphism [16], is connected with some positive but not completely positive linear map Λ : B(C^d) → B(C^d), such that

W = (1 ⊗ Λ)(P_d^+),

where P_d^+ is the projector onto the maximally entangled state |ψ_d^+⟩ = (1/√d) Σ_{i=1}^d |ii⟩. At this point, for more information about entanglement witnesses and their properties, we refer the reader to an excellent review paper on this topic [17].
At the end of this introductory section we present the structure of our paper. Section II contains the main result of our work. In Theorem 1 we show that to construct an entanglement witness we do not have to restrict ourselves to maps positive on the whole domain, but only on a certain subset of it. In particular, such a map has to be at least a surjection from the set P^d_k of rank-k (k ≤ d) projectors onto the set P^d_1 of rank-one projectors acting in the d-dimensional space.
After that we present two short sections with examples which illustrate how our method works in practice. We start with Section III, where we show that the inverse reduction map satisfies all the requirements of Section II; then in Section IV we give an illustrative example of entanglement witnesses obtained thanks to the inverse reduction map.
Finally, at the end of this paper we also present Appendix A, where we explain the basic properties of unitary spaces necessary for the discussion of the inverse reduction map in Section III; then in Appendix B we formulate Propositions 3 and 4 which, together with Remark 5, are needed in the proof and formulation of Theorem 1 and also play a very important role in the analysis of the inverse reduction map in Section III.
II. GENERAL CONSTRUCTION OF ENTANGLEMENT WITNESS FROM NON-POSITIVE MAP
In this section we present our main result, which is contained in Theorem 1. We show that to construct entanglement witnesses we do not have to restrict ourselves to maps positive on the whole domain, but only on some specific subset. To do so we can use a map Λ† : B(C^d) → B(C^d) which is surjective from the set P^d_k of rank-k projectors onto the set P^d_1 of rank-one projectors, as given in Proposition 3 contained in Appendix B. Having this knowledge, we are in a position to formulate the following:

Theorem 1. Let W ∈ B(C^d ⊗ C^d) be the hermitian operator associated with a linear map Λ : B(C^d) → B(C^d) as above. We assume that the map Λ† : B(C^d) → B(C^d) is not positive on the whole domain but maps the set P^d_k of rank-k projectors surjectively onto the set P^d_1 of rank-one projectors; then we have

Tr(Wσ) ≥ 0 for every separable state σ, (5)

so the operator W is an entanglement witness.
Now we can continue by rewriting the right-hand side of formula (5) as an expression involving Λ†, where by Λ† we denote the adjoint map of Λ. (For a linear map Λ : B(C^d) → B(C^d), the adjoint map is defined by Tr(AΛ(B)) = Tr(BΛ†(A)) for all A, B ∈ B(C^d); we say that a linear map Λ is self-adjoint when Tr(AΛ(B)) = Tr(BΛ(A)) for all A, B ∈ B(C^d). Moreover, if Λ is a positive map then so is Λ†.) The projectors appearing in this expression belong to the set P^d_1 = {P ∈ B(C^d) : P² = P, P† = P, Tr(P) = 1}. It means that the operator W ∈ B(C^d ⊗ C^d), with W = W† and W not positive semi-definite, takes non-negative expectation values on separable states. This finishes the proof.

Remark 1. From the proof of Theorem 1 it follows that the operator W cannot take negative values on product states.
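To illustrate the two defining conditions numerically, the following minimal sketch uses the standard SWAP (flip) operator as a textbook witness candidate (not the specific W constructed in Theorem 1): it samples product states to check non-negativity, and exhibits an entangled state with a negative expectation value.

```python
import numpy as np
rng = np.random.default_rng(0)

d = 3
# SWAP operator V|i>|j> = |j>|i>; on product states <phi psi|V|phi psi> = |<phi|psi>|^2 >= 0.
V = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        V[i * d + j, j * d + i] = 1.0

def rand_ket(dim):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Condition (ii): non-negative expectation on (sampled) product states.
prod_vals = []
for _ in range(5000):
    w = np.kron(rand_ket(d), rand_ket(d))
    prod_vals.append(np.real(w.conj() @ V @ w))
print("min over sampled product states:", min(prod_vals))   # ~ >= 0

# Condition (i): some entangled state gives a negative expectation.
# The antisymmetric state (|01> - |10>)/sqrt(2) satisfies <V> = -1.
chi = (np.kron([1, 0, 0], [0, 1, 0]) - np.kron([0, 1, 0], [1, 0, 0])) / np.sqrt(2)
print("expectation on antisymmetric state:", np.real(chi.conj() @ V @ chi))  # -1.0
```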
III. INVERSE REDUCTION MAP AS AN EXAMPLE
In the previous section we considered general maps Λ with certain properties. Now a natural question arises: do we know any examples of maps which satisfy the required conditions? The goal of this section is to present such an example. Let us consider the following linear map.
Definition 2. The inverse reduction map R^{-1} : B(C^d) → B(C^d) is given by

R^{-1}(A) = (1/(d-1)) Tr(A) 1 - A.

The map R^{-1} acts in the linear space B(C^d), which is a Hilbert space with respect to the standard Hilbert-Schmidt scalar product (A, B) = Tr(A†B), where † is the hermitian conjugation.

For any orthogonal projector P of rank d - 1 we have Tr(P) = d - 1, hence

R^{-1}(P) = 1 - P,

so the image under the map R^{-1} of any orthogonal projector of rank d - 1 is an orthogonal projector of rank 1. Moreover, the map R^{-1} establishes a bijective correspondence between the set of all orthogonal projectors of rank d - 1 and the set of all orthogonal projectors of rank 1. The proof of the above statements can be deduced directly from the facts contained in Appendix A.
Corollary 1. The operator 1 ⊗ R^{-1} is also self-adjoint with respect to the tensor-product scalar product (A, B) ≡ Tr(A†B) on B(C^d ⊗ C^d).
Remark 3. The map R^{-1} : B(C^d) → B(C^d) is not positive, but R^{-1} restricted to the set P^d_{d-1} is a positive map. Indeed, as an example let us take the d × d matrix J filled only with ones; J is positive. Acting with R^{-1} we obtain

A = R^{-1}(J) = (d/(d-1)) 1 - J,

whose eigenvalues are d/(d-1) (with multiplicity d - 1) and d(2-d)/(d-1). We notice that whenever d > 2 we have d(2-d)/(d-1) < 0, so A is no longer positive. Summarizing, when R and R^{-1} are not equal (i.e. for d ≥ 3, see Remark 2), the main difference between them is that R is positive but R^{-1} in general is not.
Summarizing, the inverse reduction map R^{-1} satisfies all the conditions in the assumptions of Theorem 1, so it can be used for the entanglement witness construction. Moreover, thanks to Proposition 1, point 1), this map is self-adjoint, so it satisfies even stronger conditions than we require.
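These properties are easy to verify numerically. The sketch below assumes the explicit form R^{-1}(A) = Tr(A)/(d-1)·1 - A written out in Definition 2, and checks that it inverts the reduction map R(A) = Tr(A)·1 - A, that it sends rank d-1 projectors to rank-one projectors, and the non-positivity from Remark 3.

```python
import numpy as np
rng = np.random.default_rng(1)

d = 4
I = np.eye(d)

def R(A):        # reduction map: R(A) = Tr(A) 1 - A
    return np.trace(A) * I - A

def R_inv(A):    # its inverse, as in Definition 2: Tr(A)/(d-1) 1 - A
    return np.trace(A) / (d - 1) * I - A

# 1) R_inv really inverts R on a random matrix.
A = rng.normal(size=(d, d))
assert np.allclose(R(R_inv(A)), A) and np.allclose(R_inv(R(A)), A)

# 2) On a rank d-1 projector P, R_inv(P) = 1 - P is a rank-one projector.
Q0, _ = np.linalg.qr(rng.normal(size=(d, d - 1)))   # d-1 orthonormal columns
P = Q0 @ Q0.T                                       # random rank d-1 projector
Q = R_inv(P)
assert np.allclose(Q @ Q, Q) and abs(np.trace(Q) - 1) < 1e-12

# 3) R_inv is not positive: the all-ones matrix J is PSD, yet its image
#    has the negative eigenvalue d(2-d)/(d-1) for d > 2 (Remark 3).
J = np.ones((d, d))
print(sorted(np.linalg.eigvalsh(R_inv(J))))         # contains d*(2-d)/(d-1) < 0
```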
IV. EXPLICIT EXAMPLES OF ENTANGLEMENT WITNESSES
In this section we use Theorem 1 together with Definition 2 of the inverse reduction map from the previous section to present an explicit construction of entanglement witnesses. To this end, let us consider a positive semi-definite operator 0 ≤ W ∈ B(C^d ⊗ C^d), written in the standard operator basis e_ij = |i⟩⟨j|, i, j = 1, ..., d, of B(C^d). Let S ∈ B(C^d) be the shift operator defined as S|i⟩ := |i + 1 mod d⟩; then, using the above definition, the operators W_ij from formula (13) can be expressed in terms of S, both for the diagonal elements and for all off-diagonal elements, i.e. for all indices satisfying i ≠ j.
Using the form of our operator W from equation (13) together with the conditions on the W_ij given in formulas (14) and (15), we are able to write explicit conditions for positivity of W in terms of the parameters a_i and x; namely, x ∈ (-a_1/(d-1), a_1).
Now we are in a position to use all that we have learnt in Section III and employ the inverse reduction map to construct an appropriate example of an entanglement witness. Let us use as the map the inverse reduction map R^{-1} : B(C^d) → B(C^d) of Definition 2; the above map is not a positive map in general. As the operator of Theorem 1 let us take (1 ⊗ R^{-1}) W; then the conditions of formula (18) should be satisfied. At the end of this section we give an explicit example of such an operator in B(C^3 ⊗ C^3) with particularly chosen parameters. Namely, let us take a_1 = 2ε, a_2 = a_3 = ε + 1 and x = ε, which satisfy the conditions of formula (18); then whenever ε > 1/2 the resulting operator is an entanglement witness. Its explicit 9 × 9 matrix is sparse, with dots in the original display denoting zeros.
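As a sanity-check tool for such candidates, the sketch below assembles (1 ⊗ R^{-1})(W) block by block for d = 3 and tests both witness conditions by sampling product states. The diagonal input used here is a hypothetical stand-in (the actual entries from formulas (13)-(15) are not reproduced in this excerpt), so the product-state test may well fail for it; the conditions of formula (18) are exactly what guarantees that it succeeds.

```python
import numpy as np
rng = np.random.default_rng(2)
d = 3

def R_inv(A):
    # inverse reduction map, in the form given in Definition 2
    return np.trace(A) / (d - 1) * np.eye(d) - A

def one_tensor(Lam, Wt):
    """(1 (x) Lam)(Wt): apply Lam to every d x d block of Wt."""
    W = np.zeros_like(Wt, dtype=complex)
    for i in range(d):
        for j in range(d):
            W[i*d:(i+1)*d, j*d:(j+1)*d] = Lam(Wt[i*d:(i+1)*d, j*d:(j+1)*d])
    return W

Wt = np.diag(rng.uniform(0.5, 1.5, size=d * d))   # hypothetical PSD stand-in
W = one_tensor(R_inv, Wt)                          # candidate witness

def rand_ket():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

vals = []
for _ in range(5000):
    w = np.kron(rand_ket(), rand_ket())
    vals.append(np.real(w.conj() @ W @ w))
print("min over sampled product states:", min(vals))   # must be >= 0 for a witness
print("min eigenvalue of W            :", np.linalg.eigvalsh(W).min())  # must be < 0
```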
V. CONCLUSIONS
In this paper we have shown that for the construction of entanglement witnesses it is enough to consider maps which are not necessarily positive on the whole domain, but only on some sub-domain. Namely, we can consider a generally non-positive map (see Theorem 1) which is a surjective function from the set P^d_k of rank-k projectors to the set P^d_1 of rank-one projectors (see Corollary 2, Proposition 3 and Remark 5). Our illustrative example is the inverse reduction map, for which we have presented explicit examples of entanglement witnesses. It is also worth mentioning one open problem connected with our construction: it would be interesting to check whether the entanglement witnesses obtained from the inverse reduction map are decomposable. One can ask about the connection between the decomposability property and the structure of the chosen map or chosen operator W (see Theorem 1).
Appendix A: Basic properties of unitary spaces

Let P ∈ B(C^d) be an orthogonal projector with P² = P, P† = P and Tr(P) = d - 1; then P gives a unique decomposition of the space C^d of the form

C^d = Im P ⊕ ker P, ker P = (Im P)^⊥, (A2)

with dim(ker P) = 1 and dim(Im P) = d - 1, so Im P is a hyperplane. Moreover, if {|ψ_i⟩}_{i=1}^{d-1} ⊂ Im P and {|φ_i⟩}_{i=1}^{d-1} ⊂ Im P are two orthonormal bases in the subspace Im P, then

Σ_{i=1}^{d-1} |ψ_i⟩⟨ψ_i| = Σ_{i=1}^{d-1} |φ_i⟩⟨φ_i| = P.

So the spectral decomposition of the projector P does not depend on the choice of the orthonormal basis in Im P.
It is known that any set of orthonormal vectors in C^d (or in any linear space) may be extended to a basis of the space C^d. Such extensions are not unique. The structure of the extensions of orthonormal bases of the space Im P to bases of the space C^d is described by the following: if {|ψ_i⟩}_{i=1}^{d-1} ⊂ Im P and {|φ_i⟩}_{i=1}^{d-1} ⊂ Im P are two orthonormal bases in the subspace Im P, then their extensions differ only in the last vectors |ψ_d⟩, |ψ'_d⟩ ∈ ker P. It means that for a given orthogonal projector P of rank d - 1 there exists a vector |ψ⟩ ∈ ker P with ||ψ|| = 1 such that the extension of any orthonormal basis {|ψ_i⟩}_{i=1}^{d-1} of Im P to a basis of C^d has the form {|ψ_1⟩, ..., |ψ_{d-1}⟩, e^{iϕ}|ψ⟩}. Any vector of the form e^{iϕ}|ψ⟩, where |ψ⟩ ∈ ker P, ||ψ|| = 1, forms an orthonormal basis of the one-dimensional subspace ker P.

Corollary 2. There exists a bijective correspondence between the elements of the sets P^d_{d-1} and P^d_1: for every P ∈ P^d_{d-1} there exists a unique (∃!) Q ∈ P^d_1, and vice versa. Moreover, if P ∈ B(C^d) with P² = P, P† = P, Tr(P) = d - 1, then there exists a unique orthogonal projector Q ∈ B(C^d), Q² = Q, Q† = Q, Tr(Q) = 1, such that

Q = 1 - P = |ψ⟩⟨ψ|,

where |ψ⟩ ∈ ker P is any normalized basis vector of ker P, and Im Q = ker P, ker Q = Im P and PQ = QP = 0.
Proof. To prove the second statement, let us consider an orthonormal basis {|e_i⟩}_{i=1}^d of C^d, for which Σ_{i=1}^d |e_i⟩⟨e_i| = 1. In particular this holds for orthonormal bases of C^d that are extensions of orthonormal bases of Im P, i.e. for bases of the form {|ψ_1⟩, ..., |ψ_{d-1}⟩, |ψ⟩}, where {|ψ_1⟩, ..., |ψ_{d-1}⟩} is an orthonormal basis of Im P and |ψ⟩ ∈ ker P with ||ψ|| = 1 forms an orthonormal basis of the one-dimensional ker P. So we have P + Q = 1, where Q = |ψ⟩⟨ψ|, and from Proposition 1 we know that an orthogonal projector does not depend on the choice of basis in its range, so Q does not depend on the choice of the basis vector |ψ⟩ and is unique.
Appendix B: Auxiliary lemmas
After the short introduction to the topic of unitary spaces contained in Appendix A, we are ready to present some conclusions contained in the two following propositions. First, Proposition 3 contains a generalization of the bijection of Corollary 2 to rank-k projectors, which allows us to formulate the general statement contained in Theorem 1. Finally, Proposition 4 is an auxiliary result important in the proof of the above-mentioned theorem.
Proposition 3. Let P be an orthogonal projector, i.e. P ∈ B(C^d) with P² = P, P† = P, Tr(P) = k, k = 1, ..., d - 1; then P gives a unique decomposition of the space C^d of the form

C^d = Im P ⊕ ker P, ker P = (Im P)^⊥, (B2)

with dim(ker P) = d - k and dim(Im P) = k. Moreover, for any such P there exists a unique orthogonal projector Q ∈ B(C^d), Q² = Q, Q† = Q, such that Q = 1 - P, where Im Q = ker P, ker Q = Im P and PQ = QP = 0; here

P^d_k = {P ∈ B(C^d) : P² = P, P† = P, Tr(P) = k}.

Remark 5. The reader will notice that for our purposes in Theorem 1 we can choose a map which establishes a one-to-one correspondence between the set P^d_k of rank-k projectors and the set P^d_1 of rank-one projectors. In the following we will also need the easy-to-check

Proposition 4. Let |ψ⟩ ∈ C^d and |φ_i⟩ ∈ C^d, i = 1, ..., d - 1, be such that

⟨ψ|ψ⟩ = 1, ⟨φ_i|φ_j⟩ = δ_ij. (B6)

Then P = Σ_{i=1}^{d-1} |ω_i⟩⟨ω_i|, where |ω_i⟩ = |ψ⟩ ⊗ |φ_i⟩, is an orthogonal projector of rank d - 1 (in fact P ∈ P^{d²}_{d-1}), and it is generated by the simple tensors |ω_i⟩, so it is of a particular form. Note that | 2015-09-24T10:11:52.000Z | 2015-03-02T00:00:00.000 | {
"year": 2015,
"sha1": "169969bdd2cca3528e7073674d25e00fc2da2e18",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1503.00528",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "169969bdd2cca3528e7073674d25e00fc2da2e18",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
59358356 | pes2o/s2orc | v3-fos-license | Reduction of corneal scarring in rabbits by targeting the TGFB 1 pathway with a triple siRNA combination
Purpose: The transforming growth factor beta1 (TGFB1) pathway has been linked to fibrosis in several tissues including skin, liver, kidney and the cornea. In this study, an RNA interference-based approach using siRNAs targeting three critical scarring genes, TGFB1, TGFB receptor 2 (TGFBR2) and connective tissue growth factor (CTGF), was tested for effects on reducing alpha smooth muscle actin (SMA) and corneal scarring (haze) in excimer laser ablated rabbit corneas. Methods: Levels of TGFB1 and CTGF mRNAs were measured using qRT-PCR in the epithelial and endothelial cell layers of normal and excimer ablated rabbit corneas at 30 minutes, 1 day and 2 days after ablation. Two different scarring models were utilized to assess the effects of the triple siRNA combination on corneal scarring. In the first model, rabbit corneas were unevenly ablated creating a mesh pattern, then treated immediately with the triple siRNA combination. After 1 day the ablated areas of corneas were collected and levels of mRNAs for TGFB1, TGFBR2 and CTGF were measured. After 14 days, levels of mRNA for SMA were measured and SMA protein was immunolocalized in frozen sections. In the second model, rabbit corneas were uniformly ablated to a depth of 155 microns followed by three daily doses of the triple combination of siRNA. After 14 days, corneas were photographed and images were analyzed using ImageJ software to assess corneal scarring. Corneas were also analyzed for levels of SMA mRNA. Results: In both unwounded and wounded corneas, levels of TGFB1 and CTGF mRNA were always significantly higher in endothelial cells than in epithelial cells (10 to 30 fold). Thirty minutes after injury, levels of both TGFB1 and CTGF mRNAs increased approximately 20-fold in both epithelial and endothelial cells, and further increased approximately 60-fold in 2 days. In the first therapeutic experiment with a single siRNA dose, two of three rabbits showed substantial reductions of all three target genes after 1 day, with a maximum knockdown of 80% for TGFB1, 50% for TGFBR2 and 40% for CTGF mRNA levels, and reduced SMA mRNA at day 14. In the second therapeutic experiment with multiple doses of siRNA treatment, both rabbits showed a ~22% reduction in scar formation at day 14 as calculated by image analysis, with a corresponding 70% and 60% reduction of SMA RNA expression. Conclusion: These results demonstrate that both TGFB1 and CTGF dramatically increase in rabbit corneal epithelial and endothelial cells after injury. Treatment of excimer ablated rabbit corneas with a triple combination of siRNAs effectively reduced levels of the target genes and SMA, leading to reduced corneal scarring at 14 days, suggesting that this triple siRNA combination may be an effective new approach to reducing scarring in cornea and other tissues.
INTRODUCTION
Corneal scarring remains a serious complication that can ultimately lead to functional vision loss. In an injured cornea, a cascade of molecular events is initiated by prolonged, elevated levels of transforming growth factor beta (TGFB1), which then combines with the transforming growth factor receptor II (TGFBR2), inducing the synthesis of connective tissue growth factor (CTGF) and causing excessive scarring (corneal haze) that impairs vision. The TGF-β system has emerged as a key component of the fibrogenic response to wounding by regulating the transformation of quiescent corneal keratocytes into activated fibroblasts that synthesize ECM and into myofibroblasts that contract corneal matrix (Chen et al. 2000; Jester, Petroll, and Cavanagh 1999). These myofibroblasts are filled with alpha smooth muscle actin (SMA), which forms microfilaments that are the major source of light scattering in corneal scars [1]. CTGF, acting as a downstream mediator of TGFB1, down-regulates synthesis of corneal crystallin proteins in quiescent keratocytes and up-regulates synthesis of collagen. Thus, the excessive scattering of light that is clinically described as corneal scar and haze results from the combination of collagen laid down in an irregular pattern in the wound and opaque activated fibroblasts and myofibroblasts that no longer synthesize the corneal crystallin proteins that keep their cytoplasm transparent.
We have previously shown the effect of photorefractive keratectomy (PRK) on CTGF levels in rat and mouse corneas and found that CTGF was present in all cell layers of the cornea. The levels of CTGF were found to rise continually from the time of wounding up through 28 days post wounding [1]. However, in order to completely understand the role of TGFB1 in tissue repair and scarring, it is essential to understand the timing and site of the synthesis of these growth factors so that the appropriate cell layer is targeted for nucleic acid therapies. The experiments in this study are expected to reveal when and where TGFB1 and CTGF are synthesized after a corneal injury so that the best mode of action for an anti-fibrotic therapy can be chosen.
There currently are no FDA-approved drugs that selectively reduce the expression of genes causing corneal scarring and haze. At present, the methods used to decrease corneal haze are topically applied steroids or antimetabolite drugs that target the cells' capacity to respond to signaling. Mitomycin C is used during some ocular surgeries, but it may have very damaging side effects, such as epithelial defects, stromal melting, endothelial damage, and conjunctival thinning [2]. Hence, there is a need to develop a targeted approach that can nullify the specific molecular pathways that give rise to a scar.
It is, however, difficult to achieve a significant therapeutic effect by employing a one-target, one-drug paradigm on such a complex, multi-factorial signaling pathway. A multi-target approach can instead interrupt or act on the complex signaling network at multiple points, and affect the cell in ways that an individual component cannot [3]. In this study, we have tested an siRNA triple combination targeting TGFB1, TGFBR2 and CTGF. This triple siRNA combination was shown to be effective in reducing the expression of the targets (TGFB1, TGFBR2 and CTGF) and downstream mediators like collagen-I and α-smooth muscle actin (SMA) in both an in vitro cell culture system and ex vivo organ cultures [4].
Additionally, it is important to deliver these siRNA combinations to the corneal layer where there is high localization of the target growth factors after wounding. Although most targeted anti-fibrotic approaches target the stromal fibroblasts, due to the eventual presence of myofibroblasts in this region, there has not been any research on the post-wounding localization of TGFB1 and CTGF in the epithelium and the endothelium. A delivery method that targets the corneal layer with maximum post-wounding growth factor localization is critical for an effective therapy.
Delivery of drugs to the cornea is a major challenge, as the mechanical barriers that protect the cornea (multilayered epithelium, tight junctions) constrain ocular drug delivery [5]. The principal properties governing corneal drug absorption are its lipophilicity, partition coefficient and molecular size [6]. Nanocarriers are a potential solution for targeted ocular drug delivery, as they have been shown to be non-immunogenic, have relatively low toxicity, be resistant to protein/serum absorption and be stable in an enzymatic environment [7]. We have previously shown the high efficacy of the nanoparticle kit used in this study in delivering fluorescently labeled siRNA to all layers of the cornea, including the endothelium, in an ex vivo organ culture model [8].
The overall goal of this study is to test and deliver a previously optimized, effective triple siRNA combination to the appropriate corneal layer with high post-wounding localization of TGFB1 and CTGF, so that there is a maximal reduction of scar formation in rabbits.
Laser Ablation of Rabbits
Adult New Zealand rabbits free of disease were used and treated according to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research. Excimer ablation and collection of corneas were performed as previously described [9]. Briefly, rabbits were anesthetized with isoflurane inhalation, and proparacaine eye drops provided topical anesthesia. Laser ablations were performed on both eyes of each rabbit with a Summit SVS excimer laser that is committed to animal vision research. In this study, two different approaches were tested to obtain the most intense scarring in the rabbits. In the first approach, using the laser in phototherapeutic keratectomy mode, the central 6 mm diameter area of the cornea was ablated at a dose of 160 mJ/cm² to an initial depth of 80 microns to remove the epithelium, and then the final 45 microns were ablated by placing a mesh over the cornea to make an uneven ablation. In the second approach, using the same laser parameters, the central 6 mm diameter area of the cornea was ablated to an even depth of 155 microns.
The eyes were then pretreated with 50 μM EDTA for 10 minutes. A total of 150 μl of the nanoparticle complexed with the siRNA triple combination was added to one of the eyes, while the other was treated with the vehicle control and was considered a paired negative control. The eyes were held open for 3 minutes to allow the nanoparticle to penetrate the stroma before being disturbed. No postoperative topical steroid was used, to ensure that the wound healing process was not altered with anti-inflammatory agents. Corneas were collected at different time points according to the experiment, homogenized in a pestle with liquid nitrogen and then transferred to TRIzol. The RNA was then extracted using a hybrid RNA extraction protocol with RNeasy spin columns [10].
Gross Corneal Dissection
Rabbits without observable corneal wounds were anesthetized, excimer ablated to 125 microns and euthanized at the designated time points as described above. A scalpel was used to immediately scrape the epithelium off, with care taken to ensure that the scraped mass was retained on the blade. The scraped epithelial mass was then transferred to 350 μl of tissue lysis buffer (Qiagen, buffer RLT) and the blade was rinsed with 250 μl of additional lysis buffer. The cornea was then excised from the globe by cutting with a fresh scalpel and scissors at the corneal/scleral boundary. The cornea was placed face down and yet another fresh scalpel was used to scrape off and retain the endothelium, as was done with the epithelium. The endothelial mass was transferred to 350 μl of lysis buffer and the blade rinsed with an additional 250 μl of lysis buffer. Each grossly isolated cellular layer was then subjected to ultrasonication on ice for further tissue disruption. The probe was rigorously washed, rinsed and dried between samples. The homogenates were then immediately loaded onto Qiagen gDNA removal columns and the RNA was purified in accordance with the manufacturer's protocol (Qiagen RNeasy, Qiagen, Inc., Cat. #74104).
The purified RNA was quantified via ultra-violet absorbance using a Nanodrop ND-1000 spectrophotometer set for RNA quantification.
Preparation of siRNA-Nanoparticles
A commercially available nanoparticle kit called Invivoplex® was purchased from Aparnabio (Rockville, MD) and used according to the manufacturer's instructions. Briefly, an siRNA triple combination solution was made at a concentration of 0.9 mg/ml. 600 μL of this solution was mixed with 300 μL of the provided cargo buffer. This solution was added drop-wise to 900 μL of the given nanoparticles over a magnetic stirrer. This preparation forms siRNA nanoparticles <50 nm that are stable for a week.
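For reference, the mixing arithmetic above works out as follows (a small illustrative helper; the 150 μL dose volume is taken from the ablation section):

```python
# Final siRNA concentration after combining stock, cargo buffer and nanoparticles.
def final_conc_mg_per_ml(stock_mg_ml, v_sirna_ul, v_buffer_ul, v_particles_ul):
    total_ul = v_sirna_ul + v_buffer_ul + v_particles_ul
    return stock_mg_ml * v_sirna_ul / total_ul

c = final_conc_mg_per_ml(0.9, 600, 300, 900)   # -> 0.3 mg/mL siRNA in the complex
dose_ug = c * 150                               # a 150 uL topical dose carries 45 ug
print(f"complex concentration: {c:.2f} mg/mL; siRNA per dose: {dose_ug:.0f} ug")
```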
Reverse Transcription-Polymerase Chain Reaction
Total RNA was extracted using the Qiagen RNeasy mini isolation kit (Qiagen, Inc., Valencia, CA) according to the manufacturer's directions. cDNA was synthesized using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Carlsbad, CA) according to the manufacturer's procedure. The levels of mRNA for TGFB1, CTGF and SMA were determined using the real-time PCR TaqMan assay. The primers and probes for each gene are defined in Table A1. The endogenous controls, ribosomal 18S RNA and GAPDH, were used to normalize target genes. Primers, probes and cDNA were combined with TaqMan Universal PCR Master Mix (Applied Biosystems, Carlsbad, CA) and amplification was performed on the Applied Biosystems 7300 HT Fast Real-Time PCR System (Carlsbad, CA). A few samples were run without reverse transcriptase to measure the quantity of genomic DNA (gDNA) present in the sample. The thermal cycling conditions were as follows: 2 min at 50°C, 10 min at 95°C, then 40 cycles of 15 sec at 95°C and 1 min at 60°C. The relative gene expression of the growth factors was calculated using the 2^(-ΔΔCt) method.
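A brief sketch of the 2^(-ΔΔCt) calculation just mentioned; the Ct values below are illustrative placeholders, not measured data.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^(-ddCt) method: normalize the target Ct to the
    housekeeping gene, then to the untreated/unablated control."""
    d_ct_sample  = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# A target signal appearing ~4 cycles earlier than in the control corresponds
# to a ~16-fold up-regulation.
fold = relative_expression(ct_target=24.0, ct_ref=15.0,
                           ct_target_ctrl=28.0, ct_ref_ctrl=15.0)
print(f"fold change vs. unablated control: {fold:.1f}x")   # 16.0x
```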
Macrophotography
Prior to general anesthesia, each eye was topically anesthetized with proparacaine and each pupil was dilated with phenylephrine 2.5% and tropicamide eye drops. Each rabbit was then generally anesthetized with inhaled isoflurane as described earlier. The eyelids were held open and out of the way with either an eyelid speculum or a pair of cotton swabs. A Nikon camera (D40 or D7000) was outfitted with a macro lens capable of native 1:1 reproduction (either a 100 mm Tokina or 60 mm Nikkor) and the Nikon R1C1 Creative Lighting System (CLS) flash system. The D40 was set to the "Normal" program, ISO 200, manual exposure with a shutter speed of 1/500 second and f/16, while the D7000 was set to the "Standard" program, ISO 100, manual exposure with a shutter speed of 1/250 second and f/18. To visualize and measure haze, the flash power was set manually (1/16th for the D40, 1/6.4th for the D7000) and neither the flash nor the lens had a filter. For all images, the lens was set to manual focus and pre-focused to a 1:1 reproduction ratio, and the camera was focused by moving it closer to or further from the subject. Guide lights on the flash heads were used to facilitate haze visualization and focusing.
Immunohistochemistry
The rabbit corneas from the experiments were fixed overnight in 4% paraformaldehyde. They were then bisected, embedded in OCT and sectioned into 10 μm slides. The slides were then washed with PBS and blocked in horse serum for 1 hour. Finally, they were incubated with an SMA antibody-Cy3 conjugate (Sigma) for 1 hour at room temperature. The slides were mounted with DAPI and imaged using a fluorescence microscope.
The Effect of PRK on mRNA Levels of TGFB1 and CTGF
The corneas of 11 rabbits were evenly ablated to 125 microns using an excimer laser. Three rabbits were sacrificed at 30 minutes after ablation, and four rabbits each at Day 1 and Day 2 post-ablation. The corneas of three rabbits were left unablated and used as controls. At the designated time points, the corneas were collected and the epithelial and endothelial layers were scraped off using a surgical scalpel and collected in separate tubes. The expression levels of TGFB1 and CTGF were analyzed using qRT-PCR. As shown in Figure 1, there is an initial spike in the levels of TGFB1 and CTGF as early as 30 minutes after ablation, followed by a decrease at Day 1 and an exponential increase on Day 2. TGFB1 and CTGF follow a similar trend in both the epithelial and endothelial layers. Figure 1(e) plots the RNA-level expression of TGFB1 and CTGF relative to the epithelium. The expression of both TGFB1 and CTGF was consistently higher in the endothelial layer than in the epithelium, particularly at Day 1, when the expression of CTGF in the endothelium was ~35 times that of the epithelium. All expressions were calculated with respect to the unablated corneas and were normalized using GAPDH as the housekeeping gene.
Statistical Analysis
All experiments were performed in triplicate and all statistical analyses were conducted using GraphPad Prism (San Diego, CA). Student's t-test or analysis of variance (ANOVA) with Tukey's post-hoc assessment was used, as appropriate, to test for significance between groups. Results were considered statistically significant where p < 0.05.
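The testing described above can be sketched as follows with SciPy in place of GraphPad Prism (the arrays are placeholders, not the study's measurements; stats.tukey_hsd requires a recent SciPy release):

```python
import numpy as np
from scipy import stats

treated = np.array([0.42, 0.55, 0.48])   # e.g. normalized SMA expression (placeholder)
control = np.array([1.00, 0.95, 1.08])

t, p = stats.ttest_ind(treated, control)
print(f"t-test: p = {p:.4f} (significant if p < 0.05)")

# For more than two groups: one-way ANOVA followed by Tukey's HSD post-hoc test.
g1, g2, g3 = treated, control, np.array([0.70, 0.66, 0.74])
f, p_anova = stats.f_oneway(g1, g2, g3)
print(f"ANOVA: p = {p_anova:.4f}")
print(stats.tukey_hsd(g1, g2, g3))        # pairwise comparisons
```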
RESULTS
The levels of TGFB1 and CTGF were analyzed at several time points following ablation to find the best time to dose with anti-scarring drugs. Two different models of scarring were tested to find which of the two generated the most intense scarring in rabbits. Also in this study, the in vivo efficacy of a previously optimized triple siRNA combination, which was effective in reducing downstream scarring genes (SMA and collagen-I) in both in vitro culture and ex vivo organ cultures, was evaluated. The triple siRNA combination targets three critical scarring genes within the same TGFB1 pathway: TGFB1, TGFBR2 and CTGF [4,8].
Figure 1. Post-ablation expression timeline of TGFB1 and CTGF. The corneas of 11 rabbits were evenly ablated to 125 microns using an excimer laser. Three rabbits were sacrificed at 30 minutes after ablation, and four rabbits each at Day 1 and Day 2 post-ablation. The corneas of three rabbits were left unablated and used as the control. The corneas were collected and the epithelial and endothelial layers were scraped off using a scalpel and collected in separate tubes. The expression levels of TGFB1 and CTGF were analyzed using qRT-PCR. All expressions were calculated with respect to the unablated corneas and were normalized using GAPDH as the housekeeping gene.
The Effective Triple Combination (T1R2C1) Inhibits Target mRNA Accumulation in Rabbits
The corneas of 9 rabbits were unevenly ablated to 125 microns using an excimer laser. The right eye was treated with 150 μL of the effective triple combination (T1R2C1) complexed with nanoparticles, and the left eye received an equal volume of the vehicle control. One day later, 3 rabbits were humanely sacrificed and total RNA was extracted for analysis by RT-PCR. The effective triple combination (T1R2C1) gave an average knockdown of 57% for TGFB1, 25% for TGFBR2 and 24% for CTGF (Figure 2). One of the rabbits (rabbit 1) had a maximum knockdown of 80% for TGFB1, 57% for TGFBR2 and 46% for CTGF, indicating that the siRNA combination was effectively delivered to the corneal stroma in this animal. The knockdown percentages were calculated with respect to the left eye, which received vehicle control without the siRNA.
SMA Immunohistostaining in Triple siRNA Treated Rabbit Corneas
Six of the 9 treated rabbits from the above experiment were used for a long-term experiment to observe scar formation. After 14 days, the intensity of scarring in both the treated and the control eyes was graded by a masked ophthalmologist and was also imaged using a digital camera. The rabbits were then humanely sacrificed and three corneas were collected for SMA immunohistostaining. The corneas were fixed overnight in 4% paraformaldehyde. They were then bisected, embedded in OCT, sectioned into 10 μm slides and stained for SMA. The treated cornea of two of the three rabbits selected for immunohistostaining had a lower haze grading score than the control cornea. The control eye that was ablated and treated with vehicle control shows SMA staining in the basal epithelium and stroma, while the triple siRNA treated right eye shows reduced SMA staining (Figures 3(b) and (d)).
Three corneas from the 6 treated rabbits in the above experiment were collected for RNA-level analysis by qRT-PCR. An SMA knockdown of >40% was observed in 2 of the 3 rabbits; the untreated left eyes in these rabbits had haze grading scores of 2 and 3, respectively. No knockdown was observed in the other rabbit, whose untreated left eye had a haze grading score of 1. The RNA-level knockdown percentage of SMA was calculated with respect to the untreated left eye, and all expressions were normalized to 18S rRNA.
Macrophotography Images of Reduction in Scarring by Repeated siRNA Dosing
The corneas of 3 rabbits were evenly ablated to 155 microns using an excimer laser. One of the rabbits had a pre-existing scar and spots of neovascularization on the right eye and hence had to be excluded. The right eye was treated with the effective triple combination (T1R2C1) complexed with nanoparticles for the first three days after ablation, while the left eye was left untreated. After 14 days, the corneas of both rabbits were imaged using the macrophotography technique described in the Methods section. The haze grading scores of both rabbits were similar, with Rabbit b showing a slight reduction in scarring after treatment (Figure 4).
Quantification of Scar Reduction by the siRNA Treatment
The digital images from the above experiment were subjected to anti-red grayscale conversion by using only the data in the blue channel. The contrast was increased by automatic brightness correction in ImageJ. The wounded region was split into two regions: Top (Region I) and Bottom (Region II) (Figures 5(a) and (b)). The pixel intensities of the regions of interest were normalized to those of the transparent unwounded regions of the corresponding corneas. The percentage of haze reduction was calculated as the reduction in pixel intensity of the scarred region in the treated eye with respect to that of the untreated eye. Region I of both rabbits shows an average of ~22% reduction in scarring due to the siRNA treatment (Figure 5(c)); however, there was no visible reduction in scar formation in Region II of either rabbit. After imaging, the wounded region of the corneal tissue was collected with an 8-mm biopsy punch for RNA analysis. In the corneas treated with the triple siRNA combination, the two rabbits showed a corresponding reduction of 60% and 40% in the RNA-level expression of SMA (Figure 5(d)). The knockdown percentages of SMA in the corneas were calculated with respect to the untreated eye. All expressions were normalized to 18S rRNA.
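A rough re-implementation of this quantification outside ImageJ might look as follows; the file names and region coordinates are hypothetical placeholders.

```python
import numpy as np
from PIL import Image

def roi_mean_blue(path, roi, ref):
    """Mean blue-channel ('anti-red' grayscale) intensity of a scar ROI,
    normalized to a transparent unwounded reference patch of the same cornea."""
    img = np.asarray(Image.open(path).convert("RGB")).astype(float)
    blue = img[:, :, 2]
    (r0, r1, c0, c1), (s0, s1, t0, t1) = roi, ref
    return blue[r0:r1, c0:c1].mean() / blue[s0:s1, t0:t1].mean()

roi = (100, 200, 150, 300)    # scarred region (rows, cols) -- hypothetical
ref = (400, 450, 150, 300)    # transparent unwounded region -- hypothetical
treated   = roi_mean_blue("treated_eye.png", roi, ref)
untreated = roi_mean_blue("control_eye.png", roi, ref)
print(f"haze reduction: {100 * (untreated - treated) / untreated:.1f}%")
```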
DISCUSSION
Several papers discuss the importance of the TGFB1 pathway acting through CTGF to activate quiescent corneal keratocytes and transform them into fibroblasts that synthesize collagen and further transform into myofibroblasts [11][12][13]. However, there has been very little research on which cell layers of the cornea synthesize high levels of TGFB1 and CTGF. Our observation that both TGFB1 and CTGF mRNA levels are highest in the corneal endothelium is novel, and it has reshaped the thinking behind the primary target cells and future delivery methods for corneal anti-fibrotic therapies. The immediate increase in the expression of TGFB1 and CTGF as early as 30 minutes post ablation further emphasizes the importance of an immediate treatment. The nanoparticle delivery system used in this study was able to successfully deliver siRNA to all layers of the cornea, including the endothelium, in an ex vivo organ culture model [8].
The results of the initial therapeutic experiment assessing the efficacy of a single application of the triple siRNA combination demonstrated the "proof of principle" that the siRNAs could be effectively delivered into the target corneal cells. However, there was variability in the level of knockdown among the three rabbits, suggesting that the dosing was not optimized for in vivo experiments. Factors that could contribute to the variability include the presence of a nictitating membrane in addition to the eyelids, and tears that tend to clear the surface of the cornea and reduce drug exposure time. It is also possible that the volume of the siRNA formulation used for the single dose (150 μL) in this experiment was too high, so that much of the siRNA was washed away once the trephine was removed rather than penetrating the cornea.
The reduction in scar formation 14 days after a single dose of the triple siRNA treatment also showed a positive trend. Three out of six rabbits had a reduction in scar formation according to the haze grading scores of a masked ophthalmologist. Both the digital images and the immunohistostaining for SMA in Figure 3 showed a reduction in scarring in the treated corneas when compared to the control. Both of these measurements were, however, qualitative, and we were unable to accurately quantify the exact reduction in scarring in those rabbits due to the lack of a standard imaging technology in the field of corneal scarring.
There was an interesting trend relating the haze grading scores to the RNA knockdown percentage of SMA. We observed an RNA knockdown of 40% and 50% in the two rabbits that had haze grading scores of 2 and 3, respectively. However, no SMA knockdown was observed in the rabbit with a haze grading score of 1. This suggests that the triple siRNA combination is effective in knocking down SMA expression in intense scars and is less effective in cases of mild scarring.
A key paper in the literature reports that deeper ablations during PRK lead to more intense scarring in animals [9]. In the second therapeutic experiment, we created a deep corneal ablation of 155 microns and also repeated the dosing of the triple siRNA combination for the first 3 days after ablation. Neither of the two rabbits in this experiment, however, developed an intense scar in the untreated control corneas. The haze grading scores of both rabbits were similar, with one showing a slight reduction in scarring after siRNA treatment. However, the image analysis of the digital images revealed an average of ~22% reduction in scarring in Region 1 of both corneas treated with the siRNA combination (Figure 5). It is interesting to note that the siRNA treatment had an effect in reducing scarring in Region 1, but not in Region 2. This could be due to uneven transfection of the siRNA-nanoparticle complex into the different corneal layers. The RNA level expression of SMA was also reduced in both rabbits, by 60% and 40%, respectively.
In this study, we performed a series of pilot experiments to better understand the dynamics involved in the in vivo translation of an effective in vitro anti-fibrotic drug. The most immediate problem associated with testing an anti-fibrotic drug in the rabbit cornea is the generation of a consistent and intense scar. We tried two different models of scarring in this study, and although the corneas developed mild to moderate haze, there was no consistent, intense scarring across animals. Since the variation in the intensity of scarring among animals is natural and cannot be controlled, treating the corneas with TGFB1 for the first two days post ablation may help intensify scar formation [14].
Another major hurdle in evaluating the efficacy of anti-fibrotic drugs in the cornea is the lack of a standard imaging technology to quantify the reduction in scarring. The current clinical method of haze grading by a masked ophthalmologist is qualitative and very subjective. Although the macrophotography technique described in this study is considerably more objective than haze grading, it also has its disadvantages. The dual flash heads create an unwanted reflection on the cornea, because of which we are unable to measure the entire wounding region. A potential solution to this problem could be to image and analyze the corneas after excision. This would allow uniform illumination of the cornea without flash bias, and the scar could also be measured as a function of transparency.
Finally, the dosing regimen needs to be optimized so that the maximum amount of inhibitory RNA can be delivered to the cornea. Lowering the volume and administering the siRNA cocktail for 2-3 days after the surgery, along with an agent that increases the viscosity of the drug so that it adheres to the surface of the cornea for a longer duration, might help increase the drug exposure time [15]. Other options could include iontophoretically driving the drug into the different layers of the cornea [16,17].
Although the triple siRNA treatment in this study did not completely eliminate scarring, it did generate a strong positive trend toward a reduction in scarring. It is important to understand and resolve the problems associated with the generation of intense scarring, drug delivery optimization, and scar imaging before investing in a major in vivo experiment with many animals. Once these parameters are optimized, the triple siRNA combination could lead to a significant reduction of scarring in the cornea and perhaps also in other tissues.
Supported by Grants from the US Army Medical Research Acquisition
Figure 1. Post-ablation expression timeline of TGFB1 and CTGF. The corneas of 11 rabbits were evenly ablated to 125 microns using an excimer laser. 3 rabbits were sacrificed 30 mins after ablation, and 4 rabbits each on Day 1 and Day 2 post-ablation. The corneas of three rabbits were unablated and used as the control. The corneas were collected, and the epithelium and endothelial layers were scraped using a scalpel and collected in separate tubes. The expression levels of TGFB1 and CTGF were analyzed using qRT PCR. All expressions were calculated with respect to the unablated corneas and were normalized using GAPDH as the housekeeping gene. Panel (e) plots the RNA expression of TGFB1 and CTGF in the endothelium relative to the epithelium.
Figure 2. Short-term knockdown of target growth factors. The corneas of 3 rabbits were unevenly ablated to 125 microns using an excimer laser. The right eye was treated with the effective triple combination (T1R2C1) complexed with nanoparticles and the left eye received the vehicle control. One day later, the rabbits were sacrificed and RNA was extracted for analysis by qRT PCR. The figure gives the RNA level knockdown percentages of the target growth factors calculated with respect to the left eye. All expressions were normalized to 18S rRNA.
Figure 3. SMA immunohistostaining in triple siRNA treated rabbit corneas. The corneas of 6 rabbits were unevenly ablated to 125 microns using an excimer laser. The right eye was treated with the effective triple combination (T1R2C1) complexed with nanoparticles and the left eye received the vehicle control. 14 days later, the rabbits were sacrificed and three corneas were collected for SMA immunohistostaining. The corneas were fixed overnight in 4% paraformaldehyde. They were then bisected, embedded in OCT and sectioned at 10 μm onto slides. To stain for SMA, slides were blocked in horse serum and then incubated with Cy3 labeled SMA antibody (red). The control eye that was ablated and treated with vehicle control shows SMA staining in the basal epithelium and stroma, while the triple siRNA treated right eye shows a reduction in SMA staining.
Figure 4. Reduction in scarring by repeated siRNA dosing. The corneas of 3 rabbits were evenly ablated to 155 microns using an excimer laser. The right eye was treated with the effective triple combination (T1R2C1) complexed with nanoparticles for the first three days after ablation, while the left eye was left untreated. Both eyes were imaged using a digital camera after 14 days.
Figure 5. Quantification of scar reduction. The digital images from the above experiment were subjected to anti-red grayscale conversion by using the data in the blue channel. The contrast was increased by automatic brightness correction in ImageJ. The wounding region was split into two regions: Top (Region-I) and Bottom (Region-II). The pixel intensities of the regions of interest were normalized to those of the transparent unwounded regions of the corresponding corneas. The percentage of haze reduction was calculated as the reduction in pixel intensity of the scarring region in the treated eye with respect to that of the untreated eye (panel (c)). After imaging, the wounding region of the corneal tissues was collected with a 6-mm biopsy punch for RNA analysis. Panel (d) gives the RNA level knockdown percentage of SMA in the corneas calculated with respect to the untreated eye. All expressions were normalized to 18S rRNA.
| 2018-12-27T20:15:53.684Z | 2013-09-19T00:00:00.000 | {
"year": 2013,
"sha1": "8e05340d90aa0f394e69330ca82f5a09a579932a",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=38167",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "8e05340d90aa0f394e69330ca82f5a09a579932a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
119074043 | pes2o/s2orc | v3-fos-license | Spin-Peierls transition with strong structural fluctuations in the vanadium oxide VOSb$_{2}$O$_{4}$
We report on magnetic susceptibility and electron spin resonance measurements on polycrystalline samples of the vanadium oxide VOSb$_{2}$O$_{4}$, a quasi-one-dimensional S=1/2 Heisenberg system. We show that the susceptibility vanishes at zero temperature, as in a gapped system, and we argue that this is due to a spin-Peierls transition with strong structural fluctuations.
Although the study of the spin-Peierls (SP) transition in S = 1/2 antiferromagnetic (AF) Heisenberg chains started long ago with the discovery of the first SP transition in the organic system TTFCuBDT in 1975 [1,2], a major breakthrough in the field was the 1993 discovery by Hase et al. [3] of the first inorganic system exhibiting an SP transition, namely CuGeO 3 . The possibility of growing large single crystals has led to very intensive experimental activity, and the understanding of the properties of such systems, in particular in strong magnetic fields, has been dramatically improved.
However, CuGeO 3 is representative of only one class of spin-Peierls systems, namely systems in which structural fluctuations are to a certain extent negligible. In such systems, the dimerization of the lattice is very abrupt, and the susceptibility exhibits a characteristic cusp at the transition temperature. The irrelevance of structural fluctuations in a 1D system is very surprising, and the first theories actually predicted a strongly fluctuating regime above the transition [4]. This discrepancy was resolved by Cross and Fisher [5], who showed that an appropriate treatment of 3D phonons can lead to a significant suppression of fluctuations.
The study of fluctuations in spin-Peierls systems was recently revived, however, by the careful analysis of the spin-Peierls transition in the organic system (BCPTTF) 2 X by Dumoulin et al. [6] in 1996, who convincingly showed the presence of very strong structural fluctuations above the spin-Peierls transition. Judging from the impact of CuGeO 3 on the field, the search for inorganic systems with similar properties is a real challenge. However, the inorganic spin-1/2 chains synthesized so far do not seem to fill this gap. Most of them just do not show any sign of an SP instability, like Sr 2 CuO 3 [7] or MgVO 3 [8], while the transition observed in NaV 2 O 5 [9] is very abrupt and is now believed to involve charge degrees of freedom as well.
In this paper we report on the magnetic properties of a vanadium oxide, VOSb 2 O 4 , which we believe is the first example of an inorganic system that undergoes an SP transition with very strong fluctuation effects. This system is made of almost isolated chains of VO 5 pyramids. According to Darriet, Bovin and Galy [10], VOSb 2 O 4 crystallizes in the monoclinic system, space group C2/c, with unit cell dimensions a = 18.03 Å, b = 4.800 Å, c = 5.497 Å, β = 94.58° (Z = 4). The vanadium atoms are fivefold coordinated in slightly distorted square pyramids, with one characteristic short vanadyl V-O bond close to 1.59 Å towards the apex and 2×2 longer bonds at 1.91 and 2.04 Å with the oxygens of the square base. Along the [001] direction the apices of the VO 5 pyramids alternately point up and down relative to the plane of the square base (see Fig. 1(a)). The smallest in-chain V-V distance is approximately 3.01 Å. The distances between the chains are 4.80 and 18.03 Å for the [010] and [100] directions, respectively. Thus, from a magnetic point of view, the VOSb 2 O 4 structure can be viewed as infinite isolated chains of V 4+ ions running along the [001] direction. The antimony atoms exhibit the typical one-sided threefold coordination of the oxygen atoms, having a stereoactive lone pair E [10] (see Fig. 1(b)).
Polycrystalline samples of VOSb 2 O 4 having a light-green color were synthesized by solid-state reaction [10]. ESR X-band spectra were collected using a Bruker ESP300 spectrometer equipped with a standard TE 102 cavity and a continuous helium flow cryostat that allows temperature scans between 4 and 300 K. The temperature and field variation of the magnetization was measured with a Quantum Design SQUID magnetometer from 300 to 1.8 K in fields up to 4 T. The temperature dependence of the magnetic susceptibility χ raw (T) of a 100 mg polycrystalline sample of VOSb 2 O 4 in a field of 2 T is shown in Fig. 2. Below room temperature, when the temperature is lowered, χ(T) passes through a broad maximum at T max ≈ 160 K, which is typical of a S = 1/2 Heisenberg chain with J/k ≃ 250 K. However, on further cooling the sample, there is a rapid decrease which starts around 40 K, and the slope exhibits a clear maximum at T sp = 13 K. Finally, there is a minimum at 10 K followed by an increase of the susceptibility indicating the presence of magnetic impurities. The drop that starts around 40 K is reminiscent of an SP transition, but there is a dramatic difference with CuGeO 3 : there is no cusp in the susceptibility around the temperature where it becomes much smaller than the Bonner-Fisher prediction. On the contrary, the susceptibility drops rapidly but smoothly below 35 K, very much like in (BCPTTF) 2 X. Before we can start discussing the physical origin of this unusual behaviour, the first thing we must check is whether the susceptibility indeed goes to zero at zero temperature, as in an SP transition. Let us first study the impurity contribution in more detail. To characterize this impurity contribution χ imp (T) to χ raw (T) more quantitatively, particularly with a view to separating it from the intrinsic susceptibility of the VOSb 2 O 4 phase, which we will call χ cor (T), we have carried out magnetization measurements vs. H at various fixed temperatures from 80 K to 1.8 K. The results are shown in Fig. 3. An important piece of information about χ imp (T) is contained in the low-temperature nonlinear dependence of M imp (H, T) when µH > kT. Therefore we have to examine the data in Fig. 3 using the following equation: M imp (H, T) = p imp N A g µ B S B S (g µ B S H / k B T), where B S is the Brillouin function, S is the impurity spin value and p imp defines the relative impurity concentration. The results of a fit of the experimental data with this equation are shown in Fig. 3 as solid lines. The parameters S = 1/2, p imp = 0.00573(5) were extracted (g = 1.975 was fixed in the fit procedure), together with the AF Curie-Weiss constant θ ≈ 0.6 K obtained from the low-T dependence of χ −1 raw (T). We are now in a position to correct χ raw (T) for the impurity contribution. In Fig. 2, χ imp (T) = p imp C/(T + θ) (C = 0.366 cm 3 ·K/mol) is plotted (dashed line) along with χ cor (T) = χ raw (T) − χ imp (T). This behaviour is consistent with a zero contribution to the spin susceptibility χ spin (T) = χ cor (T) − χ 0 at zero temperature if the sum of the diamagnetic and Van Vleck contributions χ 0 is equal to χ cor (T = 0). While the diamagnetic contribution can be estimated from standard tables (χ dia ≃ −1.01×10 −4 cm 3 /mol [11]), an unbiased estimate of the Van Vleck susceptibility would require susceptibility data at temperatures much larger than the typical exchange integrals, a regime which is not accessible. To get around this difficulty, we have performed extensive ESR measurements.
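A minimal numerical sketch of this Brillouin fit, assuming S = 1/2, g fixed at 1.975 as stated above, and per-mole SI units; the field and magnetization arrays below are placeholders for the measured data, not values from the paper:

import numpy as np
from scipy.optimize import curve_fit

MU_B, K_B, N_A = 9.274e-24, 1.381e-23, 6.022e23  # SI constants
G, S = 1.975, 0.5  # g-factor fixed in the fit, impurity spin 1/2

def brillouin(x):
    # Standard Brillouin function B_S(x) for spin S
    a, b = (2 * S + 1) / (2 * S), 1 / (2 * S)
    return a / np.tanh(a * x) - b / np.tanh(b * x)

def m_imp(H, p_imp, T):
    # Impurity magnetization per mole: p_imp * N_A * g * mu_B * S * B_S(x)
    x = G * MU_B * S * H / (K_B * T)
    return p_imp * N_A * G * MU_B * S * brillouin(x)

# Placeholder field sweep at fixed T = 1.8 K (replace M with measured data)
H = np.linspace(0.1, 4.0, 20)   # tesla
M = m_imp(H, 0.0057, 1.8)
p_fit, _ = curve_fit(lambda h, p: m_imp(h, p, 1.8), H, M, p0=[0.005])
print(p_fit[0])  # cf. p_imp = 0.00573(5) quoted in the text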
A representative series of X-band ESR spectra recorded from 320 to 7.3 K on a polycrystalline sample (12 mg) is presented in Fig. 4(a). We note the axial symmetry of the obtained spectra, especially apparent at T = 20 K, reflecting the axial symmetry of the crystal field acting on the V 4+ ions in the fivefold pyramidal environment. Computer simulations of the spectra over the temperature range 13 - 320 K give two T-independent g-factors: g ⊥ = 1.978, g ∥ = 1.930, with an average value of 1.962 ± 0.002, as already reported for low-dimensional vanadates [8,12]. At low T it is found that the measured spectra contain an additional ESR signal. The intensity of this additional signal roughly follows a Curie law, and the average g-factor is found to be 1.975 ± 0.005. We ascribe this signal to the magnetic impurities which are responsible for the steep increase of the magnetic susceptibility at low temperatures (see above).
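The quoted average is consistent with the standard powder average for an axially symmetric g-tensor (our reconstruction; the paper does not state the formula explicitly):

$$\bar{g} = \frac{2g_{\perp} + g_{\parallel}}{3} = \frac{2(1.978) + 1.930}{3} = 1.962$$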
To extract information from these ESR spectra, we have proceeded in the following way. Since ESR is insensitive to the diamagnetic and Van Vleck contributions to the susceptibility, we are able, by double integration of the ESR spectra, to reconstruct the sum χ spin (T) + χ imp (T) and then, using an appropriate procedure for the subtraction of χ imp (T), to restore the T-dependence of χ spin (T). In the case of VOSb 2 O 4 the subtraction of χ imp (T) is a rather tedious but unambiguous procedure because i) the impurity ESR spectra have quite different line parameters (such as the linewidth, g-factor and temperature dependence of the spectral intensity) compared to the main spectra; ii) from the magnetization measurements we know the T-evolution of χ imp (T), so we can use this information to check the correctness of the subtraction at each T. We omit the technical details of this procedure and postpone them to a forthcoming paper. The χ ESR spin (T) data extracted from ESR are given in the inset of Fig. 4(b). It is clearly seen that the spin magnetic susceptibility of VOSb 2 O 4 goes to zero in the limit T → 0, a result which is qualitatively apparent from the examination of the ESR spectra at 20, 13 and 7.3 K in Fig. 4(a). For example, the 7.3 K ESR spectrum is almost entirely (about 95%) an impurity one. The fact that χ spin (T → 0) ≈ 0 clearly shows that the low-temperature ground state of VOSb 2 O 4 is a nonmagnetic singlet S = 0. Note that the temperature dependence is consistent with that deduced from the susceptibility measurements after subtraction of the impurity contributions and assuming that the Van Vleck contribution is such that χ spin (T = 0) = 0 (see inset of Fig. 4(b)).
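A sketch of the double-integration step, assuming a baseline-corrected derivative spectrum dP/dH sampled on a field grid (the impurity-line subtraction, which the authors defer to a forthcoming paper, is not attempted here):

import numpy as np
from scipy.integrate import cumulative_trapezoid

def esr_intensity(field, derivative):
    # First integration: derivative spectrum dP/dH -> absorption line P(H)
    absorption = cumulative_trapezoid(derivative, field, initial=0.0)
    # Second integration: area under P(H), proportional to chi_spin + chi_imp
    return np.trapz(absorption, field)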
Another very useful piece of information is contained in the temperature dependence of the line width. As seen from Fig. 4(b), the peak-to-peak linewidth ∆H pp shows a characteristic V-like temperature dependence (a strong decrease of ∆H, replaced at T ≈ 13 K by a rapid increase). Such behaviour has been previously observed in both SP materials NaV 2 O 5 and CuGeO 3 [13], the temperature of the minimum being equal to the SP transition temperature.
Let us now discuss the various possibilities to explain this behaviour. Assuming that the chains are well isolated magnetically, which is very reasonable given the geometry, we can think of only two possibilities to explain a vanishing susceptibility, hence the presence of a spin gap, at zero temperature: frustration or dimerization due to an SP transition. Let us analyze both possibilities.
Frustration: It is well known that a coupling J 2 to second neighbours can lead to a spin gap if its ratio to the first-neighbour coupling J 1 is larger than 0.24 [14,15]. However, the presence of a significant coupling between second neighbours would not just open a gap at low temperature, but would modify the temperature dependence of the susceptibility at high temperature as well. We have thus tried to fit χ spin (T) with a significant value of J 2 . The resulting fit is very bad, and in fact much worse between 40 K and 300 K than the fit without J 2 . So this possibility seems unlikely. Besides, if we compare with MgVO 3 , another quasi-1D vanadium oxide which does not show any anomaly at low temperature [8], the chains have the same structure, yet the magnetic measurements performed on MgVO 3 show no indication whatsoever of intra-chain frustration. So it seems more plausible that the difference between the magnetic properties of these systems comes from the interaction between the chains. In fact, the chains are further apart in VOSb 2 O 4 than in MgVO 3 , especially in the a direction, where most of the residual coupling is believed to occur in MgVO 3 . So it is not surprising that typical 1D effects show up in VOSb 2 O 4 and not in MgVO 3 .
Dimerization due to an SP transition: In principle, a S = 1/2 chain is always unstable towards dimerization, but the transition temperature can be strongly reduced by fluctuations of the lattice, especially if the system is very one-dimensional. In the present case, a good fit of the high-temperature susceptibility with that of the S = 1/2 chain [16][17][18] is possible (see Fig. 2), although with an effective g-factor smaller than the actual one measured in ESR. This discrepancy is actually ubiquitous in V 4+ vanadates, whose properties are otherwise quite well understood, and it seems legitimate not to worry too much about it. It might come from factors ranging from a poor determination of the sample mass due to the absorption of water to the presence of some non-magnetic impurity phase.
The next question is whether we indeed have an SP transition. From the susceptibility measurements alone, it is not possible to conclude. But if there is a transition, it seems likely that it does not take place at the onset of the drop, as in CuGeO 3 , but at the temperature where the derivative of the susceptibility is maximal, i.e. 13 K. This scenario is actually favoured by the ESR measurements, since the line-width changes dramatically at the same temperature. However, clear signatures of the transition, such as new Bragg peaks or new phonon lines below 13 K, are not yet available.
If, on the contrary, the system remains fluctuating with a pseudo-gap down to zero temperature, as in the Lee-Rice-Anderson theory of the Peierls transition in metallic systems [4], the susceptibility is expected to decrease smoothly to zero. This would be consistent with our data. The behaviour of the line-width under such circumstances is not known, however, and more work is needed to check whether our data can exclude this possibility.
To summarize, we have presented clear evidence from susceptibility, magnetization and ESR data that a spin gap opens in the quasi-1D vanadium oxide VOSb 2 O 4 . The overall behaviour strongly suggests that this is due to the inherent SP instability of this spin-1/2 chain, but with very strong fluctuations. Given the lack of inorganic materials exhibiting this physics so far, the properties of this system are likely to attract considerable attention in the future. | 2019-04-14T01:57:21.895Z | 2001-03-21T00:00:00.000 | {
"year": 2001,
"sha1": "1df7040e33b253a60393b28de1fa1b0ee499012d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1df7040e33b253a60393b28de1fa1b0ee499012d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
218580213 | pes2o/s2orc | v3-fos-license | The critical role of infection prevention overlooked in Ethiopia, only one-half of health-care workers had safe practice: A systematic review and meta-analysis
Background Effective infection prevention and control measures, such as proper hand hygiene, the use of personal protective equipment, instrument processing, and safe injection practice in healthcare facilities, are essential elements of patient safety and lead to optimal patient outcomes. In Ethiopia, findings regarding infection prevention practices among healthcare workers have been highly variable and uncertain. This systematic review and meta-analysis estimates the pooled prevalence of safe infection prevention practices and summarizes the associated factors among healthcare workers in Ethiopia. Methods PubMed, Science Direct, Google Scholar, and the Cochrane Library were systematically searched. We included all observational studies reporting the prevalence of safe infection prevention practices among healthcare workers in Ethiopia. Two authors independently extracted all necessary data using a standardized data extraction format. Qualitative and quantitative analyses were employed. The Cochran Q test statistic and I2 test were used to assess the heterogeneity of the studies. A random-effects meta-analysis model was used to estimate the pooled prevalence of safe infection prevention practice. Results Of the 187 articles identified through our search, 10 studies fulfilled the inclusion criteria and were included in the meta-analysis. The pooled prevalence of safe infection prevention practice in Ethiopia was 52.2% (95%CI: 40.9–63.4). The highest prevalence of safe practice was observed in Addis Ababa (capital city), 66.2% (95%CI: 60.6–71.8), followed by Amhara region, 54.6% (95%CI: 51.1–58.1), and then Oromia region, 48.5% (95%CI: 24.2–72.8); the least safe practices were reported from the South Nation Nationalities and People (SNNP) and Tigray regions, with a pooled prevalence of 39.4% (95%CI: 13.9–64.8). In our qualitative syntheses, the odds of safe infection prevention practice were higher among healthcare workers who had good knowledge and a positive attitude towards infection prevention. Also, healthcare workers working in facilities with a continuous running water supply, in facilities having an infection prevention guideline, and those who received training had significantly higher odds of safe infection prevention practice. Conclusions Infection prevention practice in Ethiopia was poor, with only half of the healthcare workers reporting safe practices. Further, the study found that there were regional and professional variations in the prevalence of safe infection prevention practices. Therefore, stepping up efforts to intensify the current national infection prevention and patient safety initiative as a key policy direction is strongly recommended, along with more attempts to increase healthcare workers' adherence to infection prevention guidelines.
Background
Infection prevention and control is a set of practices, protocols, and procedures that are put in place to prevent infections associated with the healthcare system. Effective infection prevention and control measures, such as proper hand hygiene, the use of personal protective equipment (PPE), environmental cleaning, instrument processing, safe injection, and safe disposal of infectious wastes in healthcare facilities, maximize patient outcomes and are essential to providing effective, efficient, and quality healthcare services [1][2][3]. Healthcare workers' (HCWs) compliance with these recommended measures is termed safe infection prevention practice.
Worldwide, healthcare-acquired infections (HAIs) affect the quality of care of hundreds of millions of patients every year, contributing to increased morbidity, mortality, and substantial healthcare costs [1,2,4,5]. According to the World Health Organization (WHO), at any point in time, for every hundred hospitalized patients, ten will acquire at least one HAI [3]. The Centers for Disease Control and Prevention (CDC) estimates that 2 million patients suffer from HAIs every year in the United States (US), and nearly one hundred thousand of them die [5], costing as much as 4.25 billion United States dollars [6]. Studies conducted in low-income settings showed that the prevalence of HAIs varies from 5.7% to 19.1%, with a pooled prevalence of 10.1% [7], and that the cumulative incidence ranges from 5.7% to 45.8% [8]. Further, in many cases, adherence to infection prevention recommendations among healthcare workers (HCWs) in low-income settings is generally poor [9][10][11][12][13].
In Ethiopia, the burden of HAIs is a major public health problem with a significant impact on hospitalized patients [14][15][16]. According to the findings of several small-scale studies, a high prevalence of HAIs has been reported from all corners of the country, from 15.4% in northern Ethiopia [15] and 11.4%-19.4% in southwestern Ethiopia [16,17] to 16.4% in central Ethiopia [18]. Although a large proportion of HAIs can be prevented with inexpensive and cost-effective infection prevention and control measures, the available evidence suggests that healthcare facilities in Ethiopia do not have effective infection control programs [9]. In addition, HCWs' compliance with infection prevention and control (IPC) measures is critically low and a potentially common problem in the country [9,19,20].
There is evidence demonstrating the role of HCWs' infection prevention compliance in the reduction of HAIs [21-23]; for example, Sickbert et al., in their study, reported that an improvement in the hand hygiene compliance of healthcare workers by 10% was associated with a significant reduction in overall HAIs [22]. According to the World Health Organization (WHO), it is estimated that effective infection prevention and control (IPC) measures reduce HAIs by at least 30% [21]. In this context, adherence to the recommended infection prevention and patient safety practices is the best way to protect patients, healthcare workers, and communities at large from HAIs, and the long-term solution to reducing the burden of HAIs lies in actions to implement effective IPC measures in healthcare facilities [3,9,10,13]. Despite these facts, in many low-income settings with healthcare systems and resources similar to Ethiopia's, the lack of well-trained HCWs, the lack of infection prevention and control policies, and the lack of technical guidelines consistent with the available evidence, which are essential to provide a robust framework to support good IPC practice, have made the promotion of IPC practices challenging [9,15,[24][25][26][27][28].
To maximize the prevention of HAIs in Ethiopia, there has been growing recognition of the need for safe infection prevention practice at all levels. Since the publication of the second Ethiopian National Infection Prevention Guidelines in 2012 [9], considerable progress has been made in understanding the basic principles, acceptance, and use of evidence-based infection prevention (IP) practices, including the Clean and Safe Hospital (CASH) and Clean Care is Safer Care campaigns, and the Saving Lives through Safe Surgery (SaLTS) initiative. The national Infection Prevention and Patient Safety (IPPS) manual serves as a standardized IP reference manual for healthcare providers in all healthcare delivery systems. Also, it is intended to serve HCWs by providing clear guidance on the provision of standard infection prevention and patient safety practices. The key components in the manual include standard precautions, hand hygiene, personal protective equipment, safe injection practice, instrument processing, and healthcare waste management [9]. Importantly, the existence of the IPC guidelines alone is not sufficient to ensure compliance with and implementation of IPC recommendations, and findings clearly indicate that HCWs' compliance is a prerequisite for successful guideline adoption. Previously conducted primary studies reported inconsistent findings regarding HCWs' infection prevention practice in Ethiopia [19,20,27,[29][30][31][32][33]. For instance, a study done in southeastern Ethiopia showed that only 36.3% of HCWs had safe infection prevention practice [20], compared with 15.0% in southern Ethiopia [33] and 66.1% in central Ethiopia [27], while in northern Ethiopia 42.9% of HCWs had acceptable practice [19]. Although the reporting of such practices is important for the prevention and control of HAIs and for improving the quality of care, the existing studies differed in geographical region and showed remarkable variation in the reported practices. For these reasons, we conducted a systematic review and meta-analysis of observational studies to estimate the pooled prevalence of safe infection prevention practices among HCWs in Ethiopia. We also aimed to summarize descriptively the factors that were associated with safe infection prevention practice.
Search strategy
The protocol for this review was registered in the International Prospective Register of Systematic Reviews (PROSPERO) at the University of York Centre for Reviews and Dissemination (record ID: CRD42019129167, registered on 31 May 2019).
Databases including PubMed/MEDLINE, Science Direct, the Cochrane Library, and Google Scholar were systematically searched. Also, we screened the reference lists of identified articles to detect and identify additional relevant studies to add to this review. Furthermore, to find unpublished papers relevant to this systematic review and meta-analysis, the Addis Ababa University Digital Library was searched. The search for the literature was conducted between the 15th of April and the 31st of May, 2019. The following terms and keywords were applied for the PubMed/MEDLINE search: (infection prevention OR infection control OR standard precaution OR practice) AND (healthcare workers OR health workers OR health personnel OR healthcare providers) AND (health facilities OR hospitals OR public health facilities) AND (Ethiopia), as well as all possible combinations of these terms. For the other electronic databases, we used database-specific subject headings linked with the above terms and keywords used in PubMed. This review is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [34] (S1 File). The search strategy is provided in supplementary documents S2 and S3 Files.
Exclusion criteria
Articles with the following characteristics were excluded from this review:
• Studies whose full data were not accessible even after requests from the authors
• Studies which did not report the overall prevalence of infection prevention practices
The outcome of the study
The pooled prevalence of safe infection prevention practices in Ethiopia was the primary outcome variable of this study; a random-effects meta-analysis model was used to estimate it. The second objective of this study was to summarize descriptively, from the included studies, the factors that were associated with safe infection prevention practices in Ethiopia.
Operational definition
Safe infection prevention practice was defined as healthcare workers' overall compliance with the core components of infection prevention measures, including proper hand hygiene practice, regular utilization of personal protective equipment as required, correct medical equipment processing practice, proper healthcare waste management, tuberculosis infection control, and safe injection and medication practices.
Data extraction
Two investigators (BS and YT) independently extracted the data from the studies included in our analysis, as recommended by the PRISMA guidelines [34]. The data were extracted using a standard data extraction form. The following information was extracted from the selected studies: first author's name, year of publication, type of study design, study setting including region, study population, sample size, sampling methods, magnitude of infection prevention practice, infection prevention components assessed, and response rate of the included studies.
Quality assessment
The assessment of methodological quality was carried out independently by two reviewers using the Newcastle-Ottawa Scale (NOS) [35]. This scale has three sections: (1) selection (maximum 5 stars), (2) comparability between groups (maximum 2 stars), and (3) outcome assessment (maximum 3 stars). In summary, the maximum possible score was 10 stars, which represented the highest methodological quality. The two authors (BS and YT) independently assessed the quality of each original study using the quality assessment tool. Any disagreements during the data extraction were resolved through discussion and consensus. Finally, any article with a score of ≥ 7 out of 10 was included in this systematic review and meta-analysis. Detailed scoring results are described in the supplementary file (S4 File).
Data analysis and synthesis
Data obtained from the studies under review were entered into a Microsoft Excel spreadsheet, and then analyses were done using STATA version 14 statistical software. The characteristics of each primary study are presented in a table. The standard errors for each original study were calculated using the binomial distribution formula. The presence of heterogeneity among the reported prevalences was assessed by computing p-values for the Cochran Q test and the I2 test. Cochran's Q test was used to test the null hypothesis of no significant heterogeneity across the studies [36]. Although there can be no absolute rule for when heterogeneity becomes important, Higgins et al. tentatively suggested low for I2 values between 25%-50%, moderate for 50%-75%, and high for ≥75% [36]. Subgroup analysis was done by the region where the primary studies were conducted, publication year, sample size, sampling method, and type of healthcare facility. Publication bias was assessed using a funnel plot; in the absence of publication bias, the plot resembles a symmetrical large inverted funnel. Egger's weighted regression and Begg's rank correlation tests were used to check for publication bias (P < 0.05 considered statistically significant) [37]. We also conducted a leave-one-out sensitivity analysis to appraise the main studies that exerted an important impact on between-study heterogeneity.
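A minimal sketch of the random-effects pooling described here, using the DerSimonian-Laird estimator of the between-study variance; the analysis was actually run in STATA, so this Python version is illustrative only and numerical details may differ:

import numpy as np

def dersimonian_laird(p, n):
    # p: study prevalences as proportions (numpy array); n: sample sizes
    var = p * (1 - p) / n                  # binomial variance, as in the text
    w = 1 / var                            # fixed-effect (inverse-variance) weights
    p_fe = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fe) ** 2)        # Cochran's Q statistic
    df = len(p) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance
    w_re = 1 / (var + tau2)                # random-effects weights
    est = np.sum(w_re * p) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100      # Higgins' I2 in percent
    return est, est - 1.96 * se, est + 1.96 * se, i2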
Identification of studies
For this review, one hundred and eighty-seven studies were identified in the initial search. Of these, 118 were excluded during the evaluation of titles and abstracts. After applying the inclusion and exclusion criteria, a total of 10 studies were included in the final systematic review and meta-analysis (Fig 1).
Characteristics of included studies
A total of 10 articles [19,20,27,31,33,[38][39][40][41][42] were included in the meta-analysis. The aggregate study sample included 3,510 participants (a mean of 351 and a median of 314 participants per study). The largest study, conducted by Geberemariyam BS et al. in the Oromia region, had 648 participants [20], while the smallest study, by Abreha N et al. in Addis Ababa, had 108 participants [41]. The selected studies were conducted between 2014 and 2019. All the included studies were cross-sectional by design. With regard to regional distribution, about 30% of the studies were conducted in Addis Ababa [27,40,41]. The prevalence of safe infection prevention practices ranged between 15% [33] and 72.5% [41], in the South Nation Nationalities and People (SNNP) Region and Addis Ababa, respectively. Concerning the quality score, all included studies were of reputable methodological quality, scoring at least 7 out of 10 points (Table 1).
Meta-Analysis
Prevalence of safe infection prevention practices. A total of ten studies were included in the meta-analysis. From these studies, the pooled prevalence of safe infection prevention practices in Ethiopia was 52.2% (95%CI: 40.9-63.4). Significantly high heterogeneity among the ten included studies was found (I2 = 98.0%; Q = 453.55, tau-squared = 319.63, p<0.001). Because of this heterogeneity, we used a random-effects meta-analysis model to estimate the pooled prevalence (Fig 2). According to the sensitivity analysis, there was no single influential study that significantly accounted for it (Table 2).
Subgroup analyses
The subgroup analyses of infection prevention practice prevalence. The results of the subgroup analysis showed that the pooled prevalence of safe infection prevention practices was highest in Addis Ababa (capital city), 66.2% (95%CI: 60.6-71.8) [I2 = 51.4%, p = 0.128], followed by 54.6% (95%CI: 51.1-58.1) [I2 = 0.0%, p = 0.825] in the Amhara Region and 48.5% (95%CI: 24.2-72.8) in the Oromia Regional State; the least safe practices were reported from the other regions (SNNP and Tigray), with a pooled prevalence of 39.4% (95%CI: 13.9-64.8). Considerable heterogeneity was also found, [I2 = 97.7%; p<0.001] and [I2 = 98.8%; p<0.001] for the Oromia Regional State and the other regions (SNNP and Tigray), respectively. The prevalence of infection prevention practices was analyzed separately for nurses and for all other healthcare workers. The findings show a higher prevalence of safe infection prevention practices in studies conducted exclusively on nurses than in other healthcare workers (66.4% vs. 48.6%). We also conducted a subgroup analysis based on the study setting. The pooled prevalence of safe infection prevention practice was higher in studies conducted exclusively in hospitals than in those that included health centers (53.5% vs. 49.8%). More details on the prevalence of safe infection prevention practices for subgroups are presented in Table 3.
Publication bias
In the present study, Begg's and Egger's tests were utilized to detect the presence of publication bias. Neither test revealed significant publication bias (p-values of 0.210 and 0.246, respectively) for the prevalence of safe infection prevention practice in Ethiopia (Fig 3). Table 2 shows the sensitivity analysis of the prevalence with each study removed one at a time. To identify the potential source of heterogeneity in the analysis, a leave-one-out sensitivity analysis on the prevalence of infection prevention practice in Ethiopia was employed. The results of this sensitivity analysis showed that the findings were robust and not dependent on a single study. The pooled estimated prevalence of infection prevention practice varied between 56.2 (95%CI: 48.1-64.4) and 50.0 (95%CI: 38.3-61.6) after removing a single study. Moreover, to identify possible sources of variation across studies, a meta-regression model was fitted with geographical region, publication year, and sample size as covariates. Geographical region (p-value = 0.260), publication year (p-value = 0.864), and sample size (p-value = 0.820) were not statistically significant sources of heterogeneity (Table 3).
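The leave-one-out procedure itself is straightforward to sketch, reusing the dersimonian_laird function and numpy import from the sketch above (again illustrative; the published numbers come from STATA):

def leave_one_out(p, n):
    # Re-pool the prevalence after dropping each study in turn;
    # p and n are numpy arrays of study prevalences and sample sizes
    for i in range(len(p)):
        keep = np.arange(len(p)) != i
        est, lo, hi, _ = dersimonian_laird(p[keep], n[keep])
        print(f"omitting study {i}: {est:.3f} (95%CI {lo:.3f}-{hi:.3f})")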
Narrative review
From the ten studies, we summarized descriptively the factors that were associated with safe infection prevention practices in Ethiopia. Factors were categorized into the following three domains: socio-demographic factors (four factors), behavior-related factors (three factors), and healthcare facility-related factors (five factors). An overview of these factors, including the strength of association and the corresponding articles, is presented in Table 4.
Behavioral related factors
Having good knowledge of infection prevention measures was identified as a factor associated with safe infection prevention practices [27]. In the same way, having a positive attitude towards infection prevention measures and awareness of the infection prevention guideline were the most commonly identified factors associated with the aforementioned practice [27,33] (Table 4).
Healthcare facility related factors
As illustrated in Table 4, several healthcare facility-related factors were positively and significantly associated with safe infection prevention practices in Ethiopia. Healthcare workers in facilities with a continuous water supply had higher odds of safe infection prevention practice [27]. Similarly, healthcare workers in facilities with access to infection prevention guidelines in the working department had higher odds of safe practice [19,20,27]. Lastly, factors such as the type of healthcare facility, the current working department, and the completion of formal infection prevention training were among the most important factors associated with this prevention practice [20,27,33,38,42].
Discussion
Infection prevention and patient safety in healthcare settings is a nationwide initiative in Ethiopia that involves the regular implementation of recommended infection prevention practices in every aspect of patient care. Such practices include hand hygiene, injection and medication safety, and healthcare waste management, among others. In Ethiopia, findings regarding the prevalence of safe infection prevention practices have been highly variable. We conducted this systematic review and meta-analysis to estimate the pooled prevalence of safe infection prevention practices among HCWs in Ethiopia. Based on the meta-analysis result, only one-half of the HCWs in Ethiopia had safe infection prevention practices. In our qualitative syntheses, healthcare workers' socio-demographic, behavioral, and healthcare facility-related factors were important variables associated with infection prevention practice. The result of the ten included studies indicated that the pooled prevalence of safe infection prevention practice in Ethiopia was 52.2%. This finding conveys important information and signifies that unsafe practices in healthcare facilities are a major public health concern in Ethiopia. As the burden of HAIs is increasing [14][15][16][17][18], the current suboptimal infection prevention practices have serious implications for both HCWs and patients.
On one hand, contracting an infection while in a healthcare facility due to poor infection prevention practice violates the basic idea that healthcare is meant to make people well. In fact, the risk of contracting HAIs is variable and multifaceted: it depends mainly on a patient's immune status, the local prevalence of various pathogens, and the institutional and individual HCW infection prevention practices. Hence, the need for strong infection prevention programs, both nationally and at the healthcare facility level, cannot be overlooked [29,30,32,43,44]. Unsustained compliance with infection prevention possibly places HCWs at equal, if not higher, risk of contracting bacterial and viral infections such as HIV, HBV, HCV, and MRSA in healthcare facilities [9]. In light of this, studies conducted in Ethiopia have even shown a positive correlation between poor standard precaution practices and a high prevalence of blood and body fluid exposure [20,27,45,46]. For this reason, the Federal Ministry of Health, infection control professionals, healthcare facility administrators, and hospital epidemiologists must pay considerable attention to curbing the current suboptimal infection prevention practices [47,48].
In the subgroup analysis, a variation in HCWs' infection prevention practices across geographical regions was found. Safe infection prevention practices were consistently more frequent in central Ethiopia (Addis Ababa) and less frequent in the Tigray and SNNP regions. A possible reason for these regional differences is that the studies conducted in central Ethiopia were mainly carried out in tertiary and referral hospitals, which are commonly staffed with skilled and experienced healthcare professionals, compared with those in other regions. Another possible explanation for this variation might be differences in environmental infrastructure and in the behavioral characteristics of HCWs. Our findings may, therefore, indicate the need to promote appropriate infection prevention and patient safety practices for HCWs in Ethiopia. Moreover, to address regional variations, there is a strong need to implement readily available, relatively inexpensive, practical, and scientifically proven infection prevention and patient safety practices in the different regions of Ethiopia.
Our meta-analysis also found that the prevalence of safe infection prevention practices differed between nurses and other healthcare workers. A possible explanation for this observed discrepancy may lie in the training and roles of healthcare workers: nurses are engaged in inpatient care and may have a better understanding of infection prevention. Still, this prevalence is suboptimal and of great concern; it is therefore necessary to strive for a better quality of healthcare.
In this review, our summary of the included studies' findings on factors associated with safe infection prevention practice identified three main domains of determinant factors, namely socio-demographic, behavioral, and healthcare facility-related factors. Healthcare workers in facilities with access to infection prevention guidelines and those receiving formal infection prevention training had higher odds of safe infection prevention practice. This may be because health professionals who have adequate knowledge of, and a positive attitude towards, the recommended infection prevention and patient safety practices are more likely to comply with them in healthcare facilities [27]. In this sense, the current systematic review suggests that it may be more effective to improve HCWs' infection prevention practices through regular in-service training [49]. Furthermore, a holistic approach that addresses both the behaviors of HCWs and the facility conditions essential for effective infection prevention and control should be adopted, since Infection Prevention and Patient Safety recommendations can easily be implemented if everyone in the health service delivery system, from policy makers to healthcare providers at the facility level, collaborates [9,27,31].
Finally, although there were similar trends in healthcare workers' infection prevention and control practice in many African countries, we would suggest caution in applying the present results to countries located in other regions of Africa, as the healthcare system, healthcare worker training, and government policy may affect HCWs' infection prevention compliance.
Limitations of the study
This systematic review and meta-analysis has several limitations. The first limitation is that only English-language articles were included in this review. Second, all of the studies included in this review were cross-sectional; as a result, the outcome variable might be affected by other confounding variables. Third, this meta-analysis represented only studies reported from four regions of the country; this irregular distribution of studies limits the generalizability of the findings. Fourth, the majority of the studies included in this review had relatively small sample sizes, which could have affected the estimated reports of safe infection prevention practice. Fifth, a small number of studies were included in the subgroup analyses, which reduces the precision of the estimates, and considerable heterogeneity was identified among the studies. Sixth, almost all studies included in this meta-analysis were based on self-reported data from healthcare providers, which tend to overestimate compliance and limit the strength of the findings. Lastly, most of the included primary studies did not cover a good range of components of infection prevention practice; we therefore strongly recommend caution while interpreting the estimated pooled prevalence.
Conclusions
Infection prevention practice in Ethiopia was poor, with only half of the healthcare workers reporting safe practices. There were regional and professional variations in the prevalence of safe practices; it is therefore important for all HCWs to adhere to the existing infection control guidelines by embedding them in everyday practice. It is also imperative for healthcare administrators to ensure the implementation of infection prevention and patient safety programs in all healthcare settings. Our study highlights the need for the Ethiopian Federal Ministry of Health to step up efforts to intensify the current national infection prevention and patient safety initiatives. | 2020-05-11T19:01:44.400Z | 2020-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "b8c637049a637a60e759cbb4a4ee15e8861cd7eb",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0245469&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3d8dd9fa388d48d71bc9b5cc58e089f005b02fff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204831724 | pes2o/s2orc | v3-fos-license | Development of Coffee Biochar Filler for the Production of Electrical Conductive Reinforced Plastic
In this work we focused our attention on an innovative use of residual food biomasses. In particular, we produced biochar from coffee waste and used it as a filler in epoxy resin composites with the aim of increasing their electrical properties. The electrical conductivity of the biochar and of the biochar-based composite was studied as a function of the applied pressure. The results obtained were compared with carbon black and carbon black composites. We demonstrated that, even though the coffee biochar had lower conductivity than carbon black in powder form, it yielded composites with better conductivity than carbon black composites. In addition, the mechanical properties of the composites were tested, and they generally improved with respect to the neat epoxy resin.
Introduction
Anthropogenic waste stream management is one of the main unresolved problems of industrialized societies [1,2]. In the food waste sector, coffee residuals could be considered not only a waste material but a resource. Recently, Christoph Sänger [3] reported that worldwide coffee production was 159.7 million bags in crop year 2017/18 (about 9.6 MTons), with a mean of 5 kg/capita per year in traditional markets (Germany, Italy, France, USA and Japan) and increasing consumption in emerging markets (South Korea, Russia, Turkey and China). The coffee waste stream becomes a relevant problem not only after consumption but also during the wet processing of coffee beans, when 1 ton of fresh berries results in only about 400 kg of wet waste pulp. Several solutions have been proposed to deal with coffee waste, such as the production of biogas [4] and flavours [5], use as a filler in ceramics [6] or as an absorbent for the removal of basic dyes from aqueous solutions [7]. Coffee wastes have also been used as feedstock for pyrolytic conversion, producing hydrogen-rich gas [8] and fuel-quality biochar [9]. Biochar has been used not only as a solid fuel but also as a high performance material [10,11], as a flame retardant additive [12,13], for electrochemical [14] and energy storage applications [15] and for the production of composites [16][17][18][19].
Traditionally, in the realm of carbon fillers in polymer composites, carbon black (CB) plays the main role, especially in the automotive field, with an estimated consumption of 8.1 MTon/year according to data released by the International CB Association [20]. CB has been used for producing conductive composites [21] but, as recently reported by Quosai et al. [22], coffee-based biochar also shows remarkable conductive properties. Furthermore, coffee biochar production has an indisputable advantage compared with CB: coffee biochar production uses a food waste stream, while oil-based feedstock is required for CB production. This decreases the environmental impact of the production process [23][24][25].
Among different polymers, in this work we focused our attention on epoxy resins doped with these two carbon fillers. As is well known, epoxy resin is a thermoset polymer widely applied in the field of coatings [26], adhesives [27], casting [28], potting [29], composites [30], laminates [31] and encapsulation of semiconductor devices [32]. Epoxy resins are used intensively because of their peculiar properties such as high strength, good stiffness, good thermal stability and excellent heat, moisture and chemical resistance [33,34]. Another non-negligible advantage of epoxy resin is the possibility of dispersing additives, such as micro-encapsulated amines [35,36], into the cross-linked polymeric matrix; these can be released after material failure, promoting the self-healing process of the epoxy composite [37].
In the field of composite materials, the production of conductive reinforced plastic materials has attracted increasing interest in the last few decades [38,39]. Large-scale application fields deserve particular attention; for example, conductive epoxy resin has large-scale applications in the field of coatings and adhesives [40]. In these large-scale applications, filler cost is a crucial issue. Epoxy resins have been used as a polymeric host for plenty of carbonaceous materials for the production of conductive reinforced materials [41][42][43][44], but the cost of the carbon filler has to be taken into account. High-cost carbon fillers such as carbon nanotubes and graphene are problematic for large-scale applications. These carbon fillers increase the electrical and mechanical properties of the host polymer matrix [45][46][47][48] but are not a suitable choice for industrial-scale production. This is mainly due to the high cost, up to 300 k$/kg [49], and the well-known problem of low plant productivity [50]. Thus, low-cost carbon fillers that, unlike CB, are not derived from fossil fuels are a topic of relevant interest.
In this study, we investigated the use of biochar derived from the pyrolytic conversion of the coffee waste stream as a low-cost carbon filler obtained from recycled materials. The results were compared with CB-based composites. Mechanical properties were also investigated for full composite characterization.
Carbonaceous Materials Preparation and Characterization
Exhausted coffee powder was selected as a real case study. It was collected from Bar Katia (Turin, Italy), supplied by Vergnano (Arabica mixture). Coffee was collected and dried at 105 °C for 72 h. Coffee samples (100 g) were pyrolyzed using a vertical furnace and a quartz reactor, with a heating rate of 15 °C/min, and kept at the final temperature (400, 600, 800 or 1000 °C) for 30 min in an argon atmosphere. Samples were named C400, C600, C800 and C1000, respectively. Biochar was ground using a mechanical mixer (Savatec BB90E) for 10 min in order to decrease the particle size. Commercial CB (VULCAN® 9 N115) was used for comparison with the coffee biochar.
Ash contents of coffee and carbon-based materials (biochars and CB) were evaluated using a static furnace set at 550 or 800 °C, respectively, for 6 h.
All samples were investigated from a morphological point of view using a field emission scanning electron microscope (FE-SEM, Zeiss Supra™ 40, Oberkochen, Germany). The microscope was equipped with an energy dispersive X-ray detector (EDX, Oxford Inca Energy 450, Oberkochen, Germany) that was used to explore the carbon composition of the biochars.
The particle size distribution of the carbon fillers was evaluated using a laser granulometer (Fritsch Analysette 22, Idar-Oberstein, Germany) after dispersion in ethanol and sonication in an ultrasonic bath for 10 min.
Composites Preparation
Epoxy composites containing biochar derived from coffee and commercial CB were produced using a two-component bisphenol A (BPA) diglycidyl resin (CORES epoxy resin, LPL). The carbonaceous fillers (15 wt. %) were dispersed into the epoxy monomer using a tip ultrasonicator (Sonics Vibra-cell) for 15 min. After the addition of the curing agent, the mixture was ultrasonicated for another 2 min and left in the moulds for 16 h at room temperature. A final thermal curing was performed using a ventilated oven (I.S.C.O. Srl "The scientific manufacturer") at 70 °C for 6 h.
Electrical Characterization
The measurement set-up was derived from Gabhi et al. [51] and is sketched in Figure 1a for fillers and Figure 1b for composites. The instrument was composed of two solid copper cylinders, 30 mm in diameter and 5 cm in length, encapsulated in a hollow Plexiglas cylinder with a nominal inner diameter of 30 mm in the case of filler electrical characterization. In this configuration, the inner diameter was slightly larger so that it was possible to force the copper rods inside the Plexiglas cavity, and the upper rod could slide inside the cylinder during the measurement. This arrangement created an internal chamber between the two cylinders, where the carbon powder could be inserted. In the case of composites, the Plexiglas cylinder was removed and the sample was positioned between the aligned copper cylinders. The electrical resistance of the powders or composites was measured at increasing loads (up to 1500 bar) applied by a hydraulic press (Specac Atlas Manual Hydraulic Press 15T). Electrically insulating sheets were placed between the conductive cylinders and the load surfaces in order to ensure that the electrical signal passed through the sample. The resistance of the carbon fillers was measured using an Agilent 34401A multimeter.
Composites Mechanical Characterization
Composites containing carbonaceous materials were produced in a dog-bone shape according to the ASTM 638 procedure. Samples were tested using a mechanical stress test (MTS) machine (MTS Q-test10) in tensile test mode until the break point. Data were analysed using self-developed software written in Matlab.
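As a rough illustration of the kind of post-processing such software performs, the following Python sketch extracts the tensile metrics reported later in this paper (Young's modulus, ultimate tensile strength, maximum elongation and toughness) from a stress-strain curve. This is a minimal sketch under assumed conventions (strain as a dimensionless fraction, stress in MPa, a hypothetical 0.5% elastic-fit window), not the authors' actual Matlab code.

```python
import numpy as np

def tensile_metrics(strain, stress, elastic_window=0.005):
    """Basic tensile metrics from a stress-strain curve recorded to break.

    strain: 1-D array, dimensionless (0.01 == 1%); stress: 1-D array, MPa.
    elastic_window: assumed strain range for the linear (elastic) fit.
    """
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)

    # Young's modulus: slope of the initial linear region (MPa).
    mask = strain <= elastic_window
    young_modulus = np.polyfit(strain[mask], stress[mask], 1)[0]

    uts = stress.max()                    # ultimate tensile strength (MPa)
    max_elongation = 100.0 * strain[-1]   # elongation at break (%)

    # Toughness: area under the curve; MPa x (dimensionless) == MJ/m^3.
    toughness = np.trapz(stress, strain)
    return young_modulus, uts, max_elongation, toughness
```

Note that the units work out directly: integrating stress in MPa over dimensionless strain yields MJ/m³, the unit used for toughness below.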
Data Analysis
Statistical analyses were based on t-tests with a significance level of 0.05 (p < 0.05) and were carried out using Excel™ software (Microsoft Corp.) and its "data analysis" tool.
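For reference, the same two-sample comparison can be sketched in Python with SciPy; the replicate values below are hypothetical placeholders, not measurements from this study.

```python
from scipy import stats

# Hypothetical replicate ash contents (wt. %) for two samples.
c800_ash = [8.3, 9.1, 9.4]
c1000_ash = [9.0, 9.1, 9.2]

# Two-sample t-test at the significance level used here (0.05); samples
# whose difference is not significant would share the same letter.
t_stat, p_value = stats.ttest_ind(c800_ash, c1000_ash)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, same letter: {p_value >= 0.05}")
```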
Carbonaceous Materials Characterization
Pyrolysis of spent coffee grounds proceeded according to the mechanism reported by Setter et al. [52]. The main mechanisms occurring during the degradative processes were those related to the decomposition of the small lignin fraction [53] and of the most abundant polysaccharides (i.e., cellulose and hemicellulose) [54], with the formation of bio-oils rich in anhydrosugars, furans and acetic acid with traces of aromatics [55][56][57].
The ash contents of the feedstock and carbonaceous materials were preliminarily investigated and are summarized in Figure 2. The ash content of neat coffee was 1.70 ± 0.14 wt. % and it increased with temperature, reaching a value of around 9 wt. % in the case of C800 (8.92 ± 0.61 wt. %) and C1000 (9.09 ± 0.09 wt. %). As expected, CB showed a very low ash content (0.07 ± 0.01 wt. %), according to Medalia et al. [58] mainly as oxides. The ash content increment at higher temperatures was attributable to the advanced pyrolytic degradation of the organic matrix, leading to the concentration of the inorganic residue [59], which did not undergo any temperature-induced degradation.
Figure 2. Ash contents of neat coffee, carbon black (CB) and coffee biochar samples heated at 400, 600, 800 and 1000 °C (C400, C600, C800 and C1000, respectively). Columns marked with different letters were significantly different (p < 0.05).

The effect of the pyrolytic temperature on biochar morphology was studied using FE-SEM, as shown in Figure 3. Neat coffee displayed flaked, collapsed structures (Figure 3a,b) that were retained by C400 after pyrolysis at 400 °C (Figure 3e,f). The increase of temperature to 600 °C led to the formation of porous structures with average diameters close to 30 μm, separated by carbon lamellae with a thickness around 1 μm (Figure 3g,h). At 800 °C, the recovered biochar lost this structure due to the massive release of volatile organic matter during the overall pyrolytic process, which induced the collapse of the carbonaceous structures together with an improved grindability [60]. At 1000 °C, the increased temperature allowed the massive formation of carbon-carbon bonds that promoted the stabilization of the porous architecture with nanoscale lamellar structures. CB showed a typical, highly aggregated spherule-based shape with an average single-particle diameter around 50 nm.
The organic component of the relevant carbonaceous materials was also analysed using both FT-IR and Raman spectroscopy. Among the carbonaceous materials, we report neat coffee, C1000 and CB; results are shown in Figure 4. The FT-IR spectrum of neat coffee showed the broad band of νO-H (3300-3500 cm⁻¹), the bands of saturated νC-H (2850-2950 cm⁻¹), νC=O (1710-1741 cm⁻¹) due to the carboxylic functionalities, νC=C (1540-1638 cm⁻¹) due to the presence of aromatic structures, saturated and unsaturated δC-H (1370-1440 cm⁻¹), saturated νC-C (1243 cm⁻¹), νC-O (1030-1148 cm⁻¹) and out-of-plane δO-H below 700 cm⁻¹. These bands clearly identified a lignocellulose-derived matrix with a massive presence of polysaccharides and aromatics. C1000 did not show any of the characteristic bands of the organic matrix but showed an envelope of bands below 1800 cm⁻¹ due to carbon skeletal movements. In contrast, CB showed low band intensity below 1000 cm⁻¹ due to the lower variety of carbon structures embedded in the particles.
Raman spectra normalized on the G peak are shown in Figure 5. Coffee biochars had the typical profiles of amorphous materials [61], in contrast to CB, which was more graphitic. The graphitic structure of CB could be recognized by the deep gorge between the D and G peaks and their sharper shape. An increase of the ID/IG ratio was evident for biochars moving from a pyrolytic temperature of 400 to 1000 °C. This increase of the ID/IG ratio could be ascribed to the progressive loss of residual functional groups with increasing temperature. This observation was also supported by the decrease of fluorescence [62]. Due to the loss of these weak interactions, the biochar underwent an appreciable disorganization together with aromatic structure formation, in particular up to 600 °C, without completing a proper graphitization process, which occurs at higher temperature [63].
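A crude estimate of the ID/IG ratio can be sketched as below; real analyses usually fit Lorentzian or Gaussian bands after baseline subtraction, so the fixed spectral windows used here are simplifying assumptions, not the procedure of this paper.

```python
import numpy as np

def id_ig_ratio(wavenumber, intensity,
                d_window=(1300.0, 1400.0), g_window=(1500.0, 1650.0)):
    """Peak-maximum I_D/I_G estimate inside assumed windows (cm^-1)."""
    wavenumber = np.asarray(wavenumber, dtype=float)
    intensity = np.asarray(intensity, dtype=float)

    d_mask = (wavenumber >= d_window[0]) & (wavenumber <= d_window[1])
    g_mask = (wavenumber >= g_window[0]) & (wavenumber <= g_window[1])
    return intensity[d_mask].max() / intensity[g_mask].max()
```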
The evolution of the biochar structures with temperature could be monitored through Raman spectroscopy according to Ferrari et al. [61]. Accordingly, the D peaks (Figure 5) showed wavenumbers close to 1350 cm⁻¹, which is typical of the transition from amorphous carbon to nanocrystalline graphite. At the same time, the biochar G peaks showed wavenumbers close to 1580 cm⁻¹, with the exception of C1000, which showed a G peak at 1600 cm⁻¹ due to the high amount of nanocrystalline domains not yet rearranged into an ordered structure [64].
The above-mentioned considerations were also supported by EDX analysis, which showed that the carbon content significantly increased from C400 to C600-C1000 while the oxygen content decreased, as shown in Figure 6. The carbon contents of C600-C1000 were not significantly different from that of CB, even though CB showed a more ordered structure. This supports the hypothesis that the driving force of the enhanced biochar conductivity is the reorganization of nanocrystalline domains and not merely the carbon content. Traces of Mg, P, K and Ca were also detected. (Figure 6: columns marked with different letters are significantly different, p < 0.05.)
Electrical Characterization of Carbonaceous Filler and Composites
The set-up shown in Figure 1a was used for the electrical characterization of the biochar powders. Around 3 g of carbonaceous powder, which creates a distance of a few millimetres between the copper cylinders, was positioned in the chamber. After the closure of the chamber, a pressure was applied with the aim of compacting the powder. The pressure range was from 0 to 1500 bar (steps of 250 bar). For each step, the stabilized value of the resistance was registered, as was the distance between the copper cylinders. The same procedure was repeated for composites a few millimetres thick. The carbonaceous powders and composites decreased their resistance during compression until they reached a plateau at high pressure. The decreasing resistance could be correlated with the decreasing space between the carbon particles, as sketched in Figure 7. In the case of powders, the voids among the particles collapsed with the production of a compact carbon agglomerate, as shown in Figure 7a. In the case of composites, Figure 7b sketches the mechanism, in which the flow of the polymer chains lets the carbon particles rearrange. The resistance value R, together with the surface S and the distance l between the copper surfaces, was used in Ohm's law (σ = l/RS) to evaluate the conductivity σ. The conductivity of the carbon powders and composites was evaluated following this procedure: (1) A starting measurement was taken without any sample in order to determine the resistance of the system itself; this value was subtracted from the resistance read with the samples. (2) The same quantity of carbon powder (CB or biochar) was positioned between the copper cylinders and kept in place by the hollow Plexiglas cylinder; the measurement was repeated several times in order to obtain a reliable value. (3) Composites were positioned between the copper cylinders; in this case the hollow Plexiglas cylinder was not necessary, and the conductivity was measured on different sample portions.
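The conversion from the measured resistance to conductivity described above can be sketched as follows; the function and variable names are ours, and the baseline subtraction corresponds to step (1) of the procedure.

```python
import math

def conductivity(r_measured_ohm, r_system_ohm, thickness_m, diameter_m=0.030):
    """Conductivity (S/m) of a sample between the two copper cylinders.

    r_measured_ohm: resistance read with the sample in place.
    r_system_ohm:   baseline resistance of the empty set-up (step 1).
    thickness_m:    distance l between the copper cylinders.
    diameter_m:     electrode diameter (30 mm rods in this set-up).
    """
    r_sample = r_measured_ohm - r_system_ohm   # subtract system resistance
    area = math.pi * (diameter_m / 2.0) ** 2   # contact surface S
    return thickness_m / (r_sample * area)     # sigma = l / (R * S)

# Example: a 3 mm thick composite reading 50 kOhm on a 0.2 Ohm set-up.
print(conductivity(5.0e4, 0.2, 3.0e-3))
```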
Preliminary results are shown in Figure 8, presenting the conductivity of the biochar powders (red line) and the percolation curves of the related composites. C400 did not show an appreciable conductivity, while an increment of the pyrolytic temperature to 600 °C induced a conductivity of up to 0.02 S/m. Further increments of the processing temperature to 800 and 1000 °C led to conductivities of up to 0.04 and 35.96 S/m, respectively. This remarkable increment of conductivity between 800 and 1000 °C was due to the enlargement of the aromatic regions formed as a consequence of high-temperature carbonization [64]. This deeply affected the electrical behaviour of the related composites. Consequently, composites containing C400 and C600 were not conductive over the whole range of filler percentages investigated. CB composites were not conductive up to a filler concentration of 15 wt. %, reaching a conductivity of 5.4 × 10⁻⁸ S/m with a filler loading of 20 wt. %. C1000 composites showed the best performance, with a detectable electrical conductivity at 15 wt. % of filler and a conductivity of 2.02 S/m at a filler loading of 20 wt. %. In accordance with these data, the electrical properties of C1000 and C1000-containing composites were studied under a wide range of static pressures and compared with those of CB and CB composites, as shown in Figure 9. CB powder reached a conductivity around 1700 S/m, while under the same conditions C1000 reached a conductivity of 300 S/m. Composites containing CB and C1000 were conductive, but the results showed a different trend compared with the respective powders. CB 15 wt. % reached the value of 4 × 10⁻³ S/m, and its conductivity showed an influence of the applied pressure during the first compression movement. C1000 15 wt. % reached 10⁻² S/m, an increment of around one order of magnitude compared with CB 15 wt. %. This difference was more relevant for a filler concentration of 20 wt. %. In this case, the conductivity of the CB-based composites dropped down to 10⁻⁵-10⁻⁴ S/m, in contrast to C1000, which reached ~10 S/m. The higher conductivity of the coffee biochar composites could be due to a more uniform filler dispersion inside the epoxy resin.
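Percolation curves of this kind are often described by the classical statistical-percolation scaling law σ = σ₀(φ − φc)ᵗ above the threshold φc; the sketch below only illustrates that functional form, and the parameter values are assumptions, not fits to the data of this work.

```python
import numpy as np

def percolation_sigma(phi, sigma0, phi_c, t):
    """sigma = sigma0 * (phi - phi_c)^t above the threshold, ~0 below it."""
    above = np.clip(np.asarray(phi, dtype=float) - phi_c, 0.0, None)
    return sigma0 * above ** t

# Illustrative parameters only (not fitted to the measured composites).
print(percolation_sigma([0.10, 0.15, 0.20], sigma0=300.0, phi_c=0.12, t=2.0))
```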
The dispersion of the filler inside the epoxy matrix was investigated through FE-SEM (Figure 10) after the samples were cryo-fractured using liquid nitrogen; composites with a filler loading of 15 wt. % were compared because of their similar conductivity. CB-containing composites showed dark and clear areas (Figure 10a) with different compositions: the clear ones were rich in CB aggregates (Figure 10b,c), while the darkest were poor in them (Figure 10d). C1000-containing composites showed smooth surfaces with holes (Figure 10e) due to the expulsion of embedded C1000 particles during the fracturing, as clearly shown in Figure 10g. Particle size analysis (Figure 11) showed clearly that C1000 was composed of two particle populations, one around 100 μm and one around 20 μm. Considering the average size of the C1000 particles in the composites, it was reasonable to assume that the bigger ones underwent disruption during the ultrasonication, forming small, well-dispersed particles. The CB particle sizes also showed that it would be more appropriate to speak of CB aggregates instead of single particles [66]. The aggregates are also evident in Figure 10c, where the single CB particles were less than 100 nm but created agglomerates that even the particle size analysis (Figure 11) was not able to resolve.
Composites Mechanical Characterization
With the aim of confirming the mechanical consistency of the samples, stress-strain curves were investigated for the 15 wt. % CB and C1000 composites and compared with the neat resin. Mechanical tests on dog-bone-shaped samples are summarized in Figure 12.
According to the data reported in Figure 13, the maximum elongation of the neat resin (3.50% ± 0.64%) was the highest compared with those of the C1000- and CB-containing composites (1.16% ± 0.09% and 1.63% ± 0.08%, respectively). The neat resin also showed a remarkably higher toughness (0.48 ± 0.03 MJ/m³) compared with the composites, which showed values not significantly different from each other, close to 0.18 MJ/m³. The Young's modulus (YM) showed a significant difference between the C1000-based composites (3258 ± 273 MPa) and the CB ones (1940 ± 163 MPa). These last values were quite close to those of the neat resin (1510 ± 160 MPa), and a similar trend was observed for the ultimate tensile strength, with values of the CB composites not significantly different from those of the neat resin (both close to 19 MPa) and higher values for the biochar-based composites (up to 24.9 ± 1.5 MPa).
Figure 13. Summary of (a) ultimate tensile strength, (b) Young's modulus, (c) toughness and (d) maximum elongation of neat resin, biochar and carbon black-based composites. Columns marked with the same letters were not significantly different (p < 0.05).
The composite behaviour observed during the mechanical tests highlighted the different interactions of the two carbonaceous fillers with the epoxy matrix, with an amplification of the filler-resin interaction and, in the case of the biochar-based composites, increased brittleness and reduced elongation.
As reported by Chodak et al. [39] for CB-containing poly(propylene) composites, the formation of a diffuse particle network is detrimental to the mechanical properties. The same behaviour was observed in the CB-based composites, which presented a decrement of the ultimate tensile strength compared with the C1000 ones. The C1000 composites were very close to the percolation threshold (Figure 8), and this induced a very relevant decrement of the maximum elongation. Working below the percolation threshold allowed the preservation of some of the appealing properties of a brittle resin (i.e., high Young's modulus and ultimate tensile strength) together with the magnification of the electrical conductivity.
Conclusions
The coffee waste stream was efficiently used as feedstock for pyrolytic conversion at different temperatures. The effect of process temperature on the properties of biochar was investigated and it was observed that further increments of temperature improved the porous stability and conductivity of the material. This phenomenon was probably due to both the formation of new C-C bonds and to the rearrangement of graphitic and quasi-graphitic domains formed during pyrolysis as shown by Raman characterization.
The most relevant result of this study was that, even though the neat biochar produced at 1000 °C showed lower conductivity than CB, when dispersed in a composite the electrical properties of the coffee-biochar composites were some orders of magnitude higher than those of the composites containing CB. In the case of 20 wt. % of C1000, the composites showed a conductivity four orders of magnitude higher than that of composites containing 20 wt. % of CB. This could be ascribed to the uniform dispersion of the coffee biochar, in contrast to CB, which creates agglomerations. These agglomerations induced a non-uniform structure in the CB-containing composites. The mechanical properties of the composites with coffee biochar were verified, and they were not compromised with respect to the composites containing CB, showing better UTS and YM. Both materials were more brittle than the neat resin, but C1000 showed some of the properties of high-performance resins. The mechanical properties also showed a direct correlation with the filler dispersion: where the filler dispersion was uniform, the mechanical performance was improved.
A new era could be at the door for carbon fillers in polymer composites. Considering the sustainability of coffee biochar production, the results reported show how biomass-derived carbon could be a sound replacement for oil-derived carbon fillers such as CB. | 2019-10-17T08:55:38.923Z | 2019-10-16T00:00:00.000 | {
"year": 2019,
"sha1": "12a5a19d1d3a678c9477241bf3b0c04ac704c072",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/12/1916/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94187b4de4fcb79773354658a5978679ee135032",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
248778017 | pes2o/s2orc | v3-fos-license | Paradigm Shift in Prostate Cancer Diagnosis: Pre-Biopsy Prostate Magnetic Resonance Imaging and Targeted Biopsy
With regard to the indolent clinical characteristics of prostate cancer (PCa), the more selective detection of clinically significant PCa (CSC) has been emphasized in its diagnosis and management. Magnetic resonance imaging (MRI) has advanced technically, and recent international cooperation has provided a standardized imaging and reporting system for prostate MRI. Accordingly, prostate MRI has recently been investigated and utilized as a triage tool before biopsy to guide tissue sampling to increase the detection rate of CSC beyond the staging tool for patients in whom PCa was already confirmed on conventional systematic biopsy. Radiologists must understand the current paradigm shift for better PCa diagnosis and management. This article reviewed the recent literature, demonstrating the diagnostic value of pre-biopsy prostate MRI with targeted biopsy and discussed unsolved issues regarding the paradigm shift in the diagnosis of PCa.
INTRODUCTION
[5]. Second, the sampling power may decrease as prostate volume increases in association with underlying benign prostatic hypertrophy. Finally, overdiagnosis of low-risk PCa is an important issue because serum PSA levels are not specific to clinically significant PCa (CSC). Low-risk PCa, characterized by an early stage and a low pathologic grade, has a substantially excellent prognosis [6,7]. Therefore, active management can be clinically doubtful for silent PCa, especially in elderly men with a short life expectancy; this is because radical prostatectomy, which is the standard therapeutic option for localized PCa, yields morbidity [8]. The conventional diagnostic pathway has been reported to be ineffective in the selective detection of CSC, despite its diagnostic ability to detect overall PCa. CSC, characterized by factors such as the volume, pathologic grade, and local extent of the index lesion, should be actively managed owing to its aggressiveness and poor prognosis [9]. CSC was defined based on surgical specimen findings as follows: tumor volume ≥ 0.5 cm³, Gleason score > 6, or presence of extraprostatic extension [10]. Among the variable criteria for CSC, tumors of International Society of Urological Pathology (ISUP) grade group 2 (i.e., Gleason score 3 + 4) or higher constitute the most common and important criterion for CSC in both biopsy and prostatectomy specimens [11].
Over the last twenty years, prostate magnetic resonance imaging (MRI) has advanced technologically and has been widely investigated for PCa detection, localization, and characterization. With increasing knowledge of prostate MRI, and to compensate for the limitations of conventional diagnostic strategies, several researchers have suggested the potential of prostate MRI before biopsy in PCa diagnosis. However, heterogeneity in imaging protocols and interpretive methods has been recognized as an obstacle to utilizing prostate MRI beyond cancer staging. Recently, global collaboration has attempted to standardize the protocol and interpretation of multiparametric prostate MRI (mpMRI), and these efforts have brought about promising results, such as the formation of international guidelines termed the Prostate Imaging Reporting and Data System (PI-RADS) [12]. Positive results may induce a change in PCa diagnosis, with prostate MRI used before biopsy to identify specific target lesions or even to determine whether to perform a biopsy (Fig. 1B). The current paradigm shift in PCa diagnosis aims at the following points: improvement in CSC detection, reduction in the number of unnecessary biopsies or biopsy cores, and prevention of the over-detection of clinically insignificant PCa. The purpose of this review is to introduce the achievements of current investigations in association with pre-biopsy MRI and MRI-targeted biopsy for PCa diagnosis and to provide insight into the unsolved issues of this paradigm shift.
Pre-Biopsy MRI and MRI-Targeted Biopsy in Patients with Prior Negative Biopsy
Persistent or even increased PSA levels present a clinical dilemma in patients with prior negative biopsy results. Conventionally, a repeated systematic TRUS-guided biopsy is the only diagnostic approach. However, repeated systematic biopsies yield a decreasing PCa detection rate at each subsequent sampling. In a previous study of 2526 patients, the cancer detection rates of serial systematic biopsies after the initial biopsy were 17%, 14%, 11%, and 7%, respectively [13]. Similarly, in another study, the detection rates of PCa on the first and second repeat biopsies in 1051 men with a prior negative biopsy were 10% and 5%, respectively [14]. Therefore, previous guidelines recommend at least a single session of TRUS-guided biopsy for patients with an initial negative biopsy result. To improve the detection rate of PCa in repeat biopsies, one study performed saturation biopsies with a markedly increased number of cores [15]. In this study, the PCa detection rate was 34% on the first repeat biopsy in men with a prior negative biopsy. However, a limitation of saturation biopsy is the necessity for general anesthesia or conscious sedation, which is not mandatory in conventional biopsy. Furthermore, the increased detection rate of PCa is primarily attributed to the increased detection of clinically insignificant PCa.
Although prostate MRI has been used for staging PCa pathologically confirmed on TRUS-guided systematic biopsy, technical development and accumulated data on prostate MRI have enabled the utilization of MRI before re-biopsy, potentially solving the limitations of both repeated systematic biopsy and saturation biopsy. Hambrock et al. [16] reported a superior PCa detection rate of MRI with targeted biopsy compared with systematic TRUS-guided biopsy in men with a prior negative biopsy. In the study by Portalez et al. [17], a higher proportion of targeted cores than of random systematic cores were positive for PCa (36.3% vs. 4.9%) in patients with a prior negative biopsy. In another study, the positive biopsy yield was higher in MRI-prompted biopsies than in systematic samplings (92% vs. 23%), and 77% of tumors were exclusively detected in MRI-prompted zones [18]. Furthermore, the authors demonstrated that the anterior and apical regions contained most of the tumors that were missed by prior systematic TRUS-guided biopsy. In a study by Sonn et al. [19], more CSC was detected on targeted biopsy than on systematic biopsy, and the degree of suspicion on MRI was the most powerful predictor of CSC in men with a prior negative biopsy. In a systematic review and meta-analysis of the abovementioned studies, MRI-targeted biopsy improved both overall PCa and CSC detection rates (relative sensitivity, 1.62 and 1.22, respectively) compared with systematic TRUS-guided biopsy in men with a prior negative biopsy [20]. The added value of MRI-targeted biopsy is related to tumor location, as tumors in certain locations can easily escape contact with the biopsy needles during systematic biopsy. Based on accumulating data, recent international guidelines recommend pre-biopsy MRI and targeted biopsy in patients with a prior negative biopsy [21][22][23][24]. Therefore, pre-biopsy MRI should be considered in repeat biopsy cases if both quality-controlled prostate MRI and experienced operators are available for targeted biopsies.
Pre-Biopsy MRI and MRI-Targeted Biopsy in Biopsy-Naïve Patients
In biopsy-naïve patients with clinically suspicious PCa, there has been growing interest in adopting pre-biopsy MRI and MRI-targeted biopsy (Table 1). Panebianco et al. [25] reported the results of pre-biopsy MRI and targeted biopsy in a randomized prospective analysis of 1140 men who were initially evaluated for PCa. In their study, the proportion of men with an overall PCa diagnosis was higher among those randomized to the MRI-first strategy than among those randomized to standard TRUS-guided biopsy. However, another prospective randomized study by Tonttila et al. [26] did not find a significant difference between the pre-biopsy MRI group and the standard TRUS-guided biopsy group among 113 biopsy-naïve patients, although the pre-biopsy MRI group showed a slightly higher detection rate for both overall PCa and CSC than the standard TRUS-guided biopsy group. Similarly, there was no significant difference in the detection rates of the two biopsy strategies for overall PCa and CSC in a study by Baco et al. [27]. However, the authors emphasized the utility of MRI-targeted biopsy because the majority of CSCs (87%) were detected by targeted biopsy. In a more recently published study by Porpiglia et al. [28], the diagnostic pathway using pre-biopsy MRI was stated to be superior to the standard pathway in detecting both overall PCa and CSC. This topic was further investigated in multicenter-based studies, such as a prospective study including 626 biopsy-naïve men by van der Leest et al. [29], where the MRI pathway (i.e., pre-biopsy MRI with MRI-targeted biopsy) showed a detection rate for CSC similar to that of the standard pathway (25% vs. 23%). The analysis showed that the MRI pathway enabled biopsy avoidance in 49% of the enrolled patients, at the cost of missing CSC in 4% [30]. In another prospective study including 576 men without a previous biopsy by Ahmed et al. [30] (PROMIS trial), MRI-targeted biopsy was more sensitive and less specific in detecting CSC (sensitivity, 93%; specificity, 41%) than TRUS-guided biopsy (sensitivity, 48%; specificity, 96%). Triage using MRI allowed 27% of the patients to avoid biopsy. Furthermore, a recent study including 500 biopsy-naïve men (PRECISION trial) reported similar results [31], as MRI-targeted biopsy was superior to standard TRUS-guided biopsy (38% vs. 26%), and fewer patients were diagnosed with clinically insignificant PCa in the MRI pathway than in the standard pathway (adjusted difference, -13%). Another prospective study (MRI-FIRST trial) in 251 biopsy-naïve patients demonstrated that targeted biopsy was similar to systematic biopsy and added value to systematic biopsy in detecting CSC [32]. In summary, recent data from large, high-quality prospective studies consistently demonstrate the superiority of pre-biopsy MRI with MRI-targeted biopsy over standard TRUS-guided biopsy in detecting CSC (Fig. 1B), which can potentially reduce unnecessary biopsies in biopsy-naïve patients with clinical suspicion of PCa. These results support the current paradigm shift in the diagnostic strategies for PCa.
Interpretation of Pre-Biopsy MRI and Indication of MRI-Targeted Biopsy
Precise and standardized interpretation of prostate MRI is essential for utilizing MRI as a triage system in the initial assessment of patients with clinically suspected PCa. In 2014, PI-RADS was initially proposed by the European Society of Urogenital Radiology (ESUR); in 2015, it was updated to its second version by the ESUR and the American Urologic Association (AUA) [12,33]. In the updated version, the PI-RADS was further simplified to improve CSC detection. This system defines each category according to the probability of CSC. PI-RADS category 4 or 5 was assigned if CSC was likely or highly likely to be present, and category 3 if the probability of CSC was equivocal. The guidelines distinctly described that biopsy should be considered for category 4 or 5, but not for category 1 or 2. For category 3, the PI-RADS ambiguously described that biopsy may or may not be appropriate, depending on non-imaging factors. This is because the PI-RADS was developed and modified based on the consensus of an expert committee; therefore, it encourages researchers to validate these guidelines.
Many studies have reported either biopsy or surgical pathological findings for each of the PI-RADS version 2 categories. One study reported that CSC detection rates on MRI-targeted biopsy were 44%-49% for category 4 lesions and 72%-74% for category 5 lesions [34]; in this study, 11% of the category 3 lesions were CSCs. In the PRECISION trial, CSCs were identified in 12% of category 3 lesions, 60% of category 4 lesions, and 83% of category 5 lesions. Per-category results were also reported in the prospective study by van der Leest et al. [29]. Another study [35] reported that the PCa detection rates were 15% for category 3 lesions, 39% for category 4 lesions, and 72% for category 5 lesions; moreover, the overall PCa detection rates were 35% for PI-RADS scores greater than or equal to 3 and 49% for PI-RADS scores greater than or equal to 4. Considering the reported data, category 4 or 5 lesions should be targeted during biopsy because of the high probability of CSC, as recommended by the PI-RADS. However, routine inclusion of category 3 lesions in targeted biopsy may still be controversial because both the number of biopsy avoidances and the detection rate of CSC can be increased by omitting biopsies for category 3 lesions. If category 3 lesions had been excluded from targeted biopsy, biopsy avoidance would have increased from 28% to 48%, with a higher detection rate of CSC (from 28% to 71%), in the PRECISION trial. Similarly, biopsy avoidance would have increased from 49% to 56%, with an increased detection rate of CSC (from 25% to 55%), in the study by van der Leest et al. [29]. These findings are associated with a relatively low rate of CSC detection in category 3 compared with that in category 4 or 5. However, the absolute number of missed CSCs would increase if targeted biopsy were omitted for category 3 lesions. Table 2 summarizes the literature reporting PCa detection rates in patients with a PI-RADS version 2 score of 3 on prostate MRI. Tan et al. [36] reported that 3 of 31 (9.7%) PI-RADS category 3 lesions were confirmed as CSCs in their analysis. In a retrospective analysis by Sheridan et al. [37], 19 of 111 (17.1%) PI-RADS category 3 lesions were CSCs on MRI-TRUS fusion biopsy. In another study, 26 of 156 patients (16.7%) with PI-RADS category 3 lesions showed CSCs on targeted biopsy [38]. Therefore, targeted biopsy of category 3 lesions is required to increase the absolute number and sensitivity of CSC detection, even though the detection rate and specificity may decrease. Almost all recent multicenter prospective trials have included equivocal lesions (i.e., score 3 on a Likert scale or category 3 in the PI-RADS) in MRI-targeted biopsy to prevent under-diagnosis of PCa [29][30][31][32][39][40].
In the recently modified PI-RADS version 2.1, there were some changes, especially in the definitions of categories 2 and 3 in the transition zone [41]. Rosenkrantz et al. [34] reported a relatively wide discrepancy among radiologists in the frequency of scoring category 3 in PI-RADS version 2. This tendency resulted in discrepancies in the overall PCa and CSC detection rates for category 3 lesions. On the basis of these findings, the authors proposed several adjustments to PI-RADS version 2 for more concordant and better interpretation. Several recent studies have reported slightly improved diagnostic performance of PI-RADS version 2.1 compared with the previous version, for both transition and peripheral zone cancers [42][43][44]. The recent change in the PI-RADS might influence the frequency of scoring category 3 in MRI interpretation; accordingly, the results of targeted biopsy might differ slightly from the ranges reported with the previous PI-RADS version.
In summary, a targeted biopsy should be performed for lesions of category 3 or higher on mpMRI, according to the latest PI-RADS version. Furthermore, more data should be collected and analyzed for future PI-RADS versions to validate the effectiveness and appropriateness of MRI-targeted biopsies for category 3 lesions.
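Written out schematically, the triage rule summarized above reduces to a simple threshold on the PI-RADS category. The sketch below is only an illustration of this review's summary, with a function name of our choosing; it is not clinical guidance.

```python
def targeted_biopsy_recommended(pi_rads_category: int) -> bool:
    """Simplified rule from this review: target PI-RADS category >= 3.

    Categories 1-2: biopsy not indicated; 4-5: biopsy indicated;
    3: included here to limit under-diagnosis, although PI-RADS itself
    leaves category 3 to non-imaging factors.
    """
    if not 1 <= pi_rads_category <= 5:
        raise ValueError("PI-RADS categories run from 1 to 5")
    return pi_rads_category >= 3
```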
Techniques for MRI-Targeted Biopsy
There are three different technical strategies for MRI-targeted biopsies: in-bore MRI biopsy, MRI-TRUS fusion biopsy, and cognitive registration TRUS biopsy [45]. In-bore MRI biopsy is the first technique developed for targeted prostate biopsy under MRI guidance; it allows direct and precise sampling of suspicious lesions on MRI [46]. Several studies have reported PCa detection rates ranging from 15% to 52% using this technique in patients with a prior negative systematic biopsy [46][47][48][49][50][51][52][53]. However, this technique requires specialized MRI-compatible equipment. Furthermore, systematic biopsy is not affordable because each sampling takes a substantial amount of time compared with TRUS biopsy and is therefore costly. Conversely, cognitive registration for TRUS biopsy requires no additional software or equipment. The operator reviews the lesion and the anatomy of the prostate gland on MRI and then estimates the target using real-time TRUS imaging. Both targeted and systematic biopsies can be performed sequentially under TRUS guidance. Therefore, this biopsy technique has advantages over in-bore MRI biopsy in terms of time and cost. It has been proven to be a better technique for detecting CSC than non-targeted systematic biopsy [54][55][56][57][58][59][60][61][62]. However, a disadvantage is that the processes of cognitive fusion and visual registration are operator-dependent. Furthermore, visual discrepancies between parallel axial MR images and fanwise-acquired TRUS images may result in incorrect registration, especially for lesions located in the far apex or base of the prostate gland [63]. Instead of cognitive registration, MRI-TRUS fusion biopsy utilizes software-based platforms for fusion during biopsy to minimize operator errors. Therefore, MRI-TRUS fusion biopsy occupies a middle position regarding cost, procedure time, and technical availability compared with the other biopsy techniques. Several studies have reported the diagnostic performance of each MRI-targeted biopsy method; however, only a few studies have directly compared the results of each technique (Table 3). Initial studies compared the diagnostic performance between MRI-TRUS fusion biopsy and cognitive registration TRUS biopsy but could not demonstrate a significant superiority of MRI-TRUS fusion biopsy in detecting PCa [64][65][66]. Arsov et al. [67] found no significant difference between in-bore MRI biopsy and MRI-TRUS fusion biopsy in detecting both overall PCa (37% vs. 39%) and CSC (29% vs. 32%). Yaxley et al. [68] also reported no advantage of in-bore MRI biopsy over cognitive registration TRUS biopsy in detecting overall PCa and CSC. In a prospective trial by Hamid et al. [69], neither the overall PCa nor the CSC detection rates were significantly different between the cognitive registration and MRI-TRUS fusion techniques. However, Kaufmann et al. [70] found a significant advantage of in-bore MRI or MRI-TRUS fusion biopsy over cognitive registration for overall PCa detection, although they also failed to find superiority of any technique in detecting CSC. Similarly, a recent meta-analysis reported that in-bore MRI biopsy showed superior diagnostic performance in overall PCa detection compared with cognitive registration TRUS biopsy [71]. However, MRI-TRUS fusion biopsy showed a performance similar to in-bore MRI biopsy in detecting overall PCa and CSC, and there was no significant difference between any single biopsy technique in detecting CSC.
According to a multicenter randomized controlled trial (FUTURE trial), the detection rates of overall PCa and CSC were not significantly different among the three techniques in a repeat biopsy setting in patients with prior negative biopsies [39]. In the trial, the detection rates of PCa and CSC were, respectively, 55% and 33% for in-bore MRI biopsy, 49% and 34% for MRI-TRUS fusion biopsy, and 44% and 33% for cognitive registration TRUS biopsy (all p > 0.05). These results suggest that the software or equipment for MRI-TRUS fusion or in-bore MRI biopsy is not mandatory for MRI-targeted biopsy, as cognitive registration TRUS biopsy has shown a similar diagnostic performance, especially in detecting CSC. Nonetheless, we must recognize a potential bias in these results because the majority of procedures in the literature might have been performed by experienced operators. The outcome of cognitive registration TRUS biopsy can be influenced by the skill and experience of the operator. Therefore, the use of in-bore MRI and MRI-TRUS fusion biopsy is recommended if available, as they may enable a more standardized and uniform fusion process than cognitive registration TRUS biopsy, especially if the lesions are small or invisible on TRUS.
Optimal Number of Biopsy Cores Per Lesion during MRI-Targeted Biopsy
The ideal number of targeted biopsy cores per lesion has not been determined because of a lack of accumulated data. A recent study reported that increasing the number of core samples per target lesion from one to three and from three to five increased the detection rate of CSC by 6.4% and 2.4%, respectively [72]. The authors also noted that increasing the number of samples beyond five per lesion would be ineffective because the incremental CSC detection rate diminishes. These results can be explained by the characteristics of the Gleason score used for PCa grading. The final Gleason score is the sum of a primary grade, based on the dominant morphological pattern, and a secondary grade, based on the highest-grade non-dominant pattern [73,74]. Accordingly, undersampling can lead to underestimation of the Gleason score, and this occurs more frequently in low-volume tumors [75]. Several studies have demonstrated that up to 60% of cancers classified as clinically insignificant PCa on biopsy were reclassified as CSC on prostatectomy specimens [76][77][78]. Therefore, multiple cores obtained from a target lesion may allow detection of CSC that would be falsely classified as clinically insignificant PCa on a single biopsy core. This is especially crucial in determining eligibility for active surveillance (AS), because a Gleason score of 6 on biopsy is one of the most common and important inclusion criteria for AS [79,80]. While it is reasonable to obtain multiple cores during targeted biopsy for the detection of CSC, there may be a risk of oversampling or increased detection of clinically insignificant cancer. The current consensus statement of the AUA and the Society of Abdominal Radiology recommends at least two cores per target lesion [81]. Nevertheless, operators cannot neglect the increasing cost and potential complication rate associated with the number of biopsy cores. Therefore, the definitive number of biopsy cores should be determined by each operator during biopsy, considering individual confidence in targeting and lesion characteristics such as size, location, and visibility during biopsy.
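As a hypothetical illustration of this grading mechanism (the numbers are ours, not taken from the cited studies): suppose a target lesion is predominantly Gleason pattern 3 but harbors a small focus of pattern 4. Adequate sampling yields a score of 3 + 4 = 7 (ISUP grade 2, i.e., CSC), whereas a single core that misses the pattern-4 focus would be reported as 3 + 3 = 6 (ISUP grade 1), potentially misdirecting the patient toward AS.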
Necessity of Routine Systematic Biopsy in Conjunction with MRI-Targeted Biopsy
Although the PRECISION trial demonstrated the superiority of the pre-biopsy MRI pathway with or without targeted biopsy, whether systematic biopsy should be performed in conjunction with targeted biopsy remains unclear. A recent prospective multicenter study (MRI-FIRST trial) compared CSC detection rates between targeted biopsy, systematic biopsy, and the combination of the two [32]. The CSC detection rate in the biopsy-naïve cohort was higher in the combined biopsy group (37.5% for ISUP grade ≥ 2; 21.1% for ISUP grade ≥ 3) than in either the systematic biopsy group (29.9% for ISUP grade ≥ 2; 15.1% for ISUP grade ≥ 3) or the targeted biopsy group (32.3% for ISUP grade ≥ 2; 19.9% for ISUP grade ≥ 3). In a recent study by Kim et al. [82], combined targeted and systematic biopsy increased the detection rate by 5.6% compared with targeted biopsy alone. In another prospective study, the CSC underdetection rate was higher with targeted biopsy alone (9%) than with combined targeted and systematic biopsy (2%) [29]. In a repeat biopsy cohort, combined targeted and systematic biopsy increased the detection rates of ISUP grade ≥ 2 and ≥ 3 PCa by approximately 40% and 50%, respectively [83]. Therefore, targeted biopsy should be accompanied by systematic biopsy to increase the CSC detection rate, both in the initial assessment of biopsy-naïve patients and in repeat biopsy patients. The disadvantages of systematic biopsy include cost and a potential increase in biopsy-related complications. However, no large prospective trials have directly compared complication rates between targeted biopsy alone and targeted biopsy with systematic biopsy. In a systematic review of prostate biopsy complications, a higher number of biopsy cores was somewhat related to minor complications involving pain, bleeding, infection, hematospermia, and erectile dysfunction, although there was substantial controversy among the results of these studies [84]. No previous study, however, has demonstrated a definite relationship between the number of biopsy cores and fatal complications. Therefore, considering the diagnostic benefit, the addition of systematic biopsy may not inflict a significant disadvantage on patient management.
Systematic Biopsy in Patients without Any Target Lesion on MRI
Several studies comparing preoperative prostate MRI with surgical pathology or systematic TRUS biopsy have reported that some PCa can be missed on MRI. According to a systematic review and meta-analysis of 48 studies, the median negative predictive value (NPV) of mpMRI was 82.4% for overall PCa and 88.1% for CSC (median disease prevalence, 50.4% and 32.9% for overall PCa and CSC, respectively) [85]. Notably, this study emphasized that the NPV of MRI varies with the prevalence of PCa, the definition of CSC, and the interpretation scheme for positive MRI findings (i.e., Likert scale or PI-RADS version 1). In a recent retrospective study by Kim et al. [86] that adopted PI-RADS version 2 for MRI interpretation, cancer-negative findings on pre-biopsy MRI carried a missed detection rate of 12.6% for PCa, including 3.9% for CSC (disease prevalence, 25% and 8.9% for overall PCa and CSC, respectively). In the PROMIS trial, 158 of 576 (27.4%) biopsy-naïve patients showed no target lesion on MRI, of whom 17 (10.8%) had CSC on template prostate mapping biopsy (disease prevalence, 71% and 40% for overall PCa and CSC, respectively). Although the NPV and false-negative rate of MRI vary with study design, omitting biopsy on the basis of negative MRI findings may result in missed PCa, including CSC, because small-volume PCa, especially below 1.0 cm³, can be invisible on mpMRI [87][88][89].
Performing fewer biopsies may avoid cost- and procedure-related problems, but at the expense of missing cancer. It is therefore difficult to weigh the diagnostic risk of MRI-based biopsy decisions against their economic benefit. The following issues should be considered thoroughly before concluding whether omitting biopsy on the basis of negative MRI findings is clinically justifiable. First, the epidemiology of PCa should be understood at the institutional and national levels. Furthermore, a cost-effectiveness analysis should weigh the economic benefit of omitting biopsies against the cost of MRI within the national medical environment. Faria et al. [90] attempted to optimize PCa diagnosis in terms of effectiveness and cost-effectiveness based on the PROMIS trial; they concluded that the MRI-first strategy was effective and cost-effective for CSC diagnosis within the UK National Health Service. Second, the interpretation of MRI findings should be standardized and quality-controlled. Published results may derive from imaging and biopsy data handled by experienced radiologists or urologists, and false-negative MRI findings can result from reading errors in addition to the technical limitations of mpMRI [91]. Radiologists should therefore use the most recently updated version of PI-RADS, and quality control of the imaging protocols and equipment for mpMRI may be mandatory. Finally, patient stratification can help identify candidates who are more suitable to skip biopsy after negative MRI findings. Panebianco et al. [92] concluded that systematic biopsy should still be recommended in younger patients with high or increasing PSA levels despite negative MRI findings. Omitting biopsy can be relatively safe in low-risk patients, in whom the NPV of MRI is expected to be high owing to the low prevalence of PCa, whereas systematic biopsy may remain necessary in high-risk patients, in whom omitting biopsy may come at the expense of substantial CSC underdetection.
In patients with negative MRI findings, serum tumor markers can serve as supplementary indicators for active monitoring or intervention. Washino et al. [93] reported that a PSA density threshold of < 0.15 ng/mL² may help avoid unnecessary biopsy in conjunction with a PI-RADS version 2 score of ≤ 3 on MRI. In addition to PSA density, the prostate health index outperformed PSA, free PSA, and the free-to-total PSA ratio in predicting PCa, suggesting its potential as a biomarker to triage patients with negative MRI findings [94]. However, the marker thresholds used to stratify patients can be affected by the methodology of MRI interpretation and the definition of negative MRI findings. Therefore, further data are needed to determine threshold values for these novel biomarkers in patients with negative MRI findings, interpreted according to the latest version of PI-RADS.
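As a brief worked example of the threshold above (the patient values are hypothetical): PSA density is serum PSA divided by prostate volume, so a patient with a PSA of 6.0 ng/mL and a 50-mL gland has a PSA density of 6.0/50 = 0.12 ng/mL². Under the criterion of Washino et al. [93], this value below 0.15 ng/mL², together with a PI-RADS score of ≤ 3, would support deferring biopsy, whereas the same PSA level in a 30-mL gland (0.20 ng/mL²) would not.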
Quality Control for Prostate MRI and Imaging Interpretation
Accurate and standardized imaging interpretation based on quality-controlled mpMRI is a precondition for using MRI as a new diagnostic strategy for PCa. Although PI-RADS has standardized prostate imaging and interpretation, subjectivity in interpretation persists owing to intrinsic limitations of the system. Furthermore, the system does not guarantee the quality of the acquired images or the performance of individual radiologists in practice. The ESUR and the European Association of Urology Section of Urologic Imaging recently provided a consensus statement with recommendations for controlling image quality and interpretation performance [95]. Furthermore, a new scoring system, the Prostate Imaging Quality score, was proposed based on the PRECISION trial [96]. Although these attempts are still incipient, the accumulation of consensus statements and guidelines for quality standards may drive national or international certification in the near future. Radiologists need to consider not only the technical aspects of PI-RADS but also efforts toward quality control in prostate imaging and interpretation.
CONCLUSION
Pre-biopsy MRI with subsequent targeted biopsy has added value in diagnosing CSC in both biopsy-naïve patients and those with prior negative biopsy results. The accumulated data seem sufficient to support a paradigm shift in diagnosing PCa, as recent prospective studies have consistently demonstrated the superiority of the MRI-first strategy over the conventional diagnostic pathway. Cognitive registration TRUS biopsy is the most cost-effective method for targeted biopsy and has no significant limitation in CSC detection rate, although in-bore MRI or MRI-TRUS fusion biopsy is recommended where available. Alongside targeted biopsy, systematic biopsy seems necessary in both biopsy-naïve and repeat biopsy patients, especially those at high risk for CSC. Whether systematic biopsy can be omitted in patients without a target lesion on MRI remains controversial; risk stratification and a stepwise strategy can be effective, although further data are needed to settle this issue. Quality control of imaging and interpretation is an important precondition for all of the above.
Availability of Data and Material
Data sharing does not apply to this article as no datasets were generated or analyzed during the current study.
Conflicts of Interest
Chan Kyo Kim who is on the editorial board of the Korean Journal of Radiology was not involved in the editorial evaluation or decision to publish this article. All remaining authors have declared no conflicts of interest.
Author Contributions
Conceptualization: all authors.
"year": 2022,
"sha1": "c238f448c2a070c6502dd2107f3a0b5ad30c2678",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a5bfe5c152bdf7bc89938e45a4d445f593b8c9cf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nonlinear Dirac Equation on Graphs with Localized Nonlinearities: Bound States and Nonrelativistic Limit
In this paper we study the nonlinear Dirac (NLD) equation on noncompact metric graphs with localized Kerr nonlinearities, in the case of Kirchhoff-type conditions at the vertices. Precisely, we discuss existence and multiplicity of the bound states (arising as critical points of the NLD action functional) and we prove that, in the $L^2$-subcritical case, they converge to the bound states of the NLS equation in the nonrelativistic limit.
Introduction
The investigation of evolution equations on metric graphs (see, e.g., Figure 1) has become very popular nowadays, as such graphs are assumed to provide effective models for the dynamics of physical systems confined in branched spatial domains. Specific attention has been devoted to the focusing nonlinear Schrödinger (NLS) equation, i.e.
$$i\,\partial_t v = -v'' - |v|^{p-2}v, \qquad (1)$$
with suitable vertex conditions, as it is supposed to approximate well (for p = 4) the behavior of Bose-Einstein condensates in ramified traps (see, e.g., [29] and the references therein).
From the mathematical point of view, the discussion has mainly focused on the study of the stationary solutions of (1), namely functions of the form $v(t, x) = e^{-i\lambda t}u(x)$, with λ ∈ ℝ, that solve the stationary version of (1), i.e. $-u'' - |u|^{p-2}u = \lambda u$, with vertex conditions of δ-type. In particular, the most investigated subcase has been that of the Kirchhoff vertex conditions, which impose at each vertex: (i) continuity of the function (for details see (15)), and (ii) "balance" of the derivatives (for details see (16)). For a short bibliography limited to the case of noncompact metric graphs with a finite number of edges (which is the framework discussed in this paper) we refer the reader to, e.g., [1,2,3,4,18,19,35,39,40] and the references therein. Following [28,37], a simplified version of this model has also recently gained particular attention: the case of a nonlinearity localized on the compact core K of the graph, i.e. the subgraph consisting of all the bounded edges (see, for instance, Figure 2); namely
$$-u'' - \chi_K|u|^{p-2}u = \lambda u, \qquad (2)$$
with Kirchhoff vertex conditions and $\chi_K$ denoting the characteristic function of K. This problem has been studied in the $L^2$-subcritical case in [48,49,51], while some new results on the $L^2$-critical case have been presented in [22].
Remark 1.1. We also mention some interesting results on the problem of bound states on compact graphs. For a purely variational approach we recall, e.g., [21], whereas for a bifurcation approach we recall, e.g., [36].
As a further development, in recent years the study of the Dirac operator on metric graphs has also generated growing interest (see, e.g., [6,12,16,43]). In particular, [47] proposed (although in the toy case of the infinite 3-star graph, depicted in Figure 3) the study of the nonlinear Dirac equation on networks, namely (1) with the Laplacian replaced by the Dirac operator
$$\mathcal{D} := -i c\,\frac{d}{dx}\otimes\sigma_1 + mc^2\otimes\sigma_3, \qquad (3)$$
where m > 0 and c > 0 are two parameters representing the mass of the generic particle of the system and the speed of light (respectively), and $\sigma_1$ and $\sigma_3$ are the so-called Pauli matrices, i.e.
$$\sigma_1 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad\text{and}\qquad \sigma_3 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad (4)$$
and with the wave function v replaced by the spinor $\chi := (\chi^1, \chi^2)^T$. Precisely, [47] suggests the study, again, of the stationary solutions, that is $\chi(t, x) = e^{-i\omega t}\psi(x)$, with ω ∈ ℝ, that solve
$$\mathcal{D}\psi - |\psi|^{p-2}\psi = \omega\psi. \qquad (5)$$
The attention recently attracted by the linear and the nonlinear Dirac equations is due to their applications, as effective equations, in many physical models, such as solid state physics and nonlinear optics [30,31].
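For completeness, we sketch how (5) arises from the stationary ansatz (a one-line computation, included as a reading aid; the evolution form $i\,\partial_t\chi = \mathcal{D}\chi - |\chi|^{p-2}\chi$ is the one implicitly understood here). Since
$$i\,\partial_t\big(e^{-i\omega t}\psi\big) = \omega\,e^{-i\omega t}\psi \qquad\text{and}\qquad \big|e^{-i\omega t}\psi\big| = |\psi|,$$
canceling the common factor $e^{-i\omega t}$ leaves exactly (5).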
While the NLDE originally appeared as a field equation for relativistic interacting fermions [34], it was later used in particle physics to simulate features of quark confinement, in acoustic physics, and in the context of Bose-Einstein condensates [31].
Recently, it has also emerged that some properties of physical models, such as thin carbon structures, are well described using the Dirac equation as an effective equation for non-relativistic electronic properties. We mention, thereupon, the seminal papers by Fefferman and Weinstein [25,26], the work of Arbunich and Sparber [10] (where a rigorous justification of linear and nonlinear equations in two-dimensional honeycomb structures is given) and the references therein. In addition, we recall that the existence of stationary solutions for cubic and Hartree-type Dirac equations in honeycomb structures and graphene samples has been investigated in [14,13,15]; whereas, for an overview of global existence results for the one-dimensional NLDE we refer to [17,41].
On the other hand, in the context of metric graphs the interest in the nonlinear Dirac equation arises when one aims at taking relativistic effects into account. In particular, it applies to the analysis of effective models of condensed matter physics and field theory ([47]). Moreover, Dirac solitons in networks may be realized in optics, atomic physics, etc. (see again [47] and the references therein).
In this paper, we discuss the case of (5) with localized nonlinearity (or, equivalently, the Dirac analogue of (2)), namely
$$\mathcal{D}\psi - \chi_K|\psi|^{p-2}\psi = \omega\psi.$$
The reduction to this simplified model arises when one assumes that the nonlinearity affects only the compact core of the graph. This idea was originally exploited for the Schrödinger equation in [28] and represents a preliminary step toward the investigation of the case with the "extended" nonlinearity, i.e. (5), which will be discussed in a forthcoming paper.
It is finally worth stressing that, as in the Schrödinger case, the operator $\mathcal{D}$ needs suitable vertex conditions in order to be self-adjoint. In this paper, we limit ourselves to the discussion of those conditions that converge to the Kirchhoff ones in the nonrelativistic limit, and that we call of Kirchhoff type. The reason is that they identify (as Kirchhoff conditions do for the Schrödinger operator) the free case; namely, the case in which there are no attractive or repulsive effects at the vertices, which then play the role of mere junctions between the edges.
Roughly speaking, these conditions "split" the requirements of the Kirchhoff conditions: the continuity condition is imposed only on the first component of the spinor, while the second component (in place of the derivative) has to satisfy a "balance" condition (see (8) and (9)).
The paper is organized as follows: (i) in Section 2 we briefly recall some basics on metric graphs and on the properties of the Dirac operator with Kirchhoff-type vertex conditions, and then we state the main results of the paper (Section 2.4): existence and multiplicity of the bound states (Theorem 2.11), and the nonrelativistic limit of the bound states (Theorem 2.12); (ii) in Section 3 we give the proof of Theorem 2.11; (iii) in Section 4 we give the proof of Theorem 2.12; (iv) in Appendix A we discuss in more detail the properties of the Dirac operator with Kirchhoff-type conditions on metric graphs, while Appendix B deals with the definition of the form domain of the Dirac operator.
Acknowledgements
We wish to thank Eric Séré for fruitful discussions.
Setting and Main Results
In this section we present the main results of the paper. However, the statements of Theorem 2.11 and Theorem 2.12 require some preliminaries on metric graphs and on the Dirac operator.
2.1. Metric graphs and functional setting. A complete discussion of the definition and features of metric graphs can be found in [1,11,33] and the references therein. Here we limit ourselves to recalling some basic notions.
Throughout, a metric graph G = (V, E) is a connected multigraph (i.e., multiple edges and self-loops are allowed) with a finite number of edges and vertices. Each edge is a finite or half-infinite segment of line, and the edges are glued together at their endpoints (the vertices of G) according to the topology of the graph (see Figure 1).
Unbounded edges are identified with (copies of) $\mathbb{R}^+ = [0, +\infty)$ and are called half-lines, while bounded edges are identified with closed and bounded intervals $I_e = [0, \ell_e]$, $\ell_e > 0$. Each edge (bounded or unbounded) is endowed with a coordinate $x_e$, chosen in the corresponding interval, which has an arbitrary orientation if the interval is bounded, whereas it has the natural orientation in the case of a half-line.
As a consequence, the graph G is a locally compact metric space, the metric being given by the shortest distance along the edges. Clearly, since we assume a finite number of edges and vertices, G is compact if and only if it does not contain any half-line. A further important notion, introduced in [2,48], is the following.

Definition 2.1. If G is a metric graph, we define its compact core K as the metric subgraph of G consisting of all its bounded edges. In addition, we denote by ℓ the measure of K, namely $\ell := \operatorname{meas}(K) = \sum_{e \in K} \ell_e$.

A function u : G → ℂ can be regarded as a family of functions $(u_e)$, where $u_e : I_e \to \mathbb{C}$ is the restriction of u to the edge (represented by) $I_e$. The usual $L^p$ spaces can be defined in the natural way, with norm
$$\|u\|_{L^p(G)}^p := \sum_{e \in E} \|u_e\|_{L^p(I_e)}^p,$$
while $H^1(G)$ is the space of functions $u = (u_e)$ such that $u_e \in H^1(I_e)$ for every edge e ∈ E, with norm
$$\|u\|_{H^1(G)}^2 := \sum_{e \in E} \|u_e\|_{H^1(I_e)}^2$$
(and in this way one can also define $H^2(G)$, $H^3(G)$, etc.). Consistently, a spinor $\psi = (\psi^1, \psi^2)^T : G \to \mathbb{C}^2$ is a family of 1d-spinors $\psi_e = (\psi^1_e, \psi^2_e)^T : I_e \to \mathbb{C}^2$, and the spaces $L^p(G, \mathbb{C}^2)$ and $H^1(G, \mathbb{C}^2)$ are thus endowed with the norms
$$\|\psi\|_{L^p(G,\mathbb{C}^2)}^p := \sum_{e \in E} \|\psi_e\|_{L^p(I_e,\mathbb{C}^2)}^p, \qquad \|\psi\|_{H^1(G,\mathbb{C}^2)}^2 := \sum_{e \in E} \|\psi_e\|_{H^1(I_e,\mathbb{C}^2)}^2$$
(and so on for $H^2(G, \mathbb{C}^2)$, $H^3(G, \mathbb{C}^2)$, etc.). Equivalently, one can say that $L^p(G, \mathbb{C}^2)$ is the space of spinors such that $\psi^1, \psi^2 \in L^p(G)$, with $\|\psi\|_{L^p(G,\mathbb{C}^2)}^p = \|\psi^1\|_{L^p(G)}^p + \|\psi^2\|_{L^p(G)}^p$, and that $H^1(G, \mathbb{C}^2)$ is the space of spinors such that $\psi^1, \psi^2 \in H^1(G)$, with $\|\psi\|_{H^1(G,\mathbb{C}^2)}^2 = \|\psi^1\|_{H^1(G)}^2 + \|\psi^2\|_{H^1(G)}^2$.

Remark 2.2. The usual definition of the space $H^1(G)$ also includes a global continuity requirement, which forces all the components of a function incident at a vertex to assume the same value at that vertex. However, for the aims of this paper it is worth keeping this global continuity notion separate and introducing it only where it is actually required (see (15)).
2.2. The Dirac operator with Kirchhoff-type conditions. The expression (3) of the Dirac operator on a metric graph is purely formal, since it does not clarify what happens at the vertices of the graph, given that the derivative $\frac{d}{dx}$ is well defined only in the interior of the edges.
As for the Laplacian in the Schrödinger case, the way to give a rigorous meaning to (3) is to find suitable self-adjoint realizations of the operator. However, an extensive discussion of all the possible self-adjoint realizations of the Dirac operator on graphs goes beyond the aims of this paper. Throughout, we limit ourselves to the case of the Kirchhoff-type conditions (introduced in [47]), which represent the free case for the Dirac operator. For more details on self-adjoint extensions of the Dirac operator on metric graphs we refer the reader to [16,43].

Definition 2.3. Let G be a metric graph and let m, c > 0. We call Dirac operator with Kirchhoff-type vertex conditions the operator $\mathcal{D} : L^2(G, \mathbb{C}^2) \to L^2(G, \mathbb{C}^2)$ acting edgewise as
$$\mathcal{D}_e := -i c\,\sigma_1\,\frac{d}{dx_e} + mc^2\sigma_3,$$
$\sigma_1$, $\sigma_3$ being the matrices defined in (4), with domain
$$\operatorname{dom}(\mathcal{D}) := \big\{\psi \in H^1(G, \mathbb{C}^2) : \psi \text{ satisfies (8) and (9)}\big\},$$
where, at every vertex v of the compact core,
$$\psi^1_e(\mathrm{v}) = \psi^1_f(\mathrm{v}), \qquad \forall e, f \succ \mathrm{v}, \qquad (8)$$
$$\sum_{e \succ \mathrm{v}} \psi^2_e(\mathrm{v})_\pm = 0, \qquad (9)$$
with "e ≻ v" meaning that the edge e is incident at the vertex v, and $\psi^2_e(\mathrm{v})_\pm$ standing for $\psi^2_e(0)$ or $-\psi^2_e(\ell_e)$ according to whether $x_e$ is equal to 0 or $\ell_e$ at v.

Remark 2.4. Note that the operator $\mathcal{D}$ actually depends on the parameters m, c, which represent (as pointed out in Section 1) the mass of the generic particle and the speed of light (respectively). For the sake of simplicity we omit this dependence unless it is necessary to avoid misunderstandings.
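As a concrete illustration (ours, for the reader's convenience), consider a vertex v at which exactly three edges $e_1, e_2, e_3$ meet, each parametrized so that $x_{e_j} = 0$ at v. Conditions (8) and (9) then read
$$\psi^1_{e_1}(\mathrm{v}) = \psi^1_{e_2}(\mathrm{v}) = \psi^1_{e_3}(\mathrm{v}), \qquad \psi^2_{e_1}(0) + \psi^2_{e_2}(0) + \psi^2_{e_3}(0) = 0,$$
i.e., the first component is continuous through the vertex, while the second component satisfies a flux-type balance, exactly as the derivative does in the Kirchhoff conditions for the Schrödinger operator.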
The basic properties of the operator (3) with the above conditions are summarized in the following.

Proposition 2.5. The Dirac operator $\mathcal{D}$ introduced in Definition 2.3 is self-adjoint on $L^2(G, \mathbb{C}^2)$. In addition, its spectrum is
$$\sigma(\mathcal{D}) = (-\infty, -mc^2] \cup [mc^2, +\infty).$$

The proof of Proposition 2.5 is briefly presented in Appendix A.
Remark 2.6. Observe that the self-adjointness of $\mathcal{D}$ follows directly from the main result of [16], which holds for a wide class of linear vertex conditions.
2.3. The associated quadratic form. The standard cases of the Dirac operator on $\mathbb{R}^d$, with d = 1, 2, 3, do not actually require any further remark on the associated quadratic form, which can be easily defined using the Fourier transform (see, e.g., [23]). Unfortunately, in the framework of (noncompact) metric graphs this tool is not available, and hence it is necessary to resort to the Spectral Theorem, which represents a classical, but more abstract, way to diagonalize the operator and, consequently, to define the associated quadratic form $Q_{\mathcal{D}}$ and its domain $\operatorname{dom}(Q_{\mathcal{D}})$ as, for instance,
$$Q_{\mathcal{D}}(\psi) := \int_{\mathbb{R}} \lambda \, d\mu^{\mathcal{D}}_{\psi}(\lambda), \qquad \operatorname{dom}(Q_{\mathcal{D}}) := \Big\{\psi \in L^2(G, \mathbb{C}^2) : \int_{\mathbb{R}} |\lambda| \, d\mu^{\mathcal{D}}_{\psi}(\lambda) < \infty\Big\},$$
where $\mu^{\mathcal{D}}_{\psi}$ denotes the spectral measure associated with $\mathcal{D}$ and ψ. However, this definition is not the most suitable for the purposes of the paper. An alternative way to define the form domain of $\mathcal{D}$ (that is, $\operatorname{dom}(Q_{\mathcal{D}})$) is to use the well-known Real Interpolation Theory [5,9]. Here we just mention some basics, referring to Appendix B for further details.
Define the space
$$Y := \big[L^2(G, \mathbb{C}^2), \operatorname{dom}(\mathcal{D})\big]_{1/2},$$
namely the interpolated space of order 1/2 between $L^2$ and the domain of the Dirac operator. First, we note that Y is a closed subspace of $H^{1/2}(G, \mathbb{C}^2)$ with respect to the norm induced by $H^{1/2}(G, \mathbb{C}^2)$. Indeed, $\operatorname{dom}(\mathcal{D})$ is clearly a closed subspace of $H^1(G, \mathbb{C}^2)$ and there results (arguing edge by edge) that $\big[L^2(G, \mathbb{C}^2), H^1(G, \mathbb{C}^2)\big]_{1/2} = H^{1/2}(G, \mathbb{C}^2)$, so that the closedness of Y follows by the very definition of interpolation spaces. As a consequence, by the Sobolev embeddings there results that $Y \hookrightarrow L^p(G, \mathbb{C}^2)$ for every p ≥ 2 and that, in addition, the embedding in $L^p(K, \mathbb{C}^2)$ is compact, due to the compactness of K.
On the other hand, there holds (see Appendix B)
$$\operatorname{dom}(Q_{\mathcal{D}}) = Y, \qquad (13)$$
and hence the form domain inherits all the properties pointed out before, which are in fact crucial in the sequel.
Finally, for the sake of simplicity (and following the literature on the NLD equation), we denote throughout the form domain by Y, in view of (13), with ⟨·, ·⟩ denoting the Euclidean sesquilinear product of $\mathbb{C}^2$, since this does not give rise to misunderstandings. In particular, as soon as ψ and/or φ are smooth enough (e.g., if they belong to the operator domain), the previous expressions gain an actual meaning as Lebesgue integrals.
We also recall that in the sequel we denote by ⟨· | ·⟩ duality pairings (the function spaces involved being clear from the context).
Remark 2.7. Note that the combination of the Spectral Theorem and Interpolation Theory is (to the best of our knowledge) the sole possibility for defining the quadratic form, since classical duality arguments also fail, as it is not true in general that $H^{-1/2}(G, \mathbb{C}^2)$ is the topological dual of $H^{1/2}(G, \mathbb{C}^2)$ (due to the presence of bounded edges).
2.4. Main results.
We can now state the main results of the paper. Preliminarily, we give the definition of the bound states of the NLD and of the NLS equations on noncompact metric graphs with localized nonlinearities.

Definition 2.8 (Bound states of the NLDE). Let G be a noncompact metric graph with nonempty compact core K and let p > 2. Then, a bound state of the NLDE with Kirchhoff-type vertex conditions and nonlinearity localized on K is a spinor $0 \not\equiv \psi \in \operatorname{dom}(\mathcal{D})$ for which there exists ω ∈ ℝ such that
$$\mathcal{D}_e\psi_e - \chi_K|\psi_e|^{p-2}\psi_e = \omega\psi_e, \qquad \forall e \in E, \qquad (14)$$
with $\chi_K$ the characteristic function of the compact core K.

Definition 2.9 (Bound states of the NLSE). Let G be a noncompact metric graph with nonempty compact core K, and let p > 2 and α > 0. Then, a bound state of the NLSE with Kirchhoff vertex conditions and focusing nonlinearity localized on K is a function $0 \not\equiv u \in H^1(G)$, with $u_e \in H^2(I_e)$ for every e ∈ E, satisfying, at every vertex v of the compact core,
$$u_e(\mathrm{v}) = u_f(\mathrm{v}), \qquad \forall e, f \succ \mathrm{v}, \qquad (15)$$
$$\sum_{e \succ \mathrm{v}} \frac{du_e}{dx_e}(\mathrm{v}) = 0, \qquad (16)$$
where $\frac{du_e}{dx_e}(\mathrm{v})$ stands for $u'_e(0)$ or $-u'_e(\ell_e)$ according to whether $x_e$ is equal to 0 or $\ell_e$ at v, and for which there exists λ ∈ ℝ such that
$$-u''_e - \alpha\,\chi_K|u_e|^{p-2}u_e = \lambda u_e, \qquad \forall e \in E.$$

Remark 2.10. We recall that conditions (15) and (16) make the Laplacian self-adjoint on G and are called Kirchhoff conditions. We also recall that the parameters ω and λ are usually referred to as the frequencies of the bound states of the NLDE and NLSE (respectively), whereas α is usually connected to the scattering length of the particles.
Theorem 2.11 (Existence and multiplicity of the bound states). Let G be a noncompact metric graph with nonempty compact core, and let m, c > 0 and p > 2. Then, for every $\omega \in (-mc^2, mc^2)$ there exist infinitely many (distinct) pairs of bound states of frequency ω of the NLDE.
Some comments are in order.First of all, to the best of our knowledge this is the first rigorous result on the stationary solutions of the nonlinear Dirac equation on metric graphs.
On the other hand, some relevant differences can be observed with respect to the Schrödinger case. The bound states of Theorem 2.11 arise (as we extensively show in the next section) as critical points of the functional
$$\mathcal{L}(\psi) = \frac{1}{2} Q_{\mathcal{D}}(\psi) - \frac{\omega}{2}\|\psi\|_{L^2(G,\mathbb{C}^2)}^2 - \frac{1}{p}\|\psi\|_{L^p(K,\mathbb{C}^2)}^p.$$
However, due to the spectral properties of $\mathcal{D}$, the kinetic part of $\mathcal{L}$ (that is, the quadratic form associated with $\mathcal{D}$) is unbounded from below even if one constrains the functional to the set of spinors with fixed $L^2$-norm, in contrast to the NLS functional. As a consequence, no minimization can be performed and, hence, the extensions of the direct methods of the calculus of variations developed for the Schrödinger case are of no use here. Furthermore, such a kinetic part is also strongly indefinite, so that the functional possesses a significantly more complex geometry than in the NLS case, thus calling for more technical (albeit classical) tools of Critical Point Theory.
Finally, the spinorial structure of the problem, as well as the implicit definition of the kinetic part of the functional, whose domain is not embedded in $L^\infty(G, \mathbb{C}^2)$, prevent the (direct) use of the tools developed for the NLSE on graphs, such as, for instance, rearrangements and "graph surgery".
In view of these issues, in the proof of Theorem 2.11 we rather adapt some techniques from the literature on the NLDE on standard noncompact domains. Nevertheless, the fact that we deal with a nonlinearity localized only on a compact part of the graph makes the study of the geometry of the functional a bit more delicate, as we will see in Lemma 3.4 (while it clearly simplifies the compactness issues with respect to the extended case). For the same reason, the uniform $H^1$-boundedness needed to study the nonrelativistic limit of the bound states (see below) is achieved in several steps (see Section 4).
The second (and main) result of the paper, on the other hand, shows the connection between the NLDE and the NLSE, suggested by the physical interpretation of the two models.
Before presenting the statement, we recall that, by the definition of $\mathcal{D}$, the bound states obtained via Theorem 2.11 depend in fact on the speed of light c. As a consequence, they should be understood as bound states of frequency ω of the NLDE at speed of light c.

Theorem 2.12 (Nonrelativistic limit of the bound states). Let G be a noncompact metric graph with nonempty compact core, and let m > 0, p ∈ (2, 6) and λ < 0. Let also $(c_n)$, $(\omega_n)$ be two real sequences satisfying (18), (19) and (20) as $n \to +\infty$. If, for every n, $\psi_n = (\psi^1_n, \psi^2_n)^T$ is a bound state of frequency $\omega_n$ of the NLDE at speed of light $c_n$, then, up to subsequences, there holds
$$\psi^1_n \to u \qquad \text{and} \qquad \psi^2_n \to 0$$
as $n \to +\infty$, where u is a bound state of frequency λ of the NLSE with α = 2m.
First, we recall that the expression "speed of light $c_n$, with $c_n \to \infty$" is to be understood in the sense that $c_n$ becomes larger and larger with respect to the proper scale of the phenomenon one focuses on. In addition, for any choice of parameters for which the proof of Theorem 2.12 holds, the parameter α in the NLSE solved by the limit function u equals 2m.
The main interest of Theorem 2.12 lies in the fact that it suggests that the two models provided by the NLDE and the NLSE are indistinguishable at those scales where relativistic effects become negligible. Hence, our result provides mathematical evidence for this intuitive guess.
Moreover, we point out that Theorem 2.12, in contrast to Theorem 2.11, holds only for a fixed range of power exponents, namely the so-called $L^2$-subcritical case p ∈ (2, 6). However, this is the only range of powers for which multiplicity results are known for the NLSE (see [48]). On the other hand, those results are parametrized by the $L^2$-norm of the wave function, while Theorem 2.12 is parametrized by the frequency and hence (in some sense) yields, as a byproduct, a new result for the NLSE.
Remark 2.13. We also mention that Theorem 2.11 and Theorem 2.12 can be proved, without significant modifications, also in the case of more general nonlinearities, by means of suitable ad hoc assumptions. We limit ourselves to the power case for the sake of simplicity.
Existence of infinitely many bound states
In this section we prove Theorem 2.11. Note that, since the parameter c plays no role here, we set c = 1 throughout the section. In addition, in the sequel (unless stated otherwise) we always tacitly assume that the mass parameter m is positive, the frequency ω ∈ (−m, m), the power of the nonlinearity p > 2, and that G is a noncompact metric graph with nonempty compact core.
3.1. Preliminary results. The first step is to prove that the bound states coincide with the critical points of the $C^2$ action functional $\mathcal{L} : Y \to \mathbb{R}$ defined by
$$\mathcal{L}(\psi) = \frac{1}{2} Q_{\mathcal{D}}(\psi) - \frac{\omega}{2}\|\psi\|_{L^2(G,\mathbb{C}^2)}^2 - \frac{1}{p}\|\psi\|_{L^p(K,\mathbb{C}^2)}^p. \qquad (21)$$
Recall that (as c = 1) the spectrum of $\mathcal{D}$ is given by $\sigma(\mathcal{D}) = (-\infty, -m] \cup [m, +\infty)$.

Proposition 3.1. A spinor is a bound state of frequency ω of the NLDE if and only if it is a critical point of $\mathcal{L}$.
Proof. One can easily see that a bound state of frequency ω of the NLDE is a critical point of $\mathcal{L}$.
Let us prove, therefore, the converse. Assume that ψ is a critical point of $\mathcal{L}$, namely that ψ ∈ Y and $d\mathcal{L}(\psi)[\varphi] = 0$ for every φ ∈ Y. Now, for any fixed edge e ∈ E, if one chooses φ = (φ¹, 0)ᵀ with φ¹ supported in the edge e (namely, φ¹ possesses the sole component φ¹ₑ, which is a test function of $I_e$), then one obtains that $\psi^2_e \in H^1(I_e)$, and an integration by parts yields the first line of (14). On the other hand, simply exchanging the roles of φ¹ and φ² in (24), one sees that $\psi^1_e \in H^1(I_e)$ and satisfies the second line of (14) as well.
It remains to prove that ψ fulfills (8) and (9). First, fix a vertex v of the compact core and choose φ = (φ¹, 0)ᵀ with φ¹ continuous on G and supported in a small neighborhood of v. Integrating by parts in (23) and using (14), there results
$$\sum_{e \succ \mathrm{v}} \varphi^1_e(\mathrm{v})\,\psi^2_e(\mathrm{v})_\pm = 0,$$
and hence ψ² satisfies (9) (recall the meaning of $\psi^2_e(\mathrm{v})_\pm$ explained in Definition 2.3). On the other hand, let v be a vertex of the compact core with degree greater than or equal to 2 (for vertices of degree 1, condition (8) is satisfied for free). Moreover, choose φ = (0, φ²)ᵀ with $\varphi^2_{e_1}$ and $\varphi^2_{e_2}$ suitably chosen at v, where e₁ and e₂ are two edges incident at v, and φ²ₑ ≡ 0 on each edge not incident at v. Again, integrating by parts in (23) and using (14), one finds $\psi^1_{e_1}(\mathrm{v}) = \psi^1_{e_2}(\mathrm{v})$. Then, repeating the procedure for any pair of edges incident at v, one gets (8).
Finally, iterating the same arguments on all the vertices one concludes the proof.
Remark 3.2. In addition to Proposition 3.1, it is worth mentioning that, due to the linear behavior outside the compact core, the bound states are known explicitly on the half-lines. Precisely, if e ∈ E is a half-line with starting point v, then $\psi_e$ is an exponentially decaying solution of the free equation $\mathcal{D}_e\psi_e = \omega\psi_e$ (see the sketch below).

The second preliminary step is to prove that the functional $\mathcal{L}$ possesses a so-called linking geometry ([23,50]), since this is the main tool for proving the existence of Palais-Smale sequences.
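Returning to Remark 3.2: the explicit formula can be recovered by elementary means (the computation below is ours, with c = 1). On a half-line the equation $\mathcal{D}_e\psi_e = \omega\psi_e$ reads, componentwise, $(\psi^2_e)' = i(\omega - m)\psi^1_e$ and $(\psi^1_e)' = i(\omega + m)\psi^2_e$, whence $(\psi^1_e)'' = (m^2 - \omega^2)\psi^1_e$. Keeping only the square-integrable branch yields
$$\psi_e(x_e) = \psi^1_e(\mathrm{v})\, e^{-\sqrt{m^2-\omega^2}\,x_e}\begin{pmatrix} 1 \\ i\sqrt{\frac{m-\omega}{m+\omega}} \end{pmatrix}, \qquad x_e \in \mathbb{R}^+,$$
which decays exponentially, consistently with ω ∈ (−m, m).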
Recall that, according to (22), we can decompose the form domain Y as the orthogonal sum of the positive and negative spectral subspaces of the operator $\mathcal{D}$, i.e. $Y = Y^+ \oplus Y^-$.
As a consequence, every ψ ∈ Y can be written as $\psi = P^+\psi + P^-\psi =: \psi^+ + \psi^-$, where $P^\pm$ are the orthogonal projectors onto $Y^\pm$. In addition, one can find an equivalent (but more convenient) norm for Y, i.e.
$$\|\psi\|^2 := \big\||\mathcal{D}|^{1/2}\psi\big\|_{L^2(G,\mathbb{C}^2)}^2. \qquad (26)$$
In view of the previous remarks, and using again the Spectral Theorem, the action functional (21) can be rewritten as
$$\mathcal{L}(\psi) = \frac{1}{2}\big(\|\psi^+\|^2 - \|\psi^-\|^2\big) - \frac{\omega}{2}\|\psi\|_{L^2(G,\mathbb{C}^2)}^2 - \frac{1}{p}\|\psi\|_{L^p(K,\mathbb{C}^2)}^p,$$
which is the most convenient form for proving that $\mathcal{L}$ has in fact a linking geometry (see, e.g., [50, Section II.8]).
Lemma 3.4. For every N ∈ ℕ there exist R = R(N, p) > 0 and an N-dimensional subspace $Z_N \subset Y^+$ such that
$$\sup_{\psi \in \partial M_N} \mathcal{L}(\psi) \leq 0, \qquad (27)$$
where $M_N := \{\psi = \varphi + \xi : \varphi \in Y^-,\ \xi \in Z_N,\ \|\psi\| \leq R\}$.

Proof. Let e be a bounded edge, associated with the segment $I_e = [0, \ell_e]$, and let V be the space of spinors supported on $I_e$ …, which is clearly a subset of $\operatorname{dom}(\mathcal{D})$ and hence of Y. Moreover, a simple computation shows that …, and thus, in view of (26), if $\eta^1 \neq 0$ then $\eta^+ \neq 0$. Assume first that $\dim V^+ = \infty$, where $V^\pm := P^\pm V$. It is clear that, if $\|\varphi\| \geq \|\xi\|$, then … If, on the contrary, $\|\xi\| > \|\varphi\|$, then some further effort is required. Since $\psi \in \partial M_N$, $\|\psi\| = R$, and thus … From the Hölder inequality (recall that ℓ = |K|), …, and hence … Now, by definition, $\xi = \sum_{j=1}^N \lambda_j \eta^+_j$ for some $\lambda_j \in \mathbb{C}$. On the other hand, denoting by $\eta^-_j$ the spinors such that $\eta^-_j + \eta^+_j =: \eta_j$, …, while, as $\xi + \chi = \sum_{j=1}^N \lambda_j \eta_j$ vanishes outside $I_e \subset K$, … Combining (32) and (33) we get …, and, plugging into (31), … Then, since χ and ξ are orthogonal by construction and χ + ξ belongs to a finite-dimensional space (so that its $L^2$-norm is equivalent to the Y-norm), there exists C > 0 such that …, and thus, for R large, the claim is proved. Finally, consider the case $\dim V^+ < \infty$. As $\dim V = \infty$, we have $\dim V^- = \infty$. On the other hand, there holds $\sigma_2 V^- \subset Y^+$ and $\sigma_2 V^+ \subset Y^-$, where
$$\sigma_2 := \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$$
is the remaining Pauli matrix, as it anticommutes with the Dirac operator. Therefore (also recalling that $\sigma_2$ is unitary), if one defines $\widetilde{V} = \sigma_2 V$, so that $\widetilde{V}^+ = \sigma_2 V^-$ and $\widetilde{V}^- = \sigma_2 V^+$, then (arguing as before) one can prove (27) again.
Lemma 3.5. There exist r, ρ > 0 such that
$$\inf_{\psi \in Y^+,\ \|\psi\| = r} \mathcal{L}(\psi) \geq \rho > 0.$$

Proof. The claim is an immediate consequence of the definition of $\mathcal{L}$ given in (21), in view of the fact that p > 2 and ω ∈ (−m, m).
Finally, we introduce a further representation of the functional $\mathcal{L}$, which will be useful in the sequel. Preliminarily, note that, as the spectrum of the (self-adjoint) operator $\mathcal{D} - \omega$ is given by $\sigma(\mathcal{D} - \omega) = (-\infty, -m-\omega] \cup [m-\omega, +\infty)$ (and as |ω| < m), one can define an equivalent norm and the two spectral projectors $P^\pm_\omega$ onto the positive and negative (respectively) spectral subspaces of $\mathcal{D} - \omega$. As a consequence, (26) and (21) can be rewritten accordingly, as sketched below.
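In the sequel we use the resulting objects in the following form, which is the natural one suggested by the discussion above (we state it explicitly as a reading aid; only these structural features are used later):
$$\|\psi\|_\omega^2 := \big\||\mathcal{D} - \omega|^{1/2}\psi\big\|_{L^2(G,\mathbb{C}^2)}^2, \qquad \mathcal{L}(\psi) = \frac{1}{2}\Big(\|P^+_\omega\psi\|_\omega^2 - \|P^-_\omega\psi\|_\omega^2\Big) - \frac{1}{p}\|\psi\|_{L^p(K,\mathbb{C}^2)}^p,$$
so that the frequency term is absorbed into the quadratic part of the action.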
3.2. Existence and multiplicity of the bound states. The aim of this subsection is to prove, for p > 2, the existence of infinitely many (pairs of) bound states of the NLDE for any frequency ω ∈ (−m, m). The techniques used below (such as the Krasnoselskij genus, the pseudo-gradient flow, etc.) are well known in the literature in their abstract setting and can be found, for instance, in [44,50] (see also [23] for an application to nonlinear Dirac equations).
Recall the definition of the Krasnoselskij genus for subsets of Y.
Definition 3.6. Let $\mathcal{A}$ be the family of sets A ⊂ Y \ {0} such that A is closed and symmetric (namely, ψ ∈ A ⇒ −ψ ∈ A). For every A ∈ $\mathcal{A}$, the genus of A is the natural number defined by
$$\gamma[A] := \min\big\{n \in \mathbb{N} : \exists\,\varphi : A \to \mathbb{R}^n \setminus \{0\}\ \text{continuous and odd}\big\}.$$
If no such φ exists, then one sets γ[A] = ∞.
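For orientation, we recall a classical example from genus theory (a known fact, not stated in this paper): by the Borsuk-Ulam theorem, the unit sphere $\mathbb{S}^{N-1} \subset \mathbb{R}^N$ has $\gamma[\mathbb{S}^{N-1}] = N$, since the identity is an odd map into $\mathbb{R}^N \setminus \{0\}$, while no continuous odd map into $\mathbb{R}^n \setminus \{0\}$ exists for n < N. More generally, any set odd-homeomorphic to an N-dimensional sphere has genus N + 1; this is how the genus enters the min-max construction below.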
In addition, one easily sees that the action functional $\mathcal{L}$ is even, i.e. $\mathcal{L}(-\psi) = \mathcal{L}(\psi)$ for every ψ ∈ Y.
As a consequence, it is well known (see, e.g., [44, Appendix]) that there exists an odd pseudo-gradient flow $(h_t)_{t \in \mathbb{R}}$ associated with the functional $\mathcal{L}$, which satisfies some useful properties. This construction is based on well-known arguments and, thus, we only present an outline here, referring the reader to [44, Appendix] and [50, Chapter II] for details. Since the interaction term is concentrated on a compact set K ⊂ G, the compactness of the Sobolev embeddings implies that $h_t$ can be chosen of the form $h_t = \Lambda_t + K_t$, where $\Lambda_t$ is an isomorphism and $K_t$ is a compact map, for all t ≥ 0. Moreover, one can also prove that $\Lambda_t(Y^\pm) \subset Y^\pm$; that is, $Y^\pm$ are invariant under the action of $\Lambda_t$ for all t ≥ 0. Fix, then, ε > 0 such that ρ − ε > 0 (with ρ given by Lemma 3.5). Exploiting suitable cut-off functions on the pseudo-gradient vector field, one can arrange that, for all t ≥ 0, $h_t$ acts as the identity on the sublevel set $\{\mathcal{L} \leq \rho - \varepsilon\}$; namely, the level sets of the action below ρ − ε are not modified by the flow.
In view of these remarks, we can state the following lemma.
Lemma 3.7. …

Proof. For each fixed ψ ∈ Y, the function $t \mapsto \mathcal{L}(h_t(\psi))$ is increasing. Then Lemma 3.4 implies that … Note, also, that by the group property of the pseudo-gradient flow … Then, from (37), a degree-theory argument (see, e.g., [50, Section II.8]) shows that … On the other hand, by (37) and Lemma 3.4, it is easy to see that …, and thus …, where we used the fact that $\Lambda_s$ is an isomorphism for all s ∈ ℝ and preserves $Y^\pm$. Now, since $h_t(0) = 0$ and $\Lambda_t(0) = 0$, we have …, and hence, exploiting (39) and the monotonicity of the genus, …; namely, the relevant set is homeomorphic to an N-dimensional sphere.
Using Lemma 3.7, we can prove the existence of Palais-Smale sequences at the min-max levels.
Corollary 3.8. Let the assumptions of Lemma 3.7 be satisfied and define, for any N ∈ ℕ, the min-max levels
$$\alpha_N := \inf_{A \in \mathcal{F}_N}\,\sup_{\psi \in A} \mathcal{L}(\psi), \qquad (40)$$
with $\mathcal{F}_N$ the corresponding classes of admissible sets. Then, for every N ∈ ℕ, there exists a Palais-Smale sequence for $\mathcal{L}$ at level $\alpha_N$. In addition, there results (42).

Proof. The existence of a Palais-Smale sequence for $\mathcal{L}$ at level $\alpha_N$ follows by standard deformation arguments, so we only sketch the proof (see [44,50] for details).
Preliminarily, we note that by Lemma 3.7 (and the definition of $M_N$) the classes $\mathcal{F}_N$ are not empty, and hence the levels $\alpha_N$ are well defined. Suppose now, by contradiction, that there is no Palais-Smale sequence at level $\alpha_N$. Then, since $\mathcal{L} \in C^1$, there exist δ, ε > 0 such that … In addition, from (40) there exists $A \in \mathcal{F}_N$ such that $\sup_A \mathcal{L} \leq \alpha_N + \varepsilon$, and hence, combining with (43), we can see that there exists T > 0 such that … As a consequence, if one shows that $h_{-T}(X_\varepsilon) \in \mathcal{F}_N$, then one obtains a contradiction. First, observe that $h_{-T}(X_\varepsilon) \in \mathcal{A}$, as $h_s$ is odd, so that it suffices to prove that … On the other hand, …
and then the monotonicity of the genus gives …
Therefore, $h_{-T}(X_\varepsilon) \in \mathcal{F}_N$, and this entails that …, which is a contradiction. Finally, the first line of (42) follows again by the monotonicity of the genus, whereas the second one is a direct (up to some computations) consequence of Lemmas 3.4, 3.5 and 3.7.

Remark 3.9. It is easy to see that there are no non-trivial critical points of the action functional $\mathcal{L}$ at levels α ≤ 0. Indeed, let ψ ∈ Y be such that $d\mathcal{L}(\psi) = 0$ and $\mathcal{L}(\psi) = \alpha$; then
$$\alpha = \mathcal{L}(\psi) - \tfrac{1}{2}\,d\mathcal{L}(\psi)[\psi] = \Big(\tfrac{1}{2} - \tfrac{1}{p}\Big)\|\psi\|_{L^p(K,\mathbb{C}^2)}^p \geq 0,$$
which implies that α ≥ 0. Suppose, now, that α = 0. Consequently, ψ vanishes on the compact core K. It then follows that $\psi^1_e(\mathrm{v}) = \psi^2_e(\mathrm{v}) = 0$ for every vertex v and every edge e of K, and thus, exploiting (8) and (25), that ψ ≡ 0 on G.

Now, before giving the proof of Theorem 2.11, we discuss the compactness properties of Palais-Smale sequences.

Proposition 3.10. For every α > 0, Palais-Smale sequences at level α are bounded in Y.
Proof. Let $(\psi_n)$ be a Palais-Smale sequence at level α > 0 and assume, by contradiction, that, up to subsequences, $\|\psi_n\| \to \infty$. Recalling the definition of $P^\pm_\omega$ given by (36), and using the Hölder inequality and (12), we get … On the other hand, by the definition of $P^\pm_\omega$, one sees that …, and, combining with (44), … Arguing as before, one also finds that …, and hence a contradiction arises, as p > 2.

Lemma 3.11. For every α > 0, Palais-Smale sequences at level α are pre-compact in Y.
Proof. Let $(\psi_n)$ be a Palais-Smale sequence at level α > 0. From Proposition 3.10, it is bounded, and then, up to subsequences, $\psi_n \rightharpoonup \psi$ in Y. On the other hand, by definition, and (again) by the Hölder inequality and (45), … As a consequence, combining with (46), … In addition, since $(\psi_n - \psi) \rightharpoonup 0$ in Y, we get …, and, summing with (47), there results … Since, analogously, one can prove that …, we obtain $\psi_n \to \psi$ in Y, which concludes the proof.
Finally, we have all the ingredients in order to prove Theorem 2.11.
Proof of Theorem 2.11. By Corollary 3.8, for every N ∈ ℕ there exists at least one Palais-Smale sequence at level $\alpha_N > 0$ (defined by (40)) and, by Lemma 3.11, it converges to a critical point of $\mathcal{L}$, which by Proposition 3.1 is a bound state of the NLDE. Now, if the inequalities in (42) are strict, then one immediately obtains the claim. If, instead, $\alpha_j = \alpha_{j+1} = \dots = \alpha_{j+q} = \alpha$ for some q ≥ 1, then the claim follows by [8, Proposition 10.8], as the properties of the genus imply the existence of infinitely many critical points at level α.
Nonrelativistic limit of solutions
In this section we prove Theorem 2.12; namely, that there exists a wide class of (pairs of) sequences $(c_n)$, $(\omega_n)$ for which the nonrelativistic limit holds. More precisely, we show that, with such a choice of parameters, the bound states of the NLDE converge, as $c_n \to +\infty$, to bound states of the NLSE with α = 2m.
The strategy we use is the one developed by M.J. Esteban and E. Séré in [24] for the case of Dirac-Fock equations. However, the differences between the two equations and the frameworks under discussion call for some relevant modifications. In particular, while in [24] one of the main points is the estimate of the sequence of Lagrange multipliers of bound states with fixed $L^2$-norm, here the major point (since there is no constraint) is to prove that the limit is non-trivial. Moreover, we also have to distinguish different cases according to the exponent p ∈ (2, 6) of the nonlinearity.
Preliminarily, note that, since the role of the (sequence of the) speed of light is central here, we can no longer set c = 1. As a consequence, all the previous results have to be understood with m replaced by $mc_n^2$ (and ω replaced by $\omega_n$). In addition, we denote by $\mathcal{D}_n$ the Dirac operator with c = $c_n$ and by $\mathcal{L}_n$ the action functional with $\mathcal{D} = \mathcal{D}_n$ and ω = $\omega_n$. There are clearly many other quantities which actually depend on the index n (such as, for instance, the form domain Y, $Z_N$, etc.), but since this dependence is not crucial we omit it for the sake of simplicity. In addition, in the following we always make the assumptions (18), (19) and (20) on the parameters $(c_n)$, $(\omega_n)$. In particular, those assumptions immediately imply (48). Now, from Theorem 2.11, for every fixed N ∈ ℕ, there exists at least a pair of bound states of frequency $\omega_n$ at level $\alpha^n_N$ of the NLDE at speed of light $c_n$. Hence, throughout, we denote by $(\psi_n)$ a sequence of bound states corresponding to those values of the parameters. Since all the following results hold for every fixed N ∈ ℕ, the dependence on N is understood in the sequel (unless stated otherwise).
4.1. $H^1$-boundedness of the sequence of bound states. The first step is to prove that the sequence $(\psi_n)$ defined above is bounded in $L^p(K, \mathbb{C}^2)$.

Lemma 4.1. Under the assumptions (18), (19) and (20), the sequence $(\psi_n)$ is bounded in $L^p(K, \mathbb{C}^2)$ (uniformly with respect to n), as are the associated min-max levels $(\alpha^n_N)$.

Proof. First, recalling (41) and following the notation of the proof of Lemma 3.4, one sees that … In addition, following again the proof of Lemma 3.4, given an orthonormal basis $\eta^+_j$, j = 1, ..., N, of $Z_N$, every spinor $\psi \in Y^- \oplus Z_N$ can be decomposed as $\psi = \varphi + \xi$, with $\varphi \in Y^-$ and $\xi \in Z_N$. Arguing as in (30)-(34) we get … On the other hand, exploiting (29) and (48), there results … Hence, combining (49) and (50), …, and thus, since V does not depend on n and since p > 2, … Finally, as $\psi_n$ is a critical point of the action functional, …, which concludes the proof.
We can now prove that boundedness in $L^p(K, \mathbb{C}^2)$ entails boundedness in $L^2(G, \mathbb{C}^2)$.

Lemma 4.2. Under the assumptions (18), (19) and (20), the sequence $(\psi_n)$ is bounded in $L^2(G, \mathbb{C}^2)$.
Proof. For the sake of simplicity, denote by $\psi^\pm$ the projections of the spinor ψ ∈ Y given by (36) (with ω = $\omega_n$). As the spectrum of the operator $\mathcal{D}_n - \omega_n$ does not contain zero and $\psi_n$ satisfies (14) (with c = $c_n$ and ω = $\omega_n$), the Hölder inequality yields … for some C > 0, where in the last inequality we used the fact that the decomposition induced by $P^\pm_{\omega_n}$ induces an analogous decomposition on $L^p(K)$, that is … Moreover, using (51) one can prove that … Then, combining the above observations with Lemma 4.1 and (48), there results … An analogous argument gives …, and then …, which concludes the proof.
Finally, we can deduce boundedness in $H^1(G, \mathbb{C}^2)$. Preliminarily, we recall two Gagliardo-Nirenberg inequalities for spinors, which can be easily deduced from those for functions (see, e.g., [49, Proposition 2.6]): for every p ≥ 2 there exists $C_p > 0$ such that (52) holds; moreover, there exists $C_\infty > 0$ such that (53) holds (their standard form is recalled after the proof of Lemma 4.3 below).

Lemma 4.3. Let p ∈ (2, 6). Under the assumptions (18), (19) and (20), the sequence $(\psi_n)$ is bounded in $H^1(G, \mathbb{C}^2)$.

Proof. First, recall that, since the $\psi_n$ are bound states, they satisfy (edge by edge) equation (54). The squared $L^2(G, \mathbb{C}^2)$-norm of the right-hand side of (54) reads … Let us estimate the last two integrals. Using (53), Lemma 4.1 and Lemma 4.2, we get … On the other hand, by (52) and Lemma 4.2, … Since an easy computation shows that …, combining (55), (56), (57) and (58), we obtain that …, so that, from a repeated use of (18) and (19), …
Hence, the claim follows by the assumption p < 6.
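The inequalities (52) and (53) recalled above presumably take the standard graph form of the Gagliardo-Nirenberg inequalities; for the reader's convenience we record the version we have in mind (the precise constants are immaterial): for every p ≥ 2 there exist $C_p, C_\infty > 0$ such that
$$\|\psi\|_{L^p(G,\mathbb{C}^2)}^p \leq C_p\,\|\psi\|_{L^2(G,\mathbb{C}^2)}^{\frac{p}{2}+1}\,\|\psi\|_{H^1(G,\mathbb{C}^2)}^{\frac{p}{2}-1}, \qquad \|\psi\|_{L^\infty(G,\mathbb{C}^2)}^2 \leq C_\infty\,\|\psi\|_{L^2(G,\mathbb{C}^2)}\,\|\psi\|_{H^1(G,\mathbb{C}^2)},$$
both following by applying the corresponding scalar inequalities to each component of the spinor.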
Remark 4.4. The above results also hold if (20) is replaced by the weaker assumption (48).
4.2. Passage to the limit. The last step consists in proving that the first components of the sequence of bound states $(\psi_n)$ converge to a bound state of the NLSE, while the second components converge to zero.
For the sake of simplicity, we assume throughout that the parameters p and λ are fixed and satisfy p ∈ (2, 6) and λ < 0.
In addition, we set $u_n := \psi^1_n$ and $v_n := \psi^2_n$ and, given the two sequences $(c_n)$ and $(\omega_n)$ introduced in the previous section (which satisfy (18), (19), (20) and (48)), we define the coefficient $b_n$ in terms of m, $c_n$ and $\omega_n$. Clearly, (48) implies that
$$b_n \to 2m, \qquad \text{as } n \to \infty, \qquad (60)$$
while (20) gives (61). We also recall that a function w : G → ℂ is a bound state of the NLSE with fixed frequency λ and α = 2m if and only if it is a critical point of the $C^2$ functional J : H → ℝ, where $H := \{w \in H^1(G) : (15) \text{ holds}\}$ with the norm induced by $H^1(G)$ (this can be easily proved arguing as in [1, Proposition 3.3]). It is also worth mentioning that a Palais-Smale sequence for J is a sequence $(w_n) \subset H$ along which J is bounded and $dJ(w_n) \to 0$ in the dual sense.
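For concreteness, a natural candidate for J, consistent with Definition 2.9 and with the normalization α = 2m (our reading; only its general shape is used in the sequel), is
$$J(w) := \frac{1}{2}\int_{G} |w'|^2\,dx - \frac{2m}{p}\int_{K} |w|^p\,dx - \frac{\lambda}{2}\int_{G} |w|^2\,dx,$$
whose critical points on H solve $-w'' - 2m\,\chi_K|w|^{p-2}w = \lambda w$ together with the Kirchhoff conditions (15)-(16). With this notation, the following property holds.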
Lemma 4.5. Let $(w_n)$ be a bounded sequence in H and, for every n, define the linear functional … Then, $(w_n)$ is a Palais-Smale sequence for J if and only if (62) holds.

Proof. The proof is immediate, noting that … and exploiting (60), (61) and the fact that $(w_n)$ is bounded in H.
The strategy for proving Theorem 2.12 is the following: (i) prove that the sequence $(v_n)$ converges to 0 in $H^1(G)$; (ii) prove that the sequence $(u_n)$ is bounded away from zero in $H^1(G)$; (iii) prove that the sequence $(u_n)$ satisfies (62), since, by Lemmas 4.3 and 4.5, this entails that it is a Palais-Smale sequence for J; (iv) prove that the sequence $(u_n)$ converges (up to subsequences) in H to a function u, which is then a bound state of the NLSE with frequency λ < 0. We observe that in the following we always tacitly use the fact that, since each $\psi_n$ is a bound state of the NLDE, $u_n \in H$, whereas $v_n \notin H$ in general, but satisfies (9). In addition, we highlight that, in the sequel, we often use a "formal" commutation between the differential operator $(\cdot)'$ and $\chi_K$. Clearly, this is just a compact notation (which avoids tedious edge-by-edge computations) that simply reflects the different form of the NLDE on the bounded edges, due to the presence of the localized nonlinearity.
As a first step, we prove item (i). As a byproduct of the proof, we also obtain an estimate of the speed of convergence of $(v_n)$.
Lemma 4.6. The sequence $(v_n)$ converges to 0 in $H^1(G)$ as n → ∞; more precisely, (63) holds.

Proof. As $\psi_n$ is a bound state of the NLDE, rewriting equation (14) in terms of the components $u_n$, $v_n$ gives
$$-i c_n v_n' + m c_n^2 u_n - \chi_K|\psi_n|^{p-2}u_n = \omega_n u_n, \qquad (64)$$
$$-i c_n u_n' - m c_n^2 v_n - \chi_K|\psi_n|^{p-2}v_n = \omega_n v_n. \qquad (65)$$
Dividing (64) by $c_n$ and using (48) and Lemma 4.3, we get … On the other hand, dividing (65) by $c_n^2$ and using again Lemma 4.3, there results … Finally, combining with (66), one obtains (63).
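The mechanism behind Lemma 4.6 (and behind the nonrelativistic scaling as a whole) can be sketched as follows; the computation is ours and purely formal. Solving (65) for $v_n$ gives
$$v_n = \frac{-i c_n u_n'}{\omega_n + mc_n^2 + \chi_K|\psi_n|^{p-2}} \approx -\frac{i}{2mc_n}\,u_n' \qquad \text{when } \omega_n \approx mc_n^2,$$
so that $v_n = O(c_n^{-1})$; substituting this back into (64) formally produces $-\frac{1}{2m}u_n'' - \chi_K|u_n|^{p-2}u_n \approx (\omega_n - mc_n^2)u_n$, i.e., after multiplying by 2m, an NLSE with α = 2m, consistently with Theorem 2.12.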
Item (ii) requires some further effort.
Lemma 4.7. There exists μ > 0 such that $\|u_n\|_{H^1(G)} \geq \mu$ for every n. (67)

Proof. Assume, by contradiction, that (67) does not hold, namely that, up to subsequences, $\|u_n\|_{H^1(G)} \to 0$. Dividing (64) by $c_n$ and rearranging terms yields (69), and then, using (48), we find … Moreover, (65) can be rewritten as (70) and, since (again by (48)) …, combining with (70) gives (71). Note that (71) also shows that $u_n$ is of class $C^1$ on each edge. Now, plugging (71) into (69), one obtains (73). Clearly, (73) is to be meant in a distributional sense. However, observing that it can be written as … and that, consequently, the left-hand side belongs to $L^2(G)$ and is continuous edge by edge (recalling also that $u_n$ is of class $C^1$ edge by edge by (65)), the following multiplications by $u_n$ and integrations (by parts) can be proved to be rigorous in the Lebesgue sense ($v_{n,e}(\mathrm{v})_\pm$ meant as in Definition 2.3). Moreover, as $u_n$ and $v_n$ satisfy the vertex conditions (8) and (9) (respectively), one has …, while, for any v ∈ K and e ≻ v, there results … (where we used Lemma 4.3, (72) and the Sobolev embeddings). As a consequence (since the number of edges and vertices is finite), … Let us focus on the right-hand side of (73). After multiplication by $u_n$ and integration over G we have … The latter term can be easily estimated using the Hölder inequality and (60), (68) and (72), i.e.
… On the contrary, the former requires some further effort. Clearly, … Using (69) and again Lemma 4.3, we immediately find that … It remains to estimate $I_2$. We distinguish two cases. Estimate of $I_2$, case p ∈ (2, 4): as p − 4 < 0, there holds … As a consequence, … Moreover, …, whereas …, so that (since p > 2) … Estimate of $I_2$, case p ∈ [4, 6): as p − 4 ≥ 0, there holds …
and then, arguing as before, one can easily find that … Summing up, we have proved that for all p ∈ (2, 6) there results …, and hence, combining with (73), (74) and (75), we obtain …, which is the contradiction that concludes the proof.
We now prove item (iii).

Lemma 4.8. The sequence $(u_n)$ is a Palais-Smale sequence for J.

Proof. By Lemma 4.5 it is sufficient to prove (62). Take, then, φ ∈ H with $\|\varphi\|_{H^1(G)} \leq 1$. Multiplying (73) by φ and integrating over G (which is rigorous, as shown in the proof of Lemma 4.7), one gets (76). Arguing as in the proof of Lemma 4.7 and using Lemma 4.6, one can check that … (where throughout we mean that o(1) is independent of φ). Now, the first integral on the right-hand side of (76) reads …, where the former term is estimated by …, whereas the latter is estimated by … (exploiting Lemmas 4.3 and 4.6). It remains to discuss the last term on the right-hand side of (76). First note that … and that … Let us distinguish two cases (as in the proof of Lemma 4.7). Assume first that p ∈ (2, 4). Then 0 < (p−2)/2 < 1, and this implies that … (where we used again Lemmas 4.3 and 4.6).
Finally, we have all the ingredients to prove point (iv) and thus Theorem 2.12.
holds ($\langle\cdot,\cdot\rangle_{\mathcal{H}}$ denoting the scalar product in $\mathcal{H}$) and the mapping $(\Gamma_0, \Gamma_1) : \operatorname{dom}(A^*) \to \mathcal{H} \times \mathcal{H}$ is surjective.

Definition A.2. Let $\Pi = \{\mathcal{H}, \Gamma_0, \Gamma_1\}$ be a boundary triplet for the adjoint operator $A^*$. Consider, in addition, the operator $A_0 := A^*|_{\ker \Gamma_0}$ and denote by $\rho(A_0)$ its resolvent set. Then, the operator-valued functions
$$\gamma(z) := \big(\Gamma_0|_{\ker(A^* - z)}\big)^{-1} \qquad\text{and}\qquad M(z) := \Gamma_1\,\gamma(z), \qquad z \in \rho(A_0),$$
are called the γ-field and the Weyl function, respectively, associated with Π.
We can now sketch how to apply the theory of boundary triplets to metric graphs. First, observe that the set E of the edges of a metric graph G can be decomposed into two subsets: the set $E_s$ of the bounded edges and the set $E_h$ of the half-lines.
Fix, then, e ∈ $E_s$ and consider the corresponding minimal operator $\widetilde{D}_e$ on $\mathcal{H}_e = L^2(I_e) \otimes \mathbb{C}^2$, with the same action as (6), whose domain is given by the direct sum of the domains of the addends. The spectrum of the operator $D^0$ is given by the superposition of the spectra of the addends, that is … Precisely, in [20] it is proved that each segment $I_e$, e ∈ $E_s$, contributes to the point spectrum of $D^0$ with eigenvalues given by (83), while the spectrum on the half-lines, on the contrary, is purely absolutely continuous and is given by (84) (with the obvious definition of the domains). Define, also, the trace operators … One can prove that $\{\mathcal{H}, \Gamma_0, \Gamma_1\}$, with $\mathcal{H} = \mathbb{C}^M$ and $M = 2|E_s| + |E_h|$, is a boundary triplet for the operator $D^*$, and it is possible to find the corresponding γ-field and Weyl function arguing as before.
On the other hand, note that the boundary conditions (8)-(9) are "local", in the sense that at each vertex they are expressed independently of the conditions at the other vertices. As a consequence, they can be expressed by means of suitable block-diagonal matrices $A, B \in \mathbb{C}^{M \times M}$, with $AB^* = BA^*$, as $A\Gamma_0\psi = B\Gamma_1\psi$ (the model case at the end of the section clarifies the above notation). Observe also that the sign convention of (9) can be incorporated in the definition of the matrix B.
Summing up, the Dirac operator with Kirchhoff-type conditions can be defined as the restriction of $D^*$ to the spinors satisfying $A\Gamma_0\psi = B\Gamma_1\psi$, and thus the operator is self-adjoint (again) by construction.
Remark A.3. The boundary-triplet method provides an alternative way to prove the self-adjointness of the Dirac operator with conditions (8)-(9), different from the classical approach à la Von Neumann adopted in [16].
It remains to prove (10). As in the Schrödinger case [32], the following Krein-type formula for the resolvent operators can be proved: …, and thus the resolvent of the operator D can be regarded as a perturbation of the resolvent of the operator $D^0$. In the above formula, γ(·) and M(·) are the γ-field and the Weyl function, respectively, associated with D (see [20]). It turns out that the operator appearing on the right-hand side of (85) is of finite rank. Therefore, using Weyl's Theorem [46, Thm XIII.14], one can conclude from (85) that
$$\sigma_{ess}(D) = \sigma_{ess}(D^0) = (-\infty, -mc^2] \cup [mc^2, +\infty).$$
Finally, recall that the point eigenvalues (83) of $D^0$ are embedded in the continuous spectrum (84). Hence, in order to conclude the proof of Proposition 2.5, we have to show that they cannot enter the gap $(-mc^2, mc^2)$ when the vertex conditions (8)-(9) are imposed.
Then, ψ1 turns out to be an eigenfunction of the Laplacian with Kirchhoff vertex conditions on G. Hence, multiplying (88) by ψ1 and integrating, one can see that |λ| > mc², thus proving that there cannot be any eigenvalue of D in (−mc², mc²). In other words, imposing Kirchhoff-type vertex conditions, the eigenvalues (83) can "move" to the thresholds ±mc², but cannot "enter the gap".
A.1. A model case: the triple junction. Let us consider an example in order to clarify the main ideas explained above. Consider a 3-star graph with one bounded edge and two half-lines, as depicted in Figure 4. In this case the finite edge is identified with the interval I = [0, L], and 0 corresponds to the common vertex of the segment and the half-lines. A suitable choice of the trace operators can then be made (where, choosing the parameters a, b ∈ C, we can fix the value of the spinor at the non-connected vertex). Since, as already remarked, conditions (8)–(9) are defined independently at each vertex, one can iterate the above construction for a more general graph structure, thus obtaining matrices A, B with a block structure, each block corresponding to a vertex (for the sake of brevity we omit the details).
Appendix B. Definition of the form domain
In Section 2.3 we claimed that the form domain of the Dirac operator D can be defined by interpolating between L²(G, C²) and the operator domain (7). The aim of this section is to provide a more detailed justification of this statement, combining the Spectral Theorem and real interpolation theory.
One of the most commonly used forms of the Spectral Theorem states, roughly speaking, that every self-adjoint operator on a Hilbert space is isometric to a multiplication operator on a suitable L²-space. In this sense the operator can be "diagonalized" in an abstract way. The above theorem essentially says that H is isometric to the multiplication operator by f (still denoted by the same symbol) on the space L²(M, dµ), whose domain is given by dom(f) := {ϕ ∈ L²(M, dµ) : f(·)ϕ(·) ∈ L²(M, dµ)}, endowed with the graph norm. The form domain of f has an obvious explicit definition, as f is a multiplication operator. Nevertheless, it can also be recovered using real interpolation theory (we follow the presentation given in [5, 9]). Consider the Hilbert spaces H₀ := L²(M, dµ) with the norm ‖x‖₀ := ‖x‖_{L²(dµ)}, and H₁ := dom(f), so that H₁ ⊂ H₀. Define, in addition, the quadratic version of Peetre's K-functional. The squared norm ‖x‖₁² is a densely defined quadratic form on H₀, represented by ‖x‖₁² = ⟨(1 + f²(·))x, x⟩₀, where ⟨·, ·⟩₀ is the scalar product of H₀.
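For reference, one standard normalization of the quadratic K-functional on the couple (H₀, H₁) is the following (this particular form is an assumption on our part, as conventions vary):

```latex
K(t, x; H_0, H_1) \;=\; \inf_{\substack{x = x_0 + x_1,\\ x_1 \in H_1}}
  \bigl( \lVert x_0 \rVert_0^2 + t^2 \lVert x_1 \rVert_1^2 \bigr)^{1/2},
\qquad t > 0 .
```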
Figure 2. The compact core of the graph in Figure 1.
Remark 3.3. Borel functional calculus for self-adjoint operators [45, Theorem VIII.5] allows one to define the operators |D|^α, α > 0, and, more generally, operators of the form f(D), where f is a Borel function on R.
Lemma 4.8. The sequence (u_n) is a Palais-Smale sequence for J.
with the same action as (6) and domain H¹₀(I_e) ⊗ C². The domain of the adjoint operator, which acts as D_e, is dom(D*_e) = H¹(I_e) ⊗ C², and a suitable choice of trace operators (introduced in [27]) is given by Γ^e_{0,1}: H¹(I_e) ⊗ C² → C². In addition, given the boundary triplet {H_e, Γ^e_0, Γ^e_1}, with H_e = C², one can compute the gamma-field and the Weyl function using (82), and prove that D*_e has defect indices n_±(D_e) = 2. Note, also, that the operator D_e, with the same action as D*_e and domain dom(D_e) = ker Γ^e_0, is self-adjoint by construction. Analogously, fix e′ ∈ E_h and consider the minimal operator D_{e′} on H_{e′} = L²(R⁺) ⊗ C², with the same action as before and domain H¹₀(R⁺) ⊗ C². The adjoint operator has domain dom(D*_{e′}) = H¹(R⁺) ⊗ C², and the trace operators Γ^{e′}_{0,1}: H¹(R⁺) ⊗ C² → C can be defined analogously. The gamma-field and the Weyl function are provided by (82) (with respect to the boundary triplet {H_{e′}, Γ^{e′}_0, Γ^{e′}_1}, with H_{e′} = C), while the defect indices are n_±(D_{e′}) = 1. As before, the operator D_{e′} := D*_{e′}, dom(D_{e′}) := ker Γ^{e′}_0, is self-adjoint by construction. As a further step, consider the operator on H = ⊕_{e∈E_s} H_e ⊕ ⊕_{e′∈E_h} H_{e′} defined as the direct sum D_0 := ⊕_{e∈E_s} D_e ⊕ ⊕_{e′∈E_h} D_{e′}. Let us describe, now, the Dirac operator introduced in Definition 2.3 using boundary triplets. Considering the operator D := ⊕_{e∈E_s} D_e ⊕ ⊕_{e′∈E_h} D_{e′}, one can conclude from (85) that σ_ess(D) = σ_ess(D_0) = (−∞, −mc²] ∪ [mc², +∞).
Figure 4. A 3-star graph with a finite edge (edges e1, e2, e3).
Therefore, multiplying (73) by u_n and integrating (by parts) over G, at the l.h.s. we obtain a sum of edgewise integrals, where we denote by u_{n,e} (and v_{n,e}) the restriction of u_n (and v_n) to the edge (represented by) I_e, and d/dx_e is to be meant as in Definition 2.9. Using (65) and the fact that u_n is of class C¹ (edge by edge), we find that the resulting vertex terms involve u_{n,e}(v), v_{n,e}(v) and the squared moduli |u_{n,e}(v)|², |v_{n,e}(v)|².
"year": 2018,
"sha1": "34d366dbb4c65c40381d0e0d7093ea1c33ec8079",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1807.06937",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3afd254b3379e7f4338724280f80532c4554e008",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
Optimize_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil
This paper addresses the problem of abusive comment detection in low-resource Indic languages. Abusive comments are statements that are offensive to a person or a group of people, targeted toward individuals belonging to specific ethnicities, genders, castes, races, sexualities, etc. Abusive comment detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team — Optimize_Prime — in the ACL 2022 shared task "Abusive Comment Detection in Tamil." The task is to detect and classify YouTube comments in Tamil and Tamil-English code-mixed format into multiple categories. We used three methods: ensemble models, recurrent neural networks, and transformers. On the Tamil data, MuRIL and XLM-RoBERTa were our best-performing models, with a macro-averaged f1 score of 0.43. Furthermore, for the code-mixed data, MuRIL and M-BERT provided the best results, with a macro-averaged f1 score of 0.45.
Introduction
The rise of social media platforms like Facebook and Twitter has led to the exchange of massive amounts of information on the internet. With the increase in the number of users and platforms, problems like hate speech and cyberbullying have also increased (Chakravarthi, 2020). Abusive comments are comments that are offensive towards a particular individual or a group of individuals. Online abuse has led to problems like lowered self-esteem, depression, harassment, and even suicide in some severe cases. Hence, detecting and dealing with such comments is of utmost importance. Classifying detected comments helps determine the severity of the comment and will also help the authorities take appropriate action against the individual. Our task is to detect and classify abusive comments written in Tamil. Abusive comment detection is a text classification problem. Text classification is a technique that extracts features from text and assigns a set of predefined categories (classes) to it.
Traditionally, text classification was done using linear classifiers on the sentence embeddings of text. This was followed by recurrent neural networks like LSTMs, which gave promising results. With Vaswani et al. (2017), transformers were introduced to the field of natural language processing. They have an attention layer that provides context to words in the text. The introduction of the transformer architecture has led to the development of many variations of the transformer, like BERT (Devlin et al., 2018), XLM-RoBERTa (Conneau et al., 2019), MuRIL (Khanuja et al., 2021), etc. In this paper, we use different transformer-based models for abusive comment detection in Tamil. We also use RNN models like LSTMs, a newer model, ULMFiT, and a type of ensemble model. We compare the results obtained from all three approaches to determine the optimum model for this task.
Related Work
Tamil is a low-resource language, so finding properly annotated data is challenging. In order to encourage research in Tamil, datasets have been created by Chakravarthi et al. (2020). Pitsilis et al. (2018) tried an RNN-based approach for detecting offensive language in tweets. Arora (2020) developed a model for detecting hate speech in Tamil-English code-mixed social media comments using a pre-trained version of ULMFiT. After the introduction of transformers in Vaswani et al. (2017), the use of transformers for NLP tasks increased.
The release of BERT (Devlin et al., 2018) paved the way for many more variations of transformers. Mishra and Mishra (2019) showcased the results for HASOC in Indo-European languages, where they used multilingual BERT and monolingual BERT. Some work has been done by Ziehe et al. (2021) in English, Malayalam, and Tamil, aiming to detect hope speech, which is also a text classification task. They fine-tuned XLM-RoBERTa (Conneau et al., 2019) for hope speech detection.
Dataset Description
The shared task on Abusive Comment Detection in Tamil at ACL 2022 aims to detect and reduce abusive comments on social media. The main objective of the shared task is to design systems to detect and classify instances of hate speech in Tamil and Tamil-English code-mixed YouTube comments. The Abusive Comment Detection dataset (Priyadharshini et al., 2022) consists of Tamil and Tamil-English comments collected from YouTube comment sections. Each data point consists of a comment and its corresponding label among nine classes: Misandry, Counterspeech, Misogyny, Xenophobia, Hope-Speech, Homophobia, Transphobic, Not-Tamil, and None-of-the-above.
Tamil Data
The train, dev, and test datasets have 2240, 560, and 700 data points, respectively. Each data point in the training data has the text in Tamil followed by its corresponding label.
Tamil-English Codemixed
The train, dev, and test datasets have 5948, 1488, and 1859 data points, respectively. Each data point has the actual comment in a code-mixed format.
Code-mixed means text that alternates between two languages. In this case, the two languages are Tamil and English.
There is a significant class imbalance observed in the dataset. The 'Not-Tamil' label has no test or dev data instances, so the classification is done only for eight labels.
Methodology
To classify YouTube comments, we used three different approaches: ensemble models, recurrent neural networks, and transformers.
Data cleaning
We removed punctuation, URL patterns, and stop words from the text. For better contextual understanding, we replaced emojis with their textual equivalents. For example, the laughing emoji was replaced by the Tamil equivalent of laughter.
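For concreteness, a minimal sketch of this cleaning step is shown below; the emoji-to-Tamil mapping and the stop-word set are hypothetical placeholders, since the paper does not publish its tables.

```python
import re

# Minimal sketch of the cleaning step described above. EMOJI_MAP and the
# stop-word set are made-up placeholders, not the paper's actual tables.
EMOJI_MAP = {"\U0001F602": " sirippu "}        # hypothetical: laughing emoji -> "laughter"
URL_RE = re.compile(r"https?://\S+|www\.\S+")
PUNCT_RE = re.compile(r"[^\w\s]", re.UNICODE)  # keeps Tamil letters, drops punctuation

def clean(text: str, stop_words: set) -> str:
    text = URL_RE.sub(" ", text)               # remove URL patterns
    for emoji, word in EMOJI_MAP.items():      # emojis -> textual equivalents
        text = text.replace(emoji, word)
    text = PUNCT_RE.sub(" ", text)
    return " ".join(t for t in text.split() if t not in stop_words)
```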
Data cleaning boosted the performance of all RNN models and all transformer models except for MuRIL. MuRIL and all ensemble models worked better without data cleaning.
Handling data imbalance
There is a significant class imbalance in the data. To reduce it, we used the following techniques: over-sampling, over-under sampling, Synthetic Minority Over-sampling Technique (SMOTE) (Chawla et al., 2002), and assigning class weights. In over-under sampling, we under-sample the classes having more instances than expected and over-sample those having fewer instances than expected, while keeping the length of the dataset constant. Over-under sampling worked best for all transformer and ensemble models, but it reduced the performance of RNN models. Assigning class weights boosted the performance of the M-BERT + Logistic Regression ensemble model.
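A sketch of these resampling options using the imbalanced-learn library follows; the per-class target count and the fixed-length feature matrix X are assumptions for illustration, not values from the paper.

```python
from collections import Counter
import numpy as np
from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.utils.class_weight import compute_class_weight

# Over-under sampling: bring every class toward a common target count.
# X is any fixed-length feature matrix, y the label array (both assumed given).
def over_under_sample(X, y, target_per_class):
    counts = Counter(y)
    over = {c: target_per_class for c, n in counts.items() if n < target_per_class}
    under = {c: target_per_class for c, n in counts.items() if n > target_per_class}
    X, y = RandomOverSampler(sampling_strategy=over).fit_resample(X, y)
    X, y = RandomUnderSampler(sampling_strategy=under).fit_resample(X, y)
    return X, y

# Alternatives we compared: SMOTE(...).fit_resample(X, y), or class weights
# for the loss function:
# weights = compute_class_weight("balanced", classes=np.unique(y), y=y)
```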
Ensemble model
As shown in Figure 1, we concatenate different machine learning models with multilingual BERT (M-BERT) (Devlin et al., 2018). Multilingual BERT is a BERT-based transformer trained on 104 languages; it simultaneously encodes knowledge of all these languages. M-BERT generates a sentence embedding vector of length 768. We then pass these embeddings to different machine learning models, as shown in Table 2. We used grid search, with the weighted-average f1 score as the scoring parameter, over 5-10 cross-validation folds to fine-tune the hyperparameters.
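The sketch below illustrates this pipeline for the SVM case; the hyperparameter grid and the use of the [CLS] token vector as the sentence embedding are our assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Frozen M-BERT sentence embeddings fed to a classical classifier, tuned by
# grid search with weighted F1 as the scoring parameter.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

@torch.no_grad()
def embed(texts):
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = bert(**enc)
    return out.last_hidden_state[:, 0, :].numpy()   # 768-dim [CLS] vectors

X_train = embed(train_texts)                        # train_texts, y_train assumed given
grid = GridSearchCV(SVC(),
                    {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]},  # illustrative grid
                    scoring="f1_weighted", cv=5)
grid.fit(X_train, y_train)
```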
RNN Models
We used two RNN models: Long Short-Term Memory (LSTM) networks and ULMFiT.
Vanilla LSTM
Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) can capture semantic information and long-term dependencies. We use an LSTM to set a baseline score for RNN models. We create word embeddings by choosing the top 64,000 most frequently occurring words in the dataset. The embedding layer then creates 100-dimensional vectors. The rest of the model includes a spatial dropout of 0.2, a single LSTM layer, and a final softmax activation function.
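A Keras sketch of this architecture follows; the LSTM hidden size is an assumption, as the paper does not state it.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SpatialDropout1D, LSTM, Dense

VOCAB_SIZE = 64_000   # top 64k most frequent words, as described above
EMBED_DIM = 100       # 100-dimensional word vectors, as described above
N_CLASSES = 8

model = Sequential([
    Embedding(VOCAB_SIZE, EMBED_DIM),
    SpatialDropout1D(0.2),          # spatial dropout of 0.2
    LSTM(100),                      # single LSTM layer; hidden size assumed
    Dense(N_CLASSES, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="adam", metrics=["accuracy"])
```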
ULMFiT
In transfer learning approaches, models are trained on large corpora, and their word embeddings are fine-tuned for specific tasks. This approach has been successful in many state-of-the-art models (Mikolov et al., 2013). However, Howard and Ruder (2018) argue that a better approach should be used instead of randomly initializing the remaining parameters. They proposed ULMFiT: Universal Language Model Fine-tuning for Text Classification.
We use team gauravarora's (Arora, 2020) open-sourced models from the shared task at HASOC-Dravidian-CodeMix FIRE-2020. They built corpora for language modeling from a large set of Wikipedia articles.
These models are based on the fastai (Howard and Gugger, 2020) implementation of ULMFiT. We tuned the models on the Tamil and codemix datasets individually and on the combined Tamil-codemix dataset.
Figure 1: Ensemble model architecture

For tokenization, we used the SentencePiece module. The language model is based on AWD-LSTM (Merity et al., 2018). The model consists of a regular LSTM cell with spatial dropout, followed by the classification model, which consists of two linear layers and a softmax.
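A rough fastai sketch of the classifier stage is shown below; the dataframe columns and the encoder weight name are placeholders, and the default fastai tokenizer shown here stands in for the SentencePiece setup described above.

```python
from fastai.text.all import (TextDataLoaders, AWD_LSTM,
                             text_classifier_learner, accuracy)

# Fine-tuning stage, assuming a dataframe `df` with comment/label columns.
# The Tamil/codemix pretrained weights come from the open-sourced HASOC-2020
# models; the encoder name below is a hypothetical placeholder.
dls = TextDataLoaders.from_df(df, text_col="comment", label_col="label")
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
# learn = learn.load_encoder("tamil_lm_encoder")   # hypothetical weight name
learn.fine_tune(5)
```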
Transformer models
Since our datasets consist of Tamil and Tamil-English code-mixed data, we use four transformers: MuRIL, XLM-RoBERTa, M-BERT, and IndicBERT. MuRIL (Khanuja et al., 2021) is a language model built explicitly for Indian languages and trained on large amounts of Indic text corpora. XLM-RoBERTa (Conneau et al., 2019) is a multilingual version of RoBERTa (Liu et al., 2019), pre-trained on 2.5 TB of filtered CommonCrawl data covering 100 languages. M-BERT (Devlin et al., 2018), or multilingual BERT, is pre-trained on 104 languages using the masked language modeling (MLM) objective. IndicBERT (Kakwani et al., 2020) is a multilingual ALBERT (Lan et al., 2019) model developed by AI4Bharat, trained on large-scale corpora of 12 major Indian languages, including Tamil. We use HuggingFace (Wolf et al., 2019) for training with SimpleTransformers. Training was stopped early if the f1 score did not improve for three consecutive epochs.
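The following sketch shows a plausible SimpleTransformers setup for MuRIL; the argument values are illustrative, and since the library's default early-stopping metric is the evaluation loss, tracking the f1 score as described would require supplying a custom metric.

```python
from simpletransformers.classification import ClassificationModel

# MuRIL fine-tuning via simpletransformers; patience of 3 mirrors the rule
# "stop if the score does not improve for three consecutive epochs".
args = {
    "num_train_epochs": 20,            # illustrative value
    "use_early_stopping": True,
    "early_stopping_patience": 3,
    "evaluate_during_training": True,
    "overwrite_output_dir": True,
}
model = ClassificationModel("bert", "google/muril-base-cased",
                            num_labels=8, args=args)
model.train_model(train_df, eval_df=dev_df)   # train_df: columns ["text", "labels"]
```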
Results
The results obtained by all the models can be viewed in Table 2.
Ensemble Models
In the case of the ensemble models, the Support Vector Machine obtained the best result for the Tamil data: a macro-averaged f1 score of 0.33 and a weighted-average f1 score of 0.60.
In the case of the codemixed data, the Multi-Layer Perceptron obtained the best score among the ensemble models. It achieved a macro-averaged f1 score of 0.35 and a weighted-average f1 score of 0.65. Tree-based algorithms like decision trees, random forests, and XGBoost did not perform well.
RNNs
Among all the RNN models, ULMFiT fine-tuned on codemix data had the highest macro-averaged f1 score of 0.40 and weighted-average f1 score of 0.68. ULMFiT fine-tuned on Tamil data had the highest weighted-average f1 score of 0.63, while ULMFiT fine-tuned on the combined dataset had the highest macro-averaged f1 score of 0.36.
Transformers
Out of the four transformers, we obtained the best results for MuRIL and XLM-RoBERTa in the case of Tamil. The macro-averaged f1 scores were 0.43 for both models, and the weighted-average f1 scores were 0.68 and 0.66 for MuRIL and XLM-RoBERTa, respectively. MuRIL and M-BERT outperformed all the other models for the Tamil-English codemixed data. Macro-averaged f1 scores of 0.45 and weighted-average f1 scores of 0.61 were obtained by both MuRIL and M-BERT.
Conclusion
This paper aims to detect and classify abusive comments. We tried three approaches for abusive comment detection in Tamil and Tamil-English code-mixed data: ensemble models, recurrent neural networks, and transformer-based models.
For the Tamil data, MuRIL and XLM-RoBERTa provided the best results, with a macro-averaged f1 score of 0.43. Classes like Homophobia and Misandry were predicted with higher accuracy than others like Transphobic and Counter-Speech. Sentences that are not abusive were also classified well.
For the codemixed data, MuRIL and M-BERT outperformed all other models, with a macro-averaged f1 score of 0.45. Classes like Xenophobia, Misandry, and Transphobic were predicted with higher accuracy than others for the codemixed data. Sentences that are not abusive were also classified well.
In the future, various techniques can be tried to improve the performance of the models. In order to boost performance, genetic algorithm-based ensembling methods could be used.
Table 1: Distribution of classes in the data.

Table 2: Results obtained by all the models on the Tamil as well as the codemixed data. *ULMFiT CD is trained by combining the Tamil and codemix data.
"year": 2022,
"sha1": "b84ca85dff8c3ed9d1018d184176721c08a9c684",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2022.dravidianlangtech-1.36.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "b84ca85dff8c3ed9d1018d184176721c08a9c684",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Metabolic syndrome in White-European men presenting for secondary couple's infertility: an investigation of the clinical and reproductive burden
We aimed to determine the impact of metabolic syndrome (MetS) on reproductive function in men with secondary infertility, a condition that has received relatively little attention from researchers. Complete demographic, clinical, and laboratory data from 167 consecutive secondary infertile men were analyzed. Health-significant comorbidities were scored with the Charlson Comorbidity Index (CCI; categorised 0 vs 1 vs 2 or higher). NCEP-ATP III criteria were used to define MetS. Semen analysis values were assessed based on the 2010 World Health Organization (WHO) reference criteria. Descriptive statistics and logistic regression models tested the association between semen parameters and clinical characteristics and MetS. MetS was found in 20 (12%) of 167 men. Patients with MetS were older (P < 0.001) and had a greater BMI (P < 0.001) compared with those without MetS. MetS patients had lower levels of total testosterone (P = 0.001), sex hormone-binding globulin, inhibin B, and anti-Müllerian hormone (all P ≤ 0.03), and they were hypogonadal at a higher prevalence (P = 0.01) than patients without MetS. Moreover, MetS patients presented lower values of semen volume, sperm concentration, and sperm normal morphology (all P ≤ 0.03). At multivariate logistic regression analysis, no parameters predicted sperm concentration, normal sperm morphology, and total progressive motility. Our data show that almost 1 of 8 White-European men presenting for secondary couple's infertility is diagnosed with MetS. MetS was found to be associated with a higher prevalence of hypogonadism, decreased semen volume, decreased sperm concentration, and normal morphology in a specific cohort of White-European men.
INTRODUCTION
According to the World Health Organization (WHO), secondary infertility is defined as a couple's inability to bear a child, either due to the failure to conceive or the inability to carry a pregnancy to live birth, following either a previous pregnancy or a previous ability to carry a pregnancy to live birth. 1 Despite being poorly studied, secondary infertility rates have been increasing over time and are not negligible compared to primary infertility rates; overall, a male factor is involved in up to 50% of cases. 2 Several factors are recognized as potential causes of secondary couple's infertility. 3 Among them, the potential detrimental contribution of advanced age in terms of male reproductive function remains ambiguous. 4 Although not unequivocal, 5 epidemiological data have suggested that increasing paternal age (more than 35-40 years) is associated with delayed conception, 6 an increased risk of spontaneous pregnancy loss, 7 and a decreased success rate at both intra-uterine insemination 8 and in vitro fertilization. 9,10 Likewise, it has been previously shown that semen volume, sperm motility, and sperm morphology deteriorate with increasing age. The detrimental impact of excess body weight on fertility is well established in women and is alleged also for men. 2 An excess of adipose tissue is responsible for hormonal imbalance, especially when considering the hypothalamic-pituitary-gonadal (HPG) axis. 16 Nevertheless, the evidence linking obesity to impaired semen parameters is still not univocal, despite extensive recent attempts to provide clarity on this controversial topic and its related issues. [17][18][19][20] Similarly, diabetes mellitus (DM) also perturbs both sexual and reproductive hormonal homeostasis and seems to affect spermatogenesis at various levels. 21 Although MetS was shown to have a detrimental effect on male reproductive health 22,23 in primary infertile men, the impact of MetS on male reproductive function has never been analyzed before in White-European men seeking medical help for secondary infertility. Likewise, the lack of previous clinical evidence and the increasing prevalence of MetS, 24 with its potential impact on both the hormonal milieu and the overall health status of men, prompted us to investigate the role of MetS in male secondary infertility, assessing (i) the prevalence of MetS, (ii) correlations between MetS and clinical characteristics, and (iii) the impact of MetS on semen and hormonal parameters in a cohort of White-European men presenting for secondary couple's infertility.
Patients
The analyses of this cross-sectional study were based on a sample of 167 consecutive White-European men assessed at a single academic center for secondary couple's infertility (non-interracial infertile couples only) between September 2005 and April 2013. Patients were enrolled if they were older than 18 years of age and had either male factor infertility (MFI) or mixed factor infertility (MxFI). MFI was defined after a comprehensive diagnostic evaluation of the female partners. According to the WHO clinical criteria, infertility is defined as not conceiving a pregnancy after at least 12 months of unprotected intercourse, regardless of whether or not a pregnancy ultimately occurred. 1 Secondary infertility is defined as the inability to conceive following a previous pregnancy. 1 Patients were assessed with a thorough self-reported medical history, including age and comorbidities. Comorbidities were scored with the Charlson Comorbidity Index (CCI), 25 using the International Classification of Diseases, 9th Revision. For the specific purpose of the analysis, the CCI was categorized as 0, 1, or ≥2.
Weight and height were measured for each participant; body mass index (BMI), defined as weight in kg divided by height in m², was assessed for each patient. Testis volume was assessed with a Prader orchidometer. Patients underwent at least two consecutive semen analyses, both depicting a condition below the standard values for normal semen parameters according to the WHO criteria. 26 A venous blood sample was drawn from each patient between 7 a.m. and 11 a.m. after an overnight fast. In all cases, fasting glucose levels were measured via a glucose oxidase method (Aeroset Abbott, Rome, Italy). Total cholesterol, HDL-C, and triglyceride levels were measured with an automated enzymatic colorimetric method (Aeroset Abbott, Rome, Italy). Follicle-stimulating hormone (FSH), luteinizing hormone (LH), prolactin (PRL), thyroid-stimulating hormone (TSH), and 17β-estradiol (E2) were measured using a heterogeneous competitive magnetic separation assay (Bayer Immuno 1 System, Bayer Corporation, Tarrytown, NY, USA). Inhibin B (InhB) and anti-Müllerian hormone (AMH) were measured by an enzyme-linked immunosorbent assay (Beckman Coulter AMH Gen II ELISA). Total testosterone (tT) levels were measured via a direct chemiluminescence immunoassay (ADVIA Centaur; Siemens Medical Solutions Diagnostics, Deerfield, IL, USA), and sex hormone-binding globulin (SHBG) levels were measured via a solid-phase chemiluminescent immunometric assay on Immulite 2000 (Medical Systems SpA, Genoa, Italy). Hypogonadism was defined as tT <3 ng ml⁻¹. 27 Calculated free testosterone (cfT) was derived from the Vermeulen formula. 28 The same laboratory was used for all patients.
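For illustration, the sketch below implements one common form of the Vermeulen et al. equation; the association constants and the default albumin of 43 g l⁻¹ are standard literature values assumed here, not parameters reported in this study.

```python
import math

# One common implementation of the Vermeulen et al. (1999) equation for
# calculated free testosterone (assumed constants, not study-reported values).
KA = 3.6e4    # albumin-testosterone association constant (L/mol)
KT = 1.0e9    # SHBG-testosterone association constant (L/mol)

def cft(tt_nmol_l: float, shbg_nmol_l: float, albumin_g_l: float = 43.0) -> float:
    """Free testosterone (nmol/L) from total T and SHBG (both nmol/L)."""
    T = tt_nmol_l * 1e-9                       # to mol/L
    S = shbg_nmol_l * 1e-9
    n = 1.0 + KA * (albumin_g_l / 69_000.0)    # albumin MW ~69 kDa -> mol/L
    b = n + KT * (S - T)
    ft = (-b + math.sqrt(b * b + 4.0 * n * KT * T)) / (2.0 * n * KT)
    return ft * 1e9                            # back to nmol/L
```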
MetS was defined according to the 2004 updated National Cholesterol Education Program (NCEP) Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (Adult Treatment Panel III; ATP III) criteria, i.e., at least 3 of the following: waist circumference >102 cm; triglycerides ≥150 mg dl⁻¹ (1.7 mmol l⁻¹); HDL <40 mg dl⁻¹ (1.03 mmol l⁻¹); blood pressure ≥130/85 mmHg or use of medication for hypertension; and fasting glucose ≥100 mg dl⁻¹ (5.6 mmol l⁻¹) or use of medication for hyperglycemia. 29 Data collection followed the principles outlined in the Declaration of Helsinki; all patients signed an informed consent agreeing to supply their own anonymous information for future studies. The study was approved by our local ethical committee.
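As a worked restatement of these criteria, the following sketch scores a single patient using exactly the cut-offs listed above; it is illustrative and not part of the study's analysis pipeline.

```python
def has_mets(waist_cm, tg_mg_dl, hdl_mg_dl, sbp, dbp, glucose_mg_dl,
             on_bp_meds=False, on_glucose_meds=False):
    """NCEP-ATP III (2004 update): metabolic syndrome = at least 3 of 5
    criteria, with the male waist threshold used in this study."""
    criteria = [
        waist_cm > 102,
        tg_mg_dl >= 150,
        hdl_mg_dl < 40,
        sbp >= 130 or dbp >= 85 or on_bp_meds,
        glucose_mg_dl >= 100 or on_glucose_meds,
    ]
    return sum(criteria) >= 3
```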
Statistical analyses
Data abstraction was performed by six different abstractors on 100% of the medical records at the time of office admission. The data quality analysis showed an error rate of 0.3%.
Data are presented as means (medians; ranges). The statistical significance of differences in means and proportions was tested with one-way analysis of variance and the Pearson chi-square test, respectively; 95% confidence intervals (95% CIs) were estimated for the associations of categorical parameters. Exploratory analyses were initially applied to all variables; variables were retained when clinically significant to the results. Univariable (UVA) and multivariable (MVA) logistic regression models tested associations between the clinical predictors and pathologic semen parameters. Odds ratios and their 95% CIs were estimated.
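A minimal sketch of such a logistic fit, with odds ratios and 95% CIs, is shown below; the dataframe and its column names are assumed for illustration.

```python
import numpy as np
import statsmodels.formula.api as smf

# UVA and MVA logistic models; `df` and the binary 0/1 outcome column
# `pathologic_conc` (and predictor names) are assumed for illustration.
uva = smf.logit("pathologic_conc ~ fsh", data=df).fit()
mva = smf.logit("pathologic_conc ~ fsh + inhb + right_testis_vol + age + cci + mets",
                data=df).fit()
odds_ratios = np.exp(mva.params)      # ORs from the fitted coefficients
or_ci = np.exp(mva.conf_int())        # 95% CIs for the ORs
```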
Statistical tests were performed using SPSS version 19 (IBM Corp., Armonk, NY, USA). All tests were two-sided, with a significance level set at 0.05.

RESULTS

Table 1 lists the characteristics and the descriptive statistics of our secondary infertile patients. Overall, MFI and MxFI were found in 138 patients (82.6%) and 29 patients (17.4%), respectively. MetS was found in 20 (12%) secondary infertile patients. The analysis of our data allowed us to highlight several features capable of segregating patients diagnosed with MetS (+MetS) from their non-MetS counterparts (−MetS). From the descriptive standpoint, +MetS patients were older and had a higher BMI (all P < 0.001). Further differences were observed in terms of hormonal profile (lower InhB, AMH, tT, and SHBG circulating levels in men with MetS; all P ≤ 0.03); consistent with these findings, hypogonadism was more common in the +MetS group (P = 0.01). When considering seminal parameters, lower semen volume, sperm concentration, and rates of normal morphology (all P ≤ 0.03) were observed in patients with MetS. Table 2 details the logistic regression models testing the associations between clinical predictors and pathologic sperm parameters. At univariable analysis, higher FSH, lower InhB levels, and lower mean right testis volume were associated with pathologic sperm concentrations (all P ≤ 0.04). Conversely, age, CCI, and +MetS were not. Similarly, higher FSH levels were univariably associated with pathologic progressive motility (P = 0.002). No variable was associated with pathologic sperm morphology. At MVA, no variable reached statistical significance for any pathologic sperm condition.
DISCUSSION
We cross-sectionally tested the rate of MetS in a relatively large sample of White-European men seeking first medical attention for secondary couple's infertility at a single academic outpatient center. Likewise, we assessed the impact of MetS on clinical and semen characteristics in the same sample. Our interest was fuelled by (i) epidemiologic data suggesting an increasing prevalence of secondary infertility; 1,30 (ii) previous data showing the increasing prevalence of MetS among European men; 24 (iii) the potential impact of MetS on the overall hormonal milieu 15,31 and men's overall health status; 32 and (iv) the lack of published observations of an association between MetS and male secondary infertility.
To the best of our knowledge, these findings offer the first demonstration that more than one out of eight men presenting for secondary couple's infertility meets the NCEP-ATP III criteria for MetS. This prevalence is higher than that observed in the general population of the same age range. 33 We chose the NCEP-ATP III criteria to define MetS because they are the most widely used and readily available to physicians, thus facilitating their clinical and epidemiological use. Moreover, this definition does not harbor any preconceived notion of the underlying cause of MetS, whether it be insulin resistance or obesity. By adopting stringent enrolment criteria we were able to select a homogeneous White-European male sample (including only non-interracial infertile couples), thus minimizing the impact of potential unpredictable genetic biases.
The current findings demonstrate that +MetS patients were older and had a higher prevalence of hypogonadism compared to their −MetS counterparts. When assessing patients' comorbidity burden by means of the CCI scoring system, we found no significant general health status decline in men with MetS. Conversely, we have previously reported that primary infertile men with MetS are generally less healthy than their −MetS counterparts; 34 in this regard, we may speculatively argue that this difference is not observed in secondary infertile men due to their higher age and their consequent higher age-related comorbidity load. Moreover, the CCI was originally designed to assess comorbidities typically associated with 1-year mortality; therefore, it includes medical conditions that are more frequently found in an older or even elderly population and usually not in a younger population (such as infertile men). Thus, by definition and by its inherent limits, the CCI completely excludes any item related to blood hypertension or sexually transmitted diseases, which, in contrast, may be relevant medical conditions in young infertile men in the real-life setting. 14,35 The second aspect of major clinical importance for these findings is related to patient age. We report the novel finding of a significant age increase among secondary infertile men with MetS compared with other infertile patients, thus confirming our own previous findings in primary infertile men. 34 Stone et al. 11 recently observed that the 34-40 years age range appears to be indicative of the first manifestations of age-related effects on seminal parameters. In this context, the likelihood of pregnancy following intercourse declines continuously in men older than 34 years of age, regardless of the female partner's age. 11 Sperm concentration and the percentage of sperm with normal morphology, sperm motility, and ejaculate volume were found to decrease after 40, 43, and 45 years, respectively, whereas total sperm count declines even earlier. 11 Increasing paternal age (above the age of 35-40 years) was found to be associated with delayed conception in a large cohort of British fertile couples, 6 with an increased risk of spontaneous pregnancy loss, 7 and a decreased success rate for couples undergoing assisted reproductive techniques, [8][9][10] although these findings were not always unanimously confirmed. 5 Considering the mean age of our secondary infertile patients, along with previous evidence indicating a drift toward delayed fatherhood 12 and the possible detrimental consequences of this, +MetS infertile patients are at an even higher risk, as the current findings outlined that they are older than infertile men not meeting the criteria for MetS. Our analyses confirmed the association between MetS and male hypogonadism in the general population 36 and in infertile patients. 19,20 We found that tT was reduced in +MetS patients compared to the −MetS group, whereas cfT did not seem to be affected by this condition. In contrast, Lotti et al. 22 reported decreased values of both tT and fT, while Leisegang et al. 23 only reported an fT reduction in this specific setting. Along with tT, SHBG was also found to be reduced in our subset of +MetS patients. Although obesity and MetS are known to lower SHBG levels, 37 the actual impact on fT is still under debate. In this context, the results of the Massachusetts Male Aging Study showed no difference in terms of fT in overweight men. 38 Conversely, MacDonald et al.
39 reported the results of a meta-analysis showing a negative relationship of tT, SHBG, and fT with increased BMI values. Contextual decreases in both SHBG and tT may partially account for the unmodified fT levels in our patients. This is important to note, as the patients in the current study were considerably younger than those reported in the studies just cited, thus potentially disguising any age-related effect on T levels. 40 As a whole, obesity-related and MetS-related hypogonadism is known to be accompanied by a plethora of factors simultaneously acting centrally and peripherally. 15 However, the impact of MetS on endocrine testicular function does not appear to be restricted only to T homeostasis. We observed that InhB and AMH levels were both reduced in +MetS patients, as reported in previous studies. 34,41 Our findings show the potential role of MetS in affecting semen parameters in secondary infertile men. More specifically, semen volume, sperm concentration, and normal morphology were reduced in the +MetS subcohort, with a lower, yet not significant, prevalence of oligospermic, asthenospermic, and teratospermic patients. In this regard, the relationship between MetS and seminal parameters remains controversial. Indeed, two observational studies 22,23 have shown an association between MetS and poor sperm parameters in men broadly presenting for couple infertility. Conversely, our previous findings regarding primary infertile men reported no noticeable detrimental effect induced by MetS. 34 We may speculate that several factors might account for these differences. An emphasis on obesity when defining MetS (as in Lotti et al. 22, using the International Diabetes Federation worldwide definition) and older age (as observed in our secondary infertile patients) might unveil the impact of MetS on seminal parameters, whereas younger age and the NCEP-ATP III definition, as in our sample of primary infertile men, would not. 34 A recent study reported that 45 oligo-terato-asthenospermic MetS patients treated with metformin for six consecutive months experienced improvements in hormonal, metabolic, and, above all, semen characteristics. 42 Such evidence suggests that in selected patients, improvement of the metabolic component might positively impact male reproductive health.
Our study is not devoid of limitations. First, this was a hospital-based study, raising the possibility of a number of selection biases. The sample was recruited from a single academic outpatient clinic, and despite the fact that it was made up of probably the largest, to date, homogeneous group of White-European secondary infertile men (restricted to non-interracial infertile couples), several larger studies across different centers and populations will be needed to substantiate our findings. Second, the analyses were implemented in a cross-sectional setting that lacked a comparison with a same-race, age-matched sample of fertile individuals. Third, although one of the strengths of these analyses was the availability of a rather comprehensive and consistent hormonal milieu for each patient, we lacked data regarding potential molecular alterations in spermatogenesis, which might be of importance in investigating the eventual impact of MetS on semen health. Fourth, the observational nature of the study prevents any kind of causal interpretation between MetS and male infertility.
Overall, MetS emerged as a powerful modifier not only of the endocrine milieu but also of semen quality in the current sample of patients. Molecular alterations in spermatogenesis, assessed for instance through DNA sperm fragmentation analysis, will perhaps provide more detailed information.
AUTHOR CONTRIBUTIONS
EV collected the data and drafted the manuscript; LB, PC, and RS collected and managed the data; EV, EP, and AS analyzed and interpreted the data and performed the statistical analysis; EP, RD, and FM were responsible for the critical revision of the manuscript for important intellectual content; AS was responsible for the study concept and design and drafted the manuscript.
"year": 2016,
"sha1": "59f1c1f48d4c6c15da16355b739c6bfae0a76cf5",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1008-682x.175783",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59f1c1f48d4c6c15da16355b739c6bfae0a76cf5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Post-harvest height growth of Norway spruce seedlings in northern Finland peatland forest canopy gaps and comparison to partial and complete canopy removals and plantations
Recent studies have shown the establishment of Norway spruce (Picea abies L. Karst.) to be successful in small canopy gaps cut in drained spruce mire stands in northern Finland. The aim of this study was to quantify seedling height growth in gaps and compare it to that observed in other canopy cuttings and plantations. We sampled spruce crop seedlings (maximum density ca. 3000 ha⁻¹) in the spring of 2013 in a field experiment in which canopy gaps of 10, 15, and 20 m in diameter had been cut in the winter of 2004. The total seedling height in 2013 and the lengths of the annual shoots over the past five years (2008-2012) were recorded in the survey. Seedling height varied from 20 cm to 2.7 m, with an average of 65 cm. The average annual height growth was 7.1 cm. A mixed linear model analysis was carried out to investigate seedling height growth variation. Seedling height was linearly and positively related to growth. Height growth started to increase in the fifth growing season after cutting. Seedling height growth in the 20 m gap was slightly better than in the smaller ones. In the 15 m gap, both the centrally located seedlings and those located at the northern edge grew best. In the 20 m gap, southerly located seedlings grew more slowly than seedlings in all other locations. The average seedling height growth in this study was about 60% of that in peatland plantations, but comparable to that in mineral soil gaps, and 2-3 times higher than in uneven-aged cut stands.
Introduction
Traditionally, highly productive spruce mire stands have been regenerated using clear-cutting and effective site preparation methods like mounding or ploughing, and the planting of Norway spruce (Picea abies L. Karst.) seedlings (e.g. Moilanen et al. 1995). In clear-cut areas, however, problems related to competition from ground vegetation (Moilanen et al. 1995; Hånell 1992) and pioneer tree species, mainly pubescent birch (Betula pubescens Ehrh.), necessitate intensive early management by means of herbaceous control, tending, and pre-commercial thinning. These measures increase the costs of the clear-cut method. Water protection issues related to clear-cutting and site preparation (e.g. Nieminen 2004) have also raised interest in the possibilities of natural regeneration of drained peatland forests.
Recent studies suggest that in northern Finnish spruce mire stands, the cutting of small canopy gaps provides conditions where the natural establishment of Norway spruce seedlings (composed of both advance regeneration and germinated seedlings) can be sufficient within five years of cutting without any regeneration measures and costs (Hökkä et al. 2011, 2012). Seedling establishment, however, does not guarantee that successful regeneration has been accomplished. It is important to know the growth rate of the established seedlings; how much lower their growth is compared to that in clear-cut areas, and how long the period of growth recovery is. If the seedlings continue their growth at a fairly low rate for decades, the regeneration period becomes excessively long, which may result in significantly longer rotations and postponed cutting incomes. This may eventually compromise the benefits of non-existent regeneration costs.
A mass of studies have investigated forest gap formation, seedling establishment, and seedling growth in gaps. However, the majority of those studies have been conducted in temperate or tropical forests and concern natural gap dynamics with different broadleaved tree species (cf. Yamamoto 2000; Coates and Burton 1997). In fact, there seems to be much more limited information available on timber harvesting methods that emulate natural gap dynamics as a way of regenerating boreal coniferous forests. Spruce seedling establishment and the density of regeneration stocking in natural and harvested gaps have been investigated in quite a lot of studies (e.g. Leemans 1991; Drobyshev and Nihlgård 2000; Hanssen 2003; Valkonen et al. 2011; Hökkä et al. 2012). However, very few studies have addressed the height growth dynamics of advance regeneration in canopy gaps.
One of the earliest studies on Norway spruce seedling height development after the cutting of canopy gaps in the boreal region was made by Cajander (1934). He found that in a southern Finland rich Oxalis-Myrtillus mineral soil site (according to Cajander 1926), spruce advance regeneration showed slow recovery of growth in gaps larger than 0.01 ha, i.e., larger than ca. 12 m in diameter. If the gap diameter exceeded 25 m, all seedlings in the gap could respond and grow vigorously. Chantal et al. (2003) analysed light conditions and the early post-cutting development of Norway spruce and Scots pine seedlings that were seeded systematically in different locations in a 50 m diameter gap and under the canopy of the surrounding stand in a Myrtillus site (Cajander 1926). Due to the uneven distribution of light over the gap area, location-wise and species-wise differences in seedling biomass were found after two years. Drobyshev and Nihlgård (2000) showed that in natural gaps in southern boreal spruce stands, the growth of spruce seedlings was related to gap size as well as seedling size and location within the gap. Outside the boreal region, Coates (2000) concluded that several planted coniferous species showed an asymptotically increasing growth trend as the gap size increased from a small gap the size of a couple of trees to one of 1000 m² in the temperate coniferous forests of British Columbia. On boreal peatlands, the only information on the height growth of spruce advance regeneration seedlings in canopy gaps consists of the tentative results published by Hökkä and Repola (2012) from the same data used by Hökkä et al. (2011).
The conclusion was that no differences were found in the average growth rate during the first five-year period between gaps of different sizes (78-314 m²).
The size of the gap, as well as the seedling's location within the gap due to the uneven distribution of light over the gap area, influences the seedling growth rate (e.g. Coates 2000; Drobyshev and Nihlgård 2000; Chantal et al. 2003). It can be assumed that three factors are related to the growing conditions in gaps of different sizes. The first is the higher amount of radiation, i.e. light and temperature, available for trees in the large gaps, which has a positive impact on tree growth in places where radiation levels significantly increase (Page and Cameron 2006). Total precipitation reaching the ground also increases when compared to the uncut forest, resulting in higher soil moisture in gaps (Page and Cameron 2006). Secondly, increased light also enhances competition for free resources. On a highly productive site, ground vegetation covers the site within two to four years (Moilanen et al. 1995) and may form a serious constraint on seedling growth and survival (Hånell 1992; Hånell 1993; Nilson and Lundqvist 2001). Rapidly-growing pioneer tree species like pubescent birch also benefit from increased light and create another aspect of competition (Moilanen et al. 1995; Roy et al. 2000). The third factor is the edge effect of the uncut forest, i.e., the root competition of trees growing at the gap edge also extends to the gap area (Kuuluvainen 1993; Chantal et al. 2003). The observed growth rate of advance growth reflects a combination of these factors, which are not equally distributed over the gap area.
The cutting of canopy gaps in order to release the growth of advance regeneration can be compared to other release cuttings, e.g., when shelter trees are removed to enable the development of natural advance growth. A decrease in competition and a change in light availability may, however, be significantly larger than in the case of gap cuttings, depending on the completeness of overstory removal. Results from studies on mineral soil sites have indicated rather long periods of slow spruce height growth after overstory removal. Koistinen and Valkonen (1993) and Valkonen (2000) proposed that spruce advance growth shows a clearly lower growth rate for four to five years after release cutting when compared to planted spruce seedlings on mineral soil sites in southern Finland. A similar lag in height growth has also been reported by Cajander (1934), Skoglefald (1967), and Bergan (1971). According to Valkonen (2000), it will take another five or more years for height growth to fully recover to a level similar to that of planted spruce. Örlander and Karlsson (2000) studied a shallow-peated drained spruce stand in southern Sweden and found that after cutting, the spruce seedlings' eight-year height growth rate was related to seedling height, height growth prior to cutting, and the density of the retained overstory trees.
Another point of comparison can be found in cuttings in uneven-aged stands, in which selected dominant and co-dominant trees are harvested to enhance the growth of sub-canopy trees and improve conditions for the establishment of natural seedlings. Eerikäinen et al. (2014) investigated the growth rate of seedlings growing in uneven-aged cut stands in southern Finland and concluded that spruce seedling height growth was very slow, only a couple of centimetres annually, and strongly related to tree height. They also found that cutting intensity influenced the growth of the seedlings. Lundqvist (1989) has reported comparably slow growth rates in Sweden after uneven-aged cutting.
The third point of comparison is spruce plantation in a clear-cut area, where the availability of resources (radiation, water and nutrients) is not limited by the overstory trees or the forest edge.The early growth rate of planted seedlings is supposedly the fastest in spite of the competition from ground vegetation and fast-growing pioneer species, which may be more severe than under canopy trees or in gaps.
Based on the previous results, it can be expected that the early height growth rate of established seedlings in small canopy gaps is rather low and that it will take several years before height growth starts to recover. From the silvicultural point of view, the success of gap regeneration can be evaluated by comparing the seedling height growth rate to that observed after other methods of regeneration, given that regeneration density is sufficient. Such comparisons are lacking, possibly due to the fact that seedling dynamics in harvested gaps appears to be poorly documented in the boreal region. There is more information available on the height growth of Norway spruce advance regeneration after complete or partial canopy removal, which has enabled comparisons with spruce plantations on mineral soil sites (e.g., Valkonen 2000). Information from drained peatlands on the height development of spruce plantations has recently become available (Siipilehto et al. 2014).
The aims of this study were to quantify the early height growth rate of Norway spruce seedlings in small canopy gaps in a northern boreal spruce mire and the effect of gap size and different growing locations within the gap on growth by means of regression modelling. The observed height development was compared to that reported after different canopy removal cuttings from mineral soil sites and to the average height development observed in peatland plantations. This study was based on measurements of the annual shoot lengths of selected crop trees. The data originated from the same stand as that used in Hökkä et al. (2012).
Study site
The study site was located in Tervola, northern Finland (N = 7341008, E = 440177) and represented a eutrophic, shallow-peated spruce swamp (Laine et al. 2012) with peat thickness varying from 10-50 cm. In terms of timber productivity, the site is comparable to a rich mineral soil site (Oxalis-Myrtillus site; Cajander 1926), with average annual growth varying between 8-10 m³ ha⁻¹ a⁻¹ in southern Finland and 5-6 m³ ha⁻¹ a⁻¹ in northern Finland (Laine et al. 2012). The average annual temperature sum (with a 5 °C threshold) between 2000 and 2010 was 1076 dd °C (Venäläinen et al. 2005), and the altitude was 105 m above sea level. The site was drained for the first time in the 1960s, with complementary ditching in the 1980s. Currently the ditches are in satisfactory or poor shape with respect to their ability to transport water effectively. The present tree stand was composed of mature Norway spruce with a variable admixture of pubescent birch. The stand dominant height (mean height of the 100 thickest trees per hectare) varied from 17 to 18 m, stem number from 735 to 2930, and stand volume from 170 to 227 m³ ha⁻¹.
The experimental design was composed of a total of four randomised blocks, each including the three different gap sizes replicated two to four times, i.e., the experimental design was unbalanced. The diameter of the largest gap was 20 m, the middle-size gap 15 m, and the smallest gap 10 m. The total number of gaps was 33. The cutting of the gaps was performed in November 2004, when the soil was frozen and covered in snow. A more detailed description of the experimental design and cuttings can be found in Hökkä et al. (2011) and Hökkä et al. (2012).
Seedling survey and measurements
For the height growth analysis, a subsample of spruce seedlings was taken in spring 2013, after eight growing seasons had passed since the cutting, based on the same design of seedling survey plots as in Hökkä et al. (2011) and (2012). Seedlings were sampled from five circular survey plots, the largest of which was located at the centre of the gap (10 m² in size) and four smaller (5 m²) ones at a distance of 1.5 m from the edge of the gap in each cardinal direction (Fig. 1). Only crop seedlings were selected, with the aim of achieving a maximum density of approximately 3000 seedlings ha⁻¹ in each canopy gap. The selected crop seedlings had to be vigorous and healthy, in a dominant position, located at least 0.6 m from each other, and at least 20 cm tall at the time of the survey. To obtain the target density, a maximum of four seedlings were selected from the largest central survey plot and a maximum of two seedlings from each of the smaller plots, with the limitation that the maximum number of seedlings in a gap was nine, which would give 2997 ha⁻¹ as the maximum crop seedling density. The majority of the sampled seedlings originated from the advance growth, i.e., they had been established before cutting, but some of the smallest seedlings had been established after cutting.
From the selected seedlings, the following characteristics were measured: the distance and direction from the survey plot centre (to map the seedling's exact position), the seedling's total height, and the annual height growth over the past five-year period, with an accuracy of 1 cm. Not all selected seedlings included all five height growth observations, because some were only 20 cm tall at the time of data collection and had been established after cutting. The total number of crop seedlings in the data was 162, and their average density was 1809 ha⁻¹ (1645 ha⁻¹ if the three gaps in which no crop seedlings were found are included) (Table 1). In total, there were 716 height growth observations in the data.
Statistical analyses
The height growth variation of crop seedlings was studied by means of growth model analysis. Eerikäinen et al. (2014) also used growth model analysis as the method to investigate factors affecting the height growth of spruce seedlings. Annual shoot length was used as the response variable. Since it was not normally distributed, a logarithmic transformation was applied after adding 0.5 to each growth observation. The data were hierarchically structured at the block, gap, survey plot, and seedling levels. Further, annual shoot lengths were serially correlated within each tree. To account for these correlations, the mixed linear model approach (e.g., Snijders and Bosker 2003) was applied in the analysis. The fitted model expressed the logarithmic height growth as a function of fixed effects plus nested random effects, where m_lijk is the random effect of seedling k in survey plot j in gap i in block l, m_lijk ~ NID(0, σ²_m), and e_lijkt is the random residual error of year t for tree k in survey plot j in gap i in block l, e_lijkt ~ NID(0, σ²_e). The covariance structure of the successive annual height growth observations was assumed to follow the first-order autoregressive (AR(1)) structure. The model's fixed parameters and the variances of the random effects were estimated simultaneously using the maximum likelihood method as implemented in the MIXED procedure of SAS (SAS Institute Inc. 2002-2008). The t-test was used to assess the significance of the fixed parameters. Minimum −2×log-likelihood and AIC were used to select the best model.
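A rough Python analogue of this fit is sketched below; it is an approximation, since statsmodels' MixedLM supports the nested random intercept but not the AR(1) residual structure, and the fixed-effect formula is schematic rather than the final Table 2 model. Column names are assumed.

```python
import numpy as np
import statsmodels.formula.api as smf

# Assumed long-format dataframe `df` with one row per seedling-year.
# Response: ln(annual shoot length + 0.5), as described above.
df["ln_ih"] = np.log(df["growth_cm"] + 0.5)
df["t13"] = df["years_since_cut"] ** 1.3     # the positive power term

m = smf.mixedlm("ln_ih ~ height_cm + years_since_cut + t13 + C(gap_size)",
                data=df, groups=df["seedling_id"])
res = m.fit(reml=False)                      # maximum likelihood, as in the paper
print(res.summary())
```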
Modelling height growth variation
The mean height growth of the crop seedlings during the five-year period from the third to the eighth year after cutting was rather low, only 7.1 cm year⁻¹. There was an increasing trend from 2009 onwards (Fig. 2a). Improving height growth was also seen in the average height development over the study period (Fig. 2b).
In the model's fixed part, the crop seedling's height growth was linearly related to tree height at the beginning of each growing season (Table 2). There was a non-linear temporal trend comprising a negative linear term and a positive term with exponent 1.3. This corresponds to the average temporal trend seen in Fig. 2a. In the largest gaps, the negative first-order effect was significantly smaller than in the smaller gaps (Table 2).
There were also a few effects related to the gap size and seedling location within the gap that turned out to be significant in the model. On average, height growth was best in the largest (20 m diameter) gaps. The height growth of seedlings located in the middle or at the northern edge of the medium-sized gaps (15 m diameter) was better than that of seedlings in other locations. Seedlings located in the southern part of the largest gaps (20 m diameter) showed poorer growth than seedlings in other locations.
The block-level variation related to the experimental design appeared to be non-significant and was omitted from the model. In addition, the random effect of the gap was non-significant. Most of the unexplained variation was at the residual level, i.e., among the annual shoot lengths (Table 2). The serial correlation coefficient was rather low, 0.2969.
Model application
The model shown in Table 2 was used to predict the height growth of seedlings with a 10 cm initial height from the beginning of the third post-harvest year to the point when breast height was achieved (Fig. 3). Based on the height development predicted by the model, the seedlings reached 1.3 m in height when 11 or 12 growing seasons had passed since cutting, depending on the location of the seedling. In terms of height growth speed, two groups could be distinguished: seedlings located in the middle and northern part of a 15 m gap and seedlings in a 20 m gap (excluding the southernmost edge) showed the best growth, while a lower growth rate was observed in other locations. There were also some minor differences in growth rates within the two groups. Predictions for any further heights should be inspected with caution, because there were very few trees taller than 1 m in the data (< 10%) and the power term for time since cutting may produce biased growth estimates for trees taller than 1.3 m.
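The year-by-year application of such a model can be illustrated with a short loop: starting from 10 cm in the third post-harvest year, predict growth on the log scale, back-transform with exp(·) − 0.5, and stop once 1.3 m is reached. The coefficients below are placeholders for illustration only, not the estimates published in Table 2.

```python
import math

def predict_growth_cm(h_cm: float, t_years: int) -> float:
    """Annual height growth back-transformed from a model of ln(ih + 0.5).
    b0..b3 are placeholder values, not the published Table 2 estimates."""
    b0, b1, b2, b3 = 1.2, 0.004, -0.35, 0.28
    ln_ih = b0 + b1 * h_cm + b2 * t_years + b3 * t_years ** 1.3
    return math.exp(ln_ih) - 0.5

h, t = 10.0, 3                  # 10 cm seedling, third growing season after cutting
while h < 130.0 and t < 40:     # iterate until breast height (1.3 m), with a safety cap
    h += max(predict_growth_cm(h, t), 0.0)
    t += 1
print(f"Breast height reached {t} growing seasons after cutting (h = {h:.0f} cm)")
```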
Spruce seedling height growth after different canopy removal cuttings and in plantations
The observed spruce seedling mean height growth of this study was compared to results from earlier studies that reported on complete removal (release of advance growth) and partial (thinning, shelter tree cutting, uneven-aged harvest) removal of Norway spruce overstory, and to that observed in peatland Norway spruce plantations in the boreal region (Table 3). Different cutting treatments were compared separately to gap cutting. When possible, a similar study period and seedlings of a similar size (41 cm tall at the beginning of the third growing season after cutting in this study) were compared. Site types were mostly mineral soil sites and represented high or good productivity in all studies.
In canopy gaps cut in southern Finland, at the highly productive Oxalis-Myrtillus (OMT) mineral soil site, the annual height growth of the smallest (< 10 cm tall) seedlings remained under 10 cm in gaps of all sizes, while larger seedlings showed 20-50 cm height growth ten years after cutting in gaps larger than 0.04 ha in size (Cajander 1934). Calculated from the data published in the form of tables in Cajander (1934), the average height growth from year 3 to year 7 for seedlings of 41-60 cm in height in < 0.03 ha gaps was 8.8 cm year⁻¹ (Table 3).
After complete overstory removal, the average annual height growth of spruce advance regeneration increased from less than 10 cm to 60 cm in ten years and varied according to initial seedling height (Cajander 1934). The average height growth calculated for 41-60 cm seedlings from year 3 to year 7 was 21.1 cm. Koistinen and Valkonen (1993) reported that the average height growth of spruce advance regeneration after complete removal of the overstory was 20 cm year⁻¹ over the nine-year post-harvest period in OMT and Myrtillus type (MT) sites in southern Finland (Table 3). Spruce advance regeneration height growth after shelter tree cutting has been investigated by Örlander and Karlsson (2000). For the first eight post-cutting years following partial or complete overstory removal, height growth varied according to the number of retained canopy trees and the size class of the seedlings. Mean annual height growth varied from 5 to 25 cm when shelter tree density was 160 ha⁻¹ (12.2 m² ha⁻¹) or less (Table 3). With this treatment the average eight-year growth of 20-50 cm seedlings was 11-12 cm year⁻¹. In uncut forest or dense shelter tree stands (320 ha⁻¹), only trees taller than 1.0 m showed an average annual growth clearly higher than 5 cm. The site was located in southern Sweden and represented a productive drained spruce peatland with a shallow peat layer and an estimated site index of H100 = 30 m.
In Nilson and Lundqvist's (2001) study on partial (30%, 60% or 85%) removal of the overstory by thinning either from above or below, spruce advance growth showed very slow mean annual height growth rates of 2-10 cm for saplings taller than 50 cm during the first six years after harvest in northern and central Sweden (Table 3). The sites were of good productivity (V. myrtillus - low herb type in the north, tall herb type in central Sweden) but were located at a relatively high altitude (425 and 470 m a.s.l.). Only in the seventh year did the maximum height growth reach 15 cm in the northern site. At the experimental site in central Sweden, the growth response was slightly stronger (seventh-year maximum annual growth > 20 cm). A comparable mean growth cannot be calculated, but visual inspection of growth for the period between the third and the seventh year after cutting suggests lower mean growth in the northern site than in this study and similar growth in the central site (Nilson and Lundqvist 2001). Eerikäinen et al.'s (2014) results on spruce height growth after uneven-aged harvest in mineral soil experimental sites in southern Finland also indicated very slow rates of growth. During a 15-year period after cutting for a medium-growth forest site, the annual height growth varied from 1 to 4 cm (Table 3). Lundqvist (1989) has reported similar results with very slow annual height growth (2-4 cm) after cutting an uneven-aged forest in Sweden. Siipilehto et al. (2014) reported average heights of spruce plantations established in peatland site clear-cuts based on large sample plot data obtained from practical forest regeneration areas. The results are valid for a wide geographical area in Finland and the means represent conditions prevailing in central Finland. At the average age of 9.2 years, the mean height of planted spruces was 1.1 m, which corresponds to an average annual height growth of 11.9 cm after planting (Table 3).
Discussion
Based on the results of this study, the growth rate of Norway spruce seedlings from the third to the eighth year after canopy gap cutting in a drained spruce mire stand in northern Finland was fairly low. Cajander's (1934) results on height growth for a comparable post-harvest period, for the same-sized advance growth seedlings growing in gaps of a similar size (< 0.03 ha), were about 24% higher (8.8 cm) than the average of 7.1 cm in this study. The difference may be explained by the fact that Cajander's (1934) material was from southern Finland while this study was conducted in northern Finland. In a similar forest site type, a 25-30% difference in timber productivity can be due to climate (Gustavsen 1980), which is also reflected in the seedlings' height growth.
When compared to studies on seedling growth recovery after complete overstory removal, the mean height growth in this study was much less than the average of 20 cm reported by Koistinen and Valkonen (1993) for a nine-year period after release cutting on mineral soil sites in southern Finland. The growth difference is caused by a combination of several factors: i) the total removal of overstory trees provides clearly better growing conditions in comparison to those prevailing in canopy gaps; ii) the geographical difference between the southern and northern study sites; and iii) the greater mean height (2.7 m) of the seedlings at the time of cutting in the data of Koistinen and Valkonen (1993). On the basis of the comparison made by Valkonen (2000), the height growth rate after complete overstory removal in Cajander's (1934) data was even faster than that reported by Koistinen and Valkonen (1993), perhaps because in the former study the oldest stunted seedlings were excluded from the analysis (Cajander 1934).
Partial overstorey removal studies have been conducted in Sweden. Örlander and Karlsson's (2000) study on variable-intensity shelter tree cutting in southern Sweden showed that a significant growth response of seedlings can be found if the density of retained trees is 160 stems ha⁻¹ or less. The better growth than in this study may be explained by the southern location, since the site types are quite similar. In addition, the average overstory density of the shelter tree stand was probably lower than the average in this study, although overstory density in gaps was not defined. Nilson and Lundqvist's (2001) results on spruce seedling growth rates after partial overstory removal were somewhat lower than or similar to those observed in this study in the northern study site, but slightly better in the central Sweden study site. In the heaviest canopy removal treatment, only 20-50 m³ ha⁻¹ of standing volume was retained, but even in those stands the growth response was not remarkable. Since the seedlings were also larger than in this study, the poorer average growth may be attributable to more unfavourable growing conditions in the high-altitude forests of the Swedish study sites. Comparisons with uneven-aged cuttings indicated clear differences in the height growth of advance regeneration in the post-harvest period. In this study the average growth was two to three times better than in the uneven-aged harvest stands observed in southern Finland by Eerikäinen et al. (2014) or in Sweden (Lundqvist 1989). Based on simulations with the model constructed in this study, it would take 11-12 years for a 10 cm-tall spruce seedling to reach a height of 1.3 m. If it is assumed that it takes a spruce seedling three to five years to reach 10 cm (Hökkä et al. 2011), breast height can be reached in 20 years or less in the gaps of this study site. That would be significantly quicker when compared to the average of 60 years in an uneven-aged stand on southern Finland mineral soil sites (Eerikäinen et al. 2014), or the 24-47 years reported by Lundqvist (1989) from mineral soil sites in Sweden. One possible reason may be that after uneven-aged cutting in the more southerly located mineral soil sites, the retained stand remains rather stocked (Eerikäinen et al. 2014) and conditions more shady than after cutting canopy gaps in naturally more open (e.g., Heikurainen 1971; Norokorpi et al. 1997) spruce mire stands in northern Finland. Heterogeneous stand structures in drained peatland spruce mires persist for decades after drainage (Sarkkola et al. 2003). Such conditions allow more light to penetrate to the soil surface, and advance growth may be capable of responding more quickly to increased light availability.
In peatland spruce plantations (Siipilehto et al. 2014), the seedlings' average height growth was 11.9 cm, which was almost 70% higher than the average growth in this study. The difference may partly be due to the slight climatic differences between the plantation data and these data, but it is mostly due to the different growth patterns of planted and advance growth seedlings (Valkonen 2000). According to Saksa (2011), planted spruce seedlings on mineral soil sites in central Finland grow 10-11 cm in height annually on average during the first four years after planting, and after that growth will further increase. Advance growth seedlings' full growth recovery may take 10-15 years after the removal of the canopy trees (Valkonen 2000). Valkonen (2000) concluded that advance growth seedlings which are 1.0-1.5 m tall at the time of canopy removal will achieve their inherent growth rate in 10 years, and after that their growth is comparable to that of planted seedlings.
The height growth rate observed in this study is in line with earlier results obtained in canopy gaps by Cajander (1934). After release cutting on mineral soil sites (Cajander 1934; Koistinen and Valkonen 1993) growth was clearly better than in this study, because of the lack of competing canopy trees and partly due to the southern location and taller seedlings (Koistinen and Valkonen 1993). In partial overstory removal studies, growth was better (Örlander and Karlsson 2000) or slightly poorer (Nilson and Lundqvist 2001) than that observed in this study, and in uneven-aged cuttings (Eerikäinen et al. 2014; Lundqvist 1989) much lower growth rates than in this study were observed. The seedlings' growing environment in this study's data may represent conditions where the change in light availability is somewhere between the complete removal of the overstory and the partial cutting that is typical of uneven-aged harvest or shelter tree cutting. The observed growth differences that cannot be attributed to geographical or site deviations may reflect this difference.
The non-linear temporal trend in the annual height growth indicated a slight improvement in height growth after the fourth growing season. This corresponds well with the conclusions of Koistinen and Valkonen (1993) and Valkonen (2000) that Norway spruce height growth does not recover before the fourth or fifth growing season following overstory removal. A similar lag of three to four years in growth after release cutting was also reported by Örlander and Karlsson (2000) for a southern Swedish site, by Nilson and Lundqvist (2001) in central and northern Sweden, and by Metslaid et al. (2005) in Estonia. According to Valkonen (2000), it may take another five years or more before seedling growth rates are fully recovered and comparable to those of planted seedlings. Longer recovery periods have also been observed. Skoglefald (1967) reported a seven-year recovery time for Norway spruce in northern Norway, and Groot and Hökkä (2000) found that the height growth suppression effects of black spruce (Picea mariana (Mill.) BSP) advance regeneration lasted 21 years. This lag in growth is due to the fact that suppressed seedlings need several years for their needles and root system to adapt, and for resources to be reallocated, to the changed light conditions (Kneeshaw et al. 2002). Kneeshaw et al. (2002) showed that the height growth of Douglas fir (Pseudotsuga menziesii (Mirb.) Franco) advance growth decreased in the first year after partial overstory removal, followed by an increasing growth trend in the following years. Because decreasing height growth several years after overstory removal has not been previously reported, the height growth minimum in the fifth year (2009) of this study may be due to the cool and wet summer of 2008. Norway spruce height growth has been observed to be poorly correlated with climatic variables, but it is likely that the previous year's weather impacts the growth of the coming year through the weather conditions prevailing at the time of bud formation (Levanic et al. 2009). For black spruce (Picea mariana (Mill.) BSP), the height growth of Arctic timberline trees was more strongly correlated with the weather of the previous year than with that of the current year (Gamache and Payette 2004). With these data it was impossible to assess the impact of annual weather conditions in detail, because the year effect was confounded with the time effect.
The most important variable explaining variation in growth was tree height, i.e., better height growth was related to taller seedlings. This result is similar to those observed by Cajander (1934), Koistinen and Valkonen (1993), Örlander and Karlsson (2000) and Eerikäinen et al. (2014). Örlander and Karlsson (2000) explained the poor growth of the smallest seedlings by their superficial root system and the lack of water in the topmost soil layer after a significant change in the heat radiation reaching the ground after release cutting. In these data, it is not likely that a lack of water is a problem, due to the nature of the site and the relatively poor drainage conditions despite the ditches. A rise in the ground water table level and increased soil moisture are expected after harvest in drained spruce mire stands (Lundin 2000). Because of that, small seedlings may benefit from the moist soil surface conditions in peatland sites in terms of survival and growth.
As hypothesised, gap size had an influence on the growth of the seedlings. Height growth in the 20 m diameter gap was significantly better than in the smaller gaps. This is rather logical, because in the largest gap the most light is available for seedlings. Comparable growth rates were also observed at the middle and northern edge of the 15 m gap. Chantal et al. (2003) showed that in a 50 m diameter canopy gap in southern Finland the highest amount of radiation was observed several metres north of the centre of the gap, and the central location and northern edge are equal in terms of light availability. The asymmetry of the light distribution is supposedly more pronounced at the site of this study because of the significantly lower sun angle due to the northern latitude. On the other hand, the surrounding forest is shorter (17-18 m). The good growth in the middle and northern part of the 15 m gap may be related to the combined effects of light availability (which is best in the largest gaps) but also to differences in competition conditions. According to Hökkä et al. (2011), birch seedlings were most abundant in the 20 m gaps, with a significant difference compared to the 15 m and 10 m gaps (also Roy et al. 2000). It is therefore possible that in the 15 m gap competition from birch is less intense, and in the middle and at the northern edge there is also enough light available for good growth. Based on the simulations, the growth rates in these locations were close to each other (Fig. 3). The fact that at the southern edge of the 20 m gap the growth rate corresponded to that of the rest of the data may be partly explained by the combined effect of the shady location even after cutting and the higher competition from birch seedlings. Competition from birch or ground vegetation was not directly measured in these data, but is inferred from the number of birch seedlings given in Hökkä et al. (2011).
Since the gaps were rather small on average, root competition from the larger trees growing at the edge of the surrounding forest also extends into the gap area. In the 10 m gap in particular, the root systems of the trees growing outside the gap may cover a significant proportion of the rooting space in the gap. Despite that, seedling growth in the 10 m gap was not much poorer than in the larger gaps (Fig. 3). This may partly reflect the openness of the canopy in northern peatland spruce stands (Heikurainen 1971; Sarkkola et al. 2003) and possibly suggests that the effect of a small gap (created by the removal of a couple of trees) on the sub-canopy light conditions may be proportionally greater than the actual size of the gap.
Variation in seedling height was rather narrow, i.e., > 90% of the seedlings were between 0.2 m and 1.0 m in height at the time of the survey in spring 2013. Because of that, the model developed here should not be applied to trees taller than 1.3 m, and more data should be collected to develop a more widely applicable height growth model. Generalisation of the results is also limited by the fact that the data were from only one study location. Nevertheless, the rather large number of canopy gaps, 33 in total, and the result that neither the blocks nor the gaps showed significant deviations from the modelled height growth, suggest that the captured pattern was rather uniform.
The conclusions that can be made from the present study are that it will take four to five years for Norway spruce seedlings' growth rate to start to recover after the cutting of canopy gaps. Growth is linearly related to tree height, being highest for the tallest seedlings. Seedlings in the largest gaps and in some locations in the mid-size gap showed the best average growth.
In their review, Coates and Burton (1997) concluded that gap dynamics are common in conditions where large-scale, high-intensity disturbances are rare, e.g., in high-elevation forests and low-elevation interior sub-boreal spruce stands that have escaped fire for a long time. Moist northern boreal peatland spruce forests are likely such an environment, where high-intensity disturbances are rare and gap dynamics is a natural mode of regeneration. The results on the post-harvest height growth of spruce advance regeneration obtained in this study suggest that regeneration in canopy gaps does not last for decades. Seedling height growth in gaps was significantly lower than in plantations, comparable to or slightly lower than in partially harvested stands, and clearly better than in uneven-aged stands. To obtain a better understanding of the height growth rate and height development over time, monitoring should continue for at least another five years. Further, the competition factors (density and height of birch regeneration and of the ground vegetation) should also be assessed in some quantitative way to understand whether they have a significant role in the seedlings' early height growth process.
Fig. 1.
Fig. 1. Location of the seedling survey plots in the canopy gaps.
Fig. 2.
Fig. 2. Average observed height (a) and height growth (b) of spruce crop seedlings in the canopy gaps in different years after cutting (2004), calculated from the 2013 survey data. Vertical lines indicate the standard error of the means.
Fig. 3.
Fig. 3. Height development of spruce crop seedlings as a function of time since cutting in gaps of different sizes and in different locations within the gap according to the height growth model (Table 2). Other = all other gaps and locations except those shown by a separate curve. Initial seedling height in year 3 is 10 cm. The horizontal line indicates breast height.
Table 1.
Characteristics of the crop seedling data measured in spring 2013 from spruce mire canopy gaps. The gaps were cut in 2004.
h = seedling height, cm
ih = seedling height growth, cm a⁻¹
N = seedling density, ha⁻¹
a) 1645 ha⁻¹ if the three gaps with no crop seedlings are included.
Table 2.
Model for Norway spruce advance growth seedlings' annual height growth (ln(ih + 0.5)) after cutting of canopy gaps in a drained spruce mire (see Eq. 1).
Table 3.
Description of height growth comparison studies. | 2018-12-29T18:57:29.158Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "2d806aa2b2e1e4c91caeeb4bfdee52a2a7897927",
"oa_license": "CCBYSA",
"oa_url": "https://www.silvafennica.fi/pdf/article1192.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2d806aa2b2e1e4c91caeeb4bfdee52a2a7897927",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
232342075 | pes2o/s2orc | v3-fos-license | Anethole Attenuates Enterotoxigenic Escherichia coli-Induced Intestinal Barrier Disruption and Intestinal Inflammation via Modification of TLR Signaling and Intestinal Microbiota
This study aimed to investigate the effects of dietary anethole supplementation on the growth performance, intestinal barrier function, inflammatory response, and intestinal microbiota of piglets challenged with enterotoxigenic Escherichia coli K88. Thirty-six weaned piglets (24 ± 1 days old) were randomly allocated into four treatment groups: (1) sham challenge (CON); (2) Escherichia coli K88 challenge (ETEC); (3) Escherichia coli K88 challenge + antibiotics (ATB); and (4) Escherichia coli K88 challenge + anethole (AN). On day 12, the piglets in the ETEC, ATB, and AN groups were challenged with 10 mL E. coli K88 (5 × 10⁹ CFU/mL), whereas the piglets in the CON group were orally injected with 10 mL nutrient broth. On day 19, all the piglets were euthanized for sample collection. The results showed that the feed conversion ratio (FCR) was increased in the Escherichia coli K88-challenged piglets, which was reversed by the administration of antibiotics or anethole (P < 0.05). The duodenum and jejunum of the piglets in the ETEC group exhibited greater villous atrophy and intestinal morphology disruption than those of the piglets in the CON, ATB, and AN groups (P < 0.05). Administration of anethole protected intestinal barrier function and upregulated the mucosal layer (mRNA expression of mucin-1 in the jejunum) and tight junction proteins (protein abundance of ZO-1 and Claudin-1 in the ileum) of the piglets challenged with Escherichia coli K88 (P < 0.05). In addition, administration of antibiotics or anethole numerically reduced the plasma concentrations of IL-1β and TNF-α (P < 0.1) and decreased the mRNA expression of TLR5, TLR9, MyD88, IL-1β, TNF-α, IL-6, and IL-10 in the jejunum of the piglets after challenge with Escherichia coli K88 (P < 0.05). Dietary anethole supplementation enriched the abundance of beneficial flora in the intestines of the piglets. In summary, anethole can improve the growth performance of weaned piglets infected by ETEC through attenuating intestinal barrier disruption and intestinal inflammation.
INTRODUCTION
Enterotoxigenic Escherichia coli (ETEC) is considered one of the main causes of diarrhea in weaning piglets (Fairbrother et al., 2005). Generally, a poor breeding environment causes an increase in intestinal ETEC (Yokoyama et al., 1992), disrupts the balance of intestinal flora (Li et al., 2012) and affects the digestion and absorption of nutrients (Gao et al., 2013). The enterotoxins secreted by ETEC can destroy the intestinal mucosa layer and tight junction structure, which leads to increased permeability of the intestine (Fleckenstein et al., 2010;Dubreuil, 2012). Bacteria or antigens that pass through the intestinal mucosa are captured and recognized by immune cells, which further activate the immune response and inflammatory process (Moretó and Pérez-Bosque, 2009;Turner, 2009). Damage to the intestine could reduce growth performance, cause severe diarrhea, and even lead to piglet death (Fleckenstein et al., 2010).
A large number of antibiotics are used in animal production worldwide each year, most of which are antibiotic growth promoters (AGPs). Based on a survey of antibiotic usage in China for 2013, total antibiotic usage was approximately 162,000 tons, of which 52% was used in animals (Ying et al., 2017). According to the requirements of the Chinese government, feed manufacturers have not been allowed to produce commercial feeds containing growth-promoting drug feed additives since July 1, 2020.
"Medicine and food are homologous" is a view from traditional Chinese medicine, meaning that some foods can have a certain therapeutic effect. Anethole (AN) was originally extracted from fennel and has long been proven to have anti-inflammatory effects, and it has also been used in animal production (Windisch et al., 2008). Previous studies have reported that AN can improve the growth performance of animals at an appropriate dosage (Kim et al., 2013; Zeng et al., 2015; Charal et al., 2016). However, to the best of our knowledge, there is no comprehensive report on the effects of AN on ETEC-infected piglets. Thus, the primary aim of this study was to determine the effects of AN on the growth performance, intestinal barrier function, inflammatory response and intestinal microbiota of piglets challenged with ETEC.
Animals, Housing, and Experimental Design
This trial was conducted in an experimental house with the temperature controlled at 30 ± 2 °C and humidity below 80%. Piglets were individually fed in metabolic cages (1.2 m × 0.4 m × 0.5 m) with a three-day adaptation period to adapt to the new environment and feed. All piglets had free access to feed and water. During the adaptation period, the piglets did not show any symptoms of diarrhea, skin lesions or obvious inflammation, which indicated that they were healthy and suitable for this experiment. After adaptation, 36 male piglets (Duroc × (Landrace × Yorkshire), initial weight 7.5 ± 1 kg) were assigned to one of four treatments according to the principle of similar weight (n = 9). The experiment lasted for 19 days. The four treatments were as follows: (1) sham challenge (CON); (2) Escherichia coli K88 challenge (ETEC); (3) Escherichia coli K88 challenge + antibiotics (ATB); and (4) Escherichia coli K88 challenge + anethole (AN). The CON and ETEC groups received the control diet, the ATB group received the control diet supplemented with antibiotics (50 mg/kg quinocetone, 75 mg/kg chlortetracycline, 50 mg/kg kitasamycin), and the AN group received the control diet supplemented with AN (300 mg/kg, pure AN 7.5%, coated with corn starch, Pancosma, Switzerland). The feed formula was prepared according to NRC (2012). The ingredient composition and nutrient levels of the control diet are presented in Supplementary Table 1.
Enterotoxigenic Escherichia coli K88 Challenge
Escherichia coli K88 (CVCC225) was purchased from the Chinese Veterinary Medicine Collection Center, and it was confirmed in our laboratory to carry heat-labile enterotoxin (LT), heat-stable enterotoxin (ST), and F4 fimbriae (Ren et al., 2019). On day 12, the piglets in the ETEC, ATB, and AN groups were orally challenged via syringe with 10 mL nutrient broth (NB) containing 5 × 10⁹ CFU/mL ETEC K88, while the CON group was orally injected with 10 mL of sterilized NB. The CON group was kept in isolation to avoid cross-contamination.
Sample Collection
Blood and feces samples were collected on day 19. Five milliliters of blood were collected into tubes containing EDTA via anterior vena cava puncture and quickly centrifuged (1000 × g, 4 °C, 10 min) within 30 min to obtain plasma samples, which were then stored at −80 °C. At the same time, over 5 g of fresh feces was collected into centrifuge tubes and stored at −80 °C. After blood and feces sample collection, the piglets were immediately euthanized. Segments of about 2 cm in length of the duodenum (about 10 cm from the pylorus) and jejunum (about 60 cm from the pylorus) were collected and stored in 4% paraformaldehyde solution for histological analyses. Jejunal and ileal segments (10 cm in length) were opened longitudinally and the contents were flushed twice with cold normal saline (NS) solution. Mucosa was collected by scraping with a sterile glass microscope slide at 4 °C, rapidly frozen in liquid N2 and stored at −80 °C for the analysis of mRNA and protein expression. Similarly, the mesenteric lymph node (MLN) was collected and rapidly frozen in liquid N2 for the analysis of mRNA expression. The time from anesthesia to complete sampling was controlled at about 30 min per piglet.
Growth Performance
The feed intake of each piglet was recorded daily. The body weight of each piglet was recorded on day 0, day 12, and day 19 to calculate the average daily gain (ADG), average daily feed intake (ADFI) and feed conversion ratio (F/G), respectively.
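These three quantities are simple ratios computed per piglet over each weighing interval; a minimal sketch is shown below, with illustrative numbers rather than study data.

```python
def performance(bw_start_kg, bw_end_kg, total_feed_kg, days):
    """Average daily gain (ADG), average daily feed intake (ADFI)
    and feed conversion ratio (F/G) over one feeding period."""
    adg = (bw_end_kg - bw_start_kg) / days   # kg/day
    adfi = total_feed_kg / days              # kg/day
    return adg, adfi, adfi / adg             # F/G = feed per unit of gain

# Illustrative values only: 7.5 -> 12.3 kg over 19 days on 6.8 kg of feed
adg, adfi, fg = performance(7.5, 12.3, 6.8, 19)
print(f"ADG = {adg:.3f} kg/d, ADFI = {adfi:.3f} kg/d, F/G = {fg:.2f}")
```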
Immunological Parameters
Plasma IL-1β, TNF-α, IL-6, and IL-10 were analyzed using commercially available porcine ELISA kits (Huamei, Wuhan, China), according to the manufacturer's instructions. All assays were run in duplicate, and samples were diluted if necessary.
Intestinal Morphology
The duodenum and jejunum samples were embedded in paraffin. Each sample was used to prepare one slide with two sections (4 µm thickness), which were stained with hematoxylin-eosin. Three views of each section, and three well-oriented villi and crypts per view, were selected for intestinal morphology measurement. The villus height to crypt depth ratio (VCR) was calculated after measurement.
Quantitative PCR for Gene Expressions
Total RNA was extracted from the frozen jejunum, ileum, and MLN tissues using a total RNA extraction kit (LS040, Promega, Shanghai, China) according to the manufacturer's instructions. The quality, purity and concentration of the RNA samples were assessed by electrophoresis on a 1.5% agarose gel (130 V, 18 min) and with a NanoDrop spectrophotometer (A260/A280). The RNA was then adjusted to a uniform concentration using RNase-free ddH2O. Subsequently, reverse transcription of the RNA to complementary cDNA was performed using a cDNA reverse transcription kit (RR047A, Takara, Tianjin, China). Quantitative PCR using the SYBR Green system (RR820A, Takara) was performed on a QuantStudio 6 Flex (Applied Biosystems, CA, United States). The reaction mixture (10 µL) contained 5 µL of SYBR Green PCR Master Mix, 1 µL of cDNA, 0.4 µL each of forward and reverse primer (10 µM), 0.2 µL of ROX Reference Dye II (50×), and 3 µL of RNase-free ddH2O. The PCR reaction was repeated three times for each gene and carried out as follows: one cycle at 50 °C for 120 s and 95 °C for 600 s; forty cycles at 95 °C for 15 s and 60 °C for 60 s; and one cycle at 95 °C for 15 s, 60 °C for 60 s, and 95 °C for 15 s. Target gene expression was calculated based on the 2^-ΔΔCt method (Livak and Schmittgen, 2001) and normalized to GAPDH. The primer sequences were designed using Primer 3.0 (Supplementary Table 2).
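The 2^-ΔΔCt calculation itself reduces to a few lines: ΔCt normalizes each sample's target Ct to GAPDH, and ΔΔCt calibrates against the mean ΔCt of the control group. The sketch below uses made-up Ct values, not the study's measurements.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative quantification by the 2^-ΔΔCt method (Livak and
    Schmittgen, 2001), normalized to a reference gene (here GAPDH)
    and calibrated against the control group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)                  # ΔCt per sample
    d_ct_ctrl = np.mean(np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl))
    return 2.0 ** -(d_ct - d_ct_ctrl)                                  # fold change vs control

# Made-up Ct values for two treated and two control samples
print(relative_expression([24.1, 23.8], [17.9, 18.0], [25.6, 25.3], [18.1, 17.8]))
```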
Western Blot Analysis for Protein Expressions
The total protein in the frozen jejunal and ileal tissue samples was lysed in RIPA buffer (P0013B, Biyuntian, Shanghai, China). The protein concentration of each sample was measured using BCA protein assays (P0010, Biyuntian, Shanghai, China). Equal amounts of denatured protein (25 µg) from each sample were separated on 10% SDS-PAGE and then electroblotted onto PVDF membranes. Membranes were blocked for 2 h with 5% skimmed milk in TBST at room temperature. Subsequently, the membranes were incubated with specific antibodies [ZO-1 (ab96b87, Abcam, United States), Occludin (ab31721, Abcam, United States), Claudin-1 (ab15098, Abcam, United States), and β-actin (bs-0061R, Bioss, China)] for 12 h at 4 °C, and then incubated with secondary antibody for 1 h at room temperature. Finally, the proteins were detected using ECL chemiluminescence reagents (P1020, ApplyGen, Beijing, China) and a FluorChem M Fluorescent Imaging System (ProteinSimple, CA, United States). Protein expression was analyzed using ImageJ software.
Statistical Analysis
All data from this experiment were analyzed by one-way ANOVA according to the general linear models (GLM) procedure of SPSS 22.0 (IBM Inc., United States). Data are expressed as means ± SEM. Comparisons between treatment means were made using Duncan's multiple range test. P < 0.05 was considered statistically significant, and 0.05 < P < 0.10 was considered a tendency.
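An equivalent analysis can be sketched outside SPSS. The example below runs a one-way ANOVA with SciPy on illustrative F/G values (not the study's raw data); since Duncan's multiple range test has no standard Python implementation, Tukey's HSD from statsmodels is substituted for the pairwise comparisons.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative F/G values per treatment, not the study's raw data
con = np.array([1.42, 1.38, 1.45, 1.40, 1.44])
etec = np.array([1.71, 1.66, 1.74, 1.69, 1.72])
atb = np.array([1.47, 1.43, 1.50, 1.45, 1.48])
an = np.array([1.49, 1.44, 1.52, 1.46, 1.50])

f_stat, p = stats.f_oneway(con, etec, atb, an)      # one-way ANOVA across the four groups
print(f"F = {f_stat:.2f}, P = {p:.4f}")

values = np.concatenate([con, etec, atb, an])
groups = ["CON"] * 5 + ["ETEC"] * 5 + ["ATB"] * 5 + ["AN"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey HSD in place of Duncan's test
```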
Performance
After the ETEC challenge, the piglets fed diets containing antibiotics or AN had a lower (P < 0.05) F/G than the piglets in the ETEC group, and this F/G was similar to that of the piglets given the CON treatment (Table 1).
Intestinal Morphology
The duodenums of the piglets in the AN group exhibited higher villus heights (P < 0.05) and VCRs (P < 0.05) than those of the piglets in the ETEC group. The piglets in the ATB group had greater duodenal crypt depths (P < 0.05) than those in the CON and AN groups. The jejunums of the piglets in the CON group had higher villus heights (P < 0.05) and VCRs (Figure 1, P < 0.05) than those of the piglets in the other groups. In addition, the piglets in the CON group had lower crypt depths (Figure 1, P < 0.05) than those in the ETEC and ATB groups.
Barrier Function
The relative mRNA expression of mucin-1, mucin-2, ZO-1, Occludin, and Claudin-1 in the jejunum and ileum of the piglets was significantly downregulated (P < 0.05, Figure 2) in response to the ETEC challenge (P = 0.063 for mucin-2). However, this downregulation was partially mitigated by dietary supplementation with antibiotics or AN. In addition, the ZO-1, Occludin, and Claudin-1 protein levels were also markedly decreased (P < 0.05, Figure 2) after the ETEC challenge, which could be attenuated by dietary supplementation with antibiotics or AN.
Inflammatory and Immunological Responses
The plasma levels of IL-1β and TNF-α in the piglets tended to increase after the ETEC challenge (P < 0.1), whereas this tendency was not observed in the ETEC-challenged piglets in the ATB and AN groups. Compared with those of the piglets in the CON group, the relative mRNA expression levels of certain genes were upregulated (P < 0.05) in the jejunum (TLR9, MyD88 and IL-10), ileum (TLR5), and MLN (NF-κB) of the piglets in the ETEC group, which could be attenuated by supplementation with antibiotics and AN. Compared with those of the piglets in the ATB group, the relative mRNA expression levels of certain genes were significantly decreased (P < 0.05) in the jejunum (TLR5 and TLR9) and significantly increased (P < 0.05) in the MLN (TRAF6) of the piglets in the AN group. In addition, compared with those of the piglets in the CON or ETEC group, the relative mRNA expression levels of genes related to the MyD88/NF-κB signaling pathway were upregulated (P < 0.05) in the jejunum (SIGIRR) and ileum (SIGIRR and IL-10) of the piglets in the ETEC group and in the ileum (SIGIRR) and MLN (SIGIRR) of the piglets in the AN group (Figure 3).
Gut Microbiome
A total of 1,775,153 high-quality sequences were generated from 20 fecal samples (four treatments, n = 5), with an average of 88,758 sequences per sample, and 64,854 ± 2,566 effective tags were obtained for subsequent analysis after the noise sequences were discarded. Finally, all the effective tags were clustered into operational taxonomic units (OTUs) at 97% sequence similarity and then assigned to 23 phyla, 39 classes, 81 orders, 145 families, 322 genera, and 1,738 OTUs. For alpha diversity, the bacterial richness ACE and Chao1 indices of the AN group were markedly higher than those of the CON and ETEC groups (P < 0.05), the Observed_species richness index tended to differ between the ETEC and AN groups (0.05 < P < 0.10), and the Shannon and Simpson diversity indices showed no significant differences among the four treatment groups (P > 0.05). For beta diversity, the PCoA (PC1 32.33% vs PC2 20.26%) and NMDS (stress = 0.133) analyses based on weighted UniFrac distances showed that the microbiota of the piglets in the ETEC group clearly tended to separate from that of the piglets in both the ATB and AN groups (Figure 4). At the phylum level, the five major bacteria in the feces of the piglets were Firmicutes (51.74-9.85%), Bacteroidetes (6.97-34.30%), Spirochaetes (0.35-13.06%), Actinobacteria (0.84-7.26%), and Euryarchaeota (0.01-5.69%). At the genus level, unidentified_Clostridiales (1.59-35.52%), Catenibacterium (0.30-16.21%), Blautia (0.81-15.16%), Lactobacillus (0.51-13.55%), Terrisporobacter (0.37-12.64%), and Catenisphaera (0.20-10.97%) were the most predominant genera in all the samples, and three genera (Lactobacillus, unidentified_Ruminococcaceae, and Selenomonas) differed significantly among the groups within the top 10 (Figure 5 and Supplementary Table 4). Lactobacillus abundance in the ATB group was significantly higher (P < 0.05) than that in the ETEC group. unidentified_Ruminococcaceae abundance in the ATB and AN groups was significantly lower (P < 0.05) than that in the ETEC group. Selenomonas abundance in the AN group was significantly higher (P < 0.05) than that in the ETEC group.
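For reference, the alpha-diversity indices named above can be computed directly from an OTU count vector with the standard textbook formulas. The sketch below uses an arbitrary example vector; note that Simpson diversity is reported here as 1 − Σp², one of several conventions, and the Chao1 estimator is given in its bias-corrected form.

```python
import numpy as np

def alpha_diversity(counts):
    """Shannon, Simpson (1 - Σp²) and bias-corrected Chao1 richness
    from an OTU count vector for a single sample."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]            # drop absent OTUs
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log(p))       # Shannon entropy (natural log)
    simpson = 1.0 - np.sum(p ** 2)         # Gini-Simpson convention
    f1 = np.sum(counts == 1)               # singletons
    f2 = np.sum(counts == 2)               # doubletons
    chao1 = len(counts) + (f1 * (f1 - 1)) / (2 * (f2 + 1))
    return shannon, simpson, chao1

# Arbitrary example OTU counts, not study data
print(alpha_diversity([120, 40, 7, 1, 1, 2, 15]))
```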
DISCUSSION
Enterotoxigenic Escherichia coli regulates the secretion of enterotoxins and induces diarrhea and intestinal impairment in weaned piglets (Che et al., 2017). In the present study, the FCR of the piglets was significantly increased after the ETEC challenge. Correspondingly, statistical analysis of the intestinal morphology of the piglets showed that after the ETEC challenge, the VCR values of the duodenum and jejunum of the piglets in the ETEC group were significantly reduced (P < 0.05), while the VCR value of the duodenum of the piglets in the AN group did not decrease significantly, which is the most direct evidence that AN can help piglets resist ETEC infection. Maruzzella and Freundlich found that AN has a strong bacteriostatic effect (Maruzzella and Freundlich, 2010). Meanwhile, other studies have found that AN inhibits the secretion of acetylcholinesterase (AChE) and increases the concentration of acetylcholine (ACh; Bhadra et al., 2011); high ACh levels trigger intestinal smooth muscle contraction and enhance gastrointestinal motility, which may decrease the opportunities for ETEC colonization of the intestine.
Toll-like receptors (TLRs) play an important role in the regulation of innate immunity in animals. Numerous pathogenic molecules have been reported to be recognized by TLRs. For example, TLR4 proactively identifies the lipopolysaccharide (LPS) of ETEC, while TLR5 and TLR9 recognize flagellin and CpG-DNA, respectively (Cario, 2005; Lu et al., 2008; Kim et al., 2015). The activation of TLRs leads to the secretion of a large number of proinflammatory cytokines via the myeloid differentiation factor 88 (MyD88)/nuclear factor-kappa B (NF-κB) signaling pathway (Kawai and Akira, 2007). The present study showed that, seven days after the ETEC challenge, the relative mRNA expression of MyD88 in the jejunum was significantly upregulated in the piglets challenged with ETEC. Similarly, TLR5 in the ileum and TLR4, TLR5, TLR9, TRAF6, and NF-κB in the MLN were significantly upregulated. However, most of these genes (except TLR4 and TRAF6 in the MLN) were not altered in the piglets given dietary supplementation of ATB or AN. To maintain the stability of the immune system, the MyD88/NF-κB signaling pathway is negatively regulated by TOLLIP and SIGIRR (Burns et al., 2000; Wald et al., 2003). In this study, the relative mRNA expression of SIGIRR in the jejunum, ileum, and MLN and the relative mRNA expression of TOLLIP in the ileum were upregulated to varying degrees with the administration of AN and ATB. Additionally, no significant difference in TOLLIP mRNA expression in the MLN was identified between the AN and ATB groups. These results indicated that the AN supplement had functions similar to those of antibiotics, which can inhibit the MyD88/NF-κB signaling pathway by activating its negative regulators. This provides direct evidence that AN alleviates the inflammation induced by ETEC challenge in piglets.
Activated NF-κB regulates the expression of proinflammatory cytokines (Kawai and Akira, 2007). In the present study, we found that the mRNA expression of IL-1β and TNF-α in the jejunum, ileum and MLN was increased to varying degrees after the ETEC challenge. The concentrations of IL-1β and TNF-α in the plasma also tended to increase. Elevated concentrations of IL-1β and TNF-α generate heat and lead to elevated rectal temperature (Yi et al., 2005; Tesch et al., 2018). This observation might partially explain the rapid increase in the intestinal temperature of piglets in response to intestinal infection in our study. Tight junctions (TJs) are the most important connections between cells; TJs only allow soluble, small-molecule substances to pass through, which hinders the passage of macromolecular substances and microorganisms (Lee, 2015). Excessive production of the proinflammatory cytokines IL-1β and TNF-α also disrupts tight junctions and increases paracellular permeability in the gut (McKay and Baird, 1999; Ma et al., 2004; Al-Sadi et al., 2010). In addition, mucin protects the biological function of epithelial cells and participates in epithelial cell renewal and cell signaling activation; studies have found that downregulation of mucin-1 can increase TNF-α expression (Guang et al., 2010). In our study, we observed disruption of tight junctions and mucin secretion in response to ETEC infection. It is worth noting that AN is not the only essential oil that can regulate intestinal inflammation. In previous studies, thymol and oregano were also found to significantly alleviate the increase in IL-1β and TNF-α in the jejunal mucosa of piglets challenged with ETEC (Pu et al., 2018).
As is known, intestinal inflammation and gut microorganisms are related. To investigate the effects of AN on the intestinal microbiome of piglets, the microbes in the feces were analyzed by high-throughput 16S rDNA sequencing. In this study, the alpha diversity of the fecal microbiota in the AN group was significantly higher than that in the CON and ETEC groups. Beta diversity showed that the microbiota of the ETEC group clearly tended to separate from that of both the ATB and AN groups. Thus, the AN group had a microbial structure more similar to that of the ATB group and distinct from that of the ETEC group. This evidence indicates that AN supplements have functions similar to those of antibiotics in modifying the structure of the intestinal flora. Specifically, we found that the ATB group exhibited a significantly increased abundance of Lactobacillus, while the AN group exhibited a significantly increased abundance of Selenomonas, and both the ATB and AN groups exhibited a significantly reduced abundance of unidentified_Ruminococcaceae. Under normal conditions, Lactobacillus can inhibit the TLR4 inflammatory signaling triggered by ETEC, which is conducive to the maturation of the intestinal mucosal immune system and triggers local immunomodulatory activity (Zhang et al., 2011; Finamore et al., 2014). Selenomonas can produce SCFAs, which inhibit inflammation and enhance barrier function (Bladen et al., 1961; Rajilić-Stojanović et al., 2015). A recent study found that Ruminococcaceae is a microbial biomarker of oxidative damage and is highly abundant in many intestinal injury models (Zhou et al., 2018). Moreover, several studies have shown that Ruminococcaceae could be involved in recovery after ETEC infection (Salonen et al., 2010; Rajilić-Stojanović et al., 2015). These signs indicate that AN has a positive regulatory effect on the intestinal microbiota of piglets infected with ETEC, although its mechanism may differ from that of antibiotics. Overall, dietary supplementation with AN enriches the abundance of beneficial flora in the intestines of piglets, which enhances the intestinal functions of piglets and reduces the occurrence of inflammation.
CONCLUSION
In summary, AN can attenuate enterotoxigenic E. coli-induced intestinal barrier disruption and intestinal inflammation via modification of TLR signaling and the intestinal microbiota, thereby improving the growth performance of weaned piglets infected with ETEC. Meanwhile, AN is a promising alternative to antibiotics in animal husbandry.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The name of the repository and accession number can be found below: National Center for Biotechnology Information (NCBI) Sequence Read Archive (SRA), https://www.ncbi.nlm.nih.gov/sra, SRR13728343.
ETHICS STATEMENT
The animal study was reviewed and approved by South China Agricultural University Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
QY was the principal investigator that designed the study, wrote the manuscript, carried out the animal trials, sample analysis, data collection, and statistical analysis. JL, YZ, and HQ carried out the animal trials and sample analysis. FC supervised the study. SZ revised the manuscript. WG designed and supervised the study and revised the manuscript. All authors read and approved the final manuscript.
FUNDING
This work was supported by National Natural Science Foundation of China (No. 31872364). | 2021-03-25T13:17:42.335Z | 2021-03-25T00:00:00.000 | {
"year": 2021,
"sha1": "8558b7b3593a71a3b35249453625ffee5b9b6b1c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2021.647242/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8558b7b3593a71a3b35249453625ffee5b9b6b1c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255928484 | pes2o/s2orc | v3-fos-license
RECOVERY FROM MUSCLE INJURIES AFTER HIGH-INTENSITY TRAINING IN TABLE TENNIS
ABSTRACT Introduction High-intensity training is an important element of table tennis training. Due to the high muscle load, occasional injuries may occur during the practice of this activity, requiring the intervention of dedicated physical rehabilitation. Objective Explore the rehabilitation process of muscle injuries caused by high-intensity training in table tennis athletes. Methods Thirty-one student table tennis athletes with indications for rehabilitation due to muscle injuries caused by high-intensity training were volunteers for this research. Data pertinent to the research were collected before and after the intervention. Muscle strength, the empty can test, the lift-off test, and the peak torque of the flexor and extensor groups at a speed of 60°/s were analyzed, and the data were stored and analyzed in statistical software. The results were analyzed and checked against the updated scientific literature. Results The research shows that a good recovery method can relieve muscle pain, reduce psychological problems caused by pain and speed up joint motion gain. Conclusion The protocol analyzed in this paper can improve the athletes' sporting level from both the physiological and psychological points of view, besides promoting faster recovery and being suitable for daily practical application. Level of evidence II; Therapeutic studies - investigation of treatment results.
INTRODUCTION
Because of the sport's inherent competitiveness and antagonism, sports injuries will inevitably occur. An injury in daily training or competition affects the athletes' mental state and competitive ability, thus affecting the competition results.1 At present, the treatment and prevention of sports injuries is not only a research hotspot but also a difficult problem in the field of competitive sports.2 The literature analyzes and summarizes the types of playing methods and the actual injuries of different athletes, so as to characterize sports injury in table tennis. According to the data, the common injury sites are the knee joint, waist and shoulder, with the knee accounting for a high proportion. According to analysis and research on the actual sports injuries of the national women's table tennis team, the proportion of chronic injuries is higher than that of acute injuries, and the common injury sites are the knee, ankle, shoulder and waist.3 According to the literature, the waist is the most common injury site of table tennis players, accounting for the largest proportion, and among players with different playing methods, those who play the straight-racket single-sided loop have the highest injury probability.4 The literature also analyzes and summarizes the table tennis players of many colleges and universities. Through experimental investigation, it can be seen that the incidence of sports injury, from high to low, is lumbar muscle strain, meniscus injury, shoulder injury and so on. According to the literature survey, knee injuries account for the largest proportion of sports injuries of elite table tennis athletes, and table tennis players with the loop playing method have the highest probability of sports injury.5 Therefore, taking effective measures to help table tennis players recover from muscle injuries after high-intensity exercise can not only keep the athletes in a good competitive state, which is conducive to the next sports training, but also reduce the injuries caused by high-intensity exercise and prolong their competitive life, so as to achieve benign development.
METHOD
This paper selected 31 students majoring in table tennis at a university, including 19 male athletes and 12 female athletes. The study and all the participants were reviewed and approved by the Ethics Committee of Changzhou University (NO.18CZUN45-SD). The specific situation is shown in Table 1.
After the athletes finished the high-intensity exercise training, the muscle injury recovery scheme was carried out, including side-lying supported dumbbell side pulls, side dumbbell extensions, rubber band straight-arm rowing and other movements. Each movement was performed in sets of 12 repetitions, with three sets in each session. The frequency was the same as that of the athletes' high-intensity exercise, 4 times a week, lasting for 6 weeks. Before and after the experimental training, the results of infraspinatus muscle strength, the empty can test, the lift-off test, and the peak torque of the flexor and extensor groups at a speed of 60°/s were collected and compared.
Using Excel and SPSS software, the obtained data were uploaded, sorted and analyzed, and the independent-samples t-test was used. If P < 0.05, there was a significant difference.
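A minimal Python equivalent of this test can be sketched as follows; the torque values are illustrative placeholders rather than the study's raw measurements, and scipy's ttest_ind implements the independent-samples t-test named above.

```python
import numpy as np
from scipy import stats

# Illustrative peak-torque values (N·m) before and after the 6-week
# programme; not the study's raw measurements.
before = np.array([77.2, 74.9, 80.1, 76.5, 79.3])
after = np.array([100.3, 96.8, 104.2, 98.7, 102.5])

t_stat, p = stats.ttest_ind(before, after)   # independent-samples t-test, as in the study
print(f"t = {t_stat:.2f}, P = {p:.4f}")
print("significant difference" if p < 0.05 else "no significant difference")
```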
Analysis of high-intensity training and muscle injury
As shown in Figure 1, this paper compiled simple injury statistics on the research subjects, so as to gain a more systematic understanding of the muscle injuries of table tennis players.
As can be seen from Figure 1, among the three fast-break table tennis players, one had shoulder injury symptoms and two had knee injury symptoms; among the 15 arc-fast table tennis players, 2 had shoulder injury symptoms, 2 had waist injury symptoms, 3 had knee injury symptoms and 1 had other injury symptoms; among the 9 fast-arc table tennis players, 2 had shoulder injury symptoms and 2 had knee injuries; and among the four chopping table tennis players, one had shoulder injury symptoms and two had waist injury symptoms. According to the experimental results, table tennis players sustain a certain degree of muscle injury after high-intensity training. Due to different emphasis directions, the muscle injury sites of athletes with different playing methods also differ.
Analysis of recovery effect of muscle injury
As shown in Table 3, in terms of the PRI sensory score, it was 2.17 ± 0.2944 before recovery training, with the general evaluation in the range of moderate pain, which was relatively serious; after recovery training it was 1.37 ± 0.3279, with the general evaluation in the range of mild pain, i.e., relieved, P > 0.05, indicating that there was no significant difference. The pain score before recovery training was approximately 290.05, indicating that there was no pain, and the score after recovery training was approximately 290.38, indicating a significant difference in the severity range, P < 0.05. In terms of the PRI total score, it was 2.05 ± 0.4975 before recovery training, with the general evaluation ranging from moderate pain to extreme pain, which was relatively serious; after recovery training it was 1.74 ± 0.3722, with the general evaluation ranging from mild pain to moderate pain. Although still serious, the situation was relatively relieved, P < 0.05, indicating that there was a significant difference.
Through the 6-week muscle injury recovery experiment, it can be seen that the pain of the athletes was significantly improved, which has a good effect on the improvement of their sports level and sports comfort. Therefore, this experiment is beneficial to the recovery of athletes' muscle injuries and to pain relief.
Analysis of peak flexion and extension moment of muscle group after recovery training
As shown in Table 4, the peak left flexion torque of male athletes increased from 77.18 ± 4.1209 (N•m) to 100.29 ± 16.0472 (N•m), P > 0.05, indicating that there was no significant difference; The peak moment of left extension of muscle group increased from 152.90 ± 6.5311 (N•m) to 180.26 ± 3.4959 (N•m), P > 0.05, indicating that there was no significant difference; The peak right flexion torque of muscle group increased from 84.35 ± 1.5614 (N•m) to 127.39 ± 4.4615 (N•m), P > 0.05, indicating that there was no significant difference; The peak moment of right extension of muscle group increased from 153.67 ± 8.8646 (N•m) to 183.00 ± 5.1055 (N•m), P > 0.05, indicating that there was no significant difference.
As shown in Table 5, the peak left flexion torque of the female athletes increased from 40.08 ± 2.9029 (N•m) to 55.20 ± 5.4100 (N•m), P > 0.05, indicating that there was no significant difference; the peak left extension torque of the muscle group increased from 82.36 ± 3.7886 (N•m) to 117.00 ± 6.2950 (N•m), P > 0.05, indicating that there was no significant difference; the peak right flexion torque of the muscle group increased from 40.24 ± 3.6490 (N•m) to 62.60 ± 7.8663 (N•m), P > 0.05, indicating that there was no significant difference; and the peak right extension torque of the muscle group increased from 87.54 ± 7.9886 (N•m) to 120.18 ± 4.0093 (N•m), P > 0.05, indicating that there was no significant difference.
DISCUSSION
The technical characteristics of table tennis place high demands on athletes' agility and coordination. Therefore, strengthening the training of specialised qualities can not only improve athletes' special skills and performance, but also effectively prevent sports injuries, thus killing two birds with one stone. For example, practising movement coordination can effectively improve the body's neural control ability, so as to stabilise the execution of technical movements; training agility can help athletes quickly transmit nerve impulses in different postures, so as to improve their reaction ability; and practising strength can effectively prevent sports injury, reduce joint load and stabilise the joints, so that the muscles and ligaments around the joints provide strong support. The mechanism of most sports injuries is an antagonism and loss of balance in muscle strength, which reduces the body's control ability. Therefore, strengthening strength quality can fundamentally prevent sports injury; it can also improve the power generation of table tennis players, making their strokes simpler, faster and cleaner, and making the transmission of force faster and more accurate, so as to help athletes improve ball speed, competitive ability and competition performance. In the process of hitting a table tennis ball, because the racket is covered with rubber, the ball is struck under the combined action of the rubber's impact and friction. This process is affected by the player's explosive force; that is, strength quality affects explosive force and determines the flight speed and rotation speed of the ball after it is hit. The main attacking stroke in table tennis is the forehand attack, and the stroke is guided by the forehand to the rear side of the body. At the moment of hitting the ball, the toes push off the ground and drive the hip joint to shift the body's centre of gravity; the trapezius and latissimus dorsi muscles then contract backwards and drive the arm to complete the swing. In this process there is a "beyond the instrument" effect: when swinging, the athlete's arm completes a "whipping" action, manifested as an arc swing with the shoulder joint as the axis, which increases the swing distance to a certain extent, makes the force more sufficient and produces a higher-quality stroke. Moreover, training strength quality can also improve athletes' ability to control their muscles, enabling them to apply force at will and reducing the sports injuries caused by improper swing force.
The use of protective equipment can protect specific parts of the body. Common types of protective equipment on the market include ankle, knee, elbow, wrist, hip and waist-and-hip protective equipment, as well as muscle compression garments. According to the type of use, protective equipment can be divided into daily, medical and sports protective equipment. The effect of sports protectors is to support and protect the joints, ligaments and other tissue parts that
Figure 1. Analysis of injuries of athletes with different playing methods after high-intensity training.
As shown in Table 2, according to the muscle strength test results for the infraspinatus muscle, the score before recovery training was 2.01 ± 2.1846 and after recovery training 1.70 ± 1.8867, indicating that pain in the infraspinatus area was reduced (P > 0.05, no significant difference); according to the empty can test, the score before recovery training was 4.23 ± 1.6335, and
Table 1. Analysis of the basic situation of the athletes.
Table 2. Muscle condition test before and after recovery training.
Table 3. Analysis of pain grading index before and after recovery training.
Table 4. Peak flexion and extension torque of male athletes' muscle group at a speed of 60°/s (unit: N•m, n = 19).
Table 5. Peak flexion and extension torque of female athletes' muscle group at a speed of 60°/s (unit: N•m, n = 12). | 2023-01-17T16:50:44.433Z | 2023-01-13T00:00:00.000 | {
"year": 2023,
"sha1": "c3076266f44161d3af3e37c45a6acff5bfe7d5c3",
"oa_license": null,
"oa_url": "https://www.scielo.br/j/rbme/a/n4KFgmPPtr3vmJJKhWqKMzf/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Dynamic",
"pdf_hash": "141de70fc8b015b8a4a541e2b5ace56b3e0ea0f3",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
90323900 | pes2o/s2orc | v3-fos-license | THE OLIVE TREE, A SOURCE OF ANTIOXIDANT COMPOUNDS
Products from Olea europaea L., i.e. leaves, olive oil and pomace, are promising sources of bioactive compounds. In leaves, the concentration of antioxidant compounds depends on the vegetative cycle of the trees, with higher antioxidant concentrations coinciding with seasonal vegetative changes. Olive oil, but particularly pomace, is a rich source of health-promoting compounds, more specifically polyphenolic antioxidants. Many of these compounds may be of interest to the pharmaceutical, cosmetic and food industries, especially because both pomace and leaves are currently considered waste of olive oil production.
INTRODUCTION
Olea europaea L.: diffusion, history and mythology
The olive tree is a fruit tree of the Oleaceae family, species Olea europaea L. Although it is one of the oldest and most widespread plants in the world, it is difficult to exactly pinpoint its origin as a cultivated plant. It is thought to have first been cultivated in ancient times by indigenous Middle Eastern people [1]. Probably native to Syria, between 4000 and 1400 BC it spread to Egypt, Crete and Attica and thence to the rest of the Mediterranean with the help of the Phoenicians, Greeks and Carthaginians (Figure 1), where its cultivation was favoured by particularly suitable climate and soils [2,3].
Olive cultivation was developed by the Greeks, for whom the plant was of great importance. The utility of the olive in antiquity was so great that it was considered a gift of the gods. In Greek mythology, the first olive tree is attributed to the goddess Athena. In disputing the dominion of Attica, Poseidon and Athena vied to offer the people the greater gift. Poseidon used his trident to create a spring of seawater on land (another source has him creating the first horse, symbol of war and power), claiming that the Athenians would rule the waves. Athena used her lance to create the first olive tree: a gift of food, cosmetics, medicine and lighting. Faced with a choice between power and war or well-being and peace, the people preferred Athena's gift, and the capital of Attica was named Athens in her honour. The tree, which sprang up on the Acropolis, was guarded by soldiers after being declared sacred and protector of the city.
The olive has been considered sacred by many peoples, presumably not only because of its virtues, but also because it is a hardy and long-living plant. The olive is considered an immortal tree due to its natural longevity. Its trunk can regenerate from the roots, enabling a tree to live for thousands of years [4].
In the Mediterranean area, a correlation is evident between cultivation of the olive and cultural development, since olive growing and oil production, symbols of a stable society, called for knowledge and agricultural technology. Until the middle of the seventh century BC, the Etruscans imported olive oil from Greece. Large quantities of oil were transported by sea in amphorae, and small quantities for preparation of perfumed ointments were shipped in small vessels. The Etruscans subsequently learned olive cultivation and oil production from the Greeks.
In the second century BC, olive cultivation spread to Magna Graecia, where the Romans became acquainted with it. The Romans brought olive cultivation and the use of olive oil to all the lands they conquered [1]. The olive was a key element of Mediterranean culture, its oil being known as green gold. As a source of light it is a symbol of the great monotheistic religions. Olive oil was used to anoint Olympic athletes and is an essential ingredient of the Mediterranean diet. Since antiquity it has been valued for its health-giving properties and considered a product intermediate between food and medicine.
The fruits, oil and leaves of the olive tree, together with cereals, are major Mediterranean crops, bestowing economic and health benefits on the peoples of the region, where traditional therapeutic, dietetic and ceremonial uses handed down over thousands of years persist to the present day.
The olive has always been a symbol of abundance, glory and peace. Its fronds were used historically to crown victors of games and battles [5]. In the Bible, a white dove carrying an olive twig appears to Noah, announcing the end of the flood. The olive twig represents a new life and the promise of resurrection, as well as spiritual rebirth. Elsewhere in the Bible (Ezekiel 47:12), the properties of the plant are mentioned: "Their leaves will not wither, nor will their fruit fail ... Their fruit will serve for food and their leaves for healing." With the decline of the Roman Empire and the beginning of the barbaric invasions, olive cultivation diminished sharply and almost disappeared. The olive was cultivated almost exclusively in monasteries for religious needs and lighting. The Benedictine monks, whose motto was Ora et labora (Pray and work), persuaded the peasants not to abandon the land but to grow olive trees, and by the end of the Middle Ages olive cultivation had again reached high levels of production. Indeed, the congregation of the olivetani founded in 1313 at Monte Oliveto Maggiore (Siena Province) is Benedictine [4].
Today the olive tree is not limited to the Mediterranean basin, but is widely cultivated in different parts of the world, including South Africa, China, Vietnam and throughout the Americas.
Olea europaea L.: brief botanical description
The olive is an evergreen tree. Its vegetative phase continues throughout the year with a reduction in activity during Winter. As a native of the dry subtropical Mediterranean area, it adapts very well to extreme environmental and agricultural conditions, often living for centuries [6].
The root system is extensive and very superficial, consisting mainly of adventitious roots that spread laterally near the surface. The trunk has smooth greyish-green bark until about the tenth year of age, after which it becomes knotty, contorted and furrowed with bark of a darker colour. Plants that have lived for centuries can become very tall and wide. The trunk gives rise to branches and fronds which carry the buds that produce annual growth [7].
The fruits of the olive tree are small oval drupes called olives. The olive tree is unique among the 600 species of Oleaceae as the only plant to have fruit that can be used directly for food (table olives) or after processing (olive oil). Fruiting takes place over a period of two years. The size of the ripe drupe varies with cultivar and growing conditions and does not exceed 2-3 cm in diameter. Fruits have a thin exocarp, a fleshy mesocarp consisting of parenchyma cells rich in oil (the quantity of which varies with cultivar and season) and a central woody endocarp. Olives are produced every second year through a phenomenon known as induction. In general, a year of low production is accompanied by high vegetative activity and vice versa [7].
Leaves grow from Spring to Autumn and are shed after two years. They are arranged in opposite distichous whorls and have entire margins. They are leathery, elliptic or lanceolate, variably dark green above, shiny due to waxes, and opaque silvery-grey below. High sensitivity to light causes a large difference in photosynthesis between external leaves and inner leaves, which are less exposed to light. Leaves are roughly flat and 30-80 mm long, but their dimension varies in a given cultivar in relation to the age of the plant, the vigour of the branch and the phase of development in the span of a vegetative season: leaves that form shortly before the Summer vegetative arrest tend to remain small [7,8].
Trichomes, also known as pluricellular leaf plaques, can overlap to form 3-4 layers over the stomata to protect them and regulate stomatal transpiration, which is more active on the underside of the leaf. Thus the function of the cuticle layers, which reduce water loss, is accentuated. Stellate hairs protect the mesophyll and stomata on the underside from UV radiation, especially in early phases of leaf development, and reduce the effects of wind. Limited intercellular spaces in palisade and spongy tissue resist diffusion of gases inside the leaf, confirming the xerophytic adaptation of olive trees. In dry years, trees spontaneously shed many of their leaves in order to reduce the surface area of transpiration and prevent wilting [8].
Phenological cycle of the olive
Phenology is generally described as the art of observing life cycle phases or activities of plants and animals in their temporal occurrence throughout the year [9]. Phenology is therefore concerned with evaluating growth rates in relation to different endogenous and exogenous factors, such as biorhythms, light and temperature.
Various phenological scales have been established for cultivated species. Although related, they do not necessarily coincide, owing to their different aims, which may be botanical, agronomic or applied in general, each concerned with only certain phenological stages of the plants [10]. The BBCH (Biologische Bundesanstalt, Bundessortenamt, Chemische Industrie) scale [11] is officially recognised by the European Plant Protection Organization (EPPO) for the description of a wide range of vegetative stages of crops and wild plants. It is a decimal scale that can be used to describe monocots and dicots. It is divided into eight main development stages for buds, leaves and shoots and 32 secondary stages. As regards the olive, the phenological stages can be indicated as in Table 1. Figure 2 shows the development of olive trees during the growing season. Phenological growth stages are specific for each species, but the moment when each stage is reached differs between cultivars and years [12].
The phenological cycle of olive trees is very sensitive to weather conditions. Phenology is important for understanding how plants adapt to local climatic conditions and how they respond to changes, such as an early onset of Spring or an extended Autumn [13].
Olive products and by-products: oil, pomace and olive mill waste waters
Cultivation of olive trees and olive oil production by pressing of ripe olives is an essential agricultural activity in the Mediterranean area. Olive oil production is a tradition, though improvements and automation have facilitated the processes.
The olives are washed to remove dirt, stones and other material adhering to the fruits. They are then crushed in hammer mills (milling), and the skins, pits and crushed pulp, known collectively as pomace, are churned (malaxation) to favour the separation of the water fraction from the oil, emulsified during milling [14]. This is followed by extraction, which was traditionally performed by pressing. This method is relatively obsolete; used for centuries with only minor modifications, today it is still practised by some oil producers. Pressing produces an emulsion containing olive oil, which is subsequently separated by decantation of the aqueous fraction. Pomace is the solid by-product of pressing. Several decades ago, two types of centrifuge, two-phase and three-phase (Figure 3), were introduced. The three-phase method produces three distinct fractions at the end of the process: a solid fraction (pomace) and two liquid fractions (oil and an aqueous fraction, the olive mill waste waters). The advantages with respect to pressing include complete automation and better oil quality; the disadvantages include higher consumption of water and energy, a larger aqueous fraction and a more expensive plant [15].
Possible uses for waste water and pomace (usually disposed of as waste) have recently been studied to reduce environmental impact [16]. Olive leaves are another by-product of olive oil production. Leaves constitute about 10% by weight of the olive crop, and large quantities accumulate when olive trees are pruned [17].
CHEMICAL CHARACTERISATION
Olives and olive oil have been associated with humans and their traditions for thousands of years. They are an essential component of the Mediterranean diet. Consumed all over the world, their high content of monounsaturated fatty acids and phenols gives them an important nutritional role. They are also a major source of natural antioxidants, which, besides protecting olives and olive oil against oxidation, are beneficial for human health, as in the prevention of coronary artery disease and certain types of cancer. Figures 4 and 5 report the chemical structures of selected bioactive molecules present in Olea europaea L. products and by-products that will be commented on hereafter.
Olives
Olives have a low sugar content (2.6-6%) and a high oil content (12-30%), these concentrations varying according to the period of the year and the variety. The beneficial effects of table olives are mainly associated with minor components such as phenols and tocopherols. The phenol profile is complex and depends on factors such as cultivar, irrigation, ripeness and post-harvest processing [18].
The main phenols in the leaves and fruits of the olive tree are oleuropein and ligstroside, which impart a bitter taste and are found mainly in the skin and around the seed. They defend the fruits against pathogens and herbivores, making them unpalatable and unsuited for direct consumption from the plant [18]. To become edible, olives must be chemically treated to remove their bitter flavour. The most common industrial methods are:
• the Seville method for green olives, involving treatment with caustic soda that hydrolyses oleuropein to hydroxytyrosol and elenolic acid; subsequent lactic fermentation causes changes in the phenolic composition of the olives [19];
• the Californian system for black olives, which involves initial conservation in brine, which decreases the concentration of oleuropein by bacterial metabolic degradation and increases aglycone derivatives and hydroxytyrosol. The olives are then sweetened with caustic soda, washed and oxidised by saturating the water with compressed air. This causes oxidative polymerisation of o-diphenols [20];
• the Greek or natural system, which involves placing the olives in brine as soon as they are harvested. Under these conditions, natural fermentation of the olives lowers oleuropein levels and polymerises anthocyanins, helping to stabilise colour [21].
The main chemical reactions are briefly represented in Figure 6. Total phenol compounds in olives are 1-3% by weight of fresh pulp [22]. The main classes of phenols are phenolic alcohols, phenolic acids, flavonoids and secoiridoids (Figures 4 and 5). Phenolic acids are the simplest polyphenols in olives; the most abundant are caffeic acid, chlorogenic acid and the more complex verbascoside. The most abundant phenolic alcohols in olives are hydroxytyrosol and tyrosol and their glycosides. Hydroxytyrosol and tyrosol are derived from hydrolysis of oleuropein and ligstroside, respectively (Figure 4). Hydroxytyrosol is a polyphenol with a strongly antioxidant catecholic portion, and it is reported in the literature as having many health-promoting properties, including being an immunostimulant, an antibacterial agent and an inhibitor of atherosclerotic plaque formation [23,24].
Flavonoids are the principal dietary phenol intake. They are strong antioxidants, reducing the incidence of cardiovascular disease and certain types of cancer [27]. The main flavonoids in olives are luteolin-7-O-glucoside, cyanidin-3-O-glucoside, cyanidin-3-O-rutinoside, rutin, apigenin-7-O-glucoside, quercetin-3-O-rhamnoside and luteolin (Figure 5, Table 2) [25,26]. Secoiridoids are only found in a small group of edible plants. The major ones are oleuropein, ligstroside and demethyloleuropein (Figure 4). Oleuropein is the ester between hydroxytyrosol and elenolic acid, whereas ligstroside is the ester between tyrosol and elenolic acid. Oleuropein is generally the predominant phenol in olive cultivars and is found in the fruits and leaves. Demethyloleuropein is only found in certain varieties of olive and can therefore be exploited as a marker of variety (Table 2) [26].
The phenol profile of olives varies considerably during ripening. In the early stages of the growth and development of fruits, oleuropein levels increase to a maximum of 14% on a dry weight basis [22]. This is followed by a green ripeness phase in which oleuropein decreases and levels of hydroxytyrosol increase, probably due to hydrolysis by β-glucosidase and esterases involved in the breakdown of oleuropein, first producing oleuropein aglycone and subsequently hydroxytyrosol (Figure 7) [28]. However, certain authors report that the decrease in oleuropein is not always accompanied by an increase in hydroxytyrosol: in some cases both decrease during ripening [29]. This may be due to the formation of phenol oligomers when oleuropein is polymerised by diphenoloxidase [30]. In this phase there is also a decrease in the level of chlorophyll in the fruits. Finally, in the black ripeness phase, oleuropein continues to decrease while anthocyanins and flavonoids, such as luteolin-7-O-rutinoside, cyanidin-3-O-glucoside, cyanidin-3-O-rutinoside and rutin, increase.
Oleuropein is involved in the browning of olives after impact and breaking during harvesting and during subsequent treatments. Browning is due to the action of β-glucosidase and esterase on oleuropein and oleuropein aglycone, respectively, with formation of hydroxytyrosol. After this, oleuropein, hydroxytyrosol and verbascoside are oxidised by polyphenoloxidase [22].
Finally, it is interesting to note that oleuropein aglycone, as well as ligstroside aglycone, can be present as many different isomers, which have previously been characterised via mass spectrometry, together with the possible transformations among them (Figure 8) [31,32].
Leaves
Many of the polyphenols in the fruit of the olive tree are also found in the leaves. The Mediterranean region has long periods of sunlight and many pathogens and insects that can attack olive trees. To combat these stresses the olive produces large quantities of polyphenols that are stored in the leaves of its canopy. The concentration and type of polyphenols in the leaves are influenced by many factors, such as geographical location, cultivar and age of the plant [33]. The main phenol encountered in olive leaves is the secoiridoid oleuropein, whereas its analogues oleuropein aglycone and ligstroside aglycone occur in variable concentrations (Table 3) [34][35][36][37]. The second most abundant compound in olive leaves is the phenolic alcohol hydroxytyrosol, whereas tyrosol is only found in small concentrations in leaves. Other related compounds from leaves are verbascoside, caffeic acid and p-coumaric acid. Leaves also contain a series of flavonoids that constitute 2% of total polyphenol content. Major examples are luteolin, apigenin and rutin (Table 3).
Other compounds found in smaller quantities are oleanolic acid, vanillin and vanillic acid. According to the literature, young leaves of olive trees contain high concentrations of oleuropein, ligstroside and non-glycosylated flavonoids, whereas older leaves contain larger concentrations of verbascoside, oleuroside and glycosylated forms of luteolin. This is explained by bioconversion of oleuropein and ligstroside into verbascoside and oleuroside during leaf growth and by bioconversion of flavonoid aglycones into their glycosylated forms (Figures 4 and 5) [38].
Leaves also contain tocopherols and β-carotene [33,37]. Leaves and unripe fruits of olive trees contain pigments, the main function of which is to absorb sunlight and convert it into the energy necessary to synthesise carbohydrates from water and carbon dioxide by photosynthesis [39]. The type and quantity of pigments in plant tissues depend on factors such as species, variety, ripeness and growing conditions. At the end of blooming (May-June; Figure 2), olives begin to develop. They ripen towards the end of Autumn, turning purplish black. Before the fruits ripen, chlorophyll a is the most abundant pigment they contain (60-70% of total pigments), followed by chlorophyll b (15-20%). Carotenoids occur in minor percentages, β-carotene being the most abundant (4-5%), while violaxanthin and neoxanthin occur in similar percentages (4-5%; Figure 9). When the olives begin to ripen, photosynthesis decreases and chlorophyll disappears, probably together with most of the carotenoids, whereas xanthophylls, which are prevalently esterified in olives, increase. When the olives are ripe, they are purplish in colour due to anthocyanins, and the chloroplasts are replaced by chromoplasts [40]. In a study on olive leaves of the Neb Jmel cultivar, collected in two different periods of the year [41], it was found that the concentration of total chlorophylls depends on the age of the leaves. The maximum concentration of total chlorophyll (a and b) occurred in Winter, when the vegetative stage is not active (24 μg/mL of extracted solution). In Autumn, when the leaves are still growing, chlorophyll levels are lower (10 μg/mL of extracted solution) and anthocyanin concentrations are higher (Autumn 1.4 mg/kg fresh weight, fw, and Winter 0.8 mg/kg fw) [41].
Oil
The phenol profile of virgin olive oil depends strongly on the chemical composition of the olives and the process used to extract the oil, such as milling and malaxation conditions [28]. The organoleptic characteristics of the oil, such as aroma and flavour, are largely due to minor components, such as volatile compounds and phenols. Olive quality is certainly the most important factor for the quality of the finished product and is influenced by many factors, such as olive cultivar, ripeness, climate, soil and irrigation. α-Tocopherol (Figure 10) accounts for about 90% of total tocopherols (8 vitamers of vitamin E) in olive oil. The concentration of α-tocopherol is on average more than 170 mg/kg oil (Table 4) [42][43][44][45]. The reason for such high α-tocopherol levels could be related to the need to reduce the concentration of radicals (singlet oxygen) generated during photosynthesis.
The major phenols found in olive oil are hydroxytyrosol, tyrosol and vanillic acid (simple phenols), the secoiridoids oleuropein and ligstroside and their aglycones, the flavonoids, and finally the lignans (pinoresinol and 1-acetoxypinoresinol, Figure 11). Hydroxytyrosol is found in much greater quantities in extra virgin olive oil (14.4±3.0 mg/kg) than in refined virgin olive oil (1.7±0.8 mg/kg) [44]. The lignans pinoresinol and 1-acetoxypinoresinol are among the main antioxidants in olive oil. Pinoresinol is found in various plants, including those of the genus Forsythia (family Oleaceae) [46], whereas 1-acetoxypinoresinol is also found in olive bark [47,48]. The fact that they do not occur in olive skins, leaves or branchlets suggests that they form in the oil during olive processing and pressing [49].
Figure 9. Chemical structures of selected carotenoids present in Olea europaea L. products and by-products.
A phenol of great interest found in olive oil is the dialdehyde of decarboxymethyl ligstroside aglycone, also known as oleocanthal (Figure 12, Table 4). First identified by Montedoro and coworkers [50], it is considered responsible for the sharp flavour of certain extravirgin olive oils [51]. It was isolated by Beauchamp and coworkers [52] and identified as a natural non-steroidal anti-inflammatory drug, whose properties can be ascribed to structural analogy with ibuprofen. Long-term intake of small doses of oleocanthal through olive oil consumption can be linked to the lower incidence of cardiovascular disease, certain types of cancer and other degenerative diseases associated with the Mediterranean diet [53]. The dialdehyde of decarboxymethyl oleuropein aglycone, known as oleacein, has similar properties to oleuropein as well as being a stronger antioxidant than hydroxytyrosol [54]. It is reported in the literature that the concentration ratio of oleacein to oleocanthal (Figure 12) in various types of extravirgin olive oil depends on plant variety and is independent of the process by which olives are pressed to obtain oil [45]. It was also observed that the highest levels of the two compounds occur in oil samples prepared from early-picked green olives, whereas oils from ripe olives of the same variety, obtained by the same process, contain less of these compounds. The oleacein:oleocanthal ratio measured in oil samples decreases by an average of 10-15% after 12 months of storage in dark bottles in a cool dry place, indicating that oleacein is reduced comparatively more by oxidation.
Olive oil also contains many pigments, like chlorophyll in the form of pheophytins, as well as carotenoids, of which β-carotene is the most abundant while lutein occurs in traces (Figure 9). The antioxidant properties of these pigments contribute to the oxidative stability of olive oil [55].
Pomace and olive mill waste waters
The three-phase oil extraction process produces two main by-products: solid pomace and an aqueous fraction or waste water. A major problem of the olive oil industry is the treatment and disposal of this waste water, which is an environmental contaminant by virtue of its odour, acid pH (5-5.5) and its content of potassium salts, phosphates and organic matter such as fats, proteins, sugars and organic acids. It also contains a stable emulsion of olive pulp, mucilage, pectins and oil [56]. Attention has recently been focused on how to exploit these by-products. Uses such as for energy, compost/fertiliser and feed supplements for livestock have been proposed [15]. Pomace and waste water are also a major source of polyphenols that could be recovered as bioactive compounds for the pharmaceutical industry [57,58]. The phenol fraction in olive oil is only 2% of the total phenols of olives: the other 98% remains in the by-products. These compounds can occur naturally or arise from processing, partitioning between the oil and the waste products [59]. The main phenols in pomace and waste waters are hydroxytyrosol, oleuropein, tyrosol, caffeic acid, p-coumaric acid, vanillic acid, verbascoside, elenolic acid and rutin (Figures 4 and 5, Table 5) [56,[60][61][62][63][64]. Cicerale and coworkers [65] report that pomace is also an excellent source of oleocanthal, the chemical and biological properties of which have already been described.
ANALYSIS OF THE ANTIOXIDANT PROPERTIES OF OLIVES, EXTRAVIRGIN OLIVE OIL, POMACE AND OLIVE LEAVES
Here we report the results of analysis of the antioxidant activity of olives, extravirgin olive oil (EVOO, main product) and above all pomace (by-product) and olive leaves from cultivations and farms in SW Tuscany in the period 2013-2015.
All samples were pre-treated by freeze-drying and stored in the dark at −20 ± 1 °C until analysis. All were extracted with a non-toxic solvent or solvent mixture. The best extraction of antioxidant compounds was obtained with the ethanol-water mixture.
Samples were analysed chemically for antioxidant activity by the TEAC (Trolox Equivalent Antioxidant Capacity) spectrophotometric test and for total polyphenols by the spectrophotometric Folin-Ciocalteu method. Selected polyphenols (hydroxytyrosol and oleuropein) and members of the flavonoids and hydroxycinnamic acids were quantified by chromatography (HPLC-UV and HPLC-MS).
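To illustrate how such spectrophotometric readings are typically converted into the Trolox-equivalent units used below, here is a minimal Python sketch of a linear calibration; the standard concentrations, absorbance values and sample parameters are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical calibration: absorbance change of the ABTS radical solution
# measured for a series of Trolox standards (mmol/L).
trolox_std = np.array([0.0, 0.05, 0.10, 0.15, 0.20])   # mmol/L
abs_change = np.array([0.00, 0.11, 0.22, 0.34, 0.45])  # ΔA at 734 nm

# Least-squares calibration line ΔA = m·c + b fitted to the standards.
m, b = np.polyfit(trolox_std, abs_change, 1)

def teac(sample_abs_change, dilution, dry_weight_g, extract_volume_l):
    """Trolox-equivalent antioxidant capacity, mmol(Trx)/kg dry weight."""
    conc = (sample_abs_change - b) / m * dilution            # mmol/L in extract
    return conc * extract_volume_l / (dry_weight_g / 1000)   # mmol per kg dw

# Hypothetical pomace extract: ΔA = 0.30, diluted 50x, 0.5 g dw in 10 mL.
print(teac(0.30, dilution=50, dry_weight_g=0.5, extract_volume_l=0.01))
```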
Pomace 2013-2015
Particular attention has been paid to pomace as a by-product of olive oil production and as a potential source of antioxidant molecules. The pomace samples showed very high TEAC/ABTS and TPP antioxidant activities, mostly in the 2015 samples (Figure 13), especially samples P15-D/J: TEAC/ABTS, 265 ± 10 to 388 ± 12 mmol(Trx)/kg dw; TPP, 26.0 ± 1.5 to 43.7 ± 3.0 mg(GA)/g dw. The 2014 samples had much lower values, indicating a particularly poor harvest.
Interestingly, the TEAC/ABTS and TPP parameters showed a linear correlation in the 2013-2014 and 2015 pomace samples, R² = 0.763 (f(x) = 8.379x), improving to R² = 0.825 (f(x) = 8.418x) in the 2015 samples and to R² = 0.941 (f(x) = 7.661x; Figure 14) when samples P13-A, P15-I and P15-J were excluded. These samples were from geographical areas peripheral to the production area of the other samples.
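A zero-intercept fit of this kind (f(x) = k·x with its R²) can be reproduced with a few lines of code; the sketch below uses made-up paired TPP/TEAC values purely to show the computation, not the measured data.

```python
import numpy as np

# Placeholder paired measurements: TPP (mg(GA)/g dw), TEAC/ABTS (mmol(Trx)/kg dw).
tpp = np.array([26.0, 30.5, 35.2, 40.1, 43.7])
teac = np.array([220.0, 255.0, 300.0, 330.0, 370.0])

# Through-origin least squares: slope k minimising sum of (y - k*x)^2.
k = np.sum(tpp * teac) / np.sum(tpp ** 2)

# R^2 computed against the mean, as for an ordinary regression;
# note that conventions differ for zero-intercept models.
residuals = teac - k * tpp
r2 = 1 - np.sum(residuals ** 2) / np.sum((teac - teac.mean()) ** 2)
print(f"f(x) = {k:.3f}x, R² = {r2:.3f}")
```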
EVOO samples enriched with pomace extracts
Some preliminary attempts were made at enriching 2014 EVOO samples (EVOO14-A, EVOO14-B and EVOO14-C) with the corresponding pomace samples (P14-A, P14-B and P14-C) extracted with ethanol/water (80/20% v/v). The experiment used EVOOs (about 2 months old) stored in the dark at −20 ± 1 °C. Before addition of pomace (EVOO14-A stored, 2 months), the TEAC/ABTS parameter showed values about 30% lower than in fresh samples (EVOO14-A fresh). The results (Figure 16) showed an increase in TEAC/ABTS for the first 48 h, due to gradual release of oil-soluble antioxidants by the freeze-dried pomace. The measurements performed up to 72 h showed a decrease in the parameter, presumably due to simultaneous oxidation of the antioxidant species present in EVOO and pomace.
Olive leaves
Finally, four samples of olive leaves of the Leccino variety, obtained at different stages of the phenological cycle of the plant (namely early Summer, early Autumn, Winter and Spring), were analysed. Total polyphenol content was in the range 117.9 ± 5.9 to 203.5 ± 10.2 g(GA)/kg dw in aqueous extracts and 219.5 ± 11.0 to 298.2 ± 14.9 g(GA)/kg dw in hydroalcoholic extracts (Figure 17a). A similar seasonal trend was observed for both types of extract, and a 65% higher quantity of polyphenols was recorded in the hydroalcoholic extract than in the water extract, presumably due to the greater solubility of polyphenols in ethanol.
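As a sanity check on the relative yield of the two extraction conditions, the reported ranges can be compared directly; the midpoint approximation below gives a figure of the same order as the 65% quoted above, which presumably refers to the seasonal means rather than the range midpoints.

```python
# Midpoints of the reported ranges (g(GA)/kg dw); the 65% in the text
# presumably uses the seasonal means, which are not tabulated here.
water_mid = (117.9 + 203.5) / 2        # aqueous extract
hydro_mid = (219.5 + 298.2) / 2        # hydroalcoholic extract
print(f"{(hydro_mid / water_mid - 1) * 100:.0f}% higher")  # ≈ 61%
```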
Under both extraction conditions, the trend of polyphenol content was in line with other studies [72]. A similar trend was found in one of the two cultivars studied there (Kilis Yaglik), which showed the highest concentrations in Winter, when vegetation stops, leaf and stem growth arrests and buds die. The lowest polyphenol content in leaves was found in Spring, when leaves grow relatively little whereas flower bunches begin to grow, continuing until late Spring/early Summer. Values increase slightly in Summer, when leaves and stems grow strongly and polyphenols are affected by increased sunlight (ultraviolet) and by drier soil conditions. This pattern is also closely correlated with the antioxidant defences mounted by the plant. Antioxidants play a role in the molecular mechanisms occurring in trees under different stresses (drought, salinity, low temperatures) that induce specific morphological adaptations, variations in water potential between leaves and roots, and increased scavenging of oxygen free radicals. The high content of polyphenols in the cold Winter months (a limiting factor for the vegetative production of olive trees, which are consequently defined as heliophilous) is confirmed in many studies [36,73].
Figure 18 shows the superimposed chromatograms obtained by HPLC-UV analysis of hydroalcoholic extracts of leaf samples obtained in different months for the analysis of oleuropein and hydroxytyrosol (resveratrol was used as internal standard). Figures 19 and 20 show HPLC-MS chromatograms obtained in SIM and SRM modes (Single Ion Monitoring and Selected Reaction Monitoring) by injection of water and hydroalcoholic extracts of Leccino leaves (genistein was used as internal standard). Table 6 shows the range of polyphenols quantified.
CONCLUSIONS
The results show that the by-products of Olea europaea L. (leaves) and of olive oil production (pomace) are promising sources of bioactive compounds. In leaves, compounds of interest are higher in periods when the vegetative cycle of the trees changes, coinciding with seasonal variations. Considering the health-giving effects of polyphenolic antioxidants and the importance of olive oil production in all Mediterranean countries, it is urgent to study all biologically active molecules for nutraceutical uses, for the production of functional foods and for other purposes such as cosmetics. Promotion of the primary and secondary components of olive production is a model to use in other areas of agriculture (e.g. viticulture, horticulture, cereal crops) to maximise the use of nutritional and nutraceutical resources and to make agriculture economically sustainable.
Figure 1. Diffusion of the olive tree in the Mediterranean basin.
Figure 6. Main chemical reactions involved in the industrial process to remove the bitter flavour from olives.
Figure 7. Enzymatic hydrolyses naturally occurring during the ripening of olives.
Figure 8. Molecular structures of oleuropein aglycone isomers and proposed transformations among isomers during fruit ripening, crushing and malaxing (cis or trans isomers are possible for many structures and are not reported here; adapted from Garcia-Mozo et al., 2009) [31].
Table 3.
Figure 17. Antioxidant activity: (a) TPP, (b) TEAC/ABTS and (c) TEAC/DPPH in water and hydroalcoholic extracts of olive leaves. Values are means of three replicates ± standard deviation (SD).
Figure 20. HPLC-MS chromatogram obtained by injection of hydroalcoholic extract of olive leaves (Winter) in the region of the flavonoid peaks.
Table 6.
Table 2. Contents of selected phenolic alcohols, phenolic acids, flavonoids and secoiridoids analysed in olive fruits (values expressed as mg/kg dw).
Table 4. Contents of selected phenolic alcohols, phenolic acids, flavonoids and secoiridoids analysed in olive oils (values expressed as mg/kg). | 2019-04-02T13:09:11.824Z | 2017-07-27T00:00:00.000 | {
"year": 2017,
"sha1": "61232d1086a8f54344176777918a67f350dd130a",
"oa_license": "CCBYNC",
"oa_url": "https://pagepressjournals.org/index.php/jsas/article/download/6952/6651",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "aa82439925a0e673967a66c52f3dd83610fccd58",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
254535366 | pes2o/s2orc | v3-fos-license | Impact of the First and Second Wave of the SARS-CoV-2 Pandemic on Severe Trauma and the Care Structures in the German TraumaNetzwerk DGU®
(1) Background: The aim of this study was to investigate the effects of the pandemic on transfer rates of severely injured patients within the German TraumaNetzwerk of the DGU. Furthermore, cause of accident, rescue times, and trauma cases are compared to pre-pandemic times. (2) Methods: For this investigation patients documented in the TraumaRegister DGU® from 2018 to 2020 were analyzed. The years 2018 and 2019 served as a comparison to 2020, the first COVID-19 pandemic year. All primary admissions and transfers were included if treated on an intensive care unit. (3) Results: Demographics (age, sex) and injury severity in 2020 were comparable with 2018/2019. In 2020, a significant decrease (3.7%) in car accidents was found. In contrast, a significant increase (3.2%) in bicycle accidents was seen. During the second wave, there was a significant burden of COVID-19 patients on hospitals. In this time, we found a significant increase in early transfers of trauma patients primarily from small level 3 to large level 1 centers. There was also a small but significant increase in rescue time, especially during the 2nd wave. (4) Conclusions: Our data confirm the importance of the network structures established in the TraumaNetzwerk DGU®, especially during the pandemic. The established structures allow smaller hospitals to spread their resources and prevent internal collapse. Therefore, the structures of the TraumaNetzwerk DGU® play a prominent role in stabilizing the healthcare system by helping to maintain both surgical and critical care capacity and providing adequate emergency care.
Introduction
To date, more than six million people have died from COVID-19 worldwide. This makes the COVID-19 pandemic the third deadliest mass viral disease in recent history, with only the Spanish flu and AIDS having claimed more lives [1].
The record infection figures of the '4th wave' (virus variant) and the resulting tense situation in the intensive care units in some federal states in Germany made it necessary to transfer intensive care patients to other federal states on a larger scale with the participation of the Air Force [2].
However, in addition to limited critical care capacity, trauma patient care is also a major challenge, and there are geographic and infrastructural differences in the available care, owing to differences in the staffing and equipment of hospitals. To improve trauma care nationwide, the German Trauma Society (DGU) initiated the TraumaNetzwerk DGU® (TNW) project. Commonly defined standards of care, the integration of regional cooperations, and the division of hospitals into local, regional, and supraregional trauma centers enable the TNW to structure and influence the care of severely injured patients within a nationwide trauma system [3].
While the overall number of traumas decreased, there appeared to be a greater overall concentration in level 1 trauma centers during the pandemic [4].
Another study from a level 1 trauma center was able to show that there was a 50% increase in polytrauma patients. In addition, the number of bed days of trauma surgery patients in the ICU and IMC increased by 90% at the start of the pandemic in March 2020 compared to the same period the previous year [5].
This study investigated the role of the network structure within the TraumaNetzwerk DGU® during the 1st and 2nd waves and how the transfer structures between the trauma centers changed during this period. In addition, the number of trauma cases, rescue time, severity of injury, and accident mechanism were examined and compared to previous years.
Materials and Methods
The TraumaRegister DGU ® of the German Trauma Society (Deutsche Gesellschaft für Unfallchirurgie, DGU) was founded in 1993. The aim of this multicenter database is a pseudonymized and standardized documentation of severely injured patients.
Data are collected prospectively in four consecutive time phases from the site of the accident until discharge from hospital: (A) pre-hospital phase, (B) emergency room and initial surgery, (C) intensive care unit, and (D) discharge. The documentation includes detailed information on demographics, injury pattern, comorbidities, pre- and in-hospital management, course on the intensive care unit, relevant laboratory findings including data on transfusion, and outcome of each individual. The inclusion criterion is admission to hospital via the emergency room with subsequent ICU/IMC care, or reaching the hospital with vital signs and death before admission to the ICU.
The infrastructure for documentation, data management, and data analysis is provided by AUC-Academy for Trauma Surgery (AUC-Akademie der Unfallchirurgie GmbH), a company affiliated with the German Trauma Society. The scientific leadership is provided by the Committee on Emergency Medicine, Intensive Care and Trauma Management (Sektion NIS) of the German Trauma Society. The participating hospitals submit their data pseudonymized into a central database via a web-based application. Scientific data analysis is approved according to a peer review procedure laid out in the publication guideline of the TraumaRegister DGU ® .
The participating hospitals are primarily located in Germany (90%), but a rising number of hospitals in other countries contribute data as well (at the moment Austria, Belgium, China, Finland, Luxembourg, Slovenia, Switzerland, the Netherlands, and the United Arab Emirates). Currently, approximately 30,000 cases from 650 hospitals are entered into the database per year.
Participation in the TraumaRegister DGU ® (TR-DGU) is voluntary. For hospitals associated with the TraumaNetzwerk DGU ® , however, the entry of at least a basic dataset is obligatory for reasons of quality assurance.
The present study (TR-DGU Project ID: 2020-056) and manuscript were approved according to the publication guidelines of the TR-DGU.
Patients
Severely injured patients treated in a German hospital and documented in the TR-DGU from 2018 to 2020 were analyzed for this study. The years 2018 and 2019 served as a comparison to 2020, the first COVID-19 pandemic year. Because restrictions were first imposed on the population in Germany in March 2020, the data for all three years were evaluated from the 10th calendar week onward. The restrictions included, among others, contact restrictions, wearing masks, keeping distance from other people, and the closure of restaurants. Further analysis of the COVID-19 year 2020 was based on the course of the pandemic. For this purpose, the following periods were defined: 1st wave (10th to 20th calendar week), summer plateau (21st to 39th calendar week), and 2nd wave (40th to 52nd calendar week).
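A minimal sketch of how such a calendar-week-based phase assignment could be implemented is given below; it assumes ISO calendar weeks, which is our assumption, as the registry's exact week convention is not stated.

```python
from datetime import date

def pandemic_phase(d: date) -> str:
    """Assign a 2020 date to the study periods defined above (ISO calendar weeks)."""
    week = d.isocalendar()[1]
    if 10 <= week <= 20:
        return "1st wave"
    if 21 <= week <= 39:
        return "summer plateau"
    if 40 <= week <= 52:
        return "2nd wave"
    return "outside study period"

print(pandemic_phase(date(2020, 4, 15)))   # week 16 -> "1st wave"
print(pandemic_phase(date(2020, 11, 3)))   # week 45 -> "2nd wave"
```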
The basic collective of the TraumaRegister DGU ® used for the annual audit reports was considered here. This patient group is defined as all severe trauma cases with maximum Abbreviated Injury Scale (mAIS) severity of 3 or more, or cases with mAIS 2 treated on an intensive care unit. All cases from Germany were included; no further exclusion criteria have been used.
Data Analysis and Statistics
Descriptive data analysis was performed using SPSS (version 27, IBM, Armonk, NY, USA). Mean and standard deviation were used for metric data, and percentages for counts. Due to the large number of patients per year, formal statistical testing of observed differences was avoided, as it would have resulted in highly significant results even for non-relevant differences. The detectable difference in groups of 20,000 cases each is <1%.
For changes in the number of patients, 95% confidence intervals (CI) were calculated based on the Poisson distribution. The pre-hospital time from accident to hospital admission was also analyzed with a multiple linear regression analysis, and adjusted effects for time of year (phases) and year were presented with 95% CI.
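As an illustration of the Poisson-based interval mentioned above, the following sketch computes the exact (Garwood) confidence interval for an observed count via the chi-square quantile relationship; the example count is hypothetical.

```python
from scipy.stats import chi2

def poisson_ci(count: int, alpha: float = 0.05):
    """Exact (Garwood) confidence interval for a Poisson count."""
    lower = 0.0 if count == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * count)
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (count + 1))
    return lower, upper

# Hypothetical example: 250 severe trauma cases observed in one phase.
print(poisson_ci(250))  # ≈ (220.0, 283.0)
```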
Results
Table 1 shows the comparison of the 2020 COVID-19 pandemic population with those of 2018 and 2019. Comparing the demographics of trauma patients in 2020 to those of trauma patients in the previous two years shows an increase in average age by 1 year. Fittingly, the proportion of patients with a pre-injury ASA score of 3-4 also increases by 3.5%. The sex distribution (30% female/70% male) is constant over the years. The mean injury severity, as measured by the ISS, also remains constant over the observation period.
In 2020, there is an increase in higher degree injuries (AIS 3+) to the head and thorax. The analysis of the causes of accidents shows a significant decrease of about 3.7% in car accidents during the pandemic year 2020 compared to 2019 and 2018. In contrast, there is a relevant increase of about 3.2% in bicycle accidents. When all traffic accidents are considered together, there is a slight decrease of about 1.4% in 2020. On the other hand, there is a slight increase in suicides (+0.6%, Table 1).
When analyzing the trauma patient flows during the different phases of the pandemic in 2020, obvious changes become apparent during the 2nd wave. During this wave, the maximum incidence in Germany was 210 and there was a significant burden of COVID-19 patients on hospitals. During this time, there was a relevant increase in early transfer (<48 h after admission) of trauma patients from primarily level 3 (approximately +5%) trauma centers to primarily level 1 centers (Table 2). Evaluating the number of trauma cases during the pandemic waves in 2020, it is clear that the lockdown measures led to a significant decrease in the number of cases.
During the summer months (calendar weeks 21-39), the incidence in Germany dropped to 16, but trauma numbers increased significantly, by 4.8%, at this time (Table 3). Another effect of the load on clinics with COVID-19 patients can be seen in rescue time (time from accident event to arrival at the clinic). Compared to the spring (1st wave), there was a slight decrease of 0.2 min (95% CI −0.9 to 0.5) during the summer plateau and an increase of 1.3 min (95% CI 0.5 to 2.1) during winter (Table 4).
Discussion
The COVID-19 pandemic is affecting all medical specialties and all sectors of medical care, from pre-hospital rescue to rehabilitation. The main finding of this paper is that the COVID-19 pandemic had an impact on trauma patient transfer within network structures and thus on the care of severely injured patients in Germany.
This was especially the case during the 2nd wave in 2020, when hospitals carried a large burden of admitted COVID-19 patients and a slightly increased transfer of trauma patients to level 2 and level 1 centers was observed.
During this time, there was an increase in early transfer of trauma patients from primarily level 3 (approximately +5%) trauma centers to primarily level 2 (+1.5%) and level 1 centers (+1.5%). Additionally, we found delayed rescue times in this period. We were able to show that the contact restriction measures also had an impact on the number of accident injuries and the causes of accidents. Thus, there was a decrease in the total number of serious injuries during the lockdown periods. When analyzing accident type, there was a 3.2% increase in bicycle accidents in 2020 compared to the reference period 2018/19 and a 3.7% decrease in car accidents.
In their study evaluating the eTrauma management platform (Open Medical, London, UK), Sephton et al. also found a decrease in car accidents (−7%), as well as a significant decrease in sports injuries. Consistent with the increase in bicycle accidents shown in our study, the authors report an increase in accidents with pushbikes and scooters in northwest London [6].
Both the nationwide lockdown and the increase in those who switched to working from home, resulting in less traffic, could adequately explain the decrease in traffic accidents. The additional restriction of use and closure of sports facilities and sporting events account for the decrease in sports injuries found by Sephton et al. and, likewise, for the increase in bicycle accidents we found.
Due to the fact that sporting activity could only take place in private, the sales figures of the German bicycle industry increased by more than 10% in the first half of 2020 compared to the same period of the previous year [7].
The increase in head and thorax injuries found in our data can be explained by the increased activity and mobility with bicycles in 2020.
The reciprocal development of COVID-19 incidence and trauma figures in Germany also finds its origin in the overall reduced activity of each individual during the restricted period.
There was an increase in the total rescue time evaluated across Germany (time from accident event to arrival at the hospital), particularly in the 2nd wave of the pandemic. The total rescue time increased by about 2 min compared with the preceding periods.
This observation is supported by the working group of Driessen et al. in a multicenter observational cohort study based on the Dutch Nationwide Trauma Registry. They showed that the total pre-hospital time was significantly longer for all periods in 2020: 54 min (IQR 44-65) during the 1st wave, 53 min (IQR 43-64) during the interbellum period, and 54 min (IQR 45-66) during the 2nd wave [8]. Several factors suggest an effect of the pandemic on pre-hospital rescue time.
The increased pre-hospital rescue time could be considered a consequence of increased infection control during the 1st and 2nd waves. However, a retrospective cohort study that compared trauma patients transported via EMS to six US level 1 trauma centers, admitted 1 January 2019-31 December 2019 and 16 March 2020-30 June 2020, found no difference in total pre-hospital time during the SARS-CoV-2 pandemic waves.
They showed that transportation time was significantly shorter during COVID-19 compared to 2019. This may have been partially due to social distancing guidelines reducing the number of people on the road [9].
Due to the varying quality of care for severely injured patients in Germany, the German Trauma Society launched the TraumaNetzwerk DGU ® initiative in 2006 [10,11]. The reasons for these differences in quality are geographical and infrastructural differences between the federal states and regions in Germany, as well as differing treatment concepts and internal equipment of the individual hospitals involved in the care of severely injured patients.
For example, analyses from the United States showed that the introduction of regionalized trauma systems reduced the rate of preventable deaths in severely injured patients by 50% [12,13]. The designations level 1, level 2, and level 3 trauma center were developed based on the hospitals' resources for the care of the severely injured. The implementation of trauma networks, the release of the S3 polytrauma guidelines, and the DGU "Weißbuch" have contributed to a more structured management of most severely injured patients [14].
Based on these guidelines, transfer strategies between trauma centers are clearly regulated. In the present study, these transfer structures were analyzed during the 1st and 2nd waves of the pandemic. Among other things, it was investigated whether the graduated structure of the trauma networks led to an increased use of higher levels of trauma care during the pandemic.
Most of the published studies showed no differences or a decrease in trauma cases during the first lockdown period [15,16].
In Germany, Wähnert et al. described the effects of the first pandemic wave on a German level 1 trauma center [5]. They showed a significant increase in the number of polytrauma cases in the investigated German trauma center. This confirms the data presented here, which show a slight increase in early transfers from level 3 to level 2 and level 1 centers within the German trauma network, particularly during the 2nd wave. These data confirm the importance of trauma networks, especially during the pandemic. The established structures allow smaller hospitals to spread their resources and prevent internal collapse. Trauma networks therefore play a prominent role in stabilizing the health care system by maintaining both surgical and critical care capabilities and providing adequate emergency care.
This study is limited by the national specificity of the German trauma network structures so the data cannot be generalized.
Furthermore, a limitation of our study is the rather short comparison period (2 years); this is due to a change in the data collection of the TraumaRegister DGU ® .
Due to the large number of cases evaluated, the statistical evaluation is rather descriptive, as even small differences can reach significance. The evaluation covers the whole of Germany, so regional differences and special features (Wähnert et al., 2021 [5]) are not taken into account in this study.
The impacts of the respective waves on the health system vary and were not recorded.
Conclusions
The 2 min increase in rescue time for accidents in our data can currently only be explained by increased infection control efforts. A separate study should focus on how to avoid potentially life-critical increases in rescue times even during a pandemic. Our data confirm the importance of the network structures established in the TraumaNetzwerk DGU®, especially during a pandemic. Level 3 trauma centers, in particular, should take advantage of these network structures by early transfer to preserve their capacities and avoid internal collapse. The structures of the TraumaNetzwerk DGU®, therefore, play an outstanding role in stabilizing the healthcare system.
Author Contributions: Conceptualization, C.C., S.F., D.W. and T.V.; methodology, C.C., S.F., R.L. and T.V.; formal analysis, S.F., P.L., M.M., R.L. and D.W.; data curation, C.C., P.L., M.M. and TraumaRegister DGU; writing-original draft preparation, C.C., S.F., P.L. and D.W.; writing-review and editing, M.M., R.L., T.V. and N.G.; supervision, D.W. and N.G.; project administration, T.V. and N.G. All authors have read and agreed to the published version of the manuscript. | 2022-12-12T05:15:43.514Z | 2022-11-28T00:00:00.000 | {
"year": 2022,
"sha1": "05e1b83b9671677c64faaa620a73bc2a2be349b6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/11/23/7036/pdf?version=1669644412",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05e1b83b9671677c64faaa620a73bc2a2be349b6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9562446 | pes2o/s2orc | v3-fos-license | Overexpression of P21-activated kinase 4 is associated with poor prognosis in non-small cell lung cancer and promotes migration and invasion
Background P21-activated kinase 4 (PAK4), an effector of the Rho family protein Cdc42, is an important oncogene whose expression is increased in many human cancers and is generally positively correlated with advanced disease and decreased survival. However, little is known about the expression and biological function of PAK4 in human non-small cell lung cancer (NSCLC). Methods PAK4 expression in NSCLC tissues and adjacent non-tumor tissues was assessed by immunohistochemistry, real-time PCR, and western blotting. The prognostic value of PAK4 expression was evaluated by Kaplan-Meier analysis and Cox regression. siRNA-mediated gene silencing and protein kinase assays were applied to demonstrate the role and mechanism of PAK4 in lung cancer cell migration and invasion. Results The results showed that PAK4 was overexpressed in NSCLC cell lines and human NSCLC tissues. PAK4 expression was detected both in the membranes and cytoplasm of NSCLC cancer cells in vivo. Moreover, increased expression of PAK4 was associated with metastasis, shorter overall survival, and advanced stage of NSCLC. Furthermore, PAK4 expression was positively correlated with the level of LIMK1 phosphorylation. Knockdown of PAK4 in NSCLC cell lines led to reduced phosphorylation of LIMK1, which resulted in decreased cell migration and invasion. In addition, PAK4 bound to LIMK1 directly and activated it via phosphorylation. Conclusions These data demonstrate that PAK4-mediated LIMK1 phosphorylation regulates migration and invasion in NSCLC. Therefore, PAK4 might be a significant prognostic marker and potential therapeutic molecular target in NSCLC.
Background
Lung cancer is the leading cause of cancer-related death around the world, and approximately 80-85 % of lung cancers are non-small cell lung cancer (NSCLC) [1,2], which accounts for up to 85 % of such deaths [3,4]. The leading cause of death from NSCLC is metastasis, which occurs even before the diagnosis of lung cancer is made, leading to recurrence and treatment failure in patients [5][6][7]. Despite advances in therapeutic approaches, most patients are diagnosed at advanced stages, and the 5 year survival rate remains less than 15 % [8]. Therefore, it is important to identify new predictive prognostic biomarkers and to better understand the mechanisms of disease progression. P21-activated kinases (PAKs) are a family of serine/threonine protein kinases positioned at the nexus of several oncogenic signaling pathways. Overexpression or mutational activation of PAK isoforms frequently occurs in various human tumors; given their involvement in cancer cell motility, survival, apoptosis, and metastasis, PAK isoforms are important regulators of cancer cell signaling networks [9]. Based on their amino acid sequences and their functions, 6 mammalian PAKs have been identified and classified into group I (PAK1, 2, 3) and group II (PAK4, 5, 6) PAKs [10,11]. PAK4 was initially identified as an effector of Cdc42, which is essential for regulating cytoskeleton reorganization and filopodia formation [12]. PAK4 is upregulated in many cancers and is an important oncogene that promotes proliferation [13][14][15][16] and migration [15][16][17][18][19][20][21][22][23], and suppresses apoptosis [24]. However, the role of PAK4 in NSCLC remains unclear.
In this study, we discovered that PAK4 is overexpressed in NSCLC and that its overexpression is associated with poor prognosis. Knockdown of PAK4 inhibited the migration and invasion of NSCLC cells. Furthermore, we found that PAK4 bound to LIM kinase 1 (LIMK1) directly and activated it via phosphorylation, which is required for tumor cell motility and invasion. These results suggest that PAK4 may play an important role in NSCLC cell migration and invasion, and is a potentially useful prognostic marker and therapeutic target.
Materials and methods
Tissue specimens and cell lines
NSCLC tissue samples and adjacent normal tissue samples were acquired after obtaining informed consent from the patients under institutional review board-approved protocols. NSCLC primary tissue samples from ten patients without metastasis and ten patients with metastasis, together with adjacent matched non-tumor tissues, were collected between October 2012 and January 2013 at the Third Affiliated Hospital, Sun Yat-sen University (Guangzhou, China). The fresh tissue samples were immediately snap-frozen in liquid nitrogen. An additional 210 NSCLC tissues were collected between January 2005 and January 2009. No patient had received treatment prior to enrolment in this study. All diagnoses were histopathologically confirmed. This study was approved by the institutional research ethics committee of The Third Affiliated Hospital, Sun Yat-sen University.
The following cell lines were used in this study: human bronchial epithelial cells (HBE) and NSCLC cells (A549, NCI-H520, NCI-H460, NCI-H596). All cell lines were obtained from the Cell Bank, Chinese Academy of Sciences (Shanghai, China). The NSCLC cell lines were cultured in RPMI 1640 (Gibco, Invitrogen Life Technologies, Carlsbad, CA, USA) supplemented with 10 % newborn calf serum (Gibco, Invitrogen Life Technologies). HBE cells were maintained in keratinocyte serum-free medium with 25 μg/ml bovine pituitary extract and 0.2 ng/ml recombinant epidermal growth factor (Invitrogen Life Technologies). Cells were transfected with DNA constructs using siPORT™ NeoFX™ Transfection Agent (Ambion) for 5 min.
Immunohistochemistry
Immunohistochemical staining was performed using a standard streptavidin-biotin-peroxidase complex method (EnVision™ Detection System; Dako, Copenhagen, Denmark). Tissue blocks were cut into 5-μm sections, deparaffinized with xylene, and rehydrated in a graded ethanol series. The sections were stained with anti-PAK4 (1:50; Cell Signaling Technology, Beverly, MA, USA) overnight at 4°C.
The degree of immunostaining was reviewed and scored semiquantitatively by two independent observers. The staining index was calculated as the product of the proportion of positively stained tumor cells and the staining intensity. The former was scored as follows: 0 (0 % positive tumor cells), 1 (<10 %), 2 (10-35 %), 3 (35-70 %), and 4 (>70 %). Staining intensity was graded as follows: 0 (no staining), 1 (weak), 2 (moderate), and 3 (strong). The staining index thus ranged from 0 to 12. The cutoff values for high and low PAK4 expression were ascertained by measuring heterogeneity using log-rank testing with respect to overall survival. A staining index score of ≥6 indicated high PAK4 expression; a staining index score of <6 indicated low PAK4 expression.
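The scoring rule above is simple enough to express directly in code. The following minimal Python sketch illustrates how the staining index and the high/low classification could be computed; the tie-breaking at the 35 % and 70 % boundaries is an assumption, since the published bins overlap at those values.

```python
def staining_index(pct_positive: float, intensity: int) -> int:
    """Staining index = proportion score (0-4) x intensity score (0-3).
    Boundary handling at 35 % and 70 % is an assumption (bins overlap)."""
    if pct_positive == 0:
        prop = 0
    elif pct_positive < 10:
        prop = 1
    elif pct_positive <= 35:
        prop = 2
    elif pct_positive <= 70:
        prop = 3
    else:
        prop = 4
    return prop * intensity

def pak4_expression_group(index: int, cutoff: int = 6) -> str:
    """Dichotomize using the study's cutoff of >= 6 for high expression."""
    return "high" if index >= cutoff else "low"

si = staining_index(pct_positive=60, intensity=3)   # -> 9
print(si, pak4_expression_group(si))                 # -> 9 high
```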
Immunofluorescence
Cells were cultured on cover glasses, fixed using paraformaldehyde, and permeabilized with 0.1 % Triton X-100 in TBS. The cover glasses were incubated with the primary antibodies (anti-PAK4, Cell Signaling Technology; anti-LIMK1, Cell Signaling Technology) at 1:50 dilutions. PAK4 was detected with an anti-goat secondary antibody conjugated to Alexa Fluor 488 (Invitrogen Life Technologies). LIMK1 was detected with an anti-rabbit secondary antibody conjugated to Alexa Fluor 555 (Invitrogen Life Technologies). The fluorescent staining was visualized using a 63× NA 1.3 oil objective on a confocal microscope (LSM 510 Meta; Carl Zeiss, Inc.). The co-localization was analyzed using Pearson's correlation coefficient (full co-localization = 1.0) in Image-Pro Plus software.
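Pearson's co-localization coefficient is simply the pixel-wise correlation between the two channel intensities. The sketch below shows a minimal numpy implementation of this quantity; the synthetic images are illustrative stand-ins for the actual confocal data.

```python
import numpy as np

def pearson_colocalization(green, red, mask=None):
    """Pixel-wise Pearson correlation between two fluorescence channels;
    a value of 1.0 indicates complete co-localization. An optional boolean
    mask can restrict the calculation to cell-containing pixels."""
    g = green.astype(float).ravel()
    r = red.astype(float).ravel()
    if mask is not None:
        keep = mask.ravel().astype(bool)
        g, r = g[keep], r[keep]
    return np.corrcoef(g, r)[0, 1]

# Hypothetical 512 x 512 two-channel image with partially correlated signal:
rng = np.random.default_rng(0)
green = rng.poisson(50, (512, 512)).astype(float)
red = 0.8 * green + rng.normal(0, 5, (512, 512))
print(round(pearson_colocalization(green, red), 2))
```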
Protein kinase assay
PAK4 (WT) cDNA was cloned into a pET30a Escherichia coli expression vector. The mutant vectors PAK4 (S445N) and PAK4 (K350M) were constructed by site-directed mutagenesis. Recombinant activated PAK4 (S445N), kinase-defective PAK4 (K350M), and PAK4 (WT) proteins were purified from the E. coli expression systems. LIMK1 protein was purchased from Invitrogen Life Technologies. Equal amounts of the proteins were incubated in buffer containing 100 mM NaCl, 10 mM MgCl2, 50 mM HEPES (pH 7.5), 1 mM DTT, and 50 μM ATP for 30 min at 30°C. The protein kinase assay reaction was terminated by the addition of 3× SDS sample buffer. Western blotting was used to detect LIMK1 phosphorylation at Thr508.
Matrigel invasion assays and transwell migration assays
For Matrigel invasion assays, 5 × 10⁴ cells were added to a Matrigel invasion chamber (BD Biosciences, CA, USA). FBS was added to the lower chamber as a chemoattractant. After 24 h, the non-invading cells were removed and invasive cells located on the lower side of the chamber were stained with crystal violet. Transwell migration assays were performed in a similar manner as the Matrigel invasion assays but without Matrigel on the filter.
Statistical analysis
Data from three independent experiments are presented as the mean ± SE. All statistical analyses were performed using SPSS software (version 17.0; IBM, New York, NY, USA). Differences between variables were assessed using the χ² test. For survival analysis, all patients with NSCLC were analyzed using Kaplan-Meier analysis. The differences in overall survival were analyzed using the log-rank test. Using the Cox regression model, multivariate survival analysis was performed on all parameters that were significant in the univariate analysis. P-values < 0.05 were considered significant.
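As an illustration of this analysis pipeline, the sketch below shows how Kaplan-Meier curves, a log-rank test, and a multivariate Cox model could be fit in Python with the lifelines package (the study used SPSS); the data frame columns are hypothetical stand-ins for the study variables.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical patient table; column names and values are illustrative only.
rng = np.random.default_rng(42)
n = 210
df = pd.DataFrame({
    "months": rng.exponential(40, n).round(1),   # follow-up time
    "death": rng.integers(0, 2, n),              # 1 = event observed
    "pak4_high": rng.integers(0, 2, n),          # staining index >= 6
    "stage": rng.integers(1, 5, n),              # clinical stage I-IV
})

# Kaplan-Meier estimates by PAK4 expression group
kmf = KaplanMeierFitter()
for grp, label in [(1, "high PAK4"), (0, "low PAK4")]:
    sub = df[df["pak4_high"] == grp]
    kmf.fit(sub["months"], event_observed=sub["death"], label=label)
    print(label, "median survival:", kmf.median_survival_time_)

# Log-rank test between the two groups
hi, lo = df[df["pak4_high"] == 1], df[df["pak4_high"] == 0]
res = logrank_test(hi["months"], lo["months"],
                   event_observed_A=hi["death"], event_observed_B=lo["death"])
print("log-rank p =", res.p_value)

# Multivariate Cox regression on the remaining covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```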
Results
Pattern of PAK4 expression in human NSCLC cell lines or tissues
PAK4 overexpression has been reported in many human cancers; however, its expression in NSCLC remains unclear. To explore PAK4 protein expression in NSCLC, we examined its expression in a human bronchial epithelial cell line (HBE) and several NSCLC cell lines using western blotting. All cancer cell lines expressed high levels of PAK4 protein compared with the HBE cell line (Fig. 1a). To determine whether PAK4 was overexpressed at the transcriptional level, we examined PAK4 mRNA levels in the HBE and NSCLC cell lines using real-time PCR. The levels of PAK4 mRNA in the NSCLC cell lines were significantly higher than those in the HBE cell line (Fig. 1b), indicating that PAK4 is overexpressed in NSCLC cell lines at both the mRNA and protein levels compared to HBE cells.
To investigate whether PAK4 was similarly upregulated in NSCLC tissues, western blotting and real-time PCR were performed on 20 NSCLC tissues (ten primary tissues without metastasis and ten primary tissues with metastasis) and matched adjacent non-tumor tissues. PAK4 was overexpressed at both the protein and mRNA levels in all 20 NSCLC tissues compared with the adjacent non-tumor tissues (Fig. 2a and b). Subgroup analysis showed that PAK4 mRNA was higher in primary tissues from patients with metastasis than in those from patients without metastasis (Fig. 2c).
Association between PAK4 expression and NSCLC progression or prognosis
Based on the above results, we predicted that PAK4 overexpression would be associated with disease progression. To test this, 210 paired human NSCLC tissues and matched adjacent non-tumor tissues were stained using immunohistochemistry. Positive staining was observed in the NSCLC cell membranes and cytoplasm. The NSCLC tissues had significantly higher PAK4 staining scores than the adjacent non-tumor tissues (p < 0.01, Fig. 2d). Stronger PAK4 staining, indicating higher expression, was observed in 137 of 210 human NSCLC tissues (65.2 %). There were no significant differences among the 3 subtypes of NSCLC: adenocarcinoma, squamous cell carcinoma, and others (p = 0.454, Table 1). Moreover, higher PAK4 staining scores were positively correlated with differentiation, lymph node metastasis, distant metastasis, and clinical stage. These results reveal that PAK4 overexpression was associated with NSCLC progression. Kaplan-Meier analysis and log-rank testing showed that overall survival was significantly different between patients with high PAK4 expression and patients with low PAK4 expression (p = 0.0016, Fig. 3a); patients with high PAK4 expression had shorter overall survival. Subgroup analysis also indicated that patients with high PAK4 expression had poor overall survival in both early stage (I + II) and later stage (III + IV) disease (Fig. 3b, c). Moreover, differentiation, lymph node invasion, distant metastasis, and clinical stage correlated with survival. When the clinicopathological variables that were significant in univariate analysis were adopted as covariates, multivariate Cox regression analysis indicated that overexpression of PAK4 protein was an independent prognostic factor for poor survival (p < 0.01, Table 2). These results suggested that overexpression of PAK4 was positively associated with NSCLC progression and might serve as an independent predictor of poor survival.
Influence of PAK4 knockdown on NSCLC cell migration and invasion
The above data prompted us to further examine the mechanism by which PAK4 mediated the progression of NSCLC. First, PAK4 expression was downregulated by siRNA-mediated gene silencing. Knockdown of endogenous PAK4 in the A549 and NCI-H520 cells was confirmed using western blotting and real-time PCR (Fig. 4a and b). Then, Transwell migration assays and Matrigel invasion assays were carried out to explore the potential biological function of PAK4 in NSCLC. PAK4 knockdown suppressed A549 and NCI-H520 cell migration compared with the controls (Fig. 4c), and dramatically reduced A549 and NCI-H520 cell invasiveness (Fig. 4d). Transfection with si-PAK4 decreased the numbers of migrated and invasive A549 and NCI-H520 cells by >3-fold as compared with cells transfected with the control (both, p < 0.01). These results indicate that PAK4 knockdown suppresses NSCLC cell migration and invasion.
The role of LIMK1 in PAK4-mediated NSCLC cell migration and invasion
Cell migration requires reorganization of the actin cytoskeleton. PAK4, an effector of Cdc42, elicits its response via interaction with downstream effector proteins. Previous studies showed that LIMK and its downstream target cofilin, activated by PAK4, play an important role in promoting actin polymerization and defining the direction of cell motility [26]. We examined whether LIMK1 and cofilin are involved in regulating NSCLC cell line migration and invasion. Following siRNA knockdown of PAK4, LIMK1, cofilin, and their respective phosphorylation levels were examined using western blotting. PAK4 knockdown significantly reduced the phosphorylation of LIMK1 and cofilin, whereas the total expression levels of LIMK1 and cofilin did not change (Fig. 5a). These results indicate that PAK4 might regulate the phosphorylation of LIMK1 and cofilin in NSCLC cells.
(Fig. 5 legend: a, protein expression of LIMK1, p-LIMK1, cofilin, and p-cofilin in A549 and NCI-H520 cells transfected with si-PAK4 or control, by western blotting; b, immunofluorescent staining for PAK4 (green) and LIMK1 (red) in A549 cells (upper panels) and NCI-H520 cells (lower panels), with nuclei stained with Hoechst 33258 (blue); c, A549 and NCI-H520 cell lysates immunoprecipitated with PAK4 antibody (top panels) or LIMK1 antibody (bottom panels) and subjected to western blotting to ascertain the LIMK1-PAK4 interaction; d, in vitro kinase assay using purified activated PAK4 (S445N), kinase-defective PAK4 (K350M), PAK4 (WT), and LIMK1 protein, with p-LIMK1, PAK4, and LIMK1 measured by western blotting; e, correlation between the protein levels of PAK4 and p-LIMK1 in human NSCLC tissues (n = 10). NC, negative control; IgG, immunoglobulin G.)
To explore the possibility that PAK4 phosphorylates LIMK1 in vivo, we examined whether PAK4 physically interacted with LIMK1 in the A549 and NCI-H520 cells using immunofluorescent staining. The results showed that PAK4 (green) and LIMK1 (red) were predominantly localized within the cytoplasm (Fig. 5b). The degree of co-localization was determined using Pearson's correlation coefficient; the mean ± SE of PAK4 and LIMK1 co-localization was 0.72 ± 0.02 (n = 30) in the A549 cells and 0.79 ± 0.03 (n = 30) in the NCI-H520 cells. These results showed that PAK4 and LIMK1 generally co-localized in the A549 and NCI-H520 cells.
To investigate the role of PAK4 in the process of LIMK1 and cofilin phosphorylation, co-immunoprecipitation assays were performed. Cell lysates were incubated with PAK4 antibody, the immunocomplex was purified, separated by SDS-PAGE, and immunoblotted with a LIMK1 antibody. The LIMK1 protein was present in the complex immunoprecipitated with the anti-PAK4 antibody (Fig. 5c, top); PAK4 was also present in the reciprocal immunoprecipitation with the anti-LIMK1 antibody (Fig. 5c, bottom). In addition, neither PAK4 nor LIMK1 were detected in the immunocomplex in association with the control immunoglobulin G, indicating the specificity of the observed co-association. These results show that PAK4 interacts specifically with LIMK1 in NSCLC cells.
PAK4 is an important oncogene in many cancers, and many of its functions are dependent on PAK4 kinase activity. To determine whether PAK4 directly phosphorylated LIMK1, we performed an in vitro kinase assay using purified wild-type PAK4 (WT) and LIMK1 protein, and found that LIMK1 was phosphorylated by PAK4 (Fig. 5d). We also used purified PAK4 proteins with different activities, namely activated PAK4 (S445N) and kinase-defective PAK4 (K350M), together with LIMK1 protein in the in vitro kinase assay. LIMK1 phosphorylation in the presence of PAK4 (K350M) was significantly lower than that in the presence of PAK4 (WT) or PAK4 (S445N) (Fig. 5d). These data indicate that PAK4 can directly phosphorylate LIMK1 protein.
To further reveal the relationship between PAK4 and the phosphorylation of LIMK1 in clinical NSCLC tissues, we examined the expression of PAK4 and p-LIMK1 in 20 human NSCLC tissues using western blotting. The extent of PAK4 upregulation was positively correlated with the amount of p-LIMK1 (R² = 0.657, p < 0.05) (Fig. 5e), suggesting that the effect of PAK4 kinase activity on LIMK1 is clinically relevant in human NSCLC tissues.
To reveal whether LIMK1 was involved in the PAK4-mediated NSCLC cell migration and invasion, we co-transfected A549 and NCI-H520 cells with si-PAK4 or LIMK1 plasmid. LIMK1 rescued the effects of PAK4 knockdown on A549 and NCI-H520 cell migration and invasion (Fig. 6).
(Fig. 6 legend: LIMK1 overexpression rescued the effects of si-PAK4 on A549 and NCI-H520 cell migration (a) and invasion (b); cells were transiently transfected with si-PAK4 or LIMK1 or both, then seeded for the assays; ×40 original magnification. NC, negative control.)
These results suggest that LIMK1 phosphorylation is required for PAK4-mediated NSCLC cell migration and invasion.
Discussion
Although PAK4 is an important oncogene in many cancers, the role of PAK4 in NSCLC remains obscure. In this study, we detected PAK4 overexpression in NSCLC cell lines and human NSCLC tissues, and found that PAK4 overexpression was correlated significantly with clinical stage, differentiation, lymph node metastasis, and distant metastasis. More importantly, upregulated expression of PAK4 in NSCLC patients was associated with shorter overall survival. These results show that PAK4 is an important prognostic marker and potential therapeutic target in NSCLC.
Upregulated PAK4 promotes cell migration in several cancers [13][14][15][16][17][18][19][20][21][22][23]. However, the role of PAK4 in NSCLC cell migration remains unclear. To the best of our knowledge, this is the first study to reveal that PAK4 increases NSCLC cell migration and invasion. Furthermore, PAK4 regulation of cell migration is dependent on the downstream pathway of membrane-type 1 matrix metalloproteinase (MT1-MMP) in choriocarcinoma [15,16], c-Src/mitogen-activated protein kinase kinase 1 (MEK1)/extracellular signal-regulated kinase (ERK)1/2 and MMP-2 in ovarian cancer [16], LIMK1/cofilin in prostate cancer [17], MMP-2 in glioma [19], and superior cervical ganglia 10 (SCG10) or LIMK1/cofilin in gastric cancer [20,21]. We revealed that PAK4 promoted migration by directly binding to LIMK1 and activating it via phosphorylation. It has been reported that the PAK4-LIMK1-cofilin signaling pathway promotes cell migration in prostate cancer and gastric cancer [17,20]. Other signaling pathways through which PAK4 mediates NSCLC migration require further exploration. LIMK1, a downstream effector of PAK4, is an important regulator of cytoskeletal organization involved in cell migration [27,28]. LIMK1 is overexpressed in lung cancer and is associated with high tumor-nodes-metastasis (TNM) stage and lymph node metastasis [29]. Our experiments also showed that LIMK1 was overexpressed in NSCLC tissues, especially in tissues with metastasis (data not shown). Furthermore, si-LIMK1 suppressed the migration and invasion of lung cancer cells [29]. However, how modulating LIMK1 promotes lung cancer cell migration remains unclear. In this study, we found that LIMK1 interacted directly with PAK4 and acted as a substrate to promote cell migration and invasion in NSCLC. Furthermore, overexpression of LIMK1 rescued the effects of PAK4 knockdown on NSCLC cell migration and invasion. These findings show that PAK4 increased NSCLC cell migration by phosphorylating LIMK1.
CDK5 kinase regulatory subunit-associated protein 3 (CDK5RAP3) in hepatocellular cancer [22] and hepatocyte growth factor (HGF) and follicle-stimulating hormone (FSH) in ovarian cancer [16] activate PAK4 to promote cell migration. HGF and c-Met are overexpressed in NSCLC [30]. However, the upstream pathways of PAK4 in NSCLC cell migration remain unclear. We intend to explore the mechanism of PAK4 activation in NSCLC in future studies.
Conclusion
In the present study, increased PAK4 expression was associated with differentiation, lymph node metastasis, distant metastasis, clinical stage, and an unfavorable prognosis in patients with NSCLC. Our findings suggest that the PAK4-LIMK1 pathway may be related to the progression of NSCLC and that PAK4 may be a significant prognostic marker for this disease. | 2017-07-06T20:28:24.076Z | 2015-05-15T00:00:00.000 | {
"year": 2015,
"sha1": "0574618db48260e8f6d6cfe8fce13d3a10ec5a66",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/s13046-015-0165-2",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4dbb2d369c2ef3418a14a66606d8996560ac89eb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3857546 | pes2o/s2orc | v3-fos-license | Isolation by Elevation: Genetic Structure at Neutral and Putatively Non-Neutral Loci in a Dominant Tree of Subtropical Forests, Castanopsis eyrei
Background The distribution of genetic diversity among plant populations growing along elevational gradients can be affected by neutral as well as selective processes. Molecular markers used to study these patterns usually target neutral processes only, but may also be affected by selection. In this study, the effects of elevation and successional stage on genetic diversity of a dominant tree species were investigated controlling for neutrality of the microsatellite loci used. Methodology/Principal Findings Diversity and differentiation among 24 populations of Castanopsis eyrei from different elevations (251-920 m) and successional stages were analysed by eight microsatellite loci. We found that one of the loci (Ccu97H18) strongly deviated from a neutral model of differentiation among populations due to either divergent selection or hitchhiking with an unknown selected locus. The analysis showed that C. eyrei populations had a high level of genetic diversity within populations (AR = 7.6, HE = 0.82). Genetic variation increased with elevation for both the putatively selected locus Ccu97H18 and the neutral loci. At locus Ccu97H18 one allele was dominant at low elevations, which was replaced at higher elevations by an increasing number of other alleles. The level of genetic differentiation at neutral loci was similar to that of other Fagaceae species (FST = 0.032, F'ST = 0.15). Population differentiation followed a model of isolation by distance but additionally, strongly significant isolation by elevation was found, both for neutral loci and the putatively selected locus. Conclusions/Significance The results indicate higher gene flow among similar elevational levels than across different elevational levels and suggest a selective influence of elevation on the distribution of genetic diversity in C. eyrei. The study underlines the importance to check the selective neutrality of marker loci in analyses of population structure.
Introduction
Genetic composition within and among populations is shaped by the interplay of genetic drift, gene flow, mutation and natural selection. Molecular markers have helped to identify the effect of life history traits, phylogeographic history and environmental factors on the genetic structure of plant populations [1,2]. Among environmental factors, abiotic factors, such as soil type, topology or elevation, play an important role in genetic structuring because they may affect phenology, population size or density and thus gene flow or genetic drift [3]. Elevation is of particular importance, and many studies focused on its relationships with plant performance and phenotype [4], but also on genetic variation of molecular markers [3,5,6].
Genetic variation within populations often varies along elevational gradients and among species different patterns have been identified [7]. First, mid-elevation populations may hold higher levels of diversity compared with both low and high elevation populations due to the optimal mid-elevation habitats following the central-marginal hypothesis (e.g. [8]). Second, low elevation populations may have highest diversity which decreases with elevation as a result of bottlenecks occurring throughout upward range expansion (e.g. [9]). Third, highest genetic diversity was found at high elevations which was attributed to various reasons like decreased human disturbance and/or historical downward range shifts due to climate change, and adaptation [5,7]. Lastly, genetic variation also has been found to stay rather constant along a given elevational gradient due to extensive gene flow (e.g. [10]). Overall, these inconsistent patterns support a predominant role of life history traits and of biogeographic history in determining patterns of genetic variation along elevational gradients. The processes underlying these patterns are either neutral, like genetic drift and bottleneck effects as a result of the demographic history, or are selective due to the climatic clines related to elevation.
Elevational clines encompass a suite of environmental factors that are either physically linked with elevation, like temperature [11], or instead correlated with it, like land use [4]. Depending on the ability of these factors to exert selection or to affect the neutral processes of gene flow and drift, molecular markers may display elevational patterns. Of the various molecular markers that have been used to study genetic variation, microsatellites are assumed to represent neutral markers because microsatellites are generally found in non-coding regions [12] and are characterized by high levels of variability. Consequently, patterns of differentiation among populations at microsatellite loci are almost exclusively interpreted as genetic drift and gene flow. However, some empirical studies indicated the presence of non-neutral microsatellite loci [12,13,14]. Thus, in order to study neutral processes the neutrality of loci should be confirmed before performing other genetic analyses [7,15]. Due to the steep clines in environmental conditions with increasing elevation, accompanied by changes in selective conditions, non-neutral behaviour of individual molecular markers is likely, e.g. due to physical linkage to specific genes under selection (e.g. [16]).
In mixed and evergreen broad-leaved subtropical forests of Southeast Asia, Castanopsis eyrei (Fagaceae) is often the dominant tree species in late successional forests. The long-lived evergreen species is native to southeastern China and Taiwan and occurs along a large elevational gradient from about 300 m to 1700 m a.s.l. (http://www.efloras.org/florataxon.aspx?flora_id=620&taxon_id=200006236). It is monoecious and wind-pollinated and the acorn seeds are predominantly dispersed by gravity and small rodents [17]. Due to these life history traits, C. eyrei populations are expected to have high genetic diversity, and efficient gene flow mediated by pollen dispersal should result in low levels of genetic differentiation.
In this study we examined the distribution of genetic variation in C. eyrei populations within a nature reserve of continuous mixed broad-leaved forest across a mountain range. Specifically, we ask (1) whether individual loci are more strongly differentiated than expected from a neutral model, and (2) whether spatial structure, elevation or successional stage affect the patterns of neutral and of putatively adaptive genetic variation, respectively.
Identification of loci under selection
Outlier tests performed using FDIST detected a significant departure of the FST value from neutral expectations for locus Ccu97H18 (FST = 0.316, Fig. 1), while for the other loci FST values ranged from 0.029 to 0.055. However, four of them with lower FST values were also situated outside the simulated distribution, which was probably due to the extremely high value of Ccu97H18. When we excluded this locus and reanalysed the other seven loci, the result confirmed their neutrality, as all of them were situated within the 0.99 quantile.
Analysing only locus Ccu97H18, we found an increase in the number of alleles with elevation, from an average of 2.2 ± 1.2 below 400 m a.s.l. to 16.8 ± 3.9 above 800 m a.s.l. (Fig. 2). This was due to one allele in particular (145 bp), which was most common with a frequency close to 1.0 at lower elevations (below ca. 700 m), whereas its frequency decreased drastically at higher elevations.
Genetic diversity at species and population level
Genetic parameters at the species and population levels for both the putatively neutral loci and the putatively selected locus Ccu97H18 are displayed in Table 1. In a total of 583 individuals and at the seven putatively neutral loci, we identified 129 alleles with 10 to 25 alleles per locus. At the population level, the mean number of alleles per locus ranged from 6.1 to 12.1 (mean = 9.4) and allelic richness (AR) varied from 5.4 to 7.7 (mean = 6.7). The expected heterozygosity (HE) ranged from 0.68 to 0.86 among populations (mean = 0.78). At the species level, C. eyrei had an HE value of 0.82. The bottleneck analyses indicated recent reductions in population size at five sites (Table 1), which were located at low, medium and high elevations.
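Allelic richness as reported here is a rarefaction-based quantity. The Python sketch below implements the standard rarefaction estimator (the approach used by FSTAT, commonly attributed to El Mousadik and Petit 1996); the allele counts in the example are hypothetical.

```python
from math import comb

def allelic_richness(allele_counts, g):
    """Rarefied allelic richness: the expected number of distinct alleles
    in a random subsample of g gene copies drawn without replacement.
    allele_counts: observed copies of each allele at one locus in one
    population. Each allele contributes 1 - C(n - c, g)/C(n, g), the
    probability of appearing at least once in the subsample."""
    n = sum(allele_counts)
    if g > n:
        raise ValueError("subsample size g cannot exceed total copies n")
    return sum(1 - comb(n - c, g) / comb(n, g) for c in allele_counts)

# Hypothetical locus: 30 gene copies (15 diploid individuals), rarefied to
# g = 24 copies, matching the study's minimum sample of 12 individuals.
print(round(allelic_richness([12, 8, 5, 3, 1, 1], g=24), 2))
```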
Effects of environmental factors
Successional stage was significantly interrelated with elevation (r = 0.567, P = 0.004, Spearman correlation). Over all neutral loci, the multiple regression of allelic richness (AR) against elevation and successional stage showed that AR increased significantly with elevation, but succession had no significant contribution (r = 0.586, P = 0.005). For the putatively selected locus Ccu97H18, similarly, only elevation had a significant, strong positive effect on AR in the multiple regression analysis (r = 0.708, P < 0.001).
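To make this analysis concrete, a minimal Python analogue of the multiple regression (the study used R) is sketched below; the per-population values are simulated stand-ins for the real AR, elevation, and successional-stage data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import spearmanr

# Hypothetical table for the 24 sampled populations (simulated values).
rng = np.random.default_rng(7)
elevation = rng.uniform(250, 920, 24)
succession = np.clip(np.round(elevation / 250 + rng.normal(0, 1, 24)), 1, 5)
ar = 5.4 + 0.003 * elevation + rng.normal(0, 0.4, 24)
df = pd.DataFrame({"AR": ar, "elevation": elevation,
                   "succession": succession})

# Collinearity check between the two predictors (Spearman correlation)
rho, p = spearmanr(df["elevation"], df["succession"])
print(f"Spearman rho = {rho:.3f}, P = {p:.3f}")

# Multiple regression of allelic richness on elevation and succession
X = sm.add_constant(df[["elevation", "succession"]])
fit = sm.OLS(df["AR"], X).fit()
print(fit.summary())
```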
Population differentiation
Populations were significantly structured, as revealed by an overall FST over the seven neutral loci of 0.032 (P < 0.01). However, the standardized F'ST value was 0.15, indicating considerable differentiation. When only the putatively selected locus Ccu97H18 was analysed, we detected much higher values of FST = 0.316 and F'ST = 0.571. Significant patterns of isolation by geographic distance were found at neutral loci both for pairwise FST and F'ST values (Fig. 3). For the putatively selected locus Ccu97H18, a significant pattern of isolation by distance was detected for pairwise FST (r = 0.193, P = 0.016), but the pattern did not exist for pairwise F'ST (r = 0.069, P = 0.194). When we checked for a pattern of isolation by elevational distance, both pairwise FST and pairwise F'ST revealed much more strongly significant correlations, indicating isolation by elevation, both in the neutral loci (Fig. 3) and the putatively selected locus (FST: r = 0.251, P = 0.002; F'ST: r = 0.210, P = 0.003). Since elevational distance was correlated with geographic distance (r = 0.329, P = 0.002), we performed partial Mantel tests to test whether elevational distance was related to genetic differentiation after accounting for geographic distance. For the neutral loci, elevational distance remained significant for pairwise F'ST (r = 0.129, P = 0.010) but not for pairwise FST (r = 0.060, P = 0.123). For the putatively selected locus, elevational distance remained significantly related to differentiation after accounting for geographic distance in both pairwise FST and pairwise F'ST.
Neutrality of microsatellite loci
Microsatellites are assumed to represent ideal neutral markers, so that only gene flow and genetic drift rather than selection should affect their genetic structure. However, an increasing number of studies have indicated the presence of non-neutral loci [14,18,19]. In the present study one out of eight loci that were originally developed for C. cuspidata var. sieboldii [20,21] showed non-neutral behaviour. However, no information on the genomic position and putatively linked genes of this locus is available (Ueno pers. comm.). Based on the analysis of expressed genes of C. sieboldii, Ueno et al. [22] showed that microsatellites are widespread, with 314 microsatellites in 2417 potential unigenes. Consequently, microsatellite markers may be linked to expressed genes and, hence, tests of neutrality should precede population genetic analyses. Since only a limited number of microsatellite loci are routinely analysed in such studies and given that average linkage disequilibrium is expected to be low in outcrossing species, the likelihood of finding a marker linked to an adaptively important gene may be low [16]. However, based on studies that used the method of Beaumont and Nichols [15] to identify non-neutral microsatellite loci in plants, between 4% (one out of twenty-six for Fucus serratus [23]) and 33% (three out of nine for Astronium urundeuva [24]) of loci were found to behave non-neutrally. However, these seemingly high levels of non-neutral loci may be overestimated, as the identification of outlier loci with non-neutral behaviour also produces false positives [25], which should be controlled, e.g., by correlating allele frequencies with potentially selective site conditions (e.g. [26]).
Genetic diversity of Castanopsis eyrei
At the seven neutral microsatellite loci employed in this study a total of 129 alleles were detected, with 10 to 25 alleles found per locus. Ueno et al. [20,21] detected a total of 78 alleles in C. cuspidata with these same loci in a limited number of individuals. In our study, C. eyrei showed many more alleles than C. cuspidata in the original work, possibly due to the larger sample size (N = 583 and N = 24 for C. eyrei and C. cuspidata, respectively). Genetic variation at the species level in C. eyrei was high (HE = 0.82) and similar to that of other congeneric species like C. cuspidata (HE = 0.83 [27]) and C. acuminatissima (HE = 0.72 [28]). These species share common characteristics like an outcrossing mating system, wind pollination and a long life span. Furthermore, they are all climax species with a broad current distribution and thus may also have similar demographic histories. Species exhibiting these traits are generally expected to show high levels of genetic variation [1].
Population structure
Focusing on neutral genetic variation and thus excluding the putatively selected locus, overall population differentiation was low (FST = 0.032), indicating only little differentiation [29]. However, the adjusted F'ST [30] was considerably higher (F'ST = 0.15, Table 2). Levels of differentiation derived from dominant markers are somewhat lower (F'ST = 0.124 for AFLP or RAPD markers, Table 2) and drastically lower when estimated from other marker types (Table 2). This discrepancy indicates that the absolute level of F'ST values has to be interpreted with caution, e.g., marker-specific mutation rates have to be taken into account. In fact it seems unlikely that across the scale of a few kilometres populations of these tree species are strongly differentiated in neutral markers, because of extensive pollen flow and seed dispersal by animals.
The absolute levels of standardized pairwise population differentiation, F'ST, approached unity in several cases at the putatively selected locus Ccu97H18. This demonstrates that these population pairs are almost fixed for different alleles, a fact that is not obvious with traditional FST. However, the relationship between population differentiation and spatial or elevational distance was almost the same for the traditional and standardized FST values. Thus a more comprehensive understanding of differentiation patterns is possible using standardized differentiation measures [31,32].
Isolation by elevation
Significant isolation by distance was found for the neutral loci and locus Ccu97H18 (only for pairwise FST). However, additionally significant isolation by elevation was detected in both the potentially adaptive locus and the non-adaptive loci after accounting for the effect of geographic distance. This pattern of isolation by elevation suggests higher rates of gene flow among sites at similar elevations than along elevational clines [3]. Elevation can result in reproductive isolation due to phenological shifts, e.g. delayed budding [33], shifts of flowering time, or prolonged floral longevity and stigma receptivity [34], resulting in temporal separation of the timing of flowering [35]. Phenological differences in flowering time in turn will lead to partial reproductive isolation, which may both facilitate adaptation to elevation and lead to neutral genetic differentiation, as has been shown for other forest trees [36].
Populations of C. eyrei at the top of the mountains harboured the largest amount of genetic variation whereas populations at lower elevation had reduced levels of variation. Although not often observed among trees [7], a similar pattern was found in other tree species [16,37,38]. As both the non-selected loci and the putatively selected locus displayed the same pattern, a number of non-mutually exclusive processes may have contributed. First, the mutation rate may be higher at higher elevations due to increased ultraviolet-B radiation [7]. If effective, this process should apply to all loci in a similar manner and may have contributed to the general trend across all loci. However, microsatellites are polymorphic due to slippage mutation of the DNA polymerase, and UV radiation does not necessarily affect this process [39,40]. Second, human disturbance is much stronger at lower elevations. Charcoal has been detected in many local soil profiles [41], indicating past fire clearance. Populations at higher elevations are more rarely influenced by human activities and, thus, are able to preserve genetic diversity. We found a significant positive correlation between elevation and successional stage, indicating that older, less disturbed forests are often located at higher elevations. Hence, it is likely that undisturbed upland forests served as sources for colonization after logging at low elevations. Recent and older bottleneck and founder effects may thus have contributed to reduced variation at lower elevations. However, bottleneck tests did not support the hypothesis of recent reductions of population size at lower elevation. Furthermore, in wind-pollinated trees, large gamete pools may be involved in colonization, maintaining high levels of diversity in colonized areas (e.g. [42]). Third, selection may be a significant force. Locus Ccu97H18 showed a strong cline, as the most common allele at low elevations almost went extinct at higher elevations and many other alleles appeared instead. These patterns are unlikely to be due to random genetic drift or restricted gene flow, but most likely reflect selection. Since Ccu97H18 is a short microsatellite, genetic hitchhiking is the most probable reason for the observed patterns, assuming that the locus is linked with loci under selection, as has been shown for other microsatellite loci in trees [43,44,45]. We do not have evidence on the genes potentially involved. Thus, both the target of selection and the potential contributions of diversifying selection producing the cline and/or balancing selection maintaining high allelic diversity remain obscure. Overall, the study suggests that elevation can be an efficient driver of genetic differentiation at both neutral and adaptive loci at the landscape scale.
Ethics Statement
Field work and the collection of leaves were performed in cooperation with and under approval by the Gutianshan National Nature Reserve in China.
Study area and populations
Our study was carried out in Gutianshan National Nature Reserve (NNR) located in the western part of Zhejiang Province, China (29°8'-29°17' N, 118°2'-118°11' E). C. eyrei is the dominant tree species in the area and continuously distributed throughout [46]. The Gutianshan NNR has an area of approximately 81 km² with elevations ranging from 250 to 1250 m a.s.l. It mainly consists of species-rich evergreen broad-leaved forests including old-growth forest and successional stages that developed after human use ceased in 1975 [41]. In 2008, we sampled 24 representative sites of 30 × 30 m which were spread across all successional stages and the local elevational range of the species (251-920 m). We did not sample at >1000 m a.s.l. because the species was too rare. Five successional stages were distinguished according to the average age of the tree layer ([41]; 1: <20 yrs, 2: <40 yrs, 3: <60 yrs, 4: <80 yrs, 5: ≥80 yrs). Additional details of site selection and conditions for 20 of the sites are reported in Bruelheide et al. [41]. We sampled all mature individuals of C. eyrei inside the sites and some additional individuals outside of sites CSP 6 and CSP 21 because there were too few inside. In each of the 24 populations, 12 to 49 individuals (mean = 24) were collected, totalling 583 individuals (Table 1).
Population genetic analysis
Total genomic DNA was isolated from ca. 10 mg dried leaf material using the DNeasy 96 plant extraction kit (QIAGEN) following the manufacturer's instructions. Samples were genotyped at eight microsatellite loci previously developed for C. cuspidata var. sieboldii [20,21]. Multiplex polymerase chain reactions (PCR) were performed in a total volume of 10 μl. Ccu16H15 (label: PET, redesigned reverse primer: GAAATTGAGTTGGGTTAGTTCCAC), Ccu28H18 (FAM), Ccu62F15 (NED), Ccu33H25 (FAM), Ccu90T17 (PET), Ccu102F36 (VIC) and Ccu87F23 (FAM) were combined in one PCR amplification. Another single PCR amplification was performed for Ccu97H18 (VIC). The thermocycler protocol was one cycle of 95°C for 15 min, followed by 35 cycles of 30 s at 94°C, 90 s at 58°C and 1 min at 72°C, with a final extension of 20 min at 72°C. PCR products from the two amplifications were mixed and separated on an ABI 3130 genetic analyzer (Applied Biosystems) with the internal size standard GeneScan™ 600 LIZ. Individuals were genotyped using GeneMapper version 3.7 (Applied Biosystems). C. eyrei is diploid and none of the samples displayed more than two alleles.
Because the study species is a wind-pollinated perennial tree of the Fagaceae, many of which exhibit populations in Hardy-Weinberg equilibrium (HWE, e.g. [47,48]), we assumed that C. eyrei microsatellite loci should conform to HWE. Because trans-species amplification of microsatellites often results in null alleles, we checked the data for the presence of null alleles under the assumption of HWE using MICRO-CHECKER [49]. Except for Ccu16H15, all loci showed signs of null alleles, the frequency of which ranged from 1.3% to 20% (mean = 6.99%). We took three approaches to deal with null alleles. First, we adjusted homozygous single-locus genotypes, if necessary, by adding an additional allele at the frequency of the null allele. This approach assumes that there is one single null allele, which is treated as an additional allele. Second, we used the null allele correction procedure of the FreeNA software [50] to calculate pairwise FST values. This approach corrects for null alleles but restricts the analysis to the visible alleles. Third, we left the data unchanged. However, all subsequent analyses showed similar results irrespective of the type of null allele treatment. Therefore, we only present the results of the MICRO-CHECKER approach as it allows the calculation of standard diversity descriptors.
We tested the eight microsatellite loci for outliers, i.e. markers potentially under selection, using the program FDIST [15]. A null distribution of target FST values expected from a neutral model is generated and quantile limits are calculated. Loci outside a 99% confidence interval are regarded as potentially under selection. Following Acheré et al. [18], the neutral expectation was first based on the observed overall mean FST calculated from all markers. In a second step, the overall mean FST was recalculated without the putatively selected locus and used as the target value for a new null distribution to test the remaining loci. As our analyses suggested that locus Ccu97H18 was potentially under directional selection, we performed all the following analyses both for the seven loci conforming to a neutral model ("neutral loci") and only for locus Ccu97H18.
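FDIST itself builds the null distribution from coalescent simulations of an island model. The simplified Python sketch below conveys the same idea using the Balding-Nichols beta approximation for deme allele frequencies, a deliberate simplification (it is not the FDIST machinery and ignores the sampling of individuals within demes); the observed FST values are taken from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_null_fst(target_fst, n_demes=24, n_loci=20000):
    """Null FST distribution under the Balding-Nichols model: deme allele
    frequencies follow a Beta distribution whose dispersion is set by the
    target (mean neutral) FST."""
    theta = (1.0 - target_fst) / target_fst
    p_anc = rng.uniform(0.05, 0.95, n_loci)          # ancestral frequencies
    demes = rng.beta(p_anc * theta, (1.0 - p_anc) * theta,
                     size=(n_demes, n_loci))
    p_bar = demes.mean(axis=0)
    # Moment estimator of FST: among-deme variance over p_bar(1 - p_bar).
    return demes.var(axis=0) / (p_bar * (1.0 - p_bar))

null = simulate_null_fst(target_fst=0.032)   # neutral mean FST from this study
upper = np.quantile(null, 0.995)             # upper bound of a 99% envelope
for locus, fst in {"Ccu97H18": 0.316, "other loci (max)": 0.055}.items():
    verdict = "outlier" if fst > upper else "within neutral envelope"
    print(f"{locus}: FST = {fst:.3f} -> {verdict}")
```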
Genetic diversity at the population level was characterized by the number of alleles (A), allelic richness (AR, correcting for sample size by rarefaction to a minimum sample size of 12) and expected heterozygosity (HE) using FSTAT version 2.9.3.2 [51]. Because genotypes were adjusted for null alleles, we did not calculate inbreeding coefficients. In the dataset of neutral loci we tested for recent bottlenecks (reductions of effective population size) by testing for an excess of heterozygosity relative to expectations of a mutation-drift equilibrium [52]. We used the software BOTTLENECK [53] and applied the recommended two-phase mutation model (TPM) with 70% stepwise and 30% multistep mutations, a variance of 12, 1000 iterations in the coalescent simulations and one-tailed Wilcoxon's signed-rank tests. To assess population differentiation, pairwise FST values based on Weir and Cockerham's [54] estimator θ were calculated using FSTAT. As FST is likely to underestimate genetic differentiation between populations for markers which show high levels of allelic variability, we calculated F'ST, a standardized parameter of genetic differentiation, as F'ST = FST/FSTmax [30]. FSTmax was calculated after recoding the data using RECODEDATA [55]. To test for isolation by distance [56], the association between pairwise genetic differentiation (FST) and pairwise geographic distances (log transformed) was tested using the Mantel test implemented in R 2.8.1 [57]. We also performed a Mantel test between FST and pairwise elevational differences (log transformed) to test for isolation by elevational distance. Since pairwise elevational difference was correlated to pairwise geographic distance, we performed partial Mantel tests to test for effects of elevation after accounting for geographic distance. Because allelic diversity differed between populations and was correlated with elevation, this may have biased the estimates of pairwise differentiation using FST. We therefore calculated standardized pairwise FST values (pairwise F'ST, [30], eqn. 4b) and repeated the tests for isolation by distance and isolation by elevation. In order to compare the genetic differentiation of C. eyrei with other species of the Fagaceae, we reviewed empirical studies and calculated F'ST following Hedrick ([30], eqn. 4b).
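Although the authors used R, the logic of the (partial) Mantel permutation test is compact. The sketch below is a minimal numpy reimplementation, one-tailed for a positive association, with toy matrices standing in for the real FST, geographic, and elevational distance matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

def _offdiag(m):
    # Upper-triangle (off-diagonal) entries of a symmetric matrix.
    return m[np.triu_indices_from(m, k=1)]

def mantel(a, b, n_perm=9999):
    """One-tailed Mantel test for a positive association between two
    symmetric distance matrices; rows/columns of `a` are permuted."""
    r_obs = np.corrcoef(_offdiag(a), _offdiag(b))[0, 1]
    n, count = a.shape[0], 0
    for _ in range(n_perm):
        idx = rng.permutation(n)
        r = np.corrcoef(_offdiag(a[np.ix_(idx, idx)]), _offdiag(b))[0, 1]
        if r >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

def partial_mantel(a, b, c, n_perm=9999):
    """Partial Mantel test: association of `a` and `b` controlling for `c`."""
    def pcorr(x, y, z):
        rxy = np.corrcoef(x, y)[0, 1]
        rxz = np.corrcoef(x, z)[0, 1]
        ryz = np.corrcoef(y, z)[0, 1]
        return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))
    vb, vc = _offdiag(b), _offdiag(c)
    r_obs = pcorr(_offdiag(a), vb, vc)
    n, count = a.shape[0], 0
    for _ in range(n_perm):
        idx = rng.permutation(n)
        if pcorr(_offdiag(a[np.ix_(idx, idx)]), vb, vc) >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy example: genetic distance loosely tracking geographic distance.
coords = rng.uniform(0, 10, 8)
geo = np.abs(np.subtract.outer(coords, coords))
gen = 0.01 * geo + rng.normal(0, 0.005, (8, 8))
gen = (gen + gen.T) / 2.0
np.fill_diagonal(gen, 0.0)
print(mantel(gen, np.log1p(geo), n_perm=999))
```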
Statistical analysis
To test the effects of environmental factors on genetic variation, we analysed the relationship between allelic richness (AR) and the two predictors elevation and successional stage in a multiple regression. We used AR, which corrects for sample size, rather than HE, because sample size varied among populations; however, the results were qualitatively the same for HE. Collinearity of elevation and successional stage was checked by Spearman correlation. All analyses were performed with R 2.8.1 [57]. | 2014-10-01T00:00:00.000Z | 2011-06-20T00:00:00.000 | {
"year": 2011,
"sha1": "a68a885fc99109e4103718e872a184f5fce72cd9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0021302&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1134f68b71e5c6c49bb555292eaa5d371fd36179",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
239651145 | pes2o/s2orc | v3-fos-license | Barriers and facilitative factors in the implementation of workplace health promotion activities in small and medium-sized enterprises: a qualitative study
Background There is an immense difference between large companies and small and medium-sized enterprises (SMEs) in the implementation of evidence-based interventions (EBIs). Previous literature reveals various barriers that SMEs face during implementation, such as a lack of time, accessibility, and resources. However, few studies have comprehensively examined those influential factors at multiple levels. This study aims to identify the factors influencing the implementation of non-communicable disease prevention activities (tobacco, alcohol, diet, physical activity, and health check-ups) in SMEs using the Consolidated Framework for Implementation Research (CFIR). Methods We conducted 15 semi-structured interviews with health managers and/or employers in 15 enterprises with less than 300 employees, and four focus groups among public health nurses/nutritionists of health insurers who support SMEs in four prefectures across Japan. A qualitative content analysis with a deductive directed approach was performed. After coding the interview transcript text into the CFIR framework constructs by two independent researchers, the coding results were compared and revised for each enterprise until an agreement was reached. Results Of the 39 CFIR constructs, 25 were facilitative and 7 were inhibitory for workplace health promotion implementation in SMEs; these spanned the individual, internal, and external levels. In particular, the leadership engagement of employers in implementing the workplace health promotion activities was identified as a fundamental factor which may influence other facilitators, including "access to knowledge and information," "relative priority," and "learning climate" at the organizational level, and "self-efficacy" at the health manager level. The main barrier was the belief held by the employer/manager that "health management is one's own responsibility." Conclusions Multi-level factors influencing the implementation of non-communicable disease prevention activities in SMEs were identified. In resource-poor settings, strong endorsement, support, and positive feedback from employers would be important for health managers and employees to be highly motivated and to promote or participate in health promotion. Future studies are needed to develop context-specific strategies based on the identified barriers and facilitative factors, and to empirically evaluate them, which would contribute to narrowing the differences in worksite health promotion implementation by company size. Supplementary Information The online version contains supplementary material available at 10.1186/s43058-022-00268-4.
Introduction
Non-communicable diseases (NCDs) are the leading causes of death and disability in working-age adults globally. Over 80% of all premature NCD deaths occur due to cardiovascular diseases, cancers, respiratory diseases, and diabetes [1]. The primary behavioral risk factors for death due to an NCDs are tobacco use, physical inactivity, harmful alcohol use, and an unhealthy diet [2]. Workplaces are good settings for adopting and implementing health promotion programs that address NCD prevention, owing to the high prevalence of risky health behaviors among the working-age population and the presence of infrastructure to offer such programs that have a wide reach over a longer duration [3,4]. Several systematic reviews have revealed the effectiveness of workplace health promotion (WHP) interventions targeting dietary behaviors [5], tobacco use [6], and mental health [7], while reviews of interventions targeting physical inactivity and risky alcohol use have shown mixed results [8,9].
The implementation of WHP interventions differs massively between small and medium-sized enterprises (SMEs) and large companies, and this trend has persisted over the past 3 decades [10][11][12]. For example, in 2017, 39.5% of large US worksites with 500 or more employees offered all five elements of a comprehensive program (as defined by Healthy People 2010), whereas only 11.0% of small worksites with fewer than 25 employees offered these components [10,11]. In Japan, the proportions of implementation in SMEs with less than 50 employees versus large companies with more than 500 employees are 57.6% vs 99.1% for mental health measures and 12.9-14.5% vs 19.0-21.3% for complete smoke-free policies, respectively [13]. To promote WHP in SMEs with less than 50 employees, the Japanese government has established regional occupational health centers as public health facilities in 350 districts across Japan since 1993, but their utilization is limited [14]. Since 2015, the government has also promoted the "Health and Productivity Management" approach to strategically advance employees' health from a corporate management perspective, including a certification system for companies [15], but the number of certified companies is still very limited. A national survey showed that only approximately 20% of all SMEs are currently implementing any activities related to health and productivity management [16]. One of the main challenges that SMEs face during WHP implementation is that they do not know how to proceed with specific measures to combat their own health challenges [17], such as promoting a healthy diet, providing support for smoking cessation, and encouraging consultation with a doctor when recommended at medical check-ups.
Implementation strategies, one of the key concepts of implementation science, address the question of "how" to improve the adoption and integration of evidence-based health interventions into routine policies and practices within specific settings. If effective implementation strategies to promote WHP implementation in SMEs were identified and provided, the difference in implementation between SMEs and large companies could be reduced. However, the current evidence on strategies for WHP implementation that target NCDs is sparse and inconsistent [18]. Theoretical implementation frameworks, such as the Consolidated Framework for Implementation Research (CFIR), suggest that factors influencing implementation may exist at the individual, organizational, cultural, or social level [19]. It is important to have a comprehensive understanding of the barriers and facilitators that influence the implementation process at SMEs, which can be used to identify context-specific implementation strategies.
The evidence regarding barriers and facilitators that influence multi-level WHP implementation is quite limited, especially among SMEs in Asian countries. A recent review of process evaluations for WHP identified that most of the barriers and/or facilitators in the USA and Europe were related to the inner setting of the enterprises, including management support and lack of resources [20], and only two studies identified factors at the social level beyond the enterprise (e.g., compatibility of the program with societal developments, and a competitive business environment) [21,22]. Another review paper on health promotion in SMEs in the USA also revealed that the main barriers to WHP implementation were in the inner setting of the enterprises, including few service providers, low commitment, and low internal capacity to implement the program [20,23,24]. However, most of this literature was from the USA or Europe, and the evidence regarding barriers and facilitators that influence multi-level WHP implementation in Asian SMEs is quite limited [25][26][27][28]. Worksite contextual factors, including organizational culture, resources, and structures, and their relationships with WHP implementation may differ across regions and countries. A previous study suggested that organizational cultural factors were related to the effectiveness of organizations in North America, but not in Asian organizations including Japan [25][26][27][29]. Thus, to reduce the difference in WHP implementation between SMEs and large companies in Asian countries, this study aimed to identify barriers and facilitators at multiple levels beyond the inner setting for the implementation of WHP programs targeting NCD prevention among SMEs in Japan.
Methods
In this qualitative study, two types of interviews were conducted to obtain the perspective of service providers: (1) 15 semi-structured interviews with persons in charge of health management at SMEs (health managers) and/or employers, and (2) four focus groups with public health nurses from the health insurance association/nutritionists, who support these SMEs. Because this study focused on the context of WHP implementation at SMEs, with highly diverse WHP measures and contexts among them, semi-structured interviews were conducted with SMEs individually [30], whereas focus groups were conducted with public health nurses, who support different SMEs, to generate a rich understanding of their diverse experiences through interactions [30,31], as the public health nurses in each branch of the Japan Health Insurance Association (JHIA) are pre-existing groups and active discussion was expected. The CFIR was adopted as a guide for the interviews, coding, and analysis. The targeted WHP activities were the following five NCD prevention measures: tobacco, alcohol, diet, physical activity, and health check-ups. The study protocol was approved by the Ethical Committee of the National Cancer Center Japan (No. 2019-034). Our report adheres to the standards for reporting qualitative research (SRQR) (supplementary file 1) [32].
Sample selection and procedure
This study was conducted with the cooperation of the JHIA, the largest medical insurer in Japan, covering approximately 2.4 million enterprises [33,34]. Most JHIA member enterprises are SMEs, and more than 90% of them have less than 30 employees [35].
The JHIA has 47 branches covering all prefectures across Japan, and each branch issues a certification of "health declaration" to enterprises that volunteer to actively work towards improving employee health. Over 60,000 enterprises have been certified with a "health declaration" as of 2021 [36]. Once certified, a health manager is appointed at each enterprise to plan and implement health promotion activities, with support from public health nurses affiliated with the association. In most cases, certification is offered on a continual rather than a renewal basis. In all SMEs except for one, administrative staff such as those in the general affairs department were assigned to be health managers and were allotted health management tasks in addition to their regular duties.
Two-stage purposeful sampling was used to recruit public health nurses and select enterprises. In the first stage, the central office of the JHIA selected four branch offices that have experience in providing health promotion support at the organizational level. In the second stage, a leader or sub-leader of the public health division at each of the four branch offices selected three to five enterprises according to the following inclusion criteria: (1) qualify as an SME (100 or fewer employees in the case of a service enterprise, 50 or fewer for retail, 100 or fewer for wholesale, and 300 or fewer for manufacturing and others) [37], (2) have already participated in the "health declaration" initiative, and (3) have already implemented activities for workplace health promotion. For criterion (1), if the enterprise was a branch office of a larger company, the number of employees at that particular branch office was considered. Fifteen enterprises that matched the inclusion criteria were identified. We planned to recruit at most 20 SMEs and four focus groups to ensure theme saturation (i.e., no new themes were discovered through additional interviews [38]). During the analysis process, the core members of the study (JS, MO, and TS) discussed theme saturation, and consensus on data saturation was achieved upon the completion of 15 interviews and four focus groups, respectively.
For the semi-structured interviews, at least one health manager from each enterprise participated, and the employer also joined the same interview when available. The employer in this study refers to the chief executive officer. We invited employers and health managers to the same interview because it is practical for them to discuss factors, strategies, and measures for WHP together in the context of real-world implementation. For the focus groups, four to six public health nurses/nutritionists participated from each branch office. In total, eight employers and 22 health managers participated in the interviews, and 20 public health nurses/nutritionists participated in the focus groups. To conduct the focus groups effectively, participants were asked to respond in advance to a one-page questionnaire regarding the WHP activities being implemented at the enterprises they served. JS, MO, HT, and TS conducted the interviews and focus groups; JS was trained in a 2-day qualitative research training course, and JS, MO, and TS are implementation science researchers. None of the four interviewers was known to the participants prior to conducting the study. For both semi-structured interviews and focus groups, we obtained verbal consent for participation from each participant prior to data collection.
Measures
We developed an interview guide using the following five main domains based on CFIR: (i) intervention characteristics, (ii) outer setting, (iii) inner setting, (iv) individual characteristics, and (v) processes [19] (supplementary file 2). For both the semi-structured interviews and focus groups, we focused on the specific topics and activities that the enterprises had agreed to implement at the time of adopting the health declaration. We asked open-ended questions focusing on the context (barriers and facilitators) within which the current activities were being implemented. For the focus groups consisting of JHIA public health nurses, the emphasis was not on the support they provide for enterprises, but on their perceptions of what factors influenced the current activities among the target enterprises. Instead of asking questions related to each sub-construct within the CFIR, we encouraged the interviewees to speak openly about each CFIR domain (e.g., what had been challenging or favorable with respect to adopting and implementing the current WHP activities), in order to gather the information that they perceived to be important. We used probing questions only when the interviewee did not talk about a particular CFIR sub-construct. Each interview lasted approximately 60 min, while each focus group lasted approximately 120 min. Interviews and focus groups were conducted in Japanese, audio-recorded, transcribed, and checked for accuracy.
Data analysis
We qualitatively analyzed the data using a deductive approach [39]. The analysis, for both the interviews and focus groups, was performed in five steps. First, two of three authors (JS, MO, AY-S) independently coded units of the transcript text according to the CFIR constructs. Second, the authors compared the coding results of the data for each enterprise and revised them until a consensus was reached. If a unit of transcript text did not fit into any CFIR construct, a new construct was created inductively and coded. Third, a diagram depicting the relationships between the constructs was drawn for each enterprise in order to comprehensively understand the influential factors [40]. Using the coding results from both the interviews and focus groups, each of the two authors (JS, MO) independently identified the relationships between constructs for each enterprise, and they then discussed and revised them until they agreed on the final diagram. They further developed a summary memo, organized according to the CFIR constructs, for each enterprise. The summary memo was discussed with a third researcher (TS), who was not involved in the coding process, to achieve consensual validation. To further strengthen the credibility of the results, the preliminary summary memo with a description was shown to a few public health nurse participants to confirm that the views of the health managers/employers were appropriately reflected. Finally, the barriers and facilitators, as per the CFIR constructs, were identified. The data from the focus groups were also coded and categorized into CFIR sub-constructs and used to supplement the results of the employer and/or health manager interviews.
Results
The enterprises included in this study conducted several WHP activities to prevent NCDs, with the exception of risky alcohol use prevention. Tables 1 and 2 show the characteristics of the enterprises and participants. No enterprise included in this study was a branch of a group of companies. Of the 15 enterprises, the data from one enterprise were treated as complementary data, in the same way as the focus group data, because during the interview it emerged that the enterprise was a cooperative union that supported health promotion activities at its member establishments instead of conducting WHP activities for its own employees.
Of the 39 CFIR constructs assessed, 28 were facilitative and eight were inhibitory for WHP program implementation among SMEs (specifically, 25 were facilitative and eight were inhibitory from the semi-structured interviews, and eight were facilitative and one was inhibitory from the focus groups) (Table 3). The factors identified from the focus groups were similar to the results from the semi-structured interviews and complemented the interview results. Three factors were identified as specific to the focus groups: "Structural characteristics" in the Inner setting domain and "Other personal attributes" in the Individual characteristics domain (factors that are difficult to examine without an objective comparison of multiple companies), and "Champions" in the Process domain (a factor that is difficult to identify without objective observation from outside the company). Interventions not listed among the programs recommended by the CDC Workplace Health Strategies were not included in the analysis (e.g., setting aside one day a month to not eat sweets; the full list of excluded interventions is shown in supplementary file 3). Quotes were labeled by enterprise (alphabetically anonymized), respondent (health manager or employer), and activity topic. Due to the word limit, only selected results regarded as salient themes are shown in the manuscript. The full results with quotes, the constructs identified from the focus groups, and a table of facilitators and barriers by CFIR construct according to each topic of the WHP activities are shown in supplementary files 4, 5 and 6.
Intervention characteristic domain

Relative advantage
When deciding on a topic for a WHP activity, if the health manager recognized its relative advantages over other topics within NCD prevention, it was more likely to be selected and proactively implemented. At one enterprise, the health manager selected physical activity because it is relevant for all ages and allows everyone to participate in and benefit from it, compared to other interventions such as those related to smoking or blood pressure.
Outer setting domain

Cosmopolitanism
One health manager mentioned the advantage of networking with other companies for program implementation. In the case of a company located within an industrial park that shared a health check-up bus, the implementation of health check-ups was perceived to be highly advantageous in terms of leading to collaboration with other organizations in the industrial park. The health manager of the cooperative union (the enterprise recruited as an interview target and later treated as complementary data, in the same way as the focus groups of public health nurses) reported that it was effective to create opportunities for health managers from various companies to meet each other and share their concerns and ideas, as most of them were conducting WHP activities on their own.

[Table 1. Characteristics and workplace health promotion activities conducted in participating enterprises (n = 14). The enterprise whose interview data were treated as complementary data is not included in this table.]
Relative priority
Many health managers mentioned the enterprise's prioritization of WHP activities relative to other matters as a facilitative factor. Specifically, if health management was part of the company's overall management vision, it was easy to obtain the leader's approval and implement health promotion measures immediately.
It's going to cost, and we talked to the employer and (health and safety) committee. [ … ] The most important thing was that it would help employees manage their health. So, we got the go-ahead right away. (F, health manager, tobacco control)
However, one employer mentioned that WHP implementation was a lower priority compared to customer-focused activities or productivity. Such a relatively low priority can be a barrier to implementation and is likely to be highly dependent on the business conditions of SMEs at any given time.
Learning climate
When health managers felt that the employer perceived them as indispensable and knowledgeable persons in WHP implementation, they proactively examined, planned, and implemented the WHP activities. In one enterprise, the health manager, who previously had no knowledge of health management but was trusted by the employer and assigned this task, proactively implemented the program through trial and error. When the implementation went well, the manager felt affirmed, which raised their "self-efficacy" and their motivation to continue the program, and the implementation of other activities further increased, thereby creating a virtuous cycle.
The representative just told me he wanted to do health management for the employees. It was a great learning experience for me to work on our own. (When deciding on the WHP activities to adopt) The employer basically gave me permission to select whichever I wanted. [ … ] I didn't ask (my superiors) which one they preferred. We kind of just said, 'This is the one we'll go with.' (A, health manager, physical activity)
Leadership engagement
Employers engaged in WHP activities in two ways: communicating the company's philosophy linked to the WHP to all employees, and supporting those engaged in the implementation. Both were strong drivers of implementation. Direct and repeated communication from the employer at general meetings and other occasions led others within the company to give relatively higher priority to WHP activities and, hence, implementation progressed.
The current representative of the company believes that the happiness of employees and those close to them will lead to contributions to customers and the local community. [ … ] I think the most important thing is the representative's way of thinking. (A, health manager, physical activity)
Similarly, extending support to those in charge of the program, such as allowing them to participate in external trainings related to WHP program implementation during working hours, facilitated implementation.
I was told that I can participate in such things (such as seminars on WHPs outside the company) as much as I want because they see it as part of my work. (A, health manager, physical activity)
Multiple public health nurses/nutritionists in the focus groups supported these findings, also mentioning that "The employer's voice is essential" and "The influence of employers and health care managers is significant in ensuring the sustainability of WHP implementation." On the other hand, health managers who were not given enough time or support to implement WHP-related tasks inevitably gave lower priority to WHP implementation. In one such enterprise, one year after declaring that they would perform blood pressure control activities, they still had not purchased a blood pressure monitor.
Access to knowledge and information
As many SMEs did not have existing resources to initiate WHP activities, many health managers reported that access to external knowledge and information, such as participation in study sessions during working hours and support from JHIA public health nurses, was necessary to proceed with the implementation. This access to information was enhanced by support from the employer and the positive attitudes of health managers. In contrast, when access to such external knowledge and information was difficult, even an increased sense of urgency in the health manager did not lead to actual implementation. In one enterprise implementing blood pressure management, nothing was implemented after installing blood pressure monitors, despite a sense of urgency to do something more, because they did not know what to do and had poor access to knowledge and information.
There are many employees with high blood pressure, so we need to think of something, but I'm not sure what I can do at work. (B, health manager, blood pressure)
Characteristics of individuals

Knowledge and beliefs about the intervention
Some employers and health managers reported that they were clearly aware of conducting WHP activities as part of their regular tasks, rather than as an additional task, because they believed that the health promotion of employees is one of the issues the enterprise should engage in.
Employees are the most important. In order to keep employees working with high motivation for a long time, (spending resources) for their well-being is an investment, not a cost. (K, employer, tobacco control and health check-ups)
However, some employers or health managers were convinced that health behavior would not change unless each employee's awareness changed first, which led to the belief that WHP activities would have only a limited effect; as a result, actual implementation was limited.
Self-efficacy
Some employers reported (or health managers reported as the employer's perception) that employers entrusted health managers with the task of health promotion, and the managers were able to accomplish it with the help of adequate time and manpower. The managers' sense of self-efficacy then increased, leading to a virtuous cycle and continued implementation in subsequent years (see "learning climate" and "peer pressure" as well).
Individual identification with organization
Some health managers described that the employer's sincere concern for the employees led to the employees' desire to respond to that concern, and such relationships of mutual trust between the employer and the employees facilitated implementation.
Other personal attributes
In the focus groups, the public health nurses suggested that the health manager's skills and authority directly influenced implementation. Although this was not necessarily related to their position, it was important for health managers to have the authority to speak in such a way that employees would listen to them, especially with respect to the continuity of WHP activities.
Process domain

Change agent
Most employers and health managers perceived the public health nurses or nutritionists at JHIA as key members when implementing WHP activities, as they provided useful advice or information about WHP. In addition, they perceived health lectures by public health nurses to be more effective, as employees were more receptive to information coming from them.
Champions
The public health nurses further suggested that involving front-line champions advanced program implementation. For example, when adopting measures to make the company cafeteria menu healthier, discussing the issue among all stakeholders, that is, not only the employer and general manager, but also the cook, led to the program's successful adoption and implementation.
Discussion
We evaluated the implementation of health promotion activities in SMEs using a qualitative approach guided by CFIR and identified constructs across five domains that facilitated and inhibited implementation.
Leadership engagement of employers as a fundamental factor
The diagram depicting the relationships between CFIR constructs showed that the "leadership engagement" of employers in implementing the WHP activities influenced other facilitators. Employers' "leadership engagement" in this study refers to the commitment, involvement, and accountability of employers with regard to the implementation of WHP, with a sincere belief in the value of the health and well-being of their employees [19]. The leadership engagement of employers may increase the "access to knowledge and information" of health managers and foster the "implementation climate", which refers to the targeted employees' shared perceptions of the extent to which the use of a specific innovation is "rewarded, supported, and expected within their organization" [41]. These two factors (i.e., increased "knowledge and information" and an improved "implementation climate") fostered among health managers greater motivation, skills, and "self-efficacy" regarding the implementation of WHP activities.
Our findings support a theory of the organizational determinants of WHP implementation, which showed that organizations can strengthen the implementation climate by facilitating knowledge and skills development in employees [42]. To create an "implementation climate" in SMEs, it would be important to first secure the buy-in and leadership of employers to set up the resources and systems for WHP [43]. This would allow health managers, who are often the only front-line implementers in the organization, to increase their knowledge and skills in implementing WHP; the strengthened implementation climate would then be shared with employees and become a company-wide climate.
The leadership engagement of employers is a common key factor in best practices of WHP programs [44,45], and previous empirical studies showed that leadership support is associated with implementation processes and work attendance [46]. Our study further suggested possible mechanisms by identifying several factors that may link leadership engagement and improved implementation of WHPs. Leadership engagement is considered to impact WHP programs by creating a culture of health [43], and the linking factors identified in this study may also contribute to creating such a culture of health. Especially in small companies, where there are fewer organizational layers than in larger companies, it is easier for the employers' beliefs and vision to permeate the entire company [47].
The effects of leadership engagement must be considered within the culture in which leaders operate [48]. Compared to other countries, Japan has moderate power distance and collectivism, with importance placed on values regarding social obligation, social harmony, and social contribution [49-51]. We found that employers' sincere desire for employees' well-being, and their trust in and acclaim for employees and health managers, led to the employees' trust in the employers, which in turn can also facilitate implementation. When management involves employees in various ways, employees are encouraged to have more positive attitudes not only towards the employer and their own selves, but also towards the organization [52].
Especially in Japan, with its collectivist culture, transformational leadership is more likely to empower employees than charismatic leadership [53], and such reciprocal relationships between employees and the employer and/or organization may be readily formed.
Barriers in implementing WHP activities
The belief held by the employer and/or manager that "health management is one's own responsibility" was suggested to be a barrier to implementation. When employers and managers feel strongly that the health promotion of employees ultimately depends on individual employees' mindsets regardless of what the company does, they are less likely to fully utilize resources for such programs. In other words, a lack of belief in the effectiveness of health promotion at the workplace inhibited strong support and proactive implementation of WHPs. In the management system of Japanese companies, decisions in SMEs are generally taken solely by the employer, while large companies tend to make decisions by consensus of the senior management [54]. In addition, WHP implementation in SMEs is often the charge of a single health manager (e.g., a staff member from the human resources or general affairs department), whereas in large companies it is handled by a team. Therefore, the perceptions of the individual employer and/or manager may be more likely to directly affect the implementation of WHP.
Limited resources (people, goods, and money) for health promotion were not identified as a barrier in our study, despite being one of the main barriers for SMEs in implementing WHP [24]. These findings, inconsistent with previous literature, may be mainly because there was already a certain level of readiness for WHP implementation among the target enterprises, as one of the inclusion criteria was that they had already made a health declaration; the implemented WHPs were activities with low initial investment rather than packaged comprehensive programs; and fewer resources were needed, as many of the establishments had fewer than 100 employees.
Cosmopolitanism as a potential facilitator
One of the factors that may be unique to SMEs in this study was "cosmopolitanism." The existing network between companies located in the same industrial park was effectively utilized for information exchange and resource sharing, which facilitated implementation. In health care settings, collaborative learning across agencies is known to facilitate the implementation of evidence-based interventions by altering social networks among participants to promote the transmission of new ideas and social support [55]. WHO also recommended "Partner and build alliances" as one of the five required actions for implementing health promotion and suggested priorities for alliance action [56]. However, in the workplace, such networks between companies have not been a focus as an influential factor in WHP implementation, which may be partly because most of the evidence has been obtained from larger companies where information and support are sufficient. Employers of SMEs often have regular gatherings with other business organizations or subcontractors within the same industries, and there is often a system in place to exchange opinions and information regarding business between them.
Implications
The study findings imply that continuous support both for employers, to encourage leadership engagement, and for health managers, to increase their knowledge, change their beliefs, and raise their self-efficacy, may promote evidence-based WHP implementation. In most large companies, occupational health professionals are employed and stationed at the company to support both WHP implementation at the organizational level and health behavior change at the individual level. However, in many SMEs, the public health nurses of JHIA mainly support health behavior changes among high-risk employees, as they have limited time to support each SME. To reduce differences in employees' health behaviors between SMEs and large companies, measures tackling the social context at the organizational level (i.e., implementation of WHP) are required [57]. Our findings reveal the importance of approaches targeting factors outside workplaces in addition to those inside them, such as encouraging the knowledge, beliefs, and self-efficacy of employers and health managers at the individual level, and supporting collaborations with other business organizations to accumulate knowledge on best practices and shared learning related to WHP activities (cosmopolitanism). Thus, shifting the main target of the limited resources of occupational health professionals from high-risk individuals to employers and health managers, as well as to external factors that support internal WHP implementation, would be a more efficient and sustainable way to support WHP implementation in SMEs.
Strengths and limitations
To the best of our knowledge, this is the first study examining the barriers and facilitators for the implementation of WHP activities in SMEs using CFIR. Our findings offer suggestions for developing implementation strategies to promote WHP activities at SMEs in the future. The approach of categorizing the identified influential factors into CFIR constructs will promote the integration of findings with other implementation research using CFIR, and contribute to an understanding of the applicability of health-related interventions in various settings through a comparison of consistent and inconsistent findings. However, this study has a number of limitations that must be addressed. First, we collected data only from the providers (employers, health managers, and public health nurses), not from those receiving the interventions (employees). Especially for the assessment of "patient needs" and "relative priority," even if the provider states that the employees' needs are understood or that the employers' desire to prioritize and value health promotion at the workplace is conveyed to the employees, this may not necessarily reflect the truth unless the employees themselves are also asked. Second, the generalizability of our findings may be limited, as our sample enterprises had already participated in the "health declaration" initiative and implemented workplace health promotion activities, meaning their readiness for WHP implementation was high. In particular, for SMEs that have not implemented any activities related to health and productivity management in Japan (reported to be 80% of SMEs [16]), future studies are needed to identify the factors that inhibit WHP adoption and the implementation strategies to overcome those barriers. Third, we could not evaluate the degree of influence quantitatively, as it was difficult to compare five very different topics of WHP activities with a wide range of intervention characteristics.
Conclusions
Multi-level factors influencing the implementation of NCD prevention measures in SMEs were identified. In particular, leadership engagement by employers was identified as the most influential and fundamental factor. These findings highlight the need to focus on both the internal and external structures of an enterprise. In resource-poor settings, strong endorsement, support, and positive feedback from employers were important for health managers and employees to be highly motivated to promote or participate in health promotion; this led to the continuous implementation of, or participation in, health promotion activities, thus creating a positive cycle. We recommend the development of future health promotion programs at SMEs using strategies that enhance these multi-level facilitative factors.
"year": 2022,
"sha1": "b411fa9574ebb4f04406a250449dd4586631b80d",
"oa_license": "CCBY",
"oa_url": "https://implementationsciencecomms.biomedcentral.com/track/pdf/10.1186/s43058-022-00268-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fd26f5e17385e8941e17561a8be99c942b19b516",
"s2fieldsofstudy": [
"Medicine",
"Political Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
Evaluation of cytology versus human papillomavirus-based cervical cancer screening algorithms in Bhutan
To evaluate the performance of existing versus alternative cervical cancer screening protocols in Bhutan, cervical exfoliated cells were collected for cytology and high-risk human papillomavirus (HR-HPV) testing among 1,048 women aged 30-69 years. Conventional smears were prepared and read locally. HR-HPV was tested by GP5+/6+ polymerase chain reaction, followed by genotyping and human DNA methylation analysis among HR-HPV-positives, in Europe. Test positivity was 7.5% for ASCUS or worse (ASCUS+) cytology and 14.0% for HR-HPV. All women with ASCUS+ and/or HR-HPV positivity (n=192) were recalled for colposcopy, among whom a total of 29 cases of histologically confirmed cervical intraepithelial neoplasia grade 2 or worse (CIN2+) were identified. An additional 7 CIN2+ cases were imputed among women without colposcopy. Corrected sensitivities for CIN2+ and CIN3+ were 61% and 74% for ASCUS+, 86% and 96% for HR-HPV, and 47% and 70% for ASCUS+ triage of HR-HPV. Specificity varied from 88% for HR-HPV up to 98% for ASCUS+ triage of HR-HPV, similarly for CIN2+ and CIN3+. Among HR-HPV-positive women with biopsies, methylation analysis offered similar discrimination of CIN2/3 and cervical cancer as ASCUS+, and better than HPV16/18 genotyping alone, but sample sizes were limited. In conclusion, the performance of cytology in Bhutan is in the mid-range of that reported in other screening settings. HR-HPV testing has the potential to improve detection of CIN2+, albeit with a higher referral rate for colposcopy. Cytological triage of HR-HPV-positives (performed in the absence of knowledge of HR-HPV status) reduced referral but missed more than one third of CIN2+.
INTRODUCTION
Cervical cancer represents the most common cancer among females in Bhutan [1], where a national cytology-based screening program has existed since 2000 [2]. The program is provided free of charge and recommends Papanicolaou (Pap) smears every three years for women aged 25-60 years, followed by colposcopy for screen-positive women. Due to limitations in trained personnel, most Pap smears are read, and most colposcopies are performed, in only two regional centres: the capital Thimphu, and Mongar in Eastern Bhutan. Although campaigns are also conducted in more remote, rural areas, the majority of cytology and work-up of screen-positive women is provided in national referral hospitals, so that the population coverage of at least one lifetime Pap smear has been estimated to vary between 20% and 60% according to district [3,4]. More recently, there have been attempts to introduce cervical screening using self-collection of samples for the detection of high-risk human papillomavirus (HR-HPV) [5].
Indeed, during the last decade, cervical cancer screening has shifted towards the molecular detection of HR-HPV, the main cause of cervical cancer, allowing for increased automation of diagnostic procedures. Randomized trials in high-income countries among regularly screened women show that HR-HPV testing provides 60-70% greater protection against invasive cervical carcinomas than cytology, and allows extension of screening intervals [6]. Large studies conducted in low- and middle-income countries (LMICs) have also shown good cross-sectional [7-15] and prospective [16,17] accuracy of HR-HPV testing versus cytology in largely unscreened populations. With respect to triage of HR-HPV-positive women, atypical squamous cells of undetermined significance (ASCUS) cytology or worse (ASCUS+), alone or in combination with HPV16/18 genotyping [18-20], is the recommended approach in high-income countries (HICs), but host gene methylation [21-23] offers an alternative molecular triage option.
Within the framework of a collaboration between the Ministry of Health (MoH) of Bhutan and the International Agency for Research on Cancer (IARC) [3,24], we here report the performance of a cervical screening program carried out in Thimphu, Bhutan, among women aged 30 years or older. The cross-sectional performance of Pap smear and HR-HPV using a clinically validated test for cervical screening (GP5+/6+) [25], plus the potential use of HPV16/18 genotyping and DNA methylation markers to triage HR-HPV-positives, were evaluated based upon a gold standard of colposcopy and histologically proven cervical intraepithelial neoplasia (CIN) grade 2 or worse (CIN2+) and CIN grade 3 or worse (CIN3+).
RESULTS
Of 1,048 women screened, mean age was 40 years (interquartile range = 34-46 years), 66% had a previous Pap test, 86% reported one lifetime sexual partner, and 91% were currently married (data not shown [3]). Of 192 women with abnormal cytology and/or HR-HPV positivity who were referred for colposcopy, 159 (83%) attended (Table 1). In total, 36 CIN2+ and 23 CIN3+ cases (including 7 and 4 cases imputed among women without colposcopy) were included in the present analyses (Table 1). Only corrected indices are shown, but crude estimates can also be calculated from the data described in Table 1. Table 2 shows screening indices of cross-sectional accuracy by different primary screening and triage methods. For primary cytology, screening test positivity was 7.5% for ASCUS+, and 5.1% for ASCUS+ with HR-HPV triage of ASCUS. For primary HR-HPV, positivity was 14.0% for HR-HPV and 3.2% for HR-HPV triage by ASCUS+ (Table 2). Test positivity for primary cytology at the threshold of LSIL+ was 3.7% (data not shown).
With respect to triage of primary cytology, triage of ASCUS by HR-HPV was associated with an improvement in specificity to 97% for CIN2+ and 96% for CIN3+, and an improved positive predictive value (PPV), but with a decrease in sensitivity to 56% and 70%. Triage of primary HR-HPV by ASCUS+ was also associated with an improvement in specificity to 98% for both CIN2+ and CIN3+, and a higher PPV, but also with a decrease in sensitivity to 47% for CIN2+ and 70% for CIN3+.
To evaluate the potential utility of molecular markers for the triage of HR-HPV-positive women, we compared test positivity by histological diagnosis of HPV16/18 genotyping (alone or in combination with cytology) and CADM1/MAL/miR124-2 methylation, with that of ASCUS+ cytology, among a subset of 101 HR-HPV-positive women (after exclusion of 19 without colposcopy/biopsy and an additional 22 without a valid result for CADM1/MAL/miR124-2 methylation) (Figure 1). The positivity of all three tests increased from <CIN2 (n=81), through CIN2/3 (n=15), to cancer (n=5). The trend in positivity was significant for ASCUS+ cytology (p<0.001) and CADM1/MAL/miR124-2 methylation (p<0.001), but not for HPV16/18 (p=0.284). Of note, all 5 cancers were CADM1/MAL/miR124-2 methylation positive. Combined ASCUS+ and/or HPV16/18 positivity was associated with lower discrimination across lesion grades in comparison to ASCUS+ alone or methylation, but offered the highest positivity in CIN2/3 (Figure 1).
DISCUSSION
In this first evaluation of the cross-sectional performance of the cytology screening program in Bhutan, sensitivity of cytology ASCUS+ for CIN2+ (61%) and CIN3+ (74%) fell in the mid-range of estimates from similar studies that included colposcopy of HR-HPV-positive women, irrespective of whether they were conducted in LMICs or HICs.

[Table 1: CIN2+/3+ confirmed and imputed among 1,048 women aged ≥30 years, with and without colposcopy, respectively, by combination of cytology and HR-HPV results]
Use of HR-HPV as a primary screening test was associated with a higher detection of CIN2+ and CIN3+ than cytology (sensitivity ratio = 1.41 and 1.29, respectively), consistent with findings from previous studies [7, 9-11, 13-15, 26-28] and a meta-analysis [29]. Higher detection rates of CIN2+ and CIN3+ have also been seen in the HR-HPV versus cytology arms of 8 randomized controlled trials [29].
Performance of HR-HPV testing for detection of CIN2+ and CIN3+ has been shown to be heterogeneous across studies in LMICs [29]. In the present study from Bhutan, in which HR-HPV testing was performed in a specialized laboratory in Europe, the sensitivity of HR-HPV (96% for CIN3+) was towards the high end of estimates from previous reports in LMICs (average 84%), reaching levels similar to those reported in HICs (average 98%) [29].
HR-HPV testing offered a higher cross-sectional negative predictive value than ASCUS+ cytology in Bhutan. Indeed, a negative HR-HPV test has also been shown to offer greater reassurance against future CIN3+ [30-33] and cervical cancer [31] in large prospective studies. Large randomized trials have also shown that primary HR-HPV screening results in a significantly lower incidence of CIN3+ [19] and cancer [34] than primary cytology. These data have led certain HICs to switch from cytology to HR-HPV as the primary screening test, including Australia, Italy, New Zealand, the Netherlands and the UK. The World Health Organization [35] and U.S. guidelines [20] also recommend HR-HPV as a primary screening test.
Nevertheless, HR-HPV also resulted in a higher burden of referral to colposcopy and reduced specificity compared to cytology screening, consistent with results from previous cross-sectional studies [29]. Whilst some screening programs in LMICs have pragmatically chosen to treat all HR-HPV-positive women on account of concerns about the accuracy/feasibility of triage options and losses to follow-up [36,37], triage of HR-HPV-positive women would be desirable to immediately refer only those at highest risk. The currently recommended methods for triage in HICs include ASCUS+ cytology, alone or in combination with HPV16/18 genotyping [18-20], but host cell DNA methylation analysis is also a promising candidate [21-23], especially as it can also be performed on self-collected cervicovaginal samples [23].
In Bhutan, primary HR-HPV testing followed by ASCUS+ triage of HR-HPV-positives was associated with high specificity and PPV, requiring referral of only 3.2% of screened women (versus 14% for all HR-HPV-positives), offering an option for triage in Bhutan, where cytology is already established. Although the sensitivity of cytology triage might have been higher if cytotechnicians had known that they were triaging HR-HPV-positive women, cytology triage in Bhutan was nevertheless associated with a substantial loss of cross-sectional sensitivity, missing more than one third of CIN2+. Therefore, in a subset of HR-HPV-positive women, we compared the discriminating power of two molecular-based triage options, namely HPV16/18 genotyping and host DNA methylation, to that of cytology. Although sample sizes were limited, CADM1/MAL/miR124-2 methylation was strongly related to the severity of cervical disease and was always positive in cervical cancer, as shown previously [38]. Indeed, methylation analysis appeared to offer similar discrimination of CIN2+ as ASCUS+ cytology, and better discrimination than HPV16/18 genotyping alone or in combination with cytology. Our findings are in agreement with large recent studies which noted that DNA methylation analysis (although not always of the same combination of gene markers) was a non-inferior triage option versus cytology, in both clinician-collected [21,22] and self-collected [23] HR-HPV-positive samples, and actually performed significantly better than HPV16/18 genotyping [21,22].
Prior to becoming a recommended primary screening test, HR-HPV testing was recommended for the triage of ASCUS in primary cytology programs, having been shown to have higher sensitivity than, and similar specificity to, repeat cytology in this group of women [39]. However, this algorithm was associated with relatively poor sensitivity in Bhutan (56% for CIN2+), substantially lower than in a larger Chinese study (84%) [40].
The strengths of this study were the high proportion of screen-positive women who received colposcopy and biopsy, and the fact that histology was imputed among the few who did not (although this correction had little effect on sensitivity and specificity estimates). Furthermore, the population-based sample is expected to be broadly representative of women aged ≥30 years living in Bhutan, and the risk of HR-HPV positivity and CIN3+ in this population was high, as reported previously [24]. The major limitations were the restricted sample size and the fact that HR-HPV testing, HPV16/18 genotyping and methylation analysis were not performed locally, but in an expert laboratory in Europe, so that the performance of the assays does not truly represent field conditions. Nevertheless, the clinical performance of the GP5+/6+ polymerase chain reaction (PCR)-based assay in Amsterdam has been shown to be almost identical to that of the more widely used HC2 [relative CIN2+ sensitivity = 1.00 (0.96-1.04) and specificity = 0.99 (0.91-1.07)] [25,41]. Lastly, we are aware that a prospective evaluation of screening algorithms involving repeat screening rounds may lead to a different relative performance of HR-HPV versus cytology, but this is also dependent on a country's willingness and capacity to implement organized follow-up.
In conclusion, despite the relatively good performance of the Bhutanese cytology program, a shift to primary HR-HPV screening has the potential to further improve the detection of cervical pre-cancer, albeit with a higher referral rate for colposcopy and a loss of specificity. Cytological triage of HR-HPV-positive women diminishes immediate referral to colposcopy but would have missed more than one third of CIN2+. Whilst methylation analysis was shown to be a promising and objective alternative to ASCUS+ cytology in the triage of samples from HPV-positive women, our results remain preliminary.
Population
The study had the approval of both the Research Ethical Board of the Bhutan Ministry of Health and the IARC Ethics Committee.
In 2012, during a population-based survey of HPV prevalence, 2,505 women aged 18-69 years were invited and underwent a gynecological examination in Jigme Dorji Wangchuck National Referral Hospital (JDWNRH) and Lungthenphu Hospital, Thimphu, Bhutan. Study procedures have been described in detail elsewhere [24]. Exfoliated cervical cells were obtained using a cytobrush (Rovers Medical Devices, The Netherlands). After preparation of a conventional Pap smear, the brush containing cellular material was placed in a vial containing PreservCyt medium for HPV and methylation testing.
The present study is restricted to the subset of these women for whom HR-HPV screening is recommended, namely 1,048 women aged ≥30 years. As Pap smear, and later HR-HPV, results became available, first all women with abnormal cytology (N=79), and subsequently also HR-HPV-positive women with normal cytology (N=113), were referred for colposcopy, of whom 83% (159 of 192) finally attended (Table 1).
Cervical disease assessment
Colposcopy was used to obtain biopsies from all suspicious areas among women with abnormal colposcopic findings. Cervical biopsies were obtained from 105 (66%) of the 159 women who underwent colposcopy. Histology was performed at JDWNRH, Thimphu, and 29 cases were diagnosed as CIN2+ (including 10 CIN2, 14 CIN3 and 5 cervical cancers) (Table 1). Treatment of colposcopy-detected lesions was performed according to local protocols, primarily using loop electrosurgical excision procedures for CIN2/3.
HPV testing and genotyping
Vials containing cellular material in PreservCyt medium were shipped to the Department of Pathology at the VU University Medical Center, Amsterdam. DNA was first extracted from the PreservCyt sample using magnetic beads on a robotic system. β-globin PCR analysis was then conducted to confirm the presence of human DNA in all specimens [42], and a general primer GP5+/6+-mediated PCR was used to amplify HPV DNA. HPV positivity was assessed by hybridization of PCR products in an enzyme immunoassay with two oligoprobe cocktails that, together, can detect 44 mucosal HPV types. Subsequent HPV genotyping was conducted by reverse-line blot (RLB) hybridization of GP5+/6+ PCR products as described previously [42,43]. HR-HPV refers to positivity for 13 high-risk HPV types only (16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59 and 68) [44]. HPV16/18 genotyping refers to positivity for HPV16 and/or HPV18. Non-high-risk HPV types detected by GP5+/6+ RLB are ignored.
Host DNA methylation analysis

CADM1/MAL/miR124-2 methylation analysis was performed at the Department of Pathology at the VU University Medical Center, Amsterdam, as previously described [38]. In brief, extracted DNA was first subjected to bisulfite treatment using the EZ DNA Methylation Kit (Zymo Research, USA). DNA methylation analysis was performed by a commercial multiplex quantitative methylation-specific PCR (PreCursor-M) which enables simultaneous amplification and detection of methylated DNA of CADM1, MAL and miR-124-2, and methylation-independent β-actin as a sample quality control, within a single reaction [45]. This combination of three genes was chosen based upon prior optimization and validation work on cervical samples [38]. Samples were scored methylation-positive for CADM1, MAL and miR-124-2 relative to β-actin, according to the manufacturer's instructions (based on validated thresholds that, on a validation set of cervical scrapes of HR-HPV-positive women, gave rise to maximum CIN3+ sensitivity at 70% specificity), as described previously. A sample was considered positive if any of the three genes scored positive.
Statistical analysis
Cytology and HR-HPV testing were first compared as stand-alone primary screening tests. In addition, different triage approaches for the immediate referral of women to colposcopy were evaluated, including two well-established protocols (HR-HPV testing of ASCUS cytology only, and ASCUS+ cytology of HR-HPV-positive women). Conventional screening indices of accuracy, including sensitivity, specificity, PPV, negative predictive value and their 95% confidence intervals, were calculated for both CIN2+ and CIN3+. Firstly, crude indices were calculated using only the CIN2+ and CIN3+ cases histologically confirmed among the 159 women attending colposcopy, assuming that all women without a biopsy were histologically negative. Secondly, corrected indices were calculated after imputation of missing data for the 33 HR-HPV-positive and/or cytologically abnormal women who did not attend colposcopy [7,40]. In the corrected model, observations were replaced by pseudo-observations weighted by the probability of CIN2+/3+ among women with the same combination of HR-HPV and cytology results who underwent colposcopy. Lastly, in order to evaluate their potential to triage HR-HPV-positive women, we also compared ASCUS+ cytology, HPV16/18 genotyping, their combination, and host DNA methylation analysis across cervical diagnosis severity, among a subgroup of 101 HR-HPV-positive women (namely those who underwent colposcopy/biopsy and had a valid result for all three tests), using a chi-squared test for trend across grades of <CIN2, CIN2/3 and cancer.
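The correction step described above amounts to weighted counting within strata defined by the combination of screening results. The sketch below illustrates the idea with made-up counts (the stratum labels and numbers are ours for illustration only, not the study data): women who missed colposcopy contribute pseudo-observations weighted by the CIN2+ probability observed among attendees with the same cytology/HR-HPV combination.

```python
# Strata are defined by the combination of cytology and HR-HPV results;
# all counts below are invented for illustration only.
strata = {
    ("ASCUS+", "HR-HPV+"): {"attended": 40, "cin2plus": 15, "missed": 5},
    ("ASCUS+", "HR-HPV-"): {"attended": 20, "cin2plus": 4,  "missed": 3},
    ("normal", "HR-HPV+"): {"attended": 90, "cin2plus": 10, "missed": 20},
}

def corrected_cases(strata: dict) -> float:
    """Observed cases plus pseudo-observations imputed for non-attendees."""
    total = 0.0
    for s in strata.values():
        p = s["cin2plus"] / s["attended"]          # stratum-specific CIN2+ rate
        total += s["cin2plus"] + p * s["missed"]   # observed + imputed
    return total

print(f"corrected CIN2+ case count: {corrected_cases(strata):.1f}")
```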
Author contributions
GC conceived the study, supervised the analyses and led the writing of the manuscript. GC, SF, IB, UT and Tshokey were all involved in the development of the methodology. UT was the principal investigator and supervised the clinical protocol. TT (histology), Tshokey (cytology), PS and DH (HPV and methylation testing) contributed to data collection. VT performed the data management and statistical analyses. All authors contributed to data interpretation and read and approved the final manuscript.
CONFLICTS OF INTEREST
P.J.F. Snijders has received honoraria from the speakers' bureau of Roche and is a consultant/advisory board member for Roche and Gen-Probe. P.J.F. Snijders and D.A.M. Heideman are minority shareholders of Self-Screen B.V., a spin-off company of VUmc which holds patents related to this work. The other authors disclose no potential conflicts of interest.
FUNDING
The primary support for this project came from the International Agency for Research on Cancer and grants from the Bill & Melinda Gates Foundation, USA (grant numbers 35537 and OPP1053353).
"year": 2017,
"sha1": "cdde1eb0b15b9966ea1738827f6274494382de27",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=19783&path[]=63177",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cdde1eb0b15b9966ea1738827f6274494382de27",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Second-degree branch structure blockchain expansion model
The blockchain runs in a complex topological network governed by the consensus principle, and data storage between nodes needs to maintain global consistency across the entire network, which makes data storage inefficient. At the same time, information exchange between large-scale communication node groups leads to problems of bandwidth expropriation and excessive network load. In response to these problems, this article proposes a second-degree branch structure blockchain expansion model. First, a ternary storage structure is established: data are stored by way of fully integrated storage, multi-cell storage, and fully split storage, and data are classified and stored in parallel across the structures. Second, a second-degree branch chain model is constructed: the main chain forks into multiple sub-chains, a free competition chain structure and a Z-type chain structure are defined, and a two-way rotation mechanism is introduced to realize the integration and transition between chain structures. Finally, a set of malicious attacks is simulated to derive security constraints for the blockchain and to verify the security of the second-degree branch chain model. Experiments show that the second-degree branch structure expansion model proposed in this article has great advantages in data storage efficiency and network load.
Introduction
Since the concept of blockchain was put forward, it has attracted widespread attention around the world. In terms of data storage, it is a distributed ledger; in terms of protocol, it is a decentralized consensus protocol; and in terms of economics, it is an Internet of value that improves cooperation efficiency. From the perspective of blockchain technology, blockchain1 data storage uses hash compression, asymmetric encryption,2 and other cryptographic principles to ensure reliability, and it adopts distributed data storage3,4 through point-to-point connections.5 The blockchain ledger is jointly maintained by all nodes, which store data on the basis of a credible consensus mechanism.6 In recent years, the application of blockchain technology has become more and more widespread, with systems such as Bitcoin, Ethereum, and Litecoin, which rely on proofs of computing power,7 and it has gradually evolved from encrypted digital currency into a credible service platform applied to various industries in society. But facing the challenges of complex environments,8 blockchain expansion9 problems have become more prominent.
Although the blockchain adopts expansion mechanisms such as Segregated Witness,10 Lightning Network (LN),11 and data sharding,12 as the scale and production speed of blockchain data continue to increase, the rate of storage increasingly lags behind the rate of real-time data production, resulting in more serious data storage and network load problems in the blockchain. Therefore, how to further improve storage efficiency and reduce network load has become a difficult point in current blockchain research.
In response to these problems, this article proposes a second-degree branch structure blockchain expansion model. The main contributions are as follows:

1. In view of the inefficiency of blockchain data storage, a ternary storage structure is designed, which significantly improves data storage efficiency through data shunting and task distribution.

2. In the design of the actual structure of the blockchain, the second-degree branch chain model is proposed, including the free competition chain structure and the Z-type chain structure; the expansion of the blockchain is based on these two chain structures.

3. On this basis, in response to the incompatibility between the second-degree chains caused by their structural differences, a two-way rotation mechanism is proposed to enable smooth switching between structure chains, and the fusion of the double-chain structure is demonstrated through the second-degree chain fusion transition process.

4. Finally, the security of the second-degree branch structure blockchain expansion model is analyzed, and security constraints are derived on the basis of this analysis, which has instructive significance for practical applications of the blockchain.
Related works
At present, many scholars have conducted in-depth research on blockchain expansion technology and achieved many results. In the isolated authentication expansion mechanism,13 the block body extracts the signature information from the main chain space and stores it in a new data structure to achieve expansion, but the block space saved by this mechanism is limited. Seres et al.11 quantitatively analyze the structural characteristics of the LN and address the expansion problem by improving data throughput through multiple payment channels, but the LN topology needs to be improved and its security strengthened. Min et al.14 propose a multi-center dynamic consensus mechanism in the permissioned chain which, by optimizing the consensus mechanism, reduces block confirmation delays to achieve expansion; but the dependence on master nodes is risky, and system reliability is difficult to guarantee. Jia et al.15 propose a scalable blockchain model that achieves expansion by optimizing the storage structure and reducing communication cost, but it requires highly credible data storage nodes, which reduces the stability of the system. Burchert et al.16 propose a new layer between the blockchain and payment channels; this layer of micropayment channels deals with the expansion problem and achieves delay-free payment. Kim et al.17 propose a distributed storage blockchain (DSB) system, which improves storage efficiency by combining secret sharing, private key encryption, and information dispersal algorithms; but when peer failures occur due to denial-of-service attacks, DSB incurs serious communication costs. Fadhil et al.18 propose the Bitcoin Clustering Based Super Node (BCBSN) protocol, which reduces transaction propagation delay by a reasonable ratio; but when a super node has a transaction failure, normal transactions are seriously affected. Zhao et al.19 propose a security strategy for DSBs that can delete parts of blockchains, so that nodes store only a part of the blockchain; but due to the coexistence of multiple node modes, the operational burden on the management node increases. Zhang et al.20 analyze blockchain transaction databases and propose a storage optimization scheme that divides the blockchain transaction database into a cold zone and a hot zone, and achieves storage optimization by moving unspent transaction outputs out of the in-memory transaction database; but data query calls to the cold zone are inefficient. Shah et al.21 propose a consensus-ADMM (alternating direction method of multipliers) based distributed optimization algorithm, which decomposes the optimization problem into sub-problems, solves each sub-problem locally, and exchanges information with neighboring regions to solve for the global update.
In summary, this article builds on existing work, further optimizes the blockchain structure, and proposes a second-degree branch structure blockchain expansion model.
Second-degree branch model
For the construction of the blockchain, this article first designs a ternary storage method to optimize the blockchain storage structure, then builds a second-degree branch chain mechanism for expansion, and uses a two-way rotation mechanism to carry out the transition between the differentiated structure chains.
Ternary storage structure
After the blockchain is bifurcated into multiple sub-chains, the original blockchain information is distributed among the sub-chains for data storage, and the data storage rate is significantly improved.

Definition 1. Blockchain data flow circulates through multiple channels and is stored in parallel at the same time; this is called data shunting. Let N_block be the number of blocks added per unit time before the blockchain fork, and let D_valid be the effective data amount of a single block body. The data storage rate before the fork is then N_block × D_valid. The blockchain classifies the stored data information D_valid as D_valid1, ..., D_validn. After the fork, n sub-chains are formed; the number of blocks added per unit time in each sub-chain is N_block, and the effective data amount of a single block body on sub-chain i is D_validi. Since the data amounts D_valid and D_validi tend to be the same, after the blockchain is forked the data storage rate is N_block × (D_valid1 + ... + D_validn), which is close to n × N_block × D_valid; data shunting thus significantly improves the efficiency of blockchain storage.
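As a quick check on this arithmetic, the sketch below computes the storage rate before and after the fork; the values of N_block, D_valid, and n are illustrative assumptions, not figures from the paper:

```python
# Storage-rate arithmetic from Definition 1; all numbers are illustrative.
N_BLOCK = 10        # blocks added per unit time on a chain
D_VALID = 1.0       # effective data amount of one block body (e.g., MB)
N_SUBCHAINS = 4     # number of sub-chains n after the fork

# Before the fork: one chain stores N_block * D_valid per unit time.
rate_before = N_BLOCK * D_VALID

# After the fork: the n sub-chains store in parallel; since each D_valid_i
# tends toward D_valid, the aggregate rate approaches n * N_block * D_valid.
d_valid_i = [D_VALID] * N_SUBCHAINS
rate_after = sum(N_BLOCK * d for d in d_valid_i)

print(rate_before, rate_after, rate_after / rate_before)   # 10.0 40.0 4.0
```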
After the blockchain is bifurcated into multiple sub-chains, at least one complete storage unit needs to be stored when node data on the sub-chains are stored.
Definition 2. Storage unit. Each complete single chain that traces back to the genesis block is called a storage unit. When a node stores transaction information, it must be ensured that the current block can be traced back to the genesis block of the blockchain. The storage unit composed of this complete single chain reflects the integrity and atomicity of the blockchain information.
A ternary accounting structure is proposed according to the node accounting method and its environment:

1. Full integrated storage structure: each node records all the storage units of the entire blockchain. Under this structure, a global view of the second-degree branch blockchain is obtained. This type of node is mostly a global management and service node, used to dynamically monitor the overall operation of the multi-tree blockchain and to provide subsequent maintenance and updates.

2. Multi-cell storage structure: each node records a limited number of storage units and dynamically updates and stores them, so a partial view of the second-degree branch blockchain is obtained. On one hand, this type of node is a local management and service node, used to dynamically monitor and manage the local operation of the blockchain and to report the current storage status to the global management and service nodes; on the other hand, it is an information storage node of the blockchain and exists in the Z-type chain structure proposed below.

3. Fully split storage structure: each node records its own separate storage unit. In this mode, the node is only responsible for the block storage of its sub-chain and does not track the security status of the chain where it is located. This type of node is an information storage node and exists in the free competition chain structure proposed below.
The ternary storage structure is shown in Figure 1.
Second-degree branch chain structure construction
We now construct the second-degree branch chain expansion structure. It includes two sub-chain structures: a free competition chain and a Z-type chain. On one hand, both chain structures use data shunting for effective expansion; on the other hand, they exhibit different characteristics due to their structural differences.
Free competition chain structure. In the free competition chain, the blockchain forms multiple sub-chains after branching. The sub-chains are independent of each other. When a storage node downloads sub-chain data, it must store a complete storage unit, which reflects the good traceability and integrity of the blockchain. Each sub-chain only packs information on its own chain and does not interact with other sub-chains. Due to the competition between the sub-chains, the distribution of computing power resources will be obviously uneven, which is not conducive to the stability of the blockchain, so a system default allocation is introduced to balance the computing power gap.
Definition 3. System default allocation. The distribution of blockchain computing power is uneven, which causes the storage rates of the sub-chains to differ significantly and in turn creates security problems under malicious attacks. The system allocates newly added accounting nodes to the chains with weaker computing power to maintain the safety and reliability of the blockchain. This process is called system default allocation.
The schematic diagram of the free competition chain structure is shown in Figure 2. ''Free-competition'' denotes the free competition chain structure. Each single chain creates blocks in top-down order, and each child block stores the hash value of its parent block. The single chains do not affect each other, and a single chain with weak computing power will by default be allocated additional computing power by the system, so that the overall blockchain computing power tends to be consistent. A coordinate of the form (x, y) uniquely identifies the position of each block.
In the free competition chain structure, a node that stores blocks only needs to download one storage unit to carry out data storage services. Among all the chain structures, its storage overhead is the smallest, but it faces the security issues brought by the shunting of computing power, which will be discussed later. The chain-type Construct field of the stored block header information is ''Free-competition.''

Z-type chain structure. After the Z-type chain is branched, information is stored in the blocks on each child chain. In addition to storing the hash value of the parent block, it is also necessary to store the hash value of the adjacent block, which is called the pseudo-hash value.
Data storage can only be performed once both hash values have been obtained.
Definition 4. Pseudo-hash. In the Z-type chain structure, blocks are generated in sequence from left to right and from top to bottom, and the hash value of the block generated immediately before in this order is the pseudo-hash.
As shown in Figure 3, unlike in the free competition chain structure, blocks are generated sequentially from top to bottom on each child chain. All nodes in the Z-type chain structure save the hash value of the parent block, store the hash value of the adjacent block to the left in the same layer (or of the rightmost block in the upper layer), and then perform the block data storage process. Therefore, the block generation sequence under the entire chain structure is from left to right and from top to bottom.
In the Z-type chain structure, honest computing power is concentrated to store data and generate blocks sequentially, which removes the advantage a concentrated malicious attack would have on a single block and significantly improves the security of the blockchain. The Construct field of the structure's header information is ''Z-type,'' and a new hash supplement field, HashExtra, is added; in this structure, HashExtra is set to ''Fakehash.'' Finally, a block location field, Location, is defined, in which one coordinate represents the row index of the block in the Z-type chain structure and the other represents the column index. Through the Location field, each sub-chain block can find the block where its pseudo-hash is located.
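The block header fields named above (Construct, HashExtra, Location) can be pictured with the following sketch; the field types, and anything not named in the text, are assumptions rather than the paper's actual data layout.

```python
# Hypothetical layout of the extended block header described in the text.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BlockHeader:
    parent_hash: str                            # hash of the parent block
    construct: str                              # "Free-competition" or "Z-type"
    hash_extra: Optional[str] = None            # pseudo-hash ("Fakehash"); None on free competition chains
    location: Optional[Tuple[int, int]] = None  # (row, column) position in the fork chain

z_block = BlockHeader(parent_hash="ab12...", construct="Z-type",
                      hash_extra="cd34...", location=(2, 3))
```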
Inter-chain rotation transition
Due to the structural differences between the two chain structures, compatibility problems can arise when the chain structure is switched. This article proposes a two-way rotation mechanism that makes the transition between different chain structures smooth and dynamically reflects the state changes of the chain structure.
Two-way rotation mechanism. In the second-degree blockchain, it is inevitable that one chain structure will become the other after chain forks, so there are transitions between the following chain structures: the free competition chain and the Z-type chain.
Two-way rotation between the free competition chain and the Z-type chain. When the free competition chain rotates to the Z-type chain, the data in the free competition chain are allocated to each sub-chain of the Z-type chain in an orderly manner. A node on a sub-chain cannot store only the storage unit of its local sub-chain, because storing block information requires the pseudo-hash provided by other sub-chains; therefore, the storage units of all sub-chains after the fork must be stored. The corresponding Construct field changes from ''Free-competition'' to ''Z-type,'' and the HashExtra field changes from ''null'' to ''Fakehash.'' When the Z-type chain rotates to the free competition chain, the data storage tasks of the Z-type chain are distributed in an orderly manner to the free competition sub-chains. Nodes on the free competition chain only need to store the one storage unit where their sub-chain is located (without interference from the hash values of other sub-chains); the corresponding Construct field changes from ''Z-type'' to ''Free-competition,'' and the HashExtra field changes from ''Fakehash'' to ''null.''

Second-degree chain integration transition. The second-degree branch structure blockchain model is accompanied by fusion between the two chain structures. When the blockchain transitions between the chain structures, the corresponding attributes of the blocks change. A sub-chain forms another chain after the fork, and the structure changes accordingly.
Omitting fields other than the original block timestamp, random number, and Merkle root, the new block information in this article is shown in Table 1.
In Figure 4, the initial chain first bifurcates under the free competition chain structure, and each sub-chain then bifurcates further under the Z-type chain structure. In this process, the corresponding attributes in the blocks inevitably change (among them, the Construct field takes the chain type, Free-competition or Z-type, and the Location field gives the position of the block in the fork chain as [Xline, Yrow]). Table 2 lists the block attribute changes based on Table 1.
In Table 2, the state changes of the fusion transition between the two chain structures are shown dynamically. As the chain structure changes, the attribute fields of the corresponding blocks undergo the necessary changes; this updates the sub-chain structure and achieves a smooth transition.
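The attribute changes summarized in Tables 1 and 2 can be sketched as a pure field rewrite, continuing the hypothetical BlockHeader sketch above; how pseudo-hashes are actually recomputed during rotation is assumed away here.

```python
# Minimal sketch of the two-way rotation: only the Construct/HashExtra changes
# from the text are modelled. BlockHeader comes from the earlier sketch.
from typing import Optional

def rotate(header: BlockHeader, pseudo_hash: Optional[str] = None) -> BlockHeader:
    if header.construct == "Free-competition":
        # Free competition -> Z-type: HashExtra goes from null to a pseudo-hash.
        return BlockHeader(header.parent_hash, "Z-type", pseudo_hash, header.location)
    # Z-type -> Free competition: HashExtra reverts to null.
    return BlockHeader(header.parent_hash, "Free-competition", None, header.location)
```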
Model safety analysis and constraints
Based on the second-degree branch structure blockchain expansion model proposed above, the model is further analyzed in terms of security, and security constraints are constructed on the basis of this analysis.
Safety analysis of free competition chain structure
Each sub-chain in the free competition chain structure runs independently, and the system assumes by default that computing power is uniform among the sub-chains (i.e., the honest computing power is evenly distributed across the branch sub-chains). Malicious nodes attack a forked single chain, as shown in Figure 5. The security analysis of the free competition chain is carried out for this case.
Suppose the total computing power of the entire blockchain model is 1, the proportion of malicious computing power is q, and the chain is bifurcated into n sub-chains. The malicious nodes compete with the honest nodes on one sub-chain, and z denotes the number of blocks by which the malicious nodes trail the honest nodes. In this case, the computing power of the honest nodes attacked by the malicious nodes is (1 − q)/n, the effective share of malicious computing power on that sub-chain is q1 = q/((1 − q)/n + q), and the share of honest computing power is p1 = 1 − q1. In the gambler's ruin model, a gambler can gamble an unlimited number of times and tries to make up a given deficit; the probability that the gambler makes up the deficit equals the probability that the attacker catches up with the honest nodes, as shown in equation (1): the probability of ever recovering a one-block deficit is q1/p1 when q1 < p1 (and 1 otherwise). The probability that a malicious node makes up a deficit of z blocks is then given by equation (2): P(z) = (q1/p1)^z, where q1 = q/(q + (1 − q)/n) and p1 = 1 − q1. Using the transfer-confirmation chase model, the success rate of a malicious attack is given by equation (3): p = 1 − Σ_{k=0}^{z} [λ^k e^(−λ)/k!] · [1 − (q1/p1)^(z−k)], where λ = (q1/p1) × z; equation (4) is obtained by rearranging equation (3) into a directly computable form. The functional relationships in the free competition chain structure are shown in Figures 6 and 7.
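Equations (2)-(4) follow the same structure as Nakamoto's double-spend analysis, with the malicious share rescaled to one sub-chain; the sketch below evaluates the reconstructed formulas numerically and is an illustration, not code from the paper.

```python
import math

def attack_success(q: float, z: int, n: int = 1) -> float:
    """Probability that an attacker with global share q catches up z blocks
    on one of n free-competition sub-chains (equations (2)-(4))."""
    q1 = q / (q + (1.0 - q) / n)  # effective malicious share on one sub-chain
    p1 = 1.0 - q1
    if q1 >= p1:
        return 1.0                # the attacker eventually always catches up
    lam = z * q1 / p1
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        total -= poisson * (1.0 - (q1 / p1) ** (z - k))
    return total

print(attack_success(q=0.1, z=5, n=4))  # more forks -> larger q1 -> higher risk
```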
In Figure 6, the malicious computing power is held constant at 0.1 (i.e., 10%), and the relationship between the two variables z and n and the probability p of a successful malicious attack is obtained. The number z of blocks confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the number of forks n in the chain is set to 1, 2, 4, and 6. When the malicious computing power q is constant, each function line decreases, and the success rate p on an upper function line is higher than on a lower one.
In Figure 7, the number of bifurcations of the chain structure is held constant, and the relationship between the two variables q and z and the probability p of a successful malicious attack is studied. The number z of blocks confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the proportion of malicious computing power is set to 0.2, 0.1, 0.05, and 0.01. Figure 7 is obtained from the resulting statistics. Each function line decreases, and the success probability p on an upper function line is always higher than on a lower one. The functional analysis is as follows:

1. With the proportion of malicious computing power q and the number of forks n unchanged, the probability p of a successful malicious attack decreases monotonically with the number of blocks z confirmed by the honest chain.

2. With the proportion of computing power q and the confirmed block count z unchanged, the probability p of a successful malicious attack increases monotonically with the number of forks n.

3. With the number of forks n and the confirmed block count z unchanged, the probability p of a successful malicious attack increases monotonically with the proportion of malicious computing power q.
Combining these inferences, to ensure the safety and reliability of the blockchain, the available measures are to reduce the number of forks in the blockchain, increase the proportion of honest computing power, and wait for as many confirmed blocks as possible.
Safety analysis of Z-type chain structure
When all the sub-chains on the Z-type chain store data, the forked sub-chains logically connect into a Z-type pseudo-chain, and the safety feasibility analysis is carried out on the characteristics of the Z-type chain structure.
The Z-type chain packs blocks from left to right and top to bottom, and all honest computing power is concentrated on data storage on the Z-type chain. However, as each block is packaged, the hash value of the parent block and the pseudo-hash value from the other sub-chain's block must be recorded at the same time. When a malicious node attacks a sub-chain, it must also wait for the pseudo-hash provided by the other sub-chains before competing with the honest nodes. The security analysis of the sub-chain is performed for this case. Figure 8 is a diagram of the attack mode on the Z-type chain.
It can be seen from Figure 8 that the blocks on the Z-type chain structure are generated in an orderly manner, so the structure is logically a single chain. Therefore, the success rate of a double-spending attack has nothing to do with the number of forked sub-chains. Suppose the total computing power of the blockchain is 1, the proportion of malicious computing power is q (with honest share p = 1 − q), and the number of blocks confirmed by honest nodes is z. The probability that the malicious nodes catch up z blocks in the gambler's ruin model is given by equation (5): P(z) = (q/p)^z. Confirming with the transfer chase model gives equation (6): P = 1 − Σ_{k=0}^{z} [λ^k e^(−λ)/k!] · [1 − (q/p)^(z−k)], where λ = q × z/p. Rearranging equation (6) yields the directly computable equation (7). Figure 9 shows the functional relationships in the Z-type chain structure.
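Since q1 = q when n = 1, equations (5)-(7) fall out of the earlier attack_success sketch by setting n = 1, which makes the claimed independence from the fork count explicit:

```python
# Z-type chain: logically a single chain, so n = 1 in the earlier sketch.
for z in (1, 3, 5):
    print(z, attack_success(q=0.1, z=z, n=1))
```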
From Figure 9, the success probability of a malicious attack has nothing to do with the number of forks n. Considering only the relationship between the variables q and z and the success probability, the number z of blocks confirmed by honest nodes is set to 1, 3, 5, 7, 9, 11, 13, 15, and 17, and the proportion of malicious computing power in the Z-type chain is set to 0.2, 0.1, 0.05, and 0.01. Figure 9 is obtained from the resulting statistics. Each function line decreases, and the success probability p on an upper function line is always higher than on a lower one. The functional analysis for the Z-type chain structure is as follows:

1. With the proportion of malicious computing power q unchanged, the probability p of a successful malicious attack decreases monotonically with the number of blocks z confirmed by the honest chain.

2. With the confirmed block count z unchanged, the probability p of a successful malicious attack increases monotonically with the proportion of malicious computing power q.
Combining these inferences, the security of the chain structure has nothing to do with the number of forked sub-chains. Against malicious attacks, the available measures are to increase the proportion of honest computing power and to wait for as many block confirmations as possible per transaction.
Second-degree chain security constraints
The safety analysis of the two chain structures was carried out above. In practical applications, the second-degree chain must meet safety constraints, which are constructed here on the basis of that analysis.
1. Construction of the security constraints in the free competition chain: in practice, few nodes hold malicious computing power exceeding 1% of the total (large mining pools are assumed honest by default). If malicious computing power accounting for less than 1% launches an attack and its success rate is below 1%, the blockchain is considered safe by default. The security constraint relationship between the number of confirmed blocks z and the number of fork chains n is shown in Figure 10.
The relationship model in Figure 10 satisfies the security constraints of the blockchain: as the number of confirmations increases, the highest admissible number of forks also increases. In practical applications, the trade-off between the number of sub-chains and the number of confirmed blocks should be considered before the blockchain forks. For example, suppose that, with the rapid growth of data, the second-degree branch structure expansion is applied and the system decides to expand using the free competition chain, transforming the blockchain from a single master chain into five branches. To prevent malicious double-spending attacks, the sub-chains should wait for at least two confirmation blocks after the fork to ensure that transactions are not double-spent before conducting normal transactions.
2. Construction of the security constraints in the Z-type chain: if the proportion of malicious computing power is less than 1% and the attack success rate is less than 1%, the blockchain is considered safe by default. According to the security analysis above, the number of forked sub-chains n has no bearing on the security of the blockchain. Before the blockchain forks, the security constraint on the number of confirmed blocks should be the main consideration, as shown in Table 3, from which the influence of the number of confirmed blocks and the proportion of malicious computing power on the success probability of double-spending attacks can be read. For example, chains with weak computing power are vulnerable to malicious attacks, and the system decides to expand using the Z-type chain. If the proportion of malicious computing power is less than 1%, then after the chain forks, the sub-chains should wait for at least two confirmed blocks (or more) to ensure that transactions are not double-spent before performing normal transactions.
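These constraints amount to finding the smallest number of confirmations whose attack success probability stays below 1%; a hedged sketch, reusing attack_success from the free competition analysis:

```python
# Smallest z such that an attacker with share q succeeds with probability
# below `target`; the 1% threshold mirrors the convention used in the text.

def min_confirmations(q: float, n: int = 1, target: float = 0.01,
                      z_max: int = 100) -> int:
    for z in range(1, z_max + 1):
        if attack_success(q, z, n) < target:
            return z
    raise ValueError("no z below z_max satisfies the constraint")

print(min_confirmations(q=0.01, n=5))  # the five-branch example above
print(min_confirmations(q=0.01, n=1))  # Z-type case (fork count drops out)
```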
Experiment and analysis
The experimental environment consists of 20 servers with 32-core CPUs, 128 GB of memory, and 10 TB of storage space. Docker virtualization is used to deploy the three-dimensional chain nodes, Kubernetes is used to manage the Docker clusters, and the servers use gigabit networks and communicate between containers through flannel. Twenty hosts with TestRPC installed serve as the 20 master nodes, NodeDB. NodeDB is mainly responsible for managing and maintaining the internal container nodes, and the internal nodes exchange information through NodeDB. The experimental architecture is shown in Figure 11.
In this experiment, there are 20 main NodeDB nodes. Each NodeDB hosts about 10 container nodes, and the network is composed of 250 nodes. The second-degree branch chain branches into four sub-chains for expansion. The blockchain uses the proof-of-work (POW) consensus mechanism; it does not use random numbers for verification, reduces the amount of transmitted data by broadcasting only the block header data, and adds extra information to verify the consistency of transaction information. The mechanisms compared with the two-degree branch chain (TDBC) are the Segregated Witness Expansion (SWE) mechanism and the directed acyclic graph (DAG) expansion mechanism, analyzed in terms of transactions per second (TPS), communication overhead, confirmation delay, and effective data rate.
Storage rate
In the second-degree branch chain (TDBC), each node broadcasts only its own block header information to the entire network, and transaction information can be packaged into a block after the valid transaction identifier is verified. Without this verification process, the transaction volume is compressed and stored in the block body, which raises the upper limit of data throughput. The TDBC mechanism effectively shunts data, and tasks are allocated to each sub-chain for parallel storage. The DAG mechanism uses a network-shaped data structure, and the blockchain consensus is converted from the longest-chain consensus to the heaviest-chain consensus, which retains a degree of independence and autonomy in the local network and allows blocks to be created in parallel. The SWE mechanism isolates the transaction signature from the stored block, and the extra space is used for the storage of transaction information. In the experiment, an environment with 20 master nodes is deployed, and the data throughput per unit time under the three expansion mechanisms is tested over different periods; the result is shown in Figure 12. Because of shunting and parallel storage, the TDBC blockchain storage rate reaches nearly 10,000 transactions per second at its peak, which is much higher than the 100 transactions per second of SWE and the nearly 1,000 transactions per second of DAG. The advantage of TDBC in data storage throughput is therefore obvious.
Communication overhead
As the number of communication nodes increases, the number of pairwise communication channels grows quadratically, and the number of communication nodes becomes the main factor constraining communication overhead. In the TDBC mechanism, blockchain information is distributed to each sub-chain in an orderly manner; after an internal container node recognizes that a block belongs to its sub-chain, data communication is carried out, and otherwise it does not respond. The DAG mechanism adopts a directed acyclic structure; due to the independence of the local blockchain, blocks are allowed to be created in parallel, so blocks can be confirmed without recognition by all nodes. The SWE mechanism broadcasts block information to the entire network before it is confirmed by the entire network; if a conflicting block occurs, a second confirmation is required. This experiment sets up environments with 10 and 20 main nodes. Some communication nodes may fail, and the number of communication nodes required directly reflects the communication overhead of the blockchain.
As shown in Figure 13(a), the average number of communication nodes required for confirmation in TDBC is about 30, which is significantly lower than the roughly 60 nodes in DAG and roughly 100 nodes in SWE. The data shunting and task classification in TDBC thus greatly reduce the communication overhead. In Figure 13(b), the number of communication nodes in the TDBC mechanism remains stable at the lowest level, which reflects the good scalability of TDBC.
In the information exchange between communication nodes, the number of valid transactions carried by the global block in each communication round also reflects the communication efficiency. In Figure 14, the size of the global block in the TDBC and SWE mechanisms is relatively stable, whereas the DAG mechanism does not support strong consistency and its block size shows obvious volatility, which introduces instability into the communication of the blockchain.
Confirmation delay
In the blockchain, transaction information is written into the block body after validity verification; the block generates the broadcast header information, the node transmits the block header to other nodes for verification, and finally the block is verified by the other nodes in the entire network. The time this process takes is the confirmation delay. In TDBC, the information stored in a block is strictly classified, and a storage node stores only the block data of its sub-chain, which does not have to be submitted to all nodes in the entire network for verification. The DAG mechanism benefits from the independence of the local blockchain network, where blocks are allowed to be verified by nodes in the local network. The SWE mechanism has relatively high requirements for verification nodes, and blocks need to be confirmed by the entire network. To determine the impact of different blockchain network scales on the confirmation delay, environments with 5, 10, and 20 NodeDB nodes are deployed; the abscissa is the time period, and the ordinate is the range of confirmation delays observed in that period. The average block confirmation delays under the three mechanisms are obtained. Figure 15 shows that the average confirmation delay of TDBC is about (8, 10, 13), that of SWE is about (10, 14, 20), and that of DAG is about (9, 12, 16). The confirmation delay of the TDBC blockchain, which uses data shunting and task allocation, is thus smaller than that of the other two mechanisms, and as the network scale grows, the increase in confirmation delay under TDBC is also slightly smaller than under the DAG and SWE mechanisms.
Effective data rate
Generated transaction information is validated and stored in the block body; blocks verified by the entire network and then stored on the blockchain are valid blocks. However, on the way from generation to the chain, transaction data inevitably suffer losses: network instability leads to packet loss, network congestion leads to cleared data caches, and double-spending attacks produce invalid data. A blockchain adopting the TDBC mechanism classifies data strictly, keeping the network orderly and free of redundancy, which greatly alleviates network congestion. A blockchain using the DAG mechanism adopts a directed acyclic graph and allows blocks to be generated in parallel, but parallel blocks can contain repeated valid transaction information, wasting some block storage space. For blockchains using the SWE mechanism, the degree of network congestion is the main factor restricting the effective data rate. In this experiment, four blockchain environments with 5, 10, 15, and 20 main nodes are deployed, and the effective data rate is measured as the ratio of the effective data amount to the total data amount; Figure 16 shows the results. The data efficiency of the TDBC mechanism is better than that of the other mechanisms and remains relatively stable as the number of blockchain nodes increases. Over the long term, the data efficiency of the DAG mechanism is better than that of the SWE mechanism; SWE's data efficiency is higher in the early stage, but as the blockchain network expands, its data effectiveness decreases significantly.
Conclusion and future work
This article conducts an in-depth study of the deficiencies of existing blockchain system expansion and proposes a second-degree branch structure blockchain expansion model. To improve data storage efficiency, the model optimizes the data structure through ternary storage. A second-degree branch chain model is constructed with two structures, the free competition chain and the Z-type chain, which expand the blockchain and alleviate the communication burden on the network. The two structural chains are merged and transitioned through the two-way rotation mechanism, which ensures the stability of the blockchain expansion. On the basis of simulated malicious attacks on the blockchain, safety constraints are put forward to guarantee the security of the expansion, finally realizing effective expansion of the blockchain.
The second-degree branch structure blockchain expansion model in this article is mainly studied under POW consensus. Future work includes extending the expansion to other mainstream blockchains, such as proof-of-stake (POS) and delegated proof-of-stake (DPOS) blockchains. In addition, with the rapid growth of blockchain transactions, the huge global storage ledger of the second-degree branch chain poses management and maintenance problems. Another direction for future work is to study the global ledger of the second-degree branch chain and, through distributed services, further optimize the storage structure.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2022-03-12T16:01:51.758Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "ea8bfdf2e724e48b95463ea06b5c6c9430b6f363",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/15501477211064755",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "21ee520e86b6fb55b6b3793a36f5d9c4197a31da",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
248652947 | pes2o/s2orc | v3-fos-license | Identification of Long Noncoding RNAs Involved in Eyelid Pigmentation of Hereford Cattle
Several ocular pathologies in cattle, such as ocular squamous cell carcinoma and infectious keratoconjunctivitis, have been associated with low pigmentation of the eyelids. The main objective of this study was to analyze the transcriptome of eyelid skin in Hereford cattle using strand-specific RNA sequencing technology to characterize and identify long noncoding RNAs (lncRNAs). We compared the expression of lncRNAs between pigmented and unpigmented eyelids and analyzed the interaction of lncRNAs and putative target genes to reveal the genetic basis underlying eyelid pigmentation in cattle. We predicted 4,937 putative lncRNAs mapped to the bovine reference genome, enriching the catalog of lncRNAs in Bos taurus. We found 27 differentially expressed lncRNAs between pigmented and unpigmented eyelids, suggesting their involvement in eyelid pigmentation. In addition, we revealed potential links between some significant differentially expressed lncRNAs and target mRNAs involved in the immune response and pigmentation. Overall, this study expands the catalog of lncRNAs in cattle and contributes to a better understanding of the biology of eyelid pigmentation.
The study of coat and skin color in cattle has both economic and scientific interest. Several ocular pathologies, such as ocular squamous cell carcinoma, also known as eye cancer, and infectious keratoconjunctivitis, also known as pink eye, have been associated with low pigmentation of the eyelids (Heeney and Valli, 1985; Anderson, 1991). These ocular pathologies have a considerable economic impact since affected animals suffer weight loss and are underpaid at slaughter. Skin pigmentation is a complex, polygenic trait (Cichorek et al., 2013; Ainger et al., 2017). This trait is the result of differences in biochemical processes and the activity of melanocytes (Solano, 2014), which produce two types of melanin, eumelanin (black/brown) and pheomelanin (red/yellow) (Le Pape et al., 2008). The lack of eyelid pigmentation in Hereford cattle is the result of a genetic background that impacts melanocyte development, including cell migration. This genetic background may be caused by variations in the expression of the KIT gene during embryo development, resulting in impaired migration of melanocyte precursors to the region around the eyes (Grichnik, 2006). The receptor tyrosine kinase KIT and its ligand (KITLG) play an important role in the development of melanocytes, including migration, survival, proliferation, and differentiation (Grichnik, 2006; Mort et al., 2015; Ainger et al., 2017). The KITLG gene has been reported as a candidate gene affecting both eye area pigmentation and eyelid pigmentation in cattle (Pausch et al., 2012; Jara et al., 2022). Many genes are specifically expressed in melanocytes, such as TYR, TYRP1, DCT, PMEL, MITF, and MLANA (D'Mello et al., 2016; Ainger et al., 2017). The regulation of these important genes by lncRNAs has not been studied in relation to eyelid pigmentation in cattle. In recent years, the importance of lncRNAs in the biology of the skin (Wan and Wang, 2014) has been demonstrated in different livestock species, including sheep (Yue et al., 2016), goats (Ren et al., 2016), and cattle (Weikard et al., 2013). In cattle, 4,848 potential lncRNAs were identified in a study that compared pigmented and nonpigmented skin regions (body spots) (Weikard et al., 2013). The authors concluded that the transcription pattern of bovine skin is complex and suggested a possible functional relevance of new transcripts, including lncRNAs, in the modulation of pigmentation. The catalog of lncRNAs involved in skin pigmentation in cattle is not very extensive and is limited to intergenic lncRNAs (Weikard et al., 2013).
The main objective of this study was to analyze the transcriptome of eyelid skin in Hereford cattle using strand-specific RNA sequencing (ssRNA-seq) to characterize and identify lncRNAs possibly involved in eyelid pigmentation. Two contrasting groups were evaluated: steers with completely pigmented eyelids versus steers with no pigmentation in either eyelid (Jara et al., 2020). We evaluated the differential expression of lncRNAs between these two groups and analyzed the interactions between lncRNAs and putative target coding genes to reveal the genetic basis underlying this complex, economically relevant phenotype. Our study provides a valuable resource for the comprehension of lncRNAs, enriches the lncRNA catalog in cattle, and contributes to a better understanding of the molecular mechanisms underlying eyelid pigmentation.
Data
These RNA-seq data are available in the NCBI BioProject database under accession number PRJNA627111. The transcriptomes of 11 eyelid skin samples (five from 100% pigmented animals and six from 0% pigmented animals) were analyzed (Jara et al., 2020). The eyelid transcriptomes were generated using poly-A capture and strand-specific RNA sequencing on an Illumina HiSeq 2500 sequencing system. A total of 542,751,474 clean reads (38.5 Gb) were obtained. These reads were mapped to the latest cattle (Bos taurus) reference genome, ARS-UCD1.2, using Hisat2 (v2.1.0) (Kim et al., 2015). The overall mapping rate ranged from 91 to 93%, and only uniquely mapped reads were retained (Supplementary Table S1). Transcriptomes were assembled for each sample using Cufflinks, and all assemblies were then merged into one using Cuffmerge.
Identification of Long NonCoding RNAs
Potential lncRNAs were identified by applying successive filters to all assembled transcripts. First, transcripts assigned the class codes "=", "e", "p", or "c" in the Cuffcompare output were filtered out (You et al., 2019). Transcripts with fewer than two exons, transcripts with low expression levels (at least ten mapped reads per sample in at least five biological replicates was defined as the expression threshold), and transcripts shorter than 200 bp were also filtered out. DNA sequences of the transcripts were extracted using gffread (Trapnell et al., 2010). The sequences of known lncRNAs were downloaded from two multispecies lncRNA databases, ALDB (Domestic-Animal LncRNA Database) (Li A. et al., 2015) and NONCODE (Zhao Y. et al., 2016), which contain 8,250 and 23,515 cattle lncRNAs, respectively. BLASTn (version 2.2.31) (Altschul et al., 1990) was used to align the unannotated transcripts to lncRNAs in these two databases using stringent parameters (e-value ≤ 1 × 10^−6, coverage and identity ≥ 90%). Transcripts that aligned accurately against either ALDB or NONCODE (5.0) were regarded as known lncRNAs, while transcripts that did not align with sequences in either database were considered putative novel lncRNAs. This pipeline was adapted from Kosinska-Selbi et al. (2020).
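For illustration, the filtering steps above can be expressed as a single predicate; the transcript record and its field names are hypothetical, while the thresholds (class codes, 200 bp, two exons, ten reads in five replicates) are the ones stated in the text.

```python
# Illustrative transcript filter; record layout is assumed, thresholds are the paper's.

def keep_as_candidate(transcript: dict, per_sample_reads: list[int]) -> bool:
    if transcript["class_code"] in {"=", "e", "p", "c"}:
        return False  # matches or extends known coding features
    if transcript["length"] < 200 or transcript["n_exons"] < 2:
        return False
    # Expression threshold: >= 10 mapped reads in >= 5 biological replicates.
    return sum(reads >= 10 for reads in per_sample_reads) >= 5

candidate = {"class_code": "u", "length": 850, "n_exons": 3}
print(keep_as_candidate(candidate, [12, 0, 15, 11, 30, 10]))  # True
```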
Coding Potential Analysis
We calculated the coding potential of each transcript using three complementary tools: Coding Potential Calculator 2 (CPC2, version 0.1) (Kang et al., 2017), Coding-Potential Assessment Tool (CPAT, version 2.0.0) (Wang et al., 2013), and Predictor of long noncoding RNAs and messenger RNAs based on an improved k-mer scheme (PLEK, version 1.2) (Li et al., 2014). PLEK and CPC2 are based on the same support vector machinebased (SVM) classification model, while CPAT is based on a logistic regression (Wang et al., 2013;Antonov et al., 2019). All these programs are alignment-free tools and have been proven to be highly effective in discriminating lncRNAs (Han et al., 2016).
The CPAT program estimates coding probability scores. The optimal cutoff value for protein-coding probability is species-specific; hence, CPAT was trained using a set of 10,000 known bovine protein-coding transcripts (Bos_taurus.ARS-UCD1.2.cds.all.fa, version 95) and a set of 10,000 bovine noncoding sequences (longer than 200 bases). The final reference dataset of noncoding RNAs comprised 37,695 sequences (5,930 from Bos_taurus.ARS-UCD1.2.ncrna.fa (version 95), 23,515 from NONCODEv5_cow.fa, and 8,250 from ALDB.cow.lincRNAs.v1.0.fa). Bovine protein-coding transcripts and noncoding sequences were extracted randomly from each annotation, following previously published studies (Billerey et al., 2014; Gupta et al., 2019). In brief, the two training sets were randomly split into ten parts to perform a 10-fold cross-validation analysis, and the cutoff value was selected to maximize specificity and sensitivity. PLEK, in turn, uses a sliding-window approach to analyze transcripts based on a k-mer frequency distribution and was trained on the same set of sequences used for training CPAT.
Transcripts that displayed a CPC2 score lower than 0.5, a CPAT score lower than or equal to 0.36, and a PLEK score lower than 0 were considered noncoding genes and were used in subsequent analyses.
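The consensus rule in the previous sentence reduces to a three-way conjunction; the sketch below assumes the scores have already been parsed from each tool's output.

```python
# Consensus call over the three coding-potential tools (thresholds from the text).

def is_putative_lncrna(cpc2_score: float, cpat_score: float, plek_score: float) -> bool:
    return cpc2_score < 0.5 and cpat_score <= 0.36 and plek_score < 0

print(is_putative_lncrna(0.12, 0.05, -1.3))  # True -> retained as noncoding
```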
Gene Expression Analysis
Differentially expressed genes were identified using the R package DESeq2 (version 1.18.1) with default parameters (Love et al., 2014). Putative novel lncRNAs, known lncRNAs, and annotated protein-coding genes were all included in this statistical analysis. Only genes with at least ten reads per sample in at least five biological replicates were considered. Genes with an adjusted p-value ≤ 0.05 (Benjamini and Hochberg, 1995) and |log2FC| ≥ 1.5 were considered differentially expressed (DE) between pigmented and unpigmented samples.
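DESeq2 itself runs in R; the sketch below only mirrors the downstream thresholding of its results table in Python, with column names assumed.

```python
# Thresholding a DESeq2-style results table (column names assumed).
import pandas as pd

res = pd.DataFrame({"gene": ["A", "B", "C"],
                    "padj": [0.003, 0.20, 0.01],
                    "log2FC": [2.1, -0.4, -1.8]})
de = res[(res["padj"] <= 0.05) & (res["log2FC"].abs() >= 1.5)]
print(de)  # genes A and C pass both cutoffs
```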
Sequence Analysis
The analyses of the different sequence characteristics were performed on 14,361 coding genes (34,447 transcripts), 4,937 putative novel lncRNAs, and 218 known lncRNAs. The GC content and transcript length were obtained using the infoseq function of the EMBOSS package (version 6.6.0.0). The exon number was estimated using custom bash scripts. Abundance was estimated using Cuffnorm. The minimum free energy (ME) was calculated using the RNAfold program included in the ViennaRNA package version 2.4.9 (Zuker and Stiegler, 1981; Lorenz et al., 2011). To make the ME values of different RNA sequences comparable, we normalized the ME by sequence length, yielding the MEN: the ME value was divided by the transcript's length and multiplied by 100, so that the MEN relates the ME estimate to a segment of 100 nucleotides: MEN = (ME/sequence length) × 100 (Zhang et al., 2006).
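The MEN normalization is a one-liner; the input values below are hypothetical.

```python
# MEN = (ME / sequence length) * 100, i.e. free energy per 100 nucleotides.

def men(me: float, length: int) -> float:
    return me / length * 100.0

print(men(me=-120.5, length=600))  # about -20.1 per 100 nt (toy values)
```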
Prediction of Target Genes
Two strategies were used to study the association between lncRNAs and target genes acting in cis and in trans. In the first case, all "neighbor" protein-coding genes showing differential expression (DEGs, p-value ≤ 0.01 and |log2FC| ≥ 1) were identified within 300 kb upstream and downstream of differentially expressed lncRNAs. Genes whose expression levels showed significant correlations with neighboring lncRNAs were considered cis target genes (Pearson, |r| ≥ 0.60, p-value ≤ 0.05).
For potential trans associations with differentially expressed protein-coding genes (DEGs, p-value ≤ 0.01 and |log2FC| ≥ 1), sequence complementarity with differentially expressed lncRNAs (adjusted p ≤ 0.05 and |log2FC| ≥ 1.5) was analyzed using LncTar (Li J. et al., 2015). Note that a more stringent threshold was used to detect DE lncRNAs than DE protein-coding genes in order to obtain a more comprehensive and diverse sample of genes and their functions. LncTar identifies potential lncRNA targets by finding the minimum free energy of the joint lncRNA-mRNA structure. The program was run with a normalized free energy threshold of ndG = −0.08. All predicted lncRNA-mRNA interactions that showed a significant coexpression correlation (Pearson, |r| ≥ 0.60, p-value ≤ 0.05) were kept for further analysis.
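The cis rule above (a DE coding gene within 300 kb whose expression tracks the lncRNA) can be sketched as follows; positions and expression vectors are made up, and the two features are assumed to lie on the same chromosome.

```python
# Hedged sketch of the cis-target rule: 300 kb window plus coexpression filter.
from scipy.stats import pearsonr

def is_cis_target(lnc_pos: int, gene_pos: int, lnc_expr, gene_expr,
                  window: int = 300_000) -> bool:
    if abs(lnc_pos - gene_pos) > window:  # same chromosome assumed
        return False
    r, p = pearsonr(lnc_expr, gene_expr)
    return abs(r) >= 0.60 and p <= 0.05

print(is_cis_target(1_000_000, 1_150_000,
                    [3.1, 5.2, 2.4, 7.8, 6.0, 4.4],
                    [2.9, 5.0, 2.8, 7.1, 6.3, 4.0]))
```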
The functional roles of the target genes were evaluated using the R package EnrichKit (https://github.com/liulihe954/ EnrichKit). Different gene set databases, including GO, MeSH, Reactome, InterPro, and MsigDB, were interrogated in the enrichment analysis. Terms significantly enriched within target genes were detected using Fisher's exact test, a test of proportions based on the hypergeometric distribution.
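EnrichKit's internals are not reproduced here; the sketch below shows the standard one-sided hypergeometric (Fisher-type) enrichment test that the text describes, with hypothetical counts.

```python
# P(X >= k) for k target genes annotated to a term, under a hypergeometric null.
from scipy.stats import hypergeom

def enrichment_p(k_in_term: int, n_targets: int, term_size: int, background: int) -> float:
    # sf(k - 1) gives the upper tail P(X >= k).
    return hypergeom.sf(k_in_term - 1, background, term_size, n_targets)

print(enrichment_p(k_in_term=12, n_targets=193, term_size=150, background=14361))
```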
RNA-Seq Data Validation
The results of the RNA-seq analysis were validated using quantitative real-time polymerase chain reaction (qRT-PCR). We selected three differentially expressed lncRNAs and three differentially expressed protein-coding genes for validation. Primer sequences and expected product lengths are listed in Supplementary Table S2. Real-time PCRs were performed using 7.5 µL of SYBR Green master mix (Maxima SYBR Green qPCR Master Mix (2X) with separate ROX vials, Thermo Scientific, United States of America), equimolar amounts of forward and reverse primers (200 nM, Operon Biotechnologies GmbH, Cologne, Germany), and 20 ng of diluted cDNA (1:7.5 in RNase/DNase-free water) in a final volume of 15 µL. Samples were analyzed in duplicate in a 72-disk Rotor-Gene 6000 (Corbett Life Sciences, Sydney, Australia). Standard amplification conditions were 10 min at 95°C followed by 40 cycles of 15 s at 95°C, 30 s at 60°C, and 15 s at 72°C. At the end of each run, dissociation curves were analyzed to ensure that the desired amplicon was detected, ruling out contaminating DNA or primer dimers. Gene expression was normalized using ACTG1 as a housekeeping gene. Normalized gene expression values (ΔCt) were analyzed using a linear model including the pigmentation group as an independent variable. The association between normalized gene expression and pigmentation group was tested using a t-test. The mean and range of the log2-fold change for each gene were calculated as log2(2^−ΔΔCt) using the estimated ΔΔCt value ± standard error.
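The ΔΔCt arithmetic in the last sentence simplifies to a sign flip, since log2(2^−ΔΔCt) = −ΔΔCt; the Ct values below are hypothetical, and ACTG1 is the housekeeping gene named in the text.

```python
# Toy ddCt calculation; returns the log2-fold change of the target gene.

def log2_fold_change(ct_target: float, ct_actg1: float,
                     ct_target_ref: float, ct_actg1_ref: float) -> float:
    d_ct = ct_target - ct_actg1              # normalize to ACTG1
    d_ct_ref = ct_target_ref - ct_actg1_ref  # reference group (e.g., unpigmented)
    dd_ct = d_ct - d_ct_ref
    return -dd_ct                            # log2(2**-ddCt) == -ddCt

print(log2_fold_change(24.1, 18.0, 27.3, 18.2))  # positive -> upregulated
```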
RESULTS AND DISCUSSION
The aim of this study was to identify and analyze lncRNAs associated with eyelid pigmentation in Hereford cattle. Our findings provide further evidence that lncRNAs are actively involved in pigmentation, as suggested by previous studies not only in cows (Weikard et al., 2013) but also in pigs (Zou et al., 2018), goats (Ren et al., 2016), humans (Zhao W. et al., 2016;Tang et al., 2020) and invertebrate organisms such as Crassostrea gigas (Feng et al., 2018).
Identification of Long NonCoding RNAs
A total of 111,791 transcripts, including 27,806 unannotated transcripts, were obtained after mapping the sequencing reads to the latest bovine genome reference. Among the novel transcripts, 218 were detected in the ALDB and NONCODE databases and were therefore considered known lncRNAs (Supplementary Data Sheet S1). After training, a coding potential cutoff of 0.36 was selected for the CPAT. This value maximizes specificity and sensitivity (98.3%) (see Supplementary Figure S1), similar to other studies (Billerey et al., 2014;Gupta et al., 2019). All transcripts with a score below 0.36 were retained as potential long noncoding RNAs. After integrating the results from CPC2, CPAT, and PLEK, we identified a total of 4,937 unannotated transcripts as putative lncRNAs (Figure 1, Supplementary Figure S2).
Based on RNA-seq analysis of 18 different tissues, Koufariotis et al. (2015) identified 9,778 lncRNAs in cattle. Of special interest, seven lncRNAs were found to be differentially expressed between white and black skin samples (Koufariotis et al., 2015). A recent paper reported a total of 1,535 expressed lncRNAs encoded by 1,183 putative noncoding genes in bovine oocytes (Wang et al., 2020). More specifically, Weikard et al. (2013) identified 4,848 lncRNAs that are associated with pigmentation in cattle, supporting the idea that lncRNAs play an important role in skin pigmentation. Notably, this number of lncRNAs is similar to the 4,937 transcripts that we reported in the present study, although only 314 were in common. The difference between studies could be explained in terms of the analytical pipeline used to identify putative lncRNAs in each case. Note that at the time of writing this manuscript, there is no widely accepted pipeline to identify lncRNAs.
Comparison Between mRNAs, Novel Long NonCoding RNAs, and Known Long NonCoding RNAs
We identified a total of 4,937 novel and 218 known lncRNAs in our RNA-seq dataset. The lncRNAs were characterized as novel isoforms (51.5%), sense exon overlaps (14.5%), antisense intron overlaps (12.3%), intergenic forms (16.5%), and antisense exon overlaps (5.2%) (Supplementary Figure S3, Supplementary Data Sheet S1). Of the 51.5% characterized as novel isoforms, 94% were generated from regions that harbor protein-coding genes, while 6% were novel isoforms of known lncRNAs. Similar results were recently reported by Alexandre et al. (2020), working on lncRNAs associated with feed efficiency in cattle.
We analyzed the guanine-cytosine (GC) content, normalized minimum free energy (MEN), transcript length, exon number, and expression level of all putative novel lncRNAs and compared these metrics with those of known lncRNAs and protein-coding genes. Both groups of lncRNAs displayed significantly lower GC content (Figure 3B), shorter length (Figure 3C), and higher MEN (Figure 3D) than protein-coding genes (p-value ≤ 0.05, Wilcoxon test, Table 1). We found no significant differences in GC content or MEN between the novel and known lncRNAs (p-value = 0.37 and p-value = 0.55, Wilcoxon test, respectively, Table 1). These results agree with previous studies showing that lncRNAs have some unique sequence features compared with coding genes (Niazi and Valadkhan, 2012). Both novel and known lncRNAs showed significantly higher MEN values than protein-coding genes (p-value ≤ 0.05, Wilcoxon test, Table 1). The sequence itself seems to contribute considerably to MEN, since sequences with the same GC content can present different folding energy values depending on their secondary structure (Niazi and Valadkhan, 2012). It can be inferred that lncRNAs have a more flexible structure than mRNAs, which may reflect a higher potential to interact with other molecules. Indeed, it has been shown that lncRNAs have GC and MEN values similar to those of 3′ UTR sequences, suggesting that lncRNAs have regulatory functions (Niazi and Valadkhan, 2012).
We found that the novel lncRNAs were significantly shorter than the known lncRNAs (p-value ≤ 0.05, Wilcoxon test, Table 1). Both groups of noncoding transcripts showed lower expression levels (Figure 3A) and fewer exons (Figure 3E) than protein-coding genes (p-value ≤ 0.05, Wilcoxon test).
Overall, our results showed that lncRNAs are characterized by a lower number of exons, lower GC content, lower expression level, and higher normalized minimum free energy and tend to be shorter than protein-coding sequences (Derrien et al., 2012;Harrow et al., 2012;Billerey et al., 2014). Note that these sequence feature analyses support the reliability of the putative novel lncRNAs identified. We believe that the marginal differences observed in the length, expression level, and lower number of exons between the novel and known lncRNAs could be explained by the limited catalog of lncRNAs in cattle (Kosinska-Selbi et al., 2020). However, we could not completely discard the possibility that some of our novel lncRNAs are either pseudogenes or misidentified coding sequences.
Expression Analysis
We found 65 differentially expressed protein-coding genes (adjusted p ≤ 0.05 and |log2FC| ≥ 1.5) between the pigmented and unpigmented samples (Appendix S2). Among them, MC1R, TYR, PMEL, DCT, MLANA, and KIT showed upregulated expression in pigmented eyelid samples (Jara et al., 2020). These genes are key for the generation, storage, and distribution of melanin (D'Mello et al., 2016). The KIT gene encodes a receptor tyrosine kinase and is considered a proto-oncogene involved in the development of melanocytes, including migration, survival, proliferation, and differentiation (Grichnik, 2006). Previous studies showed the important role of KIT in UVB-induced epidermal melanogenesis in humans (Yamada et al., 2013) and in the survival of melanocytes (Mort et al., 2015). Here, KIT showed higher expression in pigmented eyelid samples, suggesting an important role in pigmentation. Interestingly, our analyses suggest that lncRNAs do not interact directly with KIT; hence, this gene does not seem to be directly regulated by these noncoding elements. Moreover, we identified a total of 27 differentially expressed lncRNAs (adjusted p ≤ 0.05 and |log2FC| ≥ 1.5); twenty-four were putative novel lncRNAs, and three were classified as known lncRNAs. Eight lncRNAs showed upregulated and 19 showed downregulated expression in the pigmented eyelid samples. The lncRNA with the most significantly downregulated expression was TCONS_00073858 (novel), with log2 fold change = −10.3, while the lncRNA with the most upregulated expression was TCONS_00088900 (novel), with log2 fold change = 7.7.
Association of Long NonCoding RNAs With Putative Target Genes Acting in cis
We analyzed potential target mRNAs of lncRNAs acting in cis, that is, protein-coding genes within 300 kb upstream and downstream of the location of significant differentially expressed lncRNAs. We found four putative target genes, namely, CXCL13, FABP4, GPR143, and UGT1A1, in the flanking regions of four differentially expressed lncRNAs, TCONS_00094682 (novel), TCONS_00021379 (novel), TCONS_00109545 (novel), and TCONS_00078693 (novel), respectively ( Table 2). The lncRNAs TCONS_00094682 and TCONS_00021379 showed downregulated expression in the pigmented samples and acted at the cis level with the CXCL13 and FABP4 genes, respectively. The CXCL13 gene encodes a homeostatic chemokine that traffics B cells and is involved in the immune response (Müller et al., 2002). Fatty acid transporters, such as FABP4, were recently shown to increase their expression during metastatic melanoma development, suggesting a higher uptake of fatty acids and metabolic reprogramming that serves as a signature for this condition (Lee et al., 2020). In contrast, the lncRNAs TCONS_00109545 and TCONS_00078693 showed upregulated expression in the pigmented samples. The lncRNA TCONS_00109545 interacts at the cis level with the GPR143 gene, which plays a critical role in retinal health and development (McKay, 2019). Mutations in GPR143 have been associated with the ocular albinism type 1 (OA1) phenotype (Gao et al., 2020), a genetic disorder characterized by reduced ocular pigmentation. Finally, lncRNA TCONS_00078693 was correlated with the expression of UGT1A1, a gene implicated in retinoic acid binding. Retinoid signaling is affected in early carcinogenesis (Tang and Gudas, 2011), and retinoic acid is often used to prevent photoaging of human skin, preventing melanocytic and keratinocytic atypia (Cho et al., 2005). Note that all these lncRNAs showed a positive correlation with their target genes, and hence, we hypothesize that these lncRNAs interact at the cis level, promoting the expression of these genes through the recruitment of proteins that enable transcription loops (Long et al., 2017;Li et al., 2019;Gil and Ulitsky, 2020).
Association of Long NonCoding RNAs With Putative Target Genes Acting in Trans
We analyzed the potential target mRNAs of lncRNAs acting in trans from a total of 273 differentially expressed genes that showed sequence complementarity with at least one of the twenty-seven differentially expressed lncRNAs. Of these 273 genes, 193 showed sizable expression correlations with significant lncRNAs (Pearson, |r| > 0.60, p-value < 0.05) (Supplementary Data Sheet S3); hence, they were classified as potential trans target genes. We identified target genes involved in melanogenesis, melanosome development (Lin and Fisher, 2007; Kubic et al., 2008; Schmutz and Dreger, 2013), pigmentation processes (Lalueza-Fox et al., 2007; Enriqué Steinberg et al., 2013), tumor pathways (Hoashi et al., 2005), and innate immunity and inflammatory signaling (Ito et al., 2015), such as MC1R, PMEL, MLANA, PAX3, IGFBP2, FGF23, and TREM-2. Interestingly, many of these potential trans target genes were previously reported as differentially expressed between pigmented and non-pigmented samples (Jara et al., 2020) (Supplementary Figure S4, Supplementary Data Sheet S3). The genes MC1R, PMEL, MLANA, and PAX3 showed upregulated expression in the pigmented samples and interacted with lncRNAs with up- and down-regulated expression (Supplementary Data Sheets S2, S3). MC1R is a key pigmentation gene, and its activation in melanocytes stimulates melanogenesis, particularly eumelanogenesis (Beaumont et al., 2007; Cheli et al., 2010; Enriqué Steinberg et al., 2013). The PMEL gene is a key component of mammalian melanosome biogenesis (Clark et al., 2006) and is required for the generation of cylindrical melanosomes in zebrafish (Watt et al., 2013; Burgoyne et al., 2015). Mutations in PMEL have been shown to underlie hypopigmented phenotypes in vertebrates (Kwon et al., 1995; Kerje et al., 2004). The transcription factor PAX3 interacts with lncRNA TCONS_00088900 and regulates the expression of MITF (Lin and Fisher, 2007), a gene associated with ambilateral circumocular pigmentation in cattle (Pausch et al., 2012). MC1R and MLANA expression levels showed significant positive correlations with the novel lncRNAs TCONS_00088900 and TCONS_00091890, while PMEL also showed a significant positive correlation with TCONS_00091890. MLANA encodes a protein (MART-1) that is localized in melanosomes (De Maziere et al., 2002). Interestingly, MLANA interacts with PMEL and regulates its expression, stability, trafficking, and processing (Hoashi et al., 2005). MLANA showed negative correlations with the lncRNAs ALDBBTAT0000004273 and TCONS_00106295. LncRNAs can affect the transcription of target genes in trans in different ways, for example, by stabilizing the mRNA (Cao et al., 2017; Long et al., 2017; Li et al., 2019). The participation of lncRNAs in mRNA stabilization and in increasing the expression of specific genes has recently been discussed (Li et al., 2019). From our results, we can infer that ALDBBTAT0000004273 and TCONS_00106295 could be involved in reducing the stability of the mRNA encoded by MLANA and consequently contribute to its reduced expression in unpigmented samples. Additionally, lncRNAs could be involved in stabilizing the mRNAs of certain pigmentation genes, such as PMEL and MC1R, and thus enhance their expression levels in pigmented eyelids.
The genes IGFBP2, FGF23, and TREM-2 showed downregulated expression in the pigmented samples and interacted with lncRNAs with up- and down-regulated expression (Supplementary Data Sheets S2, S3). The FGF23 gene may act as a proinflammatory cytokine (Ito et al., 2015), suggesting a connection between FGF23 and inflammatory processes (Courbebaisse and Lanske, 2018). In fact, high expression of FGF23 is associated with an increased risk of mortality, likely because of its contribution to a decreased host defense response to infection, inflammation, and anemia (Courbebaisse and Lanske, 2018). FGF23 showed positive correlations with the ALDBBTAT0000004273 and TCONS_00106295 lncRNAs. The IGFBP2 gene is involved in tumor pathways (Pickard and McCance, 2015) and showed positive correlations with the lncRNAs ALDBBTAT0000001157 and TCONS_00106295. The fatty acid-binding protein 7 (FABP7) gene has also been described as a key regulator of cancer metastasis (Cordero et al., 2019), and its expression was upregulated in pigmented samples (Jara et al., 2020); this gene showed significant positive and negative correlations with several lncRNAs. The TREM-2 gene, which is implicated in innate immunity and inflammatory signaling (Sharif and Knapp, 2008), showed a positive correlation with lncRNA TCONS_00073858. TREM (TREM-1/TREM-2) gene expression is lower in cutaneous melanoma than in control samples (Nguyen et al., 2015), and these genes could have prognostic and therapeutic value in the treatment of melanoma (Nguyen et al., 2015; Nguyen et al., 2016). From our results, it can be speculated that the lncRNAs described here are involved in stabilizing the mRNAs of genes involved in the immune system and cancer, such as FGF23 (Courbebaisse and Lanske, 2018), IGFBP2 (Pickard and McCance, 2015), and TREM-2.
Gene Set Enrichment Analysis of Putative Target Genes
A total of 193 genes were identified as putative targets of DE lncRNAs in pigmented samples. To identify enriched functions among them, we performed a gene set enrichment analysis using the annotation retrieved from different databases, including GO, KEGG, MeSH, InterPro, Reactome, and MSigDB. Figure 4 shows the most relevant biological terms and pathways associated with eyelid pigmentation. The most significant functional terms were related to skin pigmentation, such as melanosome (GO:0042470), melanin biosynthetic process (GO:0042438), melanosome organization (GO:0032438), melanocyte differentiation (GO:0030318), melanogenesis (bta04916), skin pigmentation (D012880), pigmentation (D010858) and melanogenesis (M7761) (Figure 4, Supplementary Data Sheet S4). Notably, we found several significant terms related to the inflammatory response and infectious and tumoral pathways: immune response (GO:0006955), defense response to bacterium (GO:0042742), leukocyte chemotaxis (GO:0030595), cytokine-cytokine receptor interaction (bta04060), T cell activation (GO:0042110), immune response (M12401), and immune (humoral) and inflammatory response (M8838) (Figure 4, Supplementary Data Sheet S4). Our findings indicate that target genes of lncRNAs with up- and down-regulated expression in eyelid skin are associated not only with pigmentation or melanogenesis but also with the immune response.
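The enrichment of individual terms of this kind is typically assessed with a hypergeometric test; the following minimal sketch illustrates the idea with invented counts (background size, term size and overlap are placeholders, and the published analysis relied on dedicated annotation databases rather than this toy calculation).

```python
# Toy hypergeometric enrichment test; all counts below are illustrative.
from scipy.stats import hypergeom

background = 20000   # annotated bovine genes (placeholder)
term_size = 150      # genes annotated to e.g. "melanosome" (placeholder)
targets = 193        # putative lncRNA target genes
overlap = 12         # target genes carrying the annotation (placeholder)

# P(X >= overlap) when drawing `targets` genes without replacement
p = hypergeom.sf(overlap - 1, background, term_size, targets)
print(f"enrichment p-value = {p:.2e}")
```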
Validation of Gene Expression Using Quantitative Real-Time Polymerase Chain Reaction
We validated the findings of the RNA-seq experiment using qRT-PCR. Three lncRNAs and three protein-coding genes were evaluated. Figure 5 shows the log2-fold differences in gene expression measured by both RNA-Seq and qRT-PCR, confirming that the six genes showed very similar patterns of abundance with both methods.
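A minimal sketch of the cross-platform comparison in Figure 5 follows, with invented log2 fold-change values (the gene names and numbers are placeholders; the published values are in the figure itself).

```python
# Cross-platform agreement check with made-up log2 fold changes.
import numpy as np
from scipy.stats import pearsonr

genes = ["lnc1", "lnc2", "lnc3", "pc1", "pc2", "pc3"]   # placeholder names
log2fc_rnaseq = np.array([2.1, -1.4, 0.9, 3.0, -2.2, 1.5])
log2fc_qpcr = np.array([1.8, -1.1, 1.1, 2.7, -2.0, 1.3])

r, p = pearsonr(log2fc_rnaseq, log2fc_qpcr)
print(f"agreement between platforms: r = {r:.2f} (p = {p:.3f})")
```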
CONCLUSION
In this work, we described the expression patterns of lncRNAs in the eyelid skin of cattle. We predicted 4,937 putative novel lncRNAs, mapped them to the latest bovine reference genome, and compared their sequence features to those of known lncRNAs and protein-coding genes. A total of 27 lncRNAs were identified as differentially expressed between the pigmented and unpigmented samples. Potential associations were found between specific lncRNAs and putative target genes directly implicated in pigmentation, immune responses, and cancer development. Overall, our study enriches the catalog of lncRNAs in B. taurus, specifically those related to the regulation of eyelid skin pigmentation. Future functional studies should further evaluate the biological functions of these significant lncRNAs.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
EJ, FP, EA, and AI designed the experiments. GR and LT extracted the eyelid samples. EJ and CM analyzed the data. EJ drafted the manuscript. EJ examined the data. FP, EA, and AI reviewed and edited the manuscript. All authors agreed with the manuscript. | 2022-05-10T16:03:52.835Z | 2022-05-04T00:00:00.000 | {
"year": 2022,
"sha1": "f6186f6adc04f8e44ebb8e3dcdf01f576998e913",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2022.864567/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e96b2a3d18b7125e39b97ccf5b3c6eadd895ee3c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
240999321 | pes2o/s2orc | v3-fos-license | Unambiguous tracking of protein phosphorylation by fast high-resolution FOSY NMR
Phosphorylation is a prototypical example of post-translational modifications (PTMs) that dynamically modulate protein function, where dysregulation is often implicated in disease. NMR provides information on the exact location and time course of PTMs with atomic resolution and under nearly physiological conditions, including inside living cells, but requires unambiguous prior assignment of affected NMR signals to individual atoms. Yet, existing methods for this task are based on a global and, hence, costly and tedious NMR signal assignment that may often fail, especially for large intrinsically disordered proteins (IDPs). Here we introduce a sensitive and robust method to rapidly obtain only the relevant local NMR signal assignment, based on a suite of FOcused SpectroscopY (FOSY) experiments that employ the long overlooked concept of selective polarisation transfer (SPT). We then demonstrate the efficiency of FOSY in identifying two phosphorylation sites of proline-dependent glycogen synthase kinase 3 beta (GSK3β) in human Tau40, an IDP of 441 residues. Besides confirming the known target residue Ser 404 , the unprecedented spectral dispersion in FOSY disclosed for the first time that GSK3β can also phosphorylate Ser 409 without priming by other protein kinases. The new approach will benefit NMR studies of other PTMs and protein hotspots in general, including sites involved in molecular interactions and conformational changes.
Post-translational modifications (PTMs) constitute an additional level of complexity in modulating protein function, where phosphorylation is the best studied signalling switch, being abundant and essential in the regulation of intrinsically disordered proteins (IDPs). [1][2][3] Phosphorylation is conventionally detected and quantified by mass spectrometry, then validated by mutagenesis or antibody binding assays. This procedure is expensive, time-consuming, and often fraught with problems especially for highly charged phosphopeptides, repetitive sequences, and proximal modification sites, all of which are hallmarks of IDPs. [4][5] To address these difficulties, time-resolved heteronuclear NMR methods have been developed that also allow the reaction kinetics to be derived 6-10 , but require prior NMR signal assignment for the entire protein. The latter is obtained from a suite of multi-dimensional experiments [11][12][13][14][15][16] with often limited sensitivity, and after several days of measurement and sophisticated spectra analysis to resolve assignment ambiguities that may still prove inextricable in highly crowded spectral regions. 17 Thus, a fast, simple, and robust NMR approach for tracking PTMs like phosphorylation would be of great benefit for elucidating their role in signalling proteins with atomic resolution. [18][19][20] To address this need, we here introduce FOcused SpectroscopY (FOSY) for local (instead of global) de novo NMR signal assignment at structural hotspots, such as PTM sites, based on a minimal set of frequency-selective experiments that focus on their sequential vicinity and combine the ultra-high signal dispersion of up to six- or seven-dimensional (6D, 7D) spectra with the sensitivity, speed, and simplicity of 2D spectra.
Central to our method of focussing onto the few residues affected by PTMs and solving the spectral dispersion problem in only two dimensions is the use of frequency Selective Polarisation Transfer (SPT) to single out one coupled nuclear spin system (i.e. a residue) at a time, and with an efficiency and versatility higher than achievable by traditional broadband experiments. [21][22] While optimising sensitivity, multiple selection of known frequencies along a chosen spin system minimises spectral complexity and makes their lengthy sampling in further indirect dimensions redundant. SPT is a known technique with several clear advantages, but its use has so far been limited to isolated two-spin systems 23-25 like 1 H-15 N amide groups, allowing a reduction by just two spectral dimensions. Thus, a 6D could be cut down to a 4D experiment that would still take impractically long to measure for each selected residue at a time. Here we introduce frequency Selective and Spin-State Selective Polarisation Transfer (S 4 PT) as a generalized SPT approach for coupled multi-spin systems, as in isotopically labelled amino acids, which allows to eliminate several pertaining spectral dimensions as well as detrimental evolution of competing passive spin couplings. The novel 2D FOSY spectra ( Figure 1) yield a signal dispersion as in a 6D HNCOCANH 11 and 7D HNCOCACBNH, yet with far superior sensitivity and ease of analysis.
As a showcase application of the proposed FOSY approach to monitor PTMs in IDPs, we identify residues phosphorylated in vitro by proline-dependent glycogen synthase kinase 3 beta (GSK3β) 26 in the 441-residue-long human hTau40 protein, which comprises 80 serines and threonines as potential phosphorylation sites. Abnormal hTau40 hyper-phosphorylation is directly linked to dysregulation and, possibly, aggregation that characterises neurodegenerative tauopathies such as Alzheimer's disease. 27-31 As a first step of our strategy (outlined in Figure 2a), spectral changes from GSK3β-mediated phosphorylation of hTau40 in the most sensitive and best dispersed 3D HNCO spectrum revealed several shifted or newly appearing signals for p-hTau40 (peaks a-g in Figure S4). Due to the abundance of proline and glycine residues in IDPs, we assembled a list of hTau40 sequence stretches conforming with the general motif (P/G)-X n -p(S/T)-X n -(P/G) where X ≠ Pro, Gly. The list can be further refined based on reported phosphorylation sites or known kinase consensus sequence motifs (Table S1). This preparatory step concludes with the acquisition of two complementary proline selective 2D experiments 32 to identify the residues following (PX-) or preceding (-XP) prolines ( Figure S5).
To start the FOSY assignment process, we focus on signal 0 (Figure 2b) that appears in the 3D HNCO spectrum after hTau40 phosphorylation (peak 'a' in Figure S4) and likely corresponds to a phosphorylated serine (pS) or threonine (pT). The proline selective spectra show that this new signal derives from a pS or pT preceding a proline (i.e., a p(S/T)P motif, Table S1). A pair of 2D FOSY hnco(CA)NH and hncoCA(N)H experiments ( Figure 1) is then recorded for signal 0 using its associated exact 1 H N,0 , 15 N H,0 , 13 C O,-1 frequencies from the 3D HNCO spectrum. This yields the 15 N H,-1 , 1 H N,-1 , and 13 C A,-1 frequencies of the preceding residue X and connects signal 0 with signal -1 (Figure 2b) to compose a -Xp(S/T)P motif. We continue the FOSY walk to signal -2 using the 15 N H,-1 , 1 H N,-1 , and 13 C O,-2 frequencies for signal -1 (the latter again derived from the 3D HNCO). This iteration is repeated until reaching a proline or glycine signal, the latter identified by the negative intensity (from constant-time evolution) and characteristic chemical shift of its 13 C A signal. For starting signal 0 we, thus, identify a GXXp(S/T)P motif that matches with either GYSS 199 P or GDTS 404 P stretch in the hTau40 sequence.
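The iterative walk just described can be summarised as pseudocode; the sketch below is a bookkeeping illustration only, in which acquire_fosy_pair and lookup_co_frequency stand in for the actual spectrometer experiments and spectrum lookups (both names are hypothetical).

```python
# Hedged sketch of the FOSY sequential walk; the two helper callables are
# hypothetical stand-ins for real experiments, not a library API.
def fosy_walk(hn, nh, co, spectrum_3d, acquire_fosy_pair, lookup_co_frequency):
    """Walk residue-by-residue towards the N-terminus from signal 0."""
    motif = []
    while True:
        # one hnco(CA)NH / hncoCA(N)H pair yields the preceding residue's
        # 1H(N), 15N(H) and 13C(A) frequencies
        hn, nh, ca = acquire_fosy_pair(hn, nh, co)
        if ca is None or is_glycine(ca):   # Pro (no amide H) or Gly ends the walk
            break
        motif.append((hn, nh, ca))
        co = lookup_co_frequency(spectrum_3d, hn, nh)  # next 13CO from 3D HNCO
    return motif

def is_glycine(ca_shift):
    # Gly 13CA random-coil shifts cluster near ~45 ppm (approximate heuristic)
    return ca_shift < 47.0
```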
To resolve this assignment ambiguity, or in case of several peaks observed in the 2D hnco(CA)NH and hncoCA(N)H spectra due to an overlap of the initially selected frequencies (e.g., the FOSY walk from signal -2 connects with both signals -3 and -3* in Figure 2b), we furthermore record a 2D FOSY hncocacbNH experiment ( Figure 1) with five fixed frequencies ( 1 H N,0 , 15 N H,0 , 13 C O,-1 , 13 C A,-1 , 13 C B,-1 ) to determine the preceding residue type. The unknown 13 C B,-1 frequency to be tested is chosen from published residue-type-specific chemical shifts for random coil. 33-34 By producing a signal only if the correct 13 C B,-1 frequency is used for selective decoupling 35-36 , the 2D FOSY hncocacbNH spectrum for signal 0 reveals that the associated p(S/T) residue is preceded by a threonine. Thus, GDTS 404 P is the correct assignment and signal 0 corresponds to pS 404 .

Figure 2. Signal 0 newly appears after hTau40 phosphorylation by GSK3β and corresponds to a residue preceding a proline, as revealed by a proline-selective experiment. The sequential walk (black arrows) along NH-NH correlations is traced out by successive iterations of 2D FOSY-hnco(CA)NH experiments. Thus connected signals are numbered by the pertaining FOSY step number, and are gradually coloured as in the associated peptide sequence (below, left). For signal -2, the 2D FOSY-hnco(CA)NH spectrum also opens an alternative branch of signals indicated by asterisks and connected by dashed grey arrows. To resolve such ambiguities, preceding residue types were tested using the 2D FOSY-hncocacbNH experiment, which only produces a signal (inserted 1D 1 H projections) if the correct residue specific 13 C B,-1 frequency is preset. The derived PXXSGXTp(S/T)P motif unambiguously maps to PVVSGDTS 404 P and, thus, assigns the phosphorylation site signal 0 to pS 404 .

Similar 2D FOSY hncocacbNH experiments for the alternative Gly signals -3 and -3* show that only the former is preceded by a Ser (signal -4), as in the identified GDTS 404 P stretch, while signal -3* is preceded by a Glu (signal -4*) and, therefore, starts a false branch. Further FOSY walking leads to signals -5 and -6, which the proline-selective spectrum shows to succeed a proline (in contrast to signal -6* on another false branch). This results in a PXXSGXTp(S/T)P motif that uniquely maps onto the hTau40 PVVSGDTS 404 P stretch and, thus, corroborates the signal 0 assignment to pS 404 . Overall, the minimal set of 2D FOSY spectra ( Figure S5) for unambiguous identification of the pS 404 site comprises three pairs of complementary hnco(CA)NH and hncoCA(N)H for the sequential walk plus three hncocacbNH to identify the preceding residue type, altogether recorded within less than two hours. As these highly selective 2D spectra contain only a single or very few peaks, their real-time analysis is most straightforward. Analogous identification of the second phosphorylation site pS 409 by a similar minimal set of 2D FOSY spectra to compile the unique PRHLpS 409 motif is illustrated in the Supporting Information ( Figure S6).
Together, pS 404 and pS 409 account for all newly appearing and shifted signals (of nearby residues) in the HNCO spectrum of p-hTau40, proving them to be the only two phosphorylation sites for GSK3β under our experimental conditions. Of note, while proline dependent S 404 phosphorylation was known from prior studies, 26, 39 only the extreme spectral simplification and dispersion afforded by our new FOSY approach allowed us to also confirm S 409 phosphorylation by GSK3β. The latter was previously suggested to occur in the pre-neurofibrillar tangle state of hTau40, 30 but so far could not be unambiguously shown to occur without priming by other protein kinases. 40 In summary, we have demonstrated the de novo identification of phosphorylation sites in IDPs by the fast and robust new FOSY NMR approach. The proposed strategy requires no lengthy prior signal assignment nor knowledge of the target sites for PTM. Key to the new approach is its local focus on the relevant modification sites to identify short motifs for unambiguous sequence mapping, which is much faster and more broadly applicable than the conventional approach of global NMR signal assignment limited by the size and spectral complexity of the protein. Analysis is fast and most straightforward due to the extreme simplicity of 2D FOSY spectra that still reflect the enormous signal dispersion of their conventional complex 6D and 7D counterparts. The great benefits and versatility of such frequency selective NMR approaches, here demonstrated by a proof-of-concept study on IDP phosphorylation, are generalizable and should enable broad new studies on a plethora of biomolecular hotspots hitherto inaccessible by NMR.
Supporting Information
Unambiguous tracking of protein phosphorylation by fast, high-resolution FOSY NMR.
Selective Polarisation Transfer schemes
The FOSY experiments use a concatenation of highly selective polarisation transfer (SPT) steps to single out a specific coupled spin system, i.e. residue, by selecting several of its known nuclear frequencies. Various schemes for SPT have been reported that differ, e.g. in transfer mechanism, transfer time, achievable selectivity, susceptibility to relaxation and competing passive couplings (given by the local spin topology). Notably the latter demands vary significantly along the desired polarisation transfer pathway through the peptide spin system. To maximise efficiency and robustness, FOSY therefore implements and adjusts the respectively best suited SPT scheme for each required transfer step. The key for concatenation of the SPTs into the experiment polarisation flow is the newly introduced frequency selective and spin state selective polarisation transfer S 4 PT, as described below.
PI-SPT
For a heteronuclear IS two spin-½ system with mutual scalar JIS coupling, I spin coherence produces an in-phase signal doublet with frequencies ν I + JIS/2 and ν I -JIS/2 as the coupled S spin may be aligned either parallel (Sα) or antiparallel (Sβ) with the external magnetic field, respectively. An inversion pulse applied with high selectivity at the frequency ν I -JIS/2 then creates γS enhanced anti-phase 2IzSz = IzSα -IzSβ magnetization. This scheme for magnetization transfer, 1-2 which we abbreviate PI-SPT (Population Inversion for Selective Polarisation Transfer), is frequency selective for spin I and notably immune to any further (other than JIS) passive scalar couplings of spin S. Furthermore, in proteins, the amide HN group forms a particular IS system with significant cross-correlated relaxation giving rise to a strong TROSY effect. 3 For such systems it was shown that PI-SPT reaches the physical limit of efficiency for magnetization transfer, surpassing all broadband PT schemes, if the HzNβ TROSY polarisation component is selectively inverted by a CROP-shaped pulse 4 . In FOSY experiments, we employ such optimal PI-SPT for the initial Hz→2NzHz transfer step.
For the following section, we note that the term 2IzSz can also be presented as 2IzSz = IαSz - IβSz, where Iα|β represent single spin states of spin I. Both IαSz and IβSz terms are directly used by the subsequent S 4 PT block without need for intermittent complete magnetization conversion of 2IzSz to spin S.
HH-S 4 PT -Heteronuclear Hartmann-Hahn (HH) frequency Selective and Spin-State Selective Polarisation Transfer (S 4 PT)
In an isolated heteronuclear IS two-spin system with mutual JIS coupling, selective I→S Hartmann-Hahn polarization transfer 5 (HH-SPT) can be achieved by simultaneous weak continuous wave (CW) irradiation at the exact νI and νS frequencies for a duration τCW = 1/JIS and with identical field strengths B1 = JIS·√(4n² - 1)/4 (Eq. S1), where n = 1, 2, etc., and n = 1 provides the weakest radiofrequency field B1 = JIS·√3/4, affording highest νI and νS frequency selectivity as well as maximal tolerance to relative B1 miscalibration and mismatch. These latter benefits of frequency selective heteronuclear Hartmann-Hahn transfer are crucially important and in stark contrast to the broadband implementation of Hartmann-Hahn transfer, where the losses from B1 mismatch (that scale with B1 and are inevitable for separate probe coils for I and S) are exacerbated and effectively prevent a wider use in solution state NMR.
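For orientation, the matching fields implied by Eq. S1 can be tabulated directly; the snippet below assumes the reconstructed form B1 = JIS·√(4n² − 1)/4 and uses the 1 JN,CO ≈ 15 Hz coupling relevant to FOSY.

```python
# Matching fields for the n-th selective Hartmann-Hahn condition (Eq. S1,
# as reconstructed above); J = 15 Hz is the active 1J(N,CO) coupling.
import math

def b1_field(j_is_hz: float, n: int) -> float:
    """CW field strength (Hz) for the n-th selective HH matching condition."""
    return j_is_hz * math.sqrt(4 * n**2 - 1) / 4

for n in (1, 2, 3):
    print(f"n = {n}: B1 = {b1_field(15.0, n):5.2f} Hz")
# n = 1 gives the weakest field (~6.5 Hz), i.e. the most selective transfer.
```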
Further passive couplings JSM of the receiving spin S, however, may strongly compromise the efficiency of HH-SPT. If JSM ≈ JIS, the detrimental effects can be efficiently suppressed by simply using a slightly stronger B1 field, given by Eq. S1, for n = 2 or 3, in exchange for a small loss of frequency selectivity in the Hartmann-Hahn polarisation transfer. If JSM > JIS, however, the signal of spin S gets "broadened" beyond the extremely narrow HH-SPT bandwidth of approximately B1 ≈ JIS (see above) due to splitting by JSM. Efficient HH-SPT then requires to separately irradiate both lines of the S signal doublet 1-2, 6 with the same weak B1 field strength given by Eq. S1, i.e. CW irradiation at the three frequencies νI, νS - JSM/2, and νS + JSM/2. Under these conditions, HH-SPT passes via separate parallel Ix → SxMα and Ix → SxMβ pathways that are immune to JSM coupling evolution and can each be described by the HH-SPT formalism for an isolated two spin system. 5 As the parallel heteronuclear Hartmann-Hahn polarisation transfer pathways are both (νI, νS) frequency selective and (Mα|β) spin state selective, we propose the acronym HH-S 4 PT. Importantly, the sign for the Ix → SxMα and Ix → SxMβ pathways in HH-S 4 PT can be controlled via the phase of the pertaining CW irradiation at νS - JSM/2 vs. νS + JSM/2. Thus, identical CW phases produce SxMα + SxMβ = Sx in-phase transfer while opposite phases SxMα - SxMβ = 2SxMz achieve antiphase transfer. Furthermore, HH-S 4 PT can be generalized to systems larger than three spins, where passive couplings "broaden" both spin I and S resonances, by adjusting the number of CW irradiation frequencies.
In FOSY experiments, we employ HH-S 4 PT for direct 2Nx i Hz i → 2COx i-1 CAz i-1 antiphase-to-antiphase polarisation transfer. Both the starting N and receiving CO spins show passive couplings ( 1 JN,H ≈ 90 Hz and 1 JCO,CA ≈ 55 Hz) much larger than the active 1 JN,CO ≈ 15 Hz coupling. Consequently, HH-S 4 PT must enable four parallel transfer pathways, one for each combination of the passive Hα|β and CAα|β spin states.
FOSY experiments
The 2D FOSY hncocacbNH experiment extends the scheme above. The additional frequency selection of C A,i-1 and C B,i-1 here produces an effective frequency dispersion equivalent to a 7D experiment. The employed field strength of the C B,i-1 decoupling of several hundred Hertz provides only relatively low resolution in the C B,i-1 dimension, but it is sufficient in most cases to identify the preceding residue type. To cancel out any perturbation of C A,i-1 coherence, we compensate the effects of C B,i-1 decoupling at the opposite side of the 13 C A,i-1 resonance ( Figure S2d). If the C B,i-1 frequency is unknown, the corresponding frequency "splitting" by 1 JCA,CB coupling is handled in the same way as the other passive 1 JCA,CO coupling, i.e. by implementing the S 4 PT module qcw6 with quadruple frequency selective CW irradiation ( Figure S2e), as explained above.
For non-deuterated proteins, broadband 1 H decoupling in the hnco(CA)NH and hncoCA(N)H experiments ( Figure S1a) is implemented by a pair of 1 H inversion pulses ( Figure S3a) that also ensures a return of both water and aliphatic proton magnetisations towards their thermal equilibrium. To similarly adapt the hncocacbNH experiment, we replace the module shown in Figure S1c by the block depicted in Figure S3b. The corresponding LSF-S 4 PT sandwiches with and without 13 C B,i-1 decoupling are depicted in Figure S3c and Figure S3d, respectively. Of note, as the LSF-S 4 PT scheme employs irradiation only on 13 C, it naturally leaves all (water and protein) 1 H magnetization unaffected.
Proline -Selective Experiments
The triple resonance experiments used in this work are based on the BEST-TROSY intra-HNCA and HNcoCA experiments 9 with modifications for proline selection as described by Solyom et al. 10 Thus, a selective 8.25 ms long REBURP 11 shape 15 N inversion pulse covering the distinct proline chemical shift range 138±2.77 ppm (250 Hz) is applied during the constant-time 13 C A chemical shift evolution period (TC ≈ 2/ 1 JCACB = 56 ms). This pulse is omitted in alternating scans such that the JCANpro coupling either evolves or not, while the receiver phase is inverted for pairwise subtraction of scans. In the resulting spectrum, signals are only observed if the corresponding 13 C A couples with a 15 N proline spin inverted by the REBURP pulse. The thus modified iHNCA and HNcoCA difference experiments contain signals only for residues X preceding (XP) or succeeding (PX) a proline, respectively.
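The difference-spectroscopy logic of this experiment can be illustrated with a toy calculation; the numbers below are invented and the sign convention is a simplification of the actual coherence pathway.

```python
# Toy illustration of the proline-selective difference scheme: in alternating
# scans the 15N(Pro) inversion pulse is applied or omitted, flipping the sign
# of peaks whose 13CA couples to a proline nitrogen; subtracting scan pairs
# (receiver phase inverted) then cancels everything else.
import numpy as np

peaks = np.array([1.0, 1.0, 1.0])              # three 13CA signals
coupled_to_pro = np.array([True, False, True])

scan_with_pulse = np.where(coupled_to_pro, -peaks, peaks)  # JCA,Npro evolves
scan_without_pulse = peaks                                  # coupling refocused

difference = (scan_without_pulse - scan_with_pulse) / 2
print(difference)   # -> [1. 0. 1.]: only proline-coupled residues survive
```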
Experimental NMR technical details
All NMR experiments were recorded on an 800 MHz Bruker AVANCE IIIHD spectrometer equipped with a 3 mm TCI 1 H/ 13 C/ 15 N cryoprobe. Spectra acquisition, processing, and analysis were performed using TopSpin 3.5 (Bruker BioSpin). 2D FOSY-hncoCA(N)H experiments were recorded with 150 × 1024 complex data points; 2D FOSY-hnco(CA)NH and 2D FOSY-hncocacbNH experiments were recorded with 65 × 1024 complex data points. The spectral widths for 13 CA, 15 N, and 1 HN were 30 ppm, 26 ppm, and 14 ppm, respectively. The number of scans ranged from 2 to 32, depending on the signal intensity. The proline-selective experiments were recorded with 128 × 2048 complex data points and a 15 N spectral width of 26 ppm for the PX spectrum vs. 64 × 1024 complex data points and a 15 N spectral width of 41 ppm for the XP spectrum. The standard 3D BEST-TROSY HNCO experiment, obtained from the IBS library 12 , was recorded with 100 × 100 × 2048 complex data points and spectral widths of 10 ppm for 13 CO, 26 ppm for 15 N, and 16 ppm for 1 HN. A short recycle delay of 0.5 s was used in all experiments.

Table S1. Peptide sequence stretches of hTau40 conforming with the general motif (P/G)-X n -p(S/T)-X n -(P/G), with X ≠ (P/G), as traced out by the presented FOSY assignment approach. This list was filtered for stretches containing the phosphorylation sites reported in literature 16 and/or known kinase consensus recognition motifs. Progressing one sequence position at a time, the X amino acid can be classified as preceding a proline (-XP), following a proline (PX-), following a glycine (GX-) or none of the above, ruling out peptide sequences that cannot be mapped based on the derived sequence motif. The table shows a tick mark (√) for those residues in each stretch that conform with the amino acid type indicated by the delineated FOSY assignment protocol. Thus, starting signals 'a' and 'd' (Figure S4) could be unambiguously mapped to the phosphorylated residues pS 404 and pS 409 after only three and two FOSY steps tracing out the unique GXTp(S/T)P and PXXXp(S/T) motifs, respectively (shown bold).

Table S2 (chemical shift table; columns p-2 H-hTau40, 2 H-hTau40, 1 H-hTau40): chemical shifts of the phosphorylated deuterated p-[U-15 N, 13 C] 2 H-hTau40 and unphosphorylated protonated [U-15 N, 13 C] 1 H-hTau40 samples. Exact chemical shifts were obtained from the 3D BEST-TROSY HNCO 12 , 2D FOSY-hnco(CA)NH, and 2D FOSY-hncoCA(N)H spectra of deuterated ( Figure S1) and non-deuterated ( Figure S3) hTau40. The sequential assignment walk started from the newly appearing pS 404 peak for the p-2 H-hTau40 sample, whereas for the unphosphorylated 2 H-hTau40 and 1 H-hTau40 the initial T 403 peak in the HNCO spectrum was identified using values taken from the published assignment. 17

Figure S5 (FOSY spectra connecting signals 'a' to 'c' and 'd' with 'e' to 'g'): each individual spectrum is coloured differently, occasionally revealing multiple peaks in a single spectrum due to overlap of the selected frequencies.

Figure S6. FOSY assignment walk for the second phosphorylation site in p-hTau40. The walk in the 2D 1 H, 15 N plane starts from the newly appearing signal 0 (corresponding to peak 'd' in the HNCO spectrum in Figure S4) and proceeds via the indicated three FOSY steps (-1 to -3) until reaching the signal from a PX- or GX-type residue. This traces out a PXXXp(S/T) motif that can be mapped to either PQLAT 427 , PVDLS 316 , or PRHLS 409 stretches in the hTau40 primary sequence. FOSY-hncocacbNH is then used to narrow down on the pertaining amino acid type, indicating a Leu (not Ala) for signal -1, His for signal -2, and Arg for signal -3.
Thus, PRHLS 409 is unambiguously confirmed as correct assignment. This stretch is also the only one of the three alternatives that was shortlisted as a known GSK3β-mediated phosphorylation site (Table S1). | 2021-09-28T16:49:40.545Z | 2021-07-13T00:00:00.000 | {
"year": 2021,
"sha1": "df682629833e957d654dd5ceda1b9426077b1a88",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/ange.202102758",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ecd7ad3acbd9da3096cb1d614accc31ab98f19bd",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
264289309 | pes2o/s2orc | v3-fos-license | A CLASSIFICATION OF ANOMALOUS ACTIONS THROUGH MODEL ACTION ABSORPTION
We discuss a strategy for classifying anomalous actions through model action absorption. We use this to upgrade existing classification results for Rokhlin actions of finite groups on C * -algebras, assuming in addition a UHF-absorption condition, to a classification of anomalous actions on these C * -algebras.
Introduction
Connes' classification of automorphisms on the hyperfinite II 1 factor R ([8, 7]) paved the way towards a classification of symmetries of simple operator algebras. Over the next decade, this was followed by Vaughan Jones' classification of finite group actions on R ([29]) and Ocneanu's classification of actions of countable amenable groups on R ([37]). To achieve these classification results, an important role is played by adaptations of Connes' non-commutative Rokhlin lemma, which yields that outer group actions on R satisfy a condition often called the Rokhlin property that is analogous to properties of ergodic measure preserving actions of amenable groups on probability spaces ([41], [38]). In the C * -setting, the analogous property is not automatic. However, there has been substantial progress in the classification of those group actions on C * -algebras that satisfy the Rokhlin property ([17, 18, 19, 12, 21, 22, 23]). Very recently, groundbreaking results towards a classification of group actions without the need for the Rokhlin property have appeared ([15, 25, 26]).
Connes, Jones and Ocneanu also classify group homomorphisms G → Out(R) up to outer conjugacy ([8, 29, 37]). Such a homomorphism is called a G-kernel on R. The classification of G-kernels on injective factors was completed by Katayama and Takesaki ([31]). These can be understood as the first classification results for quantum symmetries of R which do not arise as group actions. Quantum symmetry is a broad term that encapsulates generalised notions of symmetry that appear in topological and conformal field theories. These symmetries are often encoded through the action of a higher category equipped with a product operation such that the category weakly resembles a group. In the case of G-kernels, these can be understood as actions of 2-groups.

The author was supported by the Ioan and Rosemary James Scholarship awarded by St John's College and the Mathematical Institute, University of Oxford.
In comparison to the success in understanding the existence and classification of G-kernels on von Neumann algebras, the study of G-kernels on C * -algebras has up to recently been underdeveloped. In [27] Corey Jones studies the closely related notion of ω-anomalous action. 1 In his paper, Corey Jones provides a C * -adaptation of Vaughan Jones' work ([28]), laying out a systematic way to construct anomalous actions on C * -crossed products. Corey Jones also establishes existence and no-go theorems for anomalous actions on abelian C * -algebras. In [13] Evington and the author lay out an algebraic K-theory obstruction to the existence of anomalous actions on tracial C * -algebras. Recently, Izumi has developed a cohomological invariant for G-kernels ([24]). This invariant introduces new obstructions to the existence of G-kernels which also apply in the non-tracial setting. Further, Izumi uses this invariant to classify G-kernels of some poly-Z groups on strongly self-absorbing UCT Kirchberg algebras.
This paper provides a classification of anomalous actions with the Rokhlin property on C * -algebras where K-theoretic obstructions vanish. The Rokhlin property for finite group actions was first systematically studied by Izumi ([21, 22]). In his work, Izumi uses the Rokhlin property to boost existing classification results of Kirchberg algebras in the UCT class ([39, 32]) and unital, simple, separable, nuclear, tracially approximate finite dimensional (TAF) algebras in the UCT class ([34]) by their K-theory, to a classification of finite group actions with the Rokhlin property on these classes of C * -algebras by the induced module structure on K-theory ([22, Theorem 4.2, Theorem 4.3]). 2 The strategy of this paper is to bootstrap Izumi's classification of G actions with the Rokhlin property, for finite groups G, to achieve analogous classification results for anomalous actions. To do this, we will assume that our C * -algebras satisfy a UHF absorbing condition; to be precise, that the C * -algebras are stable under tensoring with the UHF algebra M |G| ∞ (this sort of assumption is considered for example in [2]). Further assuming the Rokhlin property, we will establish a model action absorption result (Proposition 3.6). Second, we will use the model action absorption combined with a trick that builds on ideas of Connes in the cyclic group case ([8, Section 6]). This trick lets us use the existence of anomalous actions on the UHF-algebra M |G| ∞ to reduce the classification of anomalous actions to the classification of cocycle actions. This trick is not available through replacing M |G| ∞ by Z or O ∞ due to the obstruction results of [13, Theorem A] and [24, Theorem 3.6]. This argument allows us to prove the following.
Theorem A. (cf. Theorem 4.2 and Theorem 4.3) Let G be a finite group and let A, with A ∼ = A ⊗ M |G| ∞ , be either a Kirchberg algebra in the UCT class or a unital, simple, separable, nuclear TAF algebra in the UCT class. If (α, u), (β, v) are anomalous G actions on A with the Rokhlin property, then (α, u) is cocycle conjugate to (β, v) through an automorphism that is trivial on K-theory if and only if K i (α g ) = K i (β g ) for all g ∈ G and i = 0, 1, and the anomalies of (α, u) and (β, v) coincide.
Similarly, we can boost Nawata's classification of Rokhlin G actions on W (see [35]) to a classification of anomalous actions on W.
Theorem B. (cf. Theorem 4.4) Let G be a finite group and (α, u), (β, v) be anomalous G actions on W with the Rokhlin property; then (α, u) is cocycle conjugate to (β, v) if and only if the anomalies of (α, u) and (β, v) coincide.
The procedure utilised for the proof of Theorem A can be expected to work in more generality. The reason for restricting to unital, simple, nuclear TAF algebras in the tracial setting is due to the need to apply classification results for (cocycle) group actions. With more novel stably finite classification results in hand ([5]), and using techniques similar to [21, 22], a classification of finite group actions with the Rokhlin property on simple, separable, nuclear, Z-stable C * -algebras satisfying the UCT through the induced module structure on the Elliott invariant is plausible. A strategy to approach this classification problem has been proposed by Szabó in private communications. With such a result in hand, one could apply the abstract Lemma 4.1 to yield the analogue of Theorem A in the generality of simple, separable, nuclear, M |G| ∞ -stable C * -algebras satisfying the UCT.
Recent advances in the classification of more general symmetries on C * -algebras pave the way towards a classification of quantum symmetries. Significant results in this direction are the classification of AF-actions of fusion categories on AF-algebras ([6]), as well as Yuki Arano's announcement of an adaptation of Izumi's techniques in [21] to actions of fusion categories with the Rokhlin property. In the final section of this paper, we connect our results to the work in [6]. We demonstrate the existence of an AF ω-anomalous G-action with the Rokhlin property on M |G| ∞ , which we denote by θ ω G . This has structural implications for anomalous actions with the Rokhlin property on any AF-algebra A. Indeed, combined with Theorem A, the existence of θ ω G implies that every anomalous action on A with the Rokhlin property which consists of automorphisms that act trivially on K-theory is automatically AF (see Corollary 5.3). Under some assumptions on the anomaly, an application of the classification results of [6] establishes the converse (see Corollary 5.3). This partial converse exhibits a difference in behavior between anomalous actions and group actions (see the discussion following Corollary 5.3).
The paper is organised as follows. In Section 1 we recall some necessary background on anomalous actions. Section 2 recalls the construction of model anomalous actions on UHF algebras. In Section 3 we prove a model action absorbing result for finite group anomalous actions. In Section 4 we set out an abstract lemma for the classification of anomalous actions (Lemma 4.1) which we use to prove our main results. Finally, in Section 5, we discuss an application of the classification result to AF-actions.
Acknowledgements. The author would like to thank Samuel Evington, Corey Jones and Stuart White for useful discussions related to the topic of this paper. This work forms part of the author's DPhil thesis [16].
Preliminaries
Throughout, A and B will be used to denote C * -algebras and G, Γ, K will be used to denote countable discrete groups. We let T ⊂ C be the circle group. We denote the multiplier algebra of A by M(A). Any automorphism α ∈ Aut(A) extends uniquely to an automorphism of M(A); we denote this extension also by α. For a unitary u ∈ M(A) we write Ad(u) for the automorphism a → uau * of A and denote the group of inner automorphisms on A by Inn(A). Recall that a G-kernel of A is a group homomorphism G → Aut(A)/Inn(A) = Out(A). We now recall the definition of an anomalous action from [27, Definition 1.1]. In the case that A has trivial centre this notion coincides with a lift of a G-kernel into Aut(A). Definition 1.1. An anomalous action of a countable discrete group G on a C * -algebra A consists of a pair (α, u), where α : G → Aut(A) and u : G × G → U(M(A)) are a pair of maps such that
(1.1) α g α h = Ad(u g,h )α gh for all g, h ∈ G, and
(1.2) u g,h u gh,k = o(α, u)(g, h, k) α g (u h,k )u g,hk for all g, h, k ∈ G, for a map o(α, u) : G × G × G → T.
Firstly, note that in (1.1) and (1.2) we have used the subscript notation α g and u g,h instead of α(g) and u(g, h) for g, h ∈ G. We will use this throughout when notationally convenient.
As shown in [10, Lemma 7.1] the formula in (1.2) defines a circle valued 3-cocycle, i.e. an element of Z 3 (G, T). We will call this the anomaly of the action and denote it by o(α, u). For ω ∈ Z 3 (G, T) we say (α, u) is a (G, ω) action on A to mean that (α, u) is an anomalous action of G on A with anomaly ω. 3 If ω = 1 then we call (α, u) a cocycle action. Note that any anomalous action (α, u) induces a G-kernel when passing to the quotient group Out(A); we denote its associated G-kernel by α. For any G-kernel α on A we denote by ob(α) ∈ H 3 (G, Z(U(M(A)))) its 3-cohomology invariant (see e.g. [13, Section 2.1]).
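For a concrete feel for 3-cocycles, the following sketch numerically verifies the 3-cocycle identity for a standard representative of a generator of H 3 (Z n , T); this particular formula for ω is a textbook choice and an assumption of the sketch, not taken from the present paper.

```python
# Verify the multiplicative 3-cocycle identity for the standard phase
# omega(a,b,c) = exp(2*pi*i * a * floor((b+c)/n) / n) on Z_n.
import cmath
from itertools import product

n = 3
def omega(a, b, c):
    return cmath.exp(2j * cmath.pi * a * ((b + c) // n) / n)

# identity: w(b,c,d) w(a,b+c,d) w(a,b,c) = w(a+b,c,d) w(a,b,c+d),
# with sums reduced mod n inside each argument
ok = all(
    abs(omega(b, c, d) * omega(a, (b + c) % n, d) * omega(a, b, c)
        - omega((a + b) % n, c, d) * omega(a, b, (c + d) % n)) < 1e-12
    for a, b, c, d in product(range(n), repeat=4)
)
print("3-cocycle identity holds:", ok)
```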
The reader should be warned that there is a slight variation in Definition 1.1 to the definitions of anomalous actions in [27] and [13]. Given our conventions in Definition 1.1, a (G, ω) action induces an ω anomalous action as in [27, Definition 1.1]; this is seen by taking m g,h = u * g,h . Throughout this paper, we will denote the algebra of bounded sequences of A quotiented by those sequences going to zero in norm by A ∞ . For a * -closed subset S of A ∞ we may consider the commutant A ∞ ∩ S ′ = {a ∈ A ∞ : as = sa for all s ∈ S} and the annihilator A ∞ ∩ S ⊥ = {a ∈ A ∞ : aS = Sa = {0}}. We may then denote Kirchberg's sequence algebra by F (S, A ∞ ) = (A ∞ ∩ S ′ )/(A ∞ ∩ S ⊥ ). In the case that S is the C * -algebra of constant sequences in A ∞ we denote this simply by F (A) = F (A, A ∞ ) and call F (A) the central sequence algebra of A. Note that F (A) is a unital C * -algebra whenever A is σ-unital. Indeed, the unit is given by h = (h n ) for any sequential approximate unit h n for A.
Any automorphism θ ∈ Aut(A) induces an automorphism θ of A ∞ through (a n ) → (θ(a n )) for any (a n ) ∈ A ∞ . 4 If a subset S of A ∞ is invariant under both θ and θ −1 , then so are A ∞ ∩ S ′ and A ∞ ∩ S ⊥ , and θ induces an automorphism of F (S, A ∞ ).
Remark 1.2. When A is equipped with a (G, ω) action (α, u), it induces a (G, ω) action on A ∞ . In fact, α induces a group action on F (A). Similarly, if S = S * is an α invariant subset of A ∞ containing A and S is also invariant by u g,h for all g, h ∈ G (i.e. u g,h S, Su g,h ⊆ S for all g, h ∈ G), then α induces a group action on F (S, A ∞ ) (see [43, Remark 1.8]). A subset which is invariant under both α and u will be called (α, u)-invariant.
We will be interested in anomalous actions with the Rokhlin property. This notion was introduced in [21, Definition 3.10] for actions of finite groups on unital C * -algebras and later generalised by Nawata to the generality of σ-unital C * -algebras in [35]. Its definition in the setting of anomalous actions is verbatim the same. Definition 1.3. An anomalous action (α, u) of a finite group G on a σ-unital C * -algebra A is said to have the Rokhlin property, if there exist projections p g ∈ F (A) for g ∈ G such that: (1) Σ g∈G p g = 1, (2) α g (p h ) = p gh .
Remark 1.4. The Rokhlin property also makes sense for G-kernels. In this case, a G-kernel α of a finite group G on a σ-unital C * -algebra A satisfies the Rokhlin property if for any/some lift (α, u) of α there exists a partition of unity of projections p g ∈ F (A) for g ∈ G such that α g (p h ) = p gh for all g, h ∈ G.
Our main goal is to classify anomalous actions with the Rokhlin property. To make sense of this question, we first need to introduce equivalence relations for anomalous actions. Before we do so, we start by introducing some notation that will allow us to streamline future definitions.
Definition 1.5. Let (α, u) be an anomalous action of a group G on a C * -algebra A and let v : G → U(M(A)) be a map. Then the pair (α v , u v ) given by α v g = Ad(v g )α g and u v g,h = v g α g (v h )u g,h v * gh for g, h ∈ G is an anomalous action. We say that (α v , u v ) is a unitary perturbation of (α, u).
Definition 1.6. Let A, B be C * -algebras, (α, u) be an anomalous G action on A and (β, v) be an anomalous action on B. Then we say that
(i) (α, u) and (β, v) are conjugate if there exists an isomorphism θ : A → B such that β g = θα g θ −1 for all g ∈ G and v g,h = θ(u g,h ) for all g, h ∈ G.
(ii) (α, u) and (β, v) are cocycle conjugate if there exists a unitary perturbation of (α, u) that is conjugate to (β, v). We denote this by (α, u) ≃ (β, v).
(iii) If A and B are equal and (α, u) ≃ (β, v) with the conjugacy holding through an automorphism θ such that K i (θ) = id K i (A) for i = 0, 1, we say (α, u) and (β, v) are K-trivially cocycle conjugate. We denote this by (α, u) ≃ K (β, v).
Finally, recall the definition of a unitary one cocycle.
Definition 1.7. Let α be a (G, ω) action on a C * -algebra A. We call a map v : G → U(M(A)) a unitary one cocycle (or α-cocycle) if v g α g (v h ) = v gh for all g, h ∈ G.
Model actions
Given a finite group G and ω ∈ Z 3 (G, T) a 3-cocycle, [13, Theorem C] constructs a (G, ω) action on M |G| ∞ . This result is based on a construction of Corey Jones in [27] which in turn is based on a construction of Vaughan Jones in the setting of von Neumann algebras ([28]).
In this section, we recall this construction as we will need its specific form to deduce properties of the action. In [27], Corey Jones shows that if ω is a normalised 3-cocycle and one has the following data — a group Γ, a surjective homomorphism ρ : Γ → G, an action π of Γ on a C * -algebra B, and a 2-cochain c on Γ whose coboundary is the pullback of ω along ρ — one can induce a (G, ω) action on the twisted reduced crossed product B ⋊ r π,c K, with K = ker(ρ) (see [4] for a reference on twisted crossed products). 5 The automorphic data of this (G, ω) action is given by formula (2.1) of [27], for a k ∈ B, v k the canonical unitaries in M(B ⋊ r π,c K), g ∈ G and g → ĝ a choice of set theoretic section to ρ : Γ → G. 6 In fact, given an arbitrary finite group G, Corey Jones constructs a finite group Γ, a surjection ρ and a 2-cochain c with the conditions needed above and additionally c| ker(ρ) = 1. Additionally to Γ and c, the extra data considered in [13, Theorem C] is B = ⊗ n∈N B(l 2 (Γ)) equipped with the Γ-action π γ = ⊗ n∈N Ad(λ Γ ) γ , with λ Γ the left regular representation and Ad(λ Γ ) γ (T ) = λ Γ (γ)T λ Γ (γ) * for all T ∈ B(l 2 (Γ)) and γ ∈ Γ. In this case, the crossed product B ⋊ r π K is shown to be isomorphic to the UHF algebra M |G| ∞ . Corey Jones' construction then yields a (G, ω) action on M |G| ∞ through (2.1) for any ω ∈ Z 3 (G, T); we denote this action by s ω G .

Proposition 2.1. The (G, ω) action s ω G on M |G| ∞ has the Rokhlin property.

Proof. We use the notation set up in the previous paragraphs. Furthermore, denote by r i : B(l 2 (Γ)) → B the unital embedding into the i-th tensor factor, and let e K ∈ B(l 2 (Γ)) denote the orthogonal projection onto span{δ k : k ∈ K}, so that e K (Σ γ∈Γ µ γ δ γ ) = Σ k∈K µ k δ k for any complex scalars µ γ . Let p n = r n (e K ) for n ∈ N. Note that the projection p = (p n ) ∈ B ∞ commutes with any constant sequence of elements in B. Moreover, p commutes with the subalgebra generated by B and the canonical unitaries v k , k ∈ K. We claim that the projections p g = (r n (Ad(λ Γ ) ĝ(e K ))) n∈N , g ∈ G, form a set of Rokhlin projections. We start by showing that the sum Σ g∈G p g = 1. The maps r n are unital so it suffices to show that Σ g∈G Ad(λ Γ ) ĝ(e K ) = 1 B(l 2 (Γ)) . To see this, let γ ∈ Γ, g ∈ G and δ γ ∈ l 2 (Γ) the point mass at γ; then Ad(λ Γ ) ĝ(e K )(δ γ ) equals δ γ if γ ∈ ĝK and 0 otherwise. The left K cosets are pairwise disjoint and cover the whole group Γ. Therefore, it follows that Σ g∈G Ad(λ Γ ) ĝ(e K )(δ γ ) = δ γ for every γ ∈ Γ. As the operators Σ g∈G Ad(λ Γ ) ĝ(e K ) and id B(l 2 (Γ)) coincide on a spanning set of l 2 (Γ), these operators are equal.
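The coset computation at the heart of this proof is easy to verify numerically; the sketch below uses the toy choice Γ = Z 4 , K = {0, 2} (so G = Γ/K ≅ Z 2 ), which is purely illustrative and not the group Γ constructed in [27].

```python
# Check that the conjugates of e_K by the coset representatives sum to 1,
# for the illustrative choice Gamma = Z_4, K = {0, 2}, lifts {0, 1}.
import numpy as np

n = 4
K = [0, 2]
sections = [0, 1]                       # one lift per left K-coset

def lam(g):                             # left regular representation of Z_4
    P = np.zeros((n, n))
    for m in range(n):
        P[(g + m) % n, m] = 1.0
    return P

e_K = np.zeros((n, n))                  # projection onto span{delta_k : k in K}
for k in K:
    e_K[k, k] = 1.0

total = sum(lam(g) @ e_K @ lam(g).T for g in sections)
print(np.allclose(total, np.eye(n)))    # True: the coset projections sum to 1
```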
It remains to show that s ω G (g)(p h ) = p gh for g, h ∈ G. Writing ĝĥ = ĝh k with k ∈ K and using that Ad(λ Γ ) k (e K ) = e K , a direct computation gives s ω G (g)(p h ) = Ad(w)(p gh ) = p gh for an appropriate unitary w, where the last equality in the chain holds as p gh commutes with constant sequences in B ⋊ r π K.
Absorption of model actions
In this section we show that any Rokhlin anomalous action of a finite group G, on an M |G| ∞ -stable C * -algebra, absorbs the model action s G up to cocycle conjugacy. 7 The methods utilised in this section are an adaptation of Vaughan Jones' work ([29]) to the C * -setting.
In his work [43, 44, 42], Szabó establishes the theory of strongly self-absorbing C * -dynamical systems as an equivariant version of the strongly self-absorbing C * -algebras that were introduced in [45]. We recall the main definition below. Definition 3.1. Let G be a locally compact group. A group action γ on a unital, separable C * -algebra D is called strongly self-absorbing if there exists an equivariant isomorphism ϕ : (D, γ) → (D ⊗ D, γ ⊗ γ) that is approximately unitarily equivalent to the first factor embedding id D ⊗ 1 D . The relevant example of a strongly self-absorbing action for this paper is s G . That s G is strongly self-absorbing follows as a consequence of [43, Example 5.1].
In [43, Theorem 3.7] Szabó shows equivalent conditions for a cocycle action to tensorially absorb a strongly self-absorbing action. Although Szabó's theory only treats the case of cocycle actions absorbing a given strongly self-absorbing group action, many of the arguments follow in exactly the same way when replacing cocycle actions by anomalous actions that may have non-trivial anomaly. The proofs of the statements from [43] that we use below adapt with only notational changes. We still require a few more results before we can achieve the model action absorption. These are based on known results in the setting of finite group actions on unital C * -algebras. These generalise line by line to anomalous actions of finite groups on unital C * -algebras; we adapt the arguments also for non-unital C * -algebras. Lemma 3.3 (cf. [20, Theorem 3.3]). Let A be a C * -algebra, G be a finite group and (α, u) be an anomalous action of G on A with the Rokhlin property. If B = B * is a separable (α, u)-invariant subset of A ∞ and there exists a unital * -homomorphism M → F (B, A ∞ ) for some separable, unital C * -algebra M, then there exists a unital * -homomorphism M → F (B, A ∞ ) α . Proof. Let ψ 0 : M → F (B, A ∞ ) denote the given unital * -homomorphism and let S = B ∪ ⋃ g∈G α g (ψ 0 (M)) ∪ ⋃ g∈G α g (ψ 0 (M)) * , so S = S * . By the Rokhlin property followed by a standard reindexing argument, there exist positive contractions (f g ) g∈G satisfying conditions (i)-(iiiv) below.
Now consider the linear mapping
Firstly, for m, m ′ ∈ M and b ∈ B it follows from (i) and (iiiv) that the mapping is compatible with multiplication, where in the last line of the corresponding computation we have used that B is u-invariant and so the observation in Remark 1.2 applies. Therefore, the map descends to a * -homomorphism M → F (B, A ∞ ) α . This homomorphism is unital through combining (iii) and (iiv) and * -preserving by (ii).
In the next lemma, recall that if α is an action of a group G on a C * -algebra A, an α-cocycle is a family of unitaries v g ∈ U(M(A)) for g ∈ G such that v g α g (v h ) = v gh . Lemma 3.4 (cf. [18, Lemma III.1]). Let A be a separable C * -algebra and G a finite group. Let (α, u) be an anomalous action of G on A with the Rokhlin property. Let B = B * be a separable (α, u)-invariant subset of A ∞ . For any α-cocycle v g for the action induced by α on F (B, A ∞ ) there exists a unitary u ∈ F (B, A ∞ ) such that v g = uα g (u * ) for all g ∈ G. Proof. As in the previous lemma, one may apply the Rokhlin property combined with a reindexing argument to get a family of positive elements (f g ) g∈G and set u = Σ g∈G v g f g . Then for any b ∈ B it follows by (ii), (iiv) and (iiiv) that u * ub = b, and similarly uu * b = b for any b ∈ B. Moreover, (iii), (i), (iv) and (iiv) imply that uα g (u * )b = v g b for b ∈ B and g ∈ G. Therefore, by passing to the quotient, u defines a unitary in F (B, A ∞ ) such that uα g (u * ) = v g for all g ∈ G.
The proof of the next lemma is based on the proof of [29, Proposition 3.4.1]. Lemma 3.5. Let G be a finite group and A be a separable C * -algebra such that A ∼ = A ⊗ M |G| ∞ . Let (α, u) be an anomalous action with the Rokhlin property of G on A. Then there exists a G-equivariant unital embedding (M |G| ∞ , s G ) → (F (A), α). Proof. To prove this we inductively construct unital equivariant * -homomorphisms φ n : (B(l 2 (G)), Ad(λ G )) → (F (A), α) for n ∈ N with commuting images. Then the map defined by sending an elementary tensor m 1 ⊗ · · · ⊗ m n to the product φ 1 (m 1 ) · · · φ n (m n ) extends to the required embedding. Suppose φ 1 , φ 2 , . . ., φ n : (B(l 2 (G)), Ad(λ G )) → (F (A), α) are equivariant maps with commuting images and let S be the * -closed subset of A ∞ obtained from A and the images φ i (B(l 2 (G))), 1 ≤ i ≤ n, by closing under the maps α g and under left and right multiplication by the unitaries u g,h . Then S is separable, S = S * and S is (α, u) invariant. We check that u g,h S ⊂ S for all g, h ∈ G; the remaining conditions follow similarly. For a ∈ A, m ∈ M and 1 ≤ i ≤ n, all products of u g,h with a and with φ i (m) belong to S by construction. (By the M |G| ∞ -stability of A there exists a unital * -homomorphism M |G| ∞ → F (S, A ∞ ), cf. [45, Theorem 2.2]. Moreover by reindexing one can also choose a homomorphism as stated.) It follows from Lemma 3.3 that there exists a unital embedding B(l 2 (G)) → F (S, A ∞ ) α . Let (e ′ g,h ) g,h∈G in F (S, A ∞ ) α be the images of the matrix units e g,h under this unital embedding. The permutation unitary v g = Σ h∈G e ′ gh,h gives a unitary representation of G on F (S, A ∞ ) α and as α g (v h ) = v h it follows that v g is an α-cocycle. Therefore, by Lemma 3.4 there exists a unitary u ∈ F (S, A ∞ ) such that uα g (u * ) = v g . Now f g,h = u * e ′ g,h u for g, h ∈ G is a set of matrix units such that α g (f h,k ) = f gh,gk for all g, h, k ∈ G. Hence the * -homomorphism determined by e g,h → f g,h defines an Ad(λ G ) to α equivariant * -homomorphism φ n+1 . Moreover, the image of φ n+1 commutes with φ i for all 1 ≤ i ≤ n. Considering φ n+1 as a unital equivariant homomorphism into A ∞ ∩ A ′ /A ∞ ∩ A ⊥ the induction argument is complete.
We have collected all the necessary ingredients to prove the model action absorption. Proposition 3.6. Let G be a finite group and A a separable C * -algebra such that A ∼ = A ⊗ M |G| ∞ . Let (α, u) be a (G, ω) action on A with the Rokhlin property. Then (α, u) and (α ⊗ s G , u ⊗ 1 M |G| ∞ ) are cocycle conjugate through an isomorphism that is approximately unitarily equivalent to id A ⊗ 1 M |G| ∞ . Proof. By Lemma 3.5 there exists a unital G-equivariant embedding (M |G| ∞ , s G ) → (F (A), α), so the claim follows from the anomalous-action version of the absorption criterion [43, Theorem 3.7] discussed above.
Classification
We now discuss the abstract approach to bootstrapping the classification of group actions on a given class of C * -algebras to a classification of anomalous actions. This method is a generalisation of that used by Connes in [8, Section 6]; a similar strategy was recently used in [24] to classify G-kernels of poly-Z groups on O 2 .
Before proceeding with the result, we set up notation. For a group G, we say "(α, u) is an anomalous G-action on A" and "(A, α, u) is an anomalous G-C * -algebra" interchangeably. Let F be a functor whose domain category is the category of C * -algebras (denoted C*alg). We say F is invariant under approximate unitary equivalence if F (α) = F (θ) whenever α ≈ a.u θ. We also say that F restricted to a subcategory C ⊂ C*alg is full on isomorphisms if, whenever Φ ∈ Hom(F (A), F (B)) is an isomorphism for A, B ∈ C, there exists an isomorphism ϕ : A → B in C with F (ϕ) = Φ. The sorts of functors with these properties are those used in the classification of C * -algebras. For example, the functor consisting of pointed K 0 and K 1 is invariant under approximate unitary equivalence, and it is also full on isomorphisms when restricted to the category of unital Kirchberg algebras satisfying the UCT (see [39]). Similarly, the functors KT u and KT u of [5] are invariant under approximate unitary equivalence and are full on isomorphisms when restricted to classifiable C * -algebras.
If F is invariant under unitary equivalence, an anomalous action (A, α, u) induces a G-action on F (A) through the automorphisms F (α g ). If (A, α, u) and (B, β, v) are anomalous actions, we say the induced actions F (α g ) and F (β g ) are conjugate if there exists an isomorphism Φ : F (A) → F (B) with ΦF (α g )Φ −1 = F (β g ) for all g ∈ G. We denote this by F (α) ∼ F (β).
Let (A, α, u) and (A, β, v) be two anomalous G-C * -algebras. We write (α, u) ≃ F (β, v) if (α, u) ≃ (β, v) through an automorphism θ with F (θ) = id F (A) . This notion recovers K-trivial cocycle conjugacy of Definition 1.6 when F is taken to be the functor consisting of K 0 ⊕ K 1 . Finally, if R is a class of anomalous G-C * -algebras, we will say R is closed under conjugacy if, whenever (A, α, u) ∈ R and ϕ : A → B is an isomorphism in C*alg, then (B, ϕαϕ −1 , ϕ(u)) ∈ R.

Lemma 4.1. Let G be a group, D a strongly self-absorbing C * -algebra and R a class of anomalous G-C * -algebras that is closed under conjugation. Let F be a functor with domain category the category of C * -algebras such that F is invariant under approximate unitary equivalence and is full on isomorphisms for C * -algebras in R. Suppose further that,
(A1) there exists a G-action (D, s G , 1) such that if (A, α, u) ∈ R, then (A, α, u) ≃ (A ⊗ D, α ⊗ s G , u ⊗ 1) through an automorphism that is approximately unitarily equivalent to id A ⊗ 1 D ;
(A2) if there exists a (G, ω) action in R for some ω ∈ Z 3 (G, T), there exist a (G, ω) and a (G, ω̄) action (D, s ω G , u ω ) and (D, s ω̄ G , u ω̄ ) whose tensor product is cocycle conjugate to (D ⊗ D, s G ⊗ s G , 1);
(A3) any two cocycle actions (A, α, u), (B, β, v) ∈ R with F (α) ∼ F (β) satisfy (α, u) ≃ (β, v).
Then anomalous actions (A, α, u), (B, β, v) ∈ R satisfy (α, u) ≃ (β, v) if and only if F (α) ∼ F (β) and o(α, u) = o(β, v). With the same hypothesis but replacing (A3) with the condition that
(A3') for cocycle actions (A, α, u) and (A, β, v) ∈ R, F (α) ∼ F (β) implies (α, u) ≃ F (β, v),
the same conclusion holds with ≃ replaced by ≃ F .

Proof. First we show that if (A1)-(A3) hold and (A, α, u), (B, β, v) are anomalous actions in R, then (A, α, u) ≃ (B, β, v) if and only if F (α) ∼ F (β) and o(α, u) = o(β, v). For the forward implication, note that cocycle conjugacy preserves the anomaly (so that o(α, u) = o(β, v)) and also that F (α) ∼ F (β), as F is trivial when evaluated at inner automorphisms. We now turn to the converse. Suppose F (α) ∼ F (β) and o(α, u) = o(β, v). First note that this implies that also F (α ⊗ id D ) ∼ F (β ⊗ id D ). Indeed, by (A1) let φ A : A → A ⊗ D and φ B : B → B ⊗ D be isomorphisms which are approximately unitarily equivalent to the first factor embeddings, and Φ : F (A) → F (B) an isomorphism with ΦF (α g )Φ −1 = F (β g ) for all g ∈ G. As F is invariant under approximate unitary equivalence, F (φ A ) = F (id A ⊗ 1 D ) (and similarly replacing A by B). Hence we compute that F (φ B ) Φ F (φ A ) −1 conjugates F (α g ⊗ id D ) to F (β g ⊗ id D ). The remainder of the argument is the trick described in the introduction: by (A1) and (A2), (α, u) is cocycle conjugate to (α ⊗ s ω̄ G ⊗ s ω G , u ⊗ u ω̄ ⊗ u ω ), the cocycle actions (α ⊗ s ω̄ G , u ⊗ u ω̄ ) and (β ⊗ s ω̄ G , v ⊗ u ω̄ ) are cocycle conjugate by (A3), and tensoring this cocycle conjugacy with (D, s ω G , u ω ) yields (α, u) ≃ (β, v). The statement under (A3') follows by keeping track, in the same argument, of the induced maps on F .

We now prove our classification theorems.

Theorem 4.2. Let G be a finite group and A be a unital Kirchberg algebra satisfying the UCT with A ∼ = A ⊗ M |G| ∞ . If (α, u), (β, v) are anomalous actions of G on A with the Rokhlin property then (α, u) ≃ K (β, v) if and only if o(α, u) = o(β, v) and K i (α g ) = K i (β g ) for all g ∈ G and i = 0, 1.
Proof. We check that the hypothesis of Lemma 4.1 is satisfied. Let D = M |G| ∞ , F be the functor consisting of the pointed K 0 group direct sum the K 1 group, and R the class of Rokhlin anomalous G-actions on unital Kirchberg algebras satisfying the UCT. That F is full on isomorphisms follows from [39]. Condition (A1) follows from Proposition 3.6. For any ω ∈ Z 3 (G, T), we have actions (D, s ω G , u ω ) and (D, s ω̄ G , u ω̄ ) by the construction of Section 2; that their tensor product is cocycle conjugate to (s G ⊗ s G , 1) follows from [18, Theorem III.6] combined with [21, Lemma 3.12], as the actions (D, s ω G , u ω ) have the Rokhlin property (and hence property R ∞ ) by Proposition 2.1. Therefore, (A2) is also satisfied. Finally (A3') is satisfied by Izumi's classification result [22, Theorem 4.2] and the fact that every cocycle action with the Rokhlin property is a unitary perturbation of a group action [22, Lemma 3.12]. Theorem 4.3. Let G be a finite group and A be a unital, simple, nuclear TAF-algebra in the UCT class such that A ∼ = A ⊗ M |G| ∞ and (α, u), (β, v) are anomalous actions on A with the Rokhlin property; then (α, u) ≃ K (β, v) if and only if o(α, u) = o(β, v) and K i (α g ) = K i (β g ) for all g ∈ G.
Proof. We apply Lemma 4.1 with D = M |G| ∞ , R the class of Rokhlin anomalous actions on M |G| ∞ -stable unital, simple, separable, nuclear TAF-algebras satisfying the UCT and F the functor consisting of ordered K 0 and K 1 . Firstly, F is full on isomorphisms by [34]. (A1) holds by Proposition 3.6. (A2) holds for the same reason as in the proof of Theorem 4.2. Condition (A3') follows from a combination of [22, Theorem 4.3] and [21, Lemma 3.12].
We have shown a classification of anomalous actions on some classes of simple C * -algebras. Such a classification also implies a classification of G-kernels; we illustrate this by using Theorem 4.2, and the same argument may also be used to rewrite the results of Theorem 4.4, Theorem 4.3 and Corollary 4.5.
Corollary 4.6. Let A be a unital Kirchberg algebra satisfying the UCT with A ∼ = A ⊗ M |G| ∞ and α, β be G-kernels with the Rokhlin property on A. Then α and β are K-trivially conjugate if and only if ob(α) = ob(β) and K i (α g ) = K i (β g ) for all g ∈ G and i = 0, 1.
Applications
We start this section by giving an alternative construction of a (G, ω) action on the UHF algebra M |G| ∞ which is visibly compatible with a Bratteli diagram of M |G| ∞ . This action is an AF-action in the sense of [11] and [6, Definition 4.8] (see also the discussion in [16, Section 6.1]). The existence of an AF ω-anomalous action on M |G| ∞ follows from an adaptation of the Ocneanu compactness argument to the C * -setting ([36]). We build it explicitly below. Proposition 5.1. Let G be a finite group and ω ∈ Z 3 (G, T); then there exists an AF-(G, ω) action with the Rokhlin property on M |G| ∞ . We denote this action by θ ω G .
Proof. In this proof we will use the symbols g, h, k, x, y, x_i, y_i, s_i for i ∈ N to denote elements of the group G.
) be the multiplication operator by f. Consider the *-homomorphisms ϕ_n : ). The inductive system (A_n, ϕ_n) has an inductive limit (we denote the limit by A) which is known to be isomorphic to M_{|G|^∞}. Indeed, the Bratteli diagram of this AF-algebra is easily seen to be the complete bipartite graph on |G| vertices, and it is well known that this coincides with the UHF-algebra of type |G|^∞ (see [9, Example III.2.4] for the case |G| = 2). We construct a (G, ω) action on each finite-dimensional algebra A_n such that the actions commute with the inclusion maps ϕ_n. This will induce an AF ω-anomalous G-action on M_{|G|^∞} by the universal property of the inductive limit (see [16, Section 6.1]).
Let e_{g,h} ∈ B(l^2(G)) be defined by with d_n(g) defined inductively and for all n > 2 with the convention that x_0 = y_0 = k. As we have defined d_n(g) on a spanning set of A_n, the d_n(g) extend to linear maps from A_n to itself. In fact each d_n(g) is an endomorphism of A_n. First, it is clear that they preserve the *-operation. To show multiplicativity, it is sufficient to check on a spanning set. We show this by induction. For the case n = 2 it is only non-trivial to check that The left-hand side is given by y_2) which coincides with the right-hand side. To show that d_n(g) is multiplicative for n > 2 it suffices to show that This follows immediately from the induction hypothesis and a direct computation of the left-hand side (as in the case n = 2). Notice that each d_n(g) fixes elements of the form To construct a (G, ω) action on the first stage A_1, we let u_1(g, h)(k) = ω_{k^{-1},g,h}. That (θ_1, u_1) defines a (G, ω) action on C(G) is a straightforward computation (this is computed in [3, Section 4]). We proceed to extend this action on A_1 to all of M_{|G|^∞} through the inductive limit. Let u_n(g, h) = ϕ_{1,n}(u_1(g, h)) and θ_n(g) = d_n(g)θ'_n(g). For the remaining part of the proof we check that (θ_n, u_n) satisfy (1)-(4) for all n ∈ N. We will repeatedly use the 3-cocycle formula during the calculations; instead of commenting on this every time, we colour code the parts of our equations to which we apply the 3-cocycle formula.
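For reference, the 3-cocycle identity in question is the standard one; in multiplicative notation (a convention assumed here), ω ∈ Z^3(G, T) satisfies

$$\omega(g,h,k)\,\omega(g,hk,l)\,\omega(h,k,l) \;=\; \omega(gh,k,l)\,\omega(g,h,kl) \qquad \text{for all } g,h,k,l \in G.$$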
We start by showing (1). Firstly, This holds trivially for n = 1. For n = 2 it follows from the 3-cocycle formula that We now proceed with an inductive argument for arbitrary n. We assume that (1) holds for n − 2, performing a similar computation to the case n = 2. For (4) it suffices to show that ϕ_n d_n(g) = d_{n+1}(g)ϕ_n. For n = 1 this is immediate, as d_1 is the identity map. The case n = 2 follows too. Assuming that the case n − 2 holds, we now argue by induction. Condition (3) is immediate. It remains to show that (2) holds for arbitrary n. This follows from (2) for the case n = 1 and from (4). For To show that θ^ω_G has the Rokhlin property we construct a family of Rokhlin projections. The projections δ_g ⊗ id_{B(l^2(G))}^{⊗ n−1} ∈ Z(A_n) satisfy θ_n(g)(δ_h ⊗ id_{B(l^2(G))}^{⊗ n−1}) = δ_{gh} ⊗ id_{B(l^2(G))}^{⊗ n−1} and also Σ_{g∈G} δ_g ⊗ id_{B(l^2(G))}^{⊗ n−1} = id_{A_n}. Therefore, the projections p_g ∈ A_∞ with n-th coordinate given by ϕ_{n,∞}(δ_g ⊗ id_{B(l^2(G))}^{⊗ n−1}) for g ∈ G satisfy the conditions of Definition 1.3.

Remark 5.2. In the case that ω = 1 the construction in Proposition 5.1 greatly simplifies. Indeed, d_n(g) is the identity automorphism and u_n(g, h) is the unit for all g, h ∈ G and n ∈ N. Therefore, θ^1_G restricts to the group action θ_n = λ_G ⊗ ⊗_{i=0}^{n−1} Ad(λ_G) on each A_n, with λ_G the left regular representation. This action coincides with the infinite tensor product action s_G (see Section 3). To see this, consider the inductive system (B_n, φ_n) with B_{2n−1} = A_n, B_{2n} = ⊗_{i=0}^{n} B(l^2(G)) and φ_{2n−1}(f ⊗ T) = M_f ⊗ T, φ_{2n}(S) = 1 ⊗ S for all n ∈ N, f ⊗ T ∈ A_n and S ∈ B_{2n}. The even terms of the inductive system (B_{2n}, φ_{2n+1} ∘ φ_{2n}) coincide with the inductive limit (⊗_{i=1}^{n} B(l^2(G)), M ↦ id_{B(l^2(G))} ⊗ M). The odd terms (B_{2n−1}, φ_{2n} ∘ φ_{2n−1}) coincide with the inductive system (A_n, ϕ_n) from the proof of Proposition 5.1. This allows one to interpolate between (⊗_{i=1}^{n} B(l^2(G)), M ↦ id_{B(l^2(G))} ⊗ M) and (A_n, ϕ_n). It is immediate that θ_G and s_G are conjugate. Moreover, it follows from Theorem 4.3 that θ^ω_G is cocycle conjugate to s^ω_G for any ω ∈ Z^3(G, T).

We end this paper by studying to what extent Rokhlin anomalous actions on AF-algebras are AF-actions and vice versa. To do this, we will require results of [6]. In [6], the authors associate an invariant to any AF-action F, of a fusion category C, on an AF-algebra A. Roughly, this invariant consists of the K_0-groups of all Q-system extensions of A by F and all natural maps between these extensions. The authors also show that any two AF-actions on AF-algebras A and B are equivalent if and only if their invariants are isomorphic. As observed in [6, Section 5.1], if the acting category C is torsion-free (see [1, Definition 3.7]), the invariant of [6] simplifies to just the module structure of K_0(A) under the action of the fusion ring of C. We apply this when the acting category is Hilb(G, ω) and the action is induced by an anomalous action (α, u), as explained in [13, Proposition 5.6]. The fusion ring of Hilb(G, ω) is Z[G], and the module structure of K_0(A) is given by K_0(α_g).
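In displayed form (a straightforward restatement of the last sentence, extended linearly), the Z[G]-module structure on K_0(A) reads

$$\Big(\sum_{g\in G} n_g\, g\Big)\cdot x \;=\; \sum_{g\in G} n_g\, K_0(\alpha_g)(x), \qquad n_g \in \mathbb{Z},\ x \in K_0(A).$$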
Corollary 5.3. Let G be a finite group and A a simple, unital AF-algebra such that A ≅ A ⊗ M_{|G|^∞}. Let (α, u) be a (G, ω)-action on A such that K_0(α_g) = id_{K_0(A)} for all g ∈ G. If (α, u) has the Rokhlin property, then (α, u) is an AF-action. Moreover, if [ω|_H] ≠ 0 for any non-trivial subgroup H < G then the converse holds.
Proof. If (α, u) is a (G, ω)-action with the Rokhlin property on an AF-algebra A, then by Theorem 4.3 it is cocycle conjugate to the AF ω-anomalous G-action id_A ⊗ θ^ω_G on A. Therefore (α, u) is AF, as (by definition) being AF is preserved under cocycle conjugacy (see [16, Remark 6.1.7]).
We now consider the converse statement. An AF ω-anomalous G-action (α, u) induces an AF-action of the fusion category Hilb(G, ω) in the sense of [6] (to see how a (G, ω)-action induces a Hilb(G, ω) action see [13, Proposition 5.6]; that this is AF is discussed in [16, Remark 6.1.7]). By the hypothesis on ω, the fusion category Hilb(G, ω) is torsion-free, so as K_0(α_g) = id_{K_0(A)} and K_0(id_A ⊗ θ^ω_G) = id_{K_0(A)}, [6, Theorem A] yields that the AF ω-anomalous G-actions induced by (α, u) and id_A ⊗ θ^ω_G are cocycle conjugate. So (α, u) has the Rokhlin property.
Remark 5.4. One may drop the hypothesis that A ≅ A ⊗ M_{|G|^∞} in Corollary 5.3 if one instead assumes that the anomaly ω of (α, u) is such that [ω] has order |G|. Indeed, it follows from [16, Corollary 5.4.4] that in this case A will automatically absorb M_{|G|^∞}. Also, note that under this assumption on [ω] it is automatic that [ω|_H] ≠ 0 for any non-trivial subgroup H < G.
The behaviour observed in the converse of Corollary 5.3 is quite different from the behaviour of group actions. It was already observed in [14] that there exist AF-actions of Z_2 on M_{2^∞} which do not have the Rokhlin property.
Theorem 4.4. Let G be a finite group and (α, u), (β, v) be anomalous G-actions with the Rokhlin property on W. Then (α, u) ≃ (β, v) if and only if o(α, u) = o(β, v).

Proof. We check the conditions of Lemma 4.1 with D = M_{|G|^∞}, R the class of Rokhlin anomalous actions on W and F the trivial functor. Firstly, (A1) holds by Proposition 3.6. Moreover, (A2) holds as in the proof of Theorem 4.2. Finally, (A3) follows from [35, Corollary 3.7], as every cocycle action of a finite group on W is cocycle conjugate to a group action (this follows as W ≅ W ⊗ M_{|G|} and hence [15, Remark 1.5] applies). In light of [5, Theorem B], it follows from [21, Theorem 3.5] that all Rokhlin anomalous actions of G on classifiable M_{|G|^∞}-stable C*-algebras are classified up to cocycle conjugacy by their induced action on the total invariant KT_u (see [5, Section 3]) and their anomaly.
43, Lemma 2.1, Theorem 2.6] and [43, Theorem 3.7, Corollary 3.8], for example, make no use of the anomaly associated to (α, u) and (β, w) being trivial. Under this observation, we can state a specific case of [43, Corollary 3.8].

Theorem 3.2 (cf. [43, Theorem 2.8]). Let A and D be separable C*-algebras and G a finite group. Assume (α, u) : G ↷ A is an anomalous action. Let γ : G ↷ D be a group action such that (D, γ) is strongly self-absorbing. If there exists an equivariant and unital * | 2023-10-19T06:43:06.073Z | 2023-10-17T00:00:00.000 | {
"year": 2023,
"sha1": "4f539e3b23670e8dc6466ae4a612cf444471892e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/394DCABEF18CC678552EACD044ACB293/S0008414X2400018Xa.pdf/div-class-title-a-classification-of-anomalous-actions-through-model-action-absorption-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "4f539e3b23670e8dc6466ae4a612cf444471892e",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
256047048 | pes2o/s2orc | v3-fos-license | Super-Chern-Simons spectra from exceptional field theory
Exceptional Field Theory has been recently shown to be very powerful to compute Kaluza-Klein spectra. Using these techniques, the mass matrix of Kaluza-Klein vector perturbations about a specific class of AdS4 solutions of D = 11 and massive type IIA supergravity is determined. These results are then employed to characterise the complete supersymmetric spectrum about some notable N = 2 and N = 3 AdS4 solutions in this class, which are dual to specific three-dimensional superconformal Chern-Simons field theories.
Introduction
For holographic conformal field theories (CFTs), the spectrum of single-trace operators with scaling dimension of order one, at strong coupling and large N , is mapped to the spectrum of Kaluza-Klein (KK) perturbations about their dual anti-de Sitter (AdS) solutions of string or M-theory [1][2][3]. The KK spectrum of Type IIB supergravity on the background AdS 5 × S 5 [4], relevant to N = 4 super-Yang-Mills, was computed in [5]. The face-value calculation of [5] entails complicated field redefinitions, a demanding linearisation of the type IIB equations of motion, an involved expansion of the linearised fields in scalar, spinor and vector spherical S 5 harmonics and, finally, a diagonalisation of the resulting mass matrices. In this particular case with maximal (super)symmetry, all fields turn out to fill out short supermultiplets of the four-dimensional maximally supersymmetric conformal algebra. For this reason, both the algebraic structure of the spectrum and the physical masses of all fields are actually dictated by group theory [6]. A similar remark applies to the KK spectrum [7][8][9] of the maximally (super)symmetric D = 11 Freund-Rubin solution AdS 4 × S 7 [10], dual to ABJM [11].
For AdS/CFT dual pairs with less (super)symmetry, group theory still determines the algebraic structure of the spectrum, namely, the possible supermultiplets in given representations of the residual symmetry group that are present in the spectrum. Typically, these spectra still contain short multiplets, whose conformal dimensions are again fixed by representation theory. But, unlike in the maximally supersymmetric cases, long multiplets will usually be contained in the spectra as well. For these multiplets, group theory only requires that a unitarity bound be respected but, other than this, has no power to predict the actual value of their dimensions. Thus, for AdS solutions with less than maximal supersymmetry, there is no alternative to computing the long KK spectrum other than direct calculation. For homogeneous Freund-Rubin-type solutions, this problem can be attacked using coset space technology [12-15]. More generally, though, coset methods will not be available for AdS solutions with inhomogeneous internal geometries of relatively small symmetry, supported by fluxes and warp factors. If the direct calculation of the spectrum [5] of the maximally supersymmetric background AdS_5 × S^5 [4] was so demanding, the calculation of KK spectra about inhomogeneous AdS solutions with less (super)symmetry is downright prohibitive with the traditional methods of [5].
Recently, a powerful alternative to the techniques of [5] for the calculation of KK spectra about certain AdS solutions of string or M-theory has been put forward [16, 17] based on Exceptional Field Theories (ExFTs) [18-20] (see [21] for a review). Like their Exceptional Generalised Geometry cousins [22, 23], these correspond to reformulations of the D = 10 and D = 11 supergravities where the exceptional symmetries of the lower-dimensional maximal supergravities, e.g. E_7(7) for D = 4 which is fixed henceforth, are explicitly realised. This is done at the expense of reducing the manifest D = 10 or D = 11 local Lorentz covariance to only a manifest D = 4 local Lorentz covariance. Even if, due to the latter feature, the ExFTs superficially look four-dimensional, they still are fully-fledged higher-dimensional theories. For this reason, not only can the AdS solutions of the higher-dimensional theories be recovered as solutions of ExFT, but the full towers of KK perturbations about these AdS solutions are contained within ExFT as well.
The reason why ExFT methods to compute KK spectra [16, 17] have an edge over the traditional approach [5] is essentially two-fold. On the one hand, the complicated field redefinitions needed prior to linearisation are already built-in (at the full non-linear level, in fact) into ExFT. On the other hand, all fields need to be expanded only in scalar harmonics of the ordinary internal manifold. Unlike the method of [5], though, the ExFT technology [16, 17] comes with a regime of applicability which hinges on two assumptions: namely, that the relevant AdS solutions have an associated consistent truncation to lower-dimensional maximal gauged supergravity, and that the internal space, in the ordinary D = 10 or D = 11 sense, must be topologically spherical. These are not severe limitations, since the class of AdS solutions of this type still comes in cornucopious abundance [24-27]. Furthermore, having fixed an allowed lower-dimensional gauging, the same set of spherical harmonics on the associated round sphere is valid to compute the KK spectra about any other AdS solution that uplifts from the same gauging, even if the round sphere is not a (supersymmetric) solution itself.
Within their validity regime, these ExFT techniques [16] certainly outperform the standard methods of [5]. One must nevertheless linearise the field equations of ExFT which, for certain fields, particularly the internal scalar fields, is still a rather involved task. Unlike the scalars', the linearisation of the ExFT vector and graviton equations of motion is significantly more manageable. Fortunately, for AdS solutions with sufficiently high supersymmetry, N ≥ 2 in D = 4, an explicit calculation of the spectrum of KK scalars (and spin-1/2 fermions) is not necessary, as it can be indirectly inferred from the vector and graviton spectra, and from group theory. The reason, for N = 2, is that the only scalars and fermions that belong to OSp(4|2) multiplets which do not contain gravitons or vectors must necessarily belong to short hypermultiplets. And, for these, their dimensions are fixed by the R-charges, similarly to the maximally supersymmetric cases discussed above. The situation is even more restrictive for N = 3, for which already the vector multiplets are necessarily short. In these N ≥ 2 cases, the complete supersymmetric KK spectra are thus fixed by group theory together with the graviton and vector spectra. With this in mind, section 2 presents a derivation of the KK vector mass matrix from ExFT. An alternative, though equivalent, form of the KK vector mass matrix can be found in [17].
In this paper, I will compute the complete supersymmetric KK spectrum about the supersymmetric AdS_4 solutions of D = 11 [28] and massive type IIA supergravity [29] that uplift from D = 4 N = 8 supergravity with specific gaugings. More concretely, I will focus on the D = 11 N = 2 AdS_4 solution found by Corrado, Pilch and Warner (CPW) [30]. For this solution, the OSp(4|2) supermultiplet structure of the spectrum was elucidated in [31], the graviton spectrum was computed in [32] using specific methods for spin-2 (see [33]), and the complete spectrum has been recently given in [17]. In section 3, I compute the spectrum of vectors and allocate them into supermultiplets, finding agreement with [17]. I will also characterise the complete supersymmetric KK spectrum about two specific AdS_4 solutions of massive IIA supergravity with N = 2 [34] and N = 3 [35, 36] supersymmetry. The graviton spectra for these solutions were respectively computed in [37] and [38]. By the previous arguments, all that is left to determine the complete supersymmetric KK spectra is an analysis of their OSp(4|2) or OSp(4|3) supermultiplet structure and the calculation of the KK vector spectra. These items are addressed in section 4. Section 5 concludes, and some appendices contain useful supplementary material.
KK vector mass matrix from ExFT
The starting point for the present analysis is the E 7(7) ExFT [20] reformulation of D = 11 supergravity [28], and its deformation thereof [39] suitable to accommodate massive type IIA supergravity [29]. Both theories, [20] and [39], can be treated simultaneously for the purposes of this analysis, and both of them will be collectively referred to as ExFT. Let A M µ (x, Y ) be the gauge fields present in the theory, and F M µν (x, Y ) their field strengths. These depend on both the external, x µ , µ = 0, 1, 2, 3, and the internal, Y M , M = 1, . . . , 56, ExFT coordinates. The curved index M labels the fundamental representation of E 7 (7) , and the gauge field, as well as any other field in the theory, is subject to the relevant section constraints. These will not be explicitly needed. The objective of this section is to linearise the field equations of A M µ (x, Y ) and find the mass matrix for the KK vectors about the class of AdS 4 backgrounds of D = 11 supergravity and massive IIA supergravity that uplift from the D = 4 N = 8 gaugings to be specified momentarily.
Preliminaries
The ExFT (pseudo-)Lagrangian involves A^M_µ or F^M_µν in four instances [20, 39]: the Einstein-Hilbert term for the external metric g_µν(x, Y), the topological term for F^M_µν, the kinetic term for F^M_µν, and the kinetic term for the internal scalar metric M_MN(x, Y). Neither the first nor the second terms are expected to contribute to the vector mass matrix: the Einstein-Hilbert term contains the couplings of the metric to the vectors and should be partially responsible for Higgsing the KK gravitons. The topological terms for F^M_µν in the ExFT (pseudo-)Lagrangian can be eliminated on-shell in favour of the gauge equations of motion and, for that reason, can also be disregarded. Thus, one is led to focus on the gauge kinetic terms and the scalar kinetic terms of the Lagrangian [20]: Here, e ≡ |det g_µν|, external indices are raised with the inverse metric g^µν, and M^MN is the matrix inverse to M_MN. The covariant derivative of M_MN is given in (2.2) of [20, 39], and similarly for D_µ M^MN. In (2.2), the projector onto the adjoint of E_7(7) has been introduced as P^M{}_N{}^K{}_L = (t_α)_N{}^M (t^α)_L{}^K, here and elsewhere with the inverse, κ^αβ, of the Cartan-Killing form raising the adjoint index, in the conventions [39] of the original ExFT formulation of [20]. For D = 11 configurations, F_MNP = 0, and for type IIA, F_MNP encodes a magnetic gauging contribution from the Romans mass: see [39] for the details. I would like to further restrict the problem to the specific configurations of ExFT that give rise to a D = 4 N = 8 gauged supergravity upon consistent truncation. On top of this configuration, D = 4 KK vector perturbations will also be kept. The concrete D = 4 N = 8 gaugings that I will consider will be the SO(8) gauging [40] and the dyonic [41-43] ISO(7) gauging [44]. Both of these indeed arise after consistent truncation of D = 11 and massive IIA supergravity on S^7 [45] and S^6 [34, 46], respectively. Under these assumptions, the external and internal ExFT metrics g_µν and M_MN are set to the generalised Scherk-Schwarz form [47-50]

$$g_{\mu\nu}(x, Y) = \rho^{-2}(Y)\, g_{\mu\nu}(x)\,, \qquad M_{MN}(x, Y) = U_M{}^{\bar M}(Y)\, U_N{}^{\bar N}(Y)\, M_{\bar M \bar N}(x)\,. \tag{2.4}$$

Here, g_µν and M_{\bar M \bar N} respectively correspond to the D = 4 N = 8 metric and E_7(7)/SU(8) scalar matrix. The Y-dependent function ρ and the generalised Scherk-Schwarz twist matrix U_M{}^{\bar N} are needed in (2.4) for consistency. Their explicit expressions will not be needed.
All that will be necessary to know from them is that they must obey the (generalised parallelisability) relations (2.5) of [48-51], which involve the embedding tensor, i.e. the projection to the 912 representation of E_7(7). The indices \bar M = 1, ..., 56 are flat, fundamental E_7(7) indices, as pertains to strictly D = 4 quantities. Quantities with curved and flat E_7(7) indices are related through the twist matrix U_M{}^{\bar N} and its inverse.
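As a hedged sketch of the standard form such relations take in this literature (weights and normalisations are assumptions here, not taken from the original equations): the twist data define a generalised frame E_{\bar M} ≡ ρ^{-1} (U^{-1})_{\bar M}{}^{N}, which furnishes a generalised Leibniz parallelisation,

$$\mathbb{L}_{E_{\bar M}}\, E_{\bar N} \;=\; -\,X_{\bar M \bar N}{}^{\bar P}\, E_{\bar P}\,,$$

with $\mathbb{L}$ the E_7(7) generalised Lie derivative and X_{\bar M \bar N}{}^{\bar P} the constant embedding tensor of the D = 4 N = 8 gauging, valued in the 912.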
When the dictionary that relates ExFT to D = 11 or D = 10 supergravities [49, 50] is employed, the expressions (2.4), together with those for the remaining ExFT fields, give rise to the explicit embedding of D = 4 N = 8 supergravity in the higher-dimensional theories. The scalar potentials of the SO(8) [40] and ISO(7) [16, 17], in terms of a combined direct-product index \bar M Λ. Here, \bar M is a flat index in the 56 of E_7(7), and the index . . (the latter linearised as well for the present purposes) through [16, 17] Here, Y^Λ denotes the infinite tower of scalar spherical harmonics on the round S^7 or S^6 spheres. These lie in the representations of SO(8) or SO(7) indicated in (2.7). The explicit expressions of Y^Λ in terms of S^7 or S^6 coordinates will not be needed: suffice it to note that the action of the Scherk-Schwarz twist matrix on these is given by (2.9) of [16, 17]. The (constant, x- and Y-independent) matrices (T_{\bar N})_Λ{}^Σ correspond to the generators of SO(8) or SO(7) in the infinite-dimensional, reducible representations (2.7), normalised as in (2.10).
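As a concrete handle on the towers in (2.7): the level-n scalar harmonics on S^7 fill the symmetric traceless [n, 0, 0, 0] of SO(8), and the level-k harmonics on S^6 fill [k, 0, 0] of SO(7) (cf. the branching [n, 0, 0, 0] → ⊕_k [k, 0, 0] quoted in section 5). Their multiplicities follow from the standard dimension formula for rank-n symmetric traceless tensors of SO(d + 1):

$$\dim\,[n,0,0,0]_{SO(8)} = \frac{(2n+6)\,(n+5)!}{6!\;n!}\,, \qquad \dim\,[k,0,0]_{SO(7)} = \frac{(2k+5)\,(k+4)!}{5!\;k!}\,,$$

giving 1, 8, 35, 112, ... on S^7 and 1, 7, 27, 77, ... on S^6 for the lowest levels.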
The KK vector mass matrix
Equipped with the definitions introduced in section 2.1, the goal is now to linearise the ExFT vector equations of motion or, equivalently, to retain the quadratic terms in the action (2.1), in order to read off the mass matrix for the KK vectors A M Λ µ .
The action then takes the form (2.11), and similarly for D_µ M^MN. In order to reach this result, the consistent truncation conditions (2.5) and the action (2.9) on the spherical harmonics need to be used. The explicit form of the projector onto the 912 representation of E_7(7), which can be found in e.g. [52], is also needed. All the dependences on the internal coordinates Y brought by ρ and U_M{}^{\bar N} through (2.4), (2.8), (2.11) drop out at the level of the ExFT field equations [49]. The only dependences on the internal ExFT coordinates are brought into the Lagrangian (2.1) through a quadratic combination Y^Λ Y^Σ of spherical harmonics. Under the integral sign at the level of the action, this dependence simply becomes δ^{ΛΣ} by virtue of the orthogonality of the spherical harmonics.
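That is, with the round-sphere measure normalised to unit volume (an assumed normalisation),

$$\int_{S^{d}} \mathcal{Y}^{\Lambda}\,\mathcal{Y}^{\Sigma} \;=\; \delta^{\Lambda\Sigma}\,, \qquad d = 7 \ \text{or} \ 6\,,$$

so that the internal integral collapses the quadratic harmonic dependence and the vector mass matrix becomes block diagonal, KK level by KK level.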
3 The complete KK spectrum of the N = 2 CPW solution
Four-dimensional N = 8 SO(8)-gauged supergravity has an N = 2, SU(3) × U(1)-invariant critical point [58], which uplifts on S^7 to the CPW AdS_4 solution of D = 11 supergravity with the same (super)symmetry [30]. The complete spectrum of the CPW solution is now known and, for that reason, this presentation will be brief. A more detailed analysis for the analogue N = 2 solution of massive type IIA [34] will be discussed in section 4. The KK spectra for all fields at KK level n = 0 have long been known [59], due to the fact that this level agrees, when the non-linear interactions within it are restored, with D = 4 N = 8 SO(8) supergravity. More recently, the KK level n = 1 spectrum was computed in [16] using ExFT techniques, and extended to higher levels in [17]. The full KK graviton spectrum is known [32], as is the generic structure of the entire KK spectrum in representations of OSp(4|2) × SU(3) [31]. Here, I recover the complete spectrum of this solution [17, 31] by putting together the group theory analysis of [31], the graviton spectrum of [32], and the present calculation of the vector spectrum.
The KK vector spectrum for the CPW solution is presented for the first three KK levels, n = 0, 1, 2, in the entry labelled as N = 2 U(3) in table 14 of appendix A. Firstly, the multiplicities shown in the table are compatible with the OSp(4|2) × SU(3) group theory of [31]. Secondly, all individual vectors that enter short graviton, short gravitino and short vector multiplets do indeed have their masses fixed by the conformal dimension of those multiplets as given in [31]. Thirdly, there exist vector masses in the tables compatible with those predicted by the KK graviton analysis of [32]. The remaining individual vectors must arrange themselves in long gravitino and long vector multiplets. From table 14, together with the analysis of [31], one deduces that short and long gravitino and vector multiplets occurring at KK level n with SU(3) × U(1) charges [p, q]_{y_0} must have scaling dimensions: Short and long gravitino mult.: (3.1); Short and long vector mult.: (3.2); in agreement with [17]. Here, C_2(p, q) is the eigenvalue (3.3) of the SU(3) quadratic Casimir operator. The data in table 14 are enough to infer the results (3.1), (3.2) for all KK levels n. A further calculation of the n = 3 KK vector masses using (2.15) is in agreement with these. Further, when the SU(3) × U(1) quantum numbers are restricted accordingly, (3.1), (3.2) reproduce the scaling dimensions of the short gravitino and vector multiplets given in [31].
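For reference, the Casimir eigenvalue (3.3) entering these expressions is standard: for SU(3) Dynkin labels [p, q], and in the normalisation where the fundamental [1, 0] has eigenvalue 4/3 (a normalisation assumed here),

$$C_2(p,q) \;=\; \tfrac{1}{3}\left(p^2 + q^2 + p\,q\right) + p + q\,.$$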
To summarise, the complete KK spectrum of the CPW solution contains the long and short OSp(4|2) × SU(3) multiplets specified in [31]. The dimension of the short multiplets, including the hypermultiplets, was given in that reference. The spectrum of (long and short) graviton multiplets was given in [32], and the spectrum of (long and short) gravitino and vector multiplets [17] is reproduced by (3.1) and (3.2) above. In the short cases, these correctly reduce to [31].
4 Complete KK spectra of AdS_4 solutions of type IIA
Similar steps lead one to obtain the complete KK spectrum about the N = 2 and N = 3 AdS_4 solutions of massive type IIA supergravity [34-36] that uplift from ISO(7) supergravity [34, 60]. For these solutions, the spectrum at KK level k = 0 [44, 60] and the graviton spectrum at all levels [37, 38] are completely known. Here, I will give the complete supermultiplet structure and the scaling dimensions at all KK levels.
Putative SO(7) structure of the KK spectra
The massive type IIA AdS_4 solutions that uplift from ISO(7) supergravity have topologically S^6 internal spaces, equipped with G-invariant metrics for certain subgroups G of SO(7). The spectrum for each solution should thus branch from SO(7) representations down to G representations. Similarly to the SO(8) gauging case [7], these putative SO(7) representations arise by tensoring the N = 8 supergravity multiplet, herewith identified at the linearised level with KK level k = 0, with the symmetric traceless representation [k, 0, 0] of SO(7). Higgsing must also be taken into account: each massive graviton in a given SO(7) representation must eat a vector and a (pseudo)scalar in the same representation, which thereby disappear from the physical spectrum; each massive gravitino must eat a spin-1/2 field; and each massive vector must eat a (pseudo)scalar.
This exercise was carried out for the vectors in equation (2.17). More generally, table 1 summarises the result for all the fields. The scalars and pseudoscalars in the table are respectively denoted 0^+ and 0^-. The KK level k = 0 represented on the left table is actually 7 scalars short of being the D = 4 N = 8 supergravity multiplet. These scalars disappear from the spectrum, as they are Stückelberg fields eaten by the 7 vectors shown in the k = 0 table. The latter always become massive at any AdS_4 vacuum. In turn, the massless vectors that gauge the residual symmetry G always branch from the 21 vectors present at k = 0.
The KK spectra about all the AdS_4 solutions in this class can be argued to have the SO(7) structure specified in table 1, even if dyonic ISO(7) supergravity does not have an N = 8 SO(7)-invariant solution. From a bulk perspective, this SO(7) structure is in agreement with the ExFT approach of section 2. From the boundary point of view, an argument similar to that put forward in [31] for the CPW solution [30] can be made. The AdS_4 solutions of massive type IIA supergravity under consideration are dual to superconformal infrared fixed points of maximally supersymmetric three-dimensional Yang-Mills [61, 62]. Despite its lack of conformal symmetry, and thus lack of a dual AdS_4 solution, the latter does have SO(7) R-symmetry. This SO(7) symmetry is thus inherited by all the infrared fixed points, necessarily branched out into representations of their corresponding flavour groups G. See [62] for the holographic interpretation of the k = 0 SO(7)-covariant fields in table 1.
Spectrum of the N = 2 solution
As has been just discussed, the KK spectrum of AdS_4 solutions with symmetry G ⊂ SO(7) comes in the representations of G that arise by branching the SO(7) representations listed in table 1.

Table 1. States in SO(7) representations at KK level k = 0 (left) and k = 0, 1, 2, ... (right) that compose the KK towers for AdS_4 solutions of massive IIA that uplift from ISO(7) supergravity.
Representations with negative Dynkin labels are absent. For a solution with residual symmetry G ⊂ SO(7), the spectrum organises itself in the representations of G that branch from these SO(7) representations [34]. At fixed KK level k, the SO(7) representations in table 1 for fields of each spin must be branched out under SU(3) × U(1) ⊂ SO(7). The U(1) factor in the residual symmetry group corresponds to the R-symmetry and thus belongs to OSp(4|2). The spectrum must thus be arranged in OSp(4|2) × SU(3) representations: at fixed KK level, fields of different spin and U(1) R-charge but the same SU(3) Dynkin labels [p, q] must be allocated into OSp(4|2) supermultiplets. The tables in appendix A of [31] come in very handy to carry out this exercise. I follow their notation for the OSp(4|2) supermultiplets, only with a subindex 2 attached in order to emphasise that these are N = 2. This allocation into OSp(4|2) supermultiplets proceeds from higher maximum spins to lower, as follows.
Firstly, the spin s = 2 states are assigned to short, SGRAV_2 (or massless, MGRAV_2, for k = 0), or long, LGRAV_2, graviton multiplets. Then, fields of lower spins in the same SU(3) representations are used to complete these supermultiplets. Secondly, the s = 3/2 fields that were left unassigned to SGRAV_2 or LGRAV_2 multiplets are ascribed to short, SGINO_2, or long, LGINO_2, gravitino multiplets. Again, fields of lower spin with the same Dynkin labels [p, q] are then used to fill out these multiplets. Thirdly, the s = 1 fields that still remain unassigned to the previous supermultiplets are allocated into short, SVEC_2 (or massless, MVEC_2, for k = 0), or long, LVEC_2, vector multiplets. Spin-1/2 fermions and scalars are then used to complete these supermultiplets. Finally, the remaining s = 1/2 fermions and scalars are assigned to hypermultiplets, HYP_2.
The resulting multiplet structure is the following. At KK level k = 0, this analysis was already carried out in [37], and here I simply import the results summarised in table 3 therein. There is one real MGRAV_2 and one real MVEC_2, which are respectively a singlet and an 8_0 of SU(3) × U(1). The former corresponds to the N = 2 pure supergravity multiplet and the latter contains the massless gauge fields in the adjoint of the residual symmetry group SU(3). Also at KK level k = 0 there is one SGINO_2 and one HYP_2, both of them complex, together with their complex conjugates. KK level k = 0 is completed with a real SU(3)-singlet 1 LVEC_2. At each KK level starting at k ≥ 1, there exists one, and [ with the first tower present for all k ≥ 1 and the second kicking in at k ≥ 2. The OSp(4|2) × SU(3) structure of the KK spectrum up to level k = 3 is summarised in tables 3-6.
With the spectrum allocated into OSp(4|2) × SU(3) representations, the conformal dimension E_0 remains to be given for the long multiplets. Unlike for the short ones, E_0 is not fixed by group theory in terms of the R-charge y_0, and is only required to obey a unitarity bound. Thus, E_0 needs to be computed independently in these cases.
in [63], in agreement with the general arguments of [64, 65]. A similar result holds [66] for the CPW solution. When non-linear interactions are restored, the MGRAV_2 and MVEC_2 furnish the SU(3)-invariant sector [44] of D = 4 N = 8 ISO(7) supergravity. This was explicitly embedded in type IIA in [67]. Of course, the entire KK level k = 0 arises upon non-linear consistent truncation [34, 46]. By an argument similar to that made in section 3 for CPW, the knowledge of the graviton spectrum [37], together with the above group theory analysis and the individual KK vector spectra computed in section 2, is enough to reconstruct the entire, complete KK spectrum and all possible values of E_0.

Table 6. N = 2 supermultiplets at KK level k = 3.
The KK vector spectrum for the N = 2 solution, computed from the mass matrix (2.15), is presented for the first three KK levels, k = 0, 1, 2, in the entry labelled as N = 2 U(3) in table 15 of appendix A. Firstly, the multiplicities shown in the table are compatible with the OSp(4|2) × SU(3) group theory just described. Secondly, all individual vectors that enter short multiplets do indeed have their masses fixed by the conformal dimension of those multiplets, as follows from table 2. Thirdly, there exist vector masses in the tables compatible with those predicted by the KK graviton analysis of [37]: these are the spin-1 states that accompany the spin-2 states into long and short graviton multiplets. The remaining individual vectors must arrange themselves in long gravitino and long vector multiplets. From the data in the table, one deduces that short or long gravitino and vector multiplets occurring at KK level k with SU(3) × U(1) charges [p, q]_{y_0} must have scaling dimensions: Short and long graviton mult.: ; Short and long gravitino mult.: ; Short and long vector mult.: ; with C_2(p, q) given in (3.3). The dimension E_0 in (4.2) for graviton multiplets has been imported from (3.1) of [37] with n_there = k_here, _there = p_here + q_here and y_0 there = 2 2. The dimension, and its R-charge y_0, computed with group theory as described above, are shown as a label (E_0)_{y_0} next to each entry. A label 2 × (E_0)_{y_0} indicates that there are two such multiplets. States in SU(3) representations with [p, q] quantum numbers such that q > p appear in the tables as "conjugate to [q, p]" representations; these have their R-charges negated. For example, in table 3 . The format of these tables has been kindly borrowed from [31].
The spectrum of KK scalars can be deduced from the above results. Table 7 lists all the scalars with scaling dimensions ∆ less than or equal to 3. The table includes the analytical value of ∆ together with a convenient numerical approximation. Also shown in the table is the KK level k at which each scalar appears, as well as its SU(3) × U(1) charges [p, q]_r. The OSp(4|2) supermultiplet with dimension and R-charge (E_0)_{y_0}, at the same KK level k and with the same SU(3) . The tables in appendix A of [31] are useful to derive ∆ and r from E_0 and y_0. These will only match if the scalar is the superconformal primary of the multiplet. The scalars in table 7 are dual to relevant (∆ < 3) or classically marginal (∆ = 3) operators in the dual field theory. All scalars with ∆ ≤ 3 turn out to arise at KK levels k = 0, 1, 2, 3. Each of these KK levels contains scalars dual to irrelevant (∆ > 3) operators as well. At KK levels k ≥ 4, all scalars are dual to irrelevant operators.
Spectrum of the N = 3 solution
The KK spectrum of the N = 3 AdS_4 solution of [35, 36] can be obtained similarly. The residual symmetry group of this solution is SO(4) ≡ SO(3)_R × SO(3)_F. I follow [14] in referring to the R-symmetry SO(3)_R spin j as isospin.
At fixed KK level k, the SO(7) representations in table 1 for fields of each physical spin must be branched out under (4.5). Fields of different physical spin and SO(3)_R isospin j but the same SO(3)_F flavour spin h must be allocated into the same OSp(4|3) supermultiplets. These supermultiplets have been summarised for convenience in appendix B (with the isospin denoted therein as j_0). The allocation proceeds from higher maximum physical spins to lower spins, similar to the N = 2 case discussed in detail in section 4.2. In the present N = 3 case, the process terminates by allocating the individual vectors that did not enter graviton or gravitino multiplets into vector multiplets, as there are no N = 3 hypermultiplets. Further, the N = 3 vector multiplets are necessarily short.
The resulting multiplet structure is the following. At KK level k = 0 (given already in [60]), there is one MGRAV_3 and one MVEC_3, which are respectively a singlet and a (1, 1) of SO(3)_R × SO(3)_F. The former corresponds to the N = 3 pure supergravity multiplet. This contains the massless graviton, the N = 3 massless gravitini and the massless R-symmetry graviphotons, which lie in the adjoint of SO(3)_R and are singlets under SO(3)_F. The MVEC_3 multiplet contains the massless R-symmetry-singlet vectors, in the adjoint of SO(3)_F, that gauge the residual flavour group. There is also a (1/2, 1/2) SGINO_3 and a singlet 2 LGINO_3. At each KK level starting at k ≥ 1, there exists one, and only one, short OSp(4|3) multiplet of each possible type. The list of short multiplets present in the spectrum is given in table 8. For completeness, the table also shows for each multiplet its scaling dimension E_0. This is fixed in terms of the isospin j as reviewed in appendix B. All other multiplets are long. The OSp(4|3) × SO(3)_F structure of the KK spectrum up to level k = 3 is summarised in tables 9-12 below. All multiplets are real. Now that the spectrum has been arranged into OSp(4|3) × SO(3)_F representations, the conformal dimension E_0 of the long multiplets remains to be given. Like in the previous cases, the knowledge of the graviton spectrum [38], together with the above group theory analysis, determines the dimensions (4.6) and (4.7): Short and long graviton mult.: (4.6); Short and long gravitino mult.: (4.7).
The dimension E_0 in (4.6) corresponds, up to straightforward notational changes, to the expression given in appendix B of [37] for the graviton multiplet dimensions [38]. The dimension (4.7) for short and long gravitino multiplets has been deduced from table 15 and successfully cross-checked at level k = 3.
The tables in appendix B are useful to determine the scalar charges and dimensions reported in table 13. All scalars dual to relevant and classically marginal operators of the dual field theory arise at levels k ≤ 4. Each of these KK levels also contains scalars dual to irrelevant operators. All scalars at KK levels k ≥ 5 are dual to irrelevant operators.
Discussion
In this paper, I have derived the KK vector mass matrix of a class of AdS_4 solutions of D = 11 supergravity and massive type IIA supergravity from E_7(7) ExFT [18, 20, 39], following [16]. Then, I have used these and previous partial results [31, 32, 37, 38] to determine the complete supersymmetric KK spectrum of some N = 2 and N = 3 solutions in this class. The N = 2 AdS_4 CPW solution in D = 11 [30] is dual to an N = 2 Chern-Simons CFT with gauge group U(N) × U(N). This CFT is defined on a stack of M2-branes, and arises as the infrared fixed point of a superpotential mass deformation [31] of ABJM [11]. The N = 2 and N = 3 CFTs dual to the AdS_4 solutions [34-36] of massive type IIA arise instead as superconformal fixed points of the D2-brane field theory, three-dimensional N = 8 SU(N) super-Yang-Mills, augmented with Chern-Simons terms [61, 62]. These CFTs have been described in [34]. The present results determine the complete spectrum of single-trace operators with dimension of order one for these CFTs, in the strong coupling regime and at large N. See [69-71] for some results on the large-N spectrum of operators with dimensions that scale with N raised to various powers, for some of these CFTs. The N = 2 CFTs discussed above are intrinsically strongly coupled. In contrast, the N = 3 CFT admits a weakly coupled limit which has been investigated in [72, 73]. The spectrum of operators of the N = 3 CFT that lie in short representations of OSp(4|3) has been computed at weak coupling and large N directly from the field theory [73]. This short spectrum can be expected to be subject to non-renormalisation theorems and, for this reason, both its structure in terms of OSp(4|3) × SO(3)_F representations and the conformal dimensions E_0 of these operators must remain unaltered at strong coupling. Satisfactorily, the bulk calculation of the short spectrum at strong coupling found in table 8 perfectly matches the short spectrum reported at weak coupling in table 15 of [73].
The analysis in this paper determines, in particular, the spectrum of relevant and classically marginal operators of these CFTs. For example, the N = 2 CFT dual to the massive type IIA AdS_4 solution [34] contains SU(N)-adjoint hypermultiplets Z^a, a = 1, 2, 3, in the fundamental of the SU(3) flavour group. The theory has a superpotential, W ∼ ϵ_{abc} tr Z^a [Z^b, Z^c], analogous to that of four-dimensional N = 4 super-Yang-Mills written out in N = 1 language. Tables 2 and 3 (see also table 5 of [62]) show the existence of 6 hypermultiplets tr Z^{(a} Z^{b)}, which arise at KK level k = 0. These are superpotential mass terms, thus relevant in agreement with table 7, that can be added to the N = 2 theory to generate renormalisation group (RG) flow [72]. Being generated by a k = 0 deformation, this flow can be holographically built in gauged supergravity [62], in analogy with similar mass deformation flows of N = 4 super-Yang-Mills [74] and ABJM [30, 75].
Tables 2 and 4 also show the existence of 10 hypermultiplets of the form tr Z^{(a} Z^b Z^{c)} at KK level k = 1 in the spectrum of the N = 2 CFT with AdS_4 dual in massive IIA. These cubic terms can again be added to the superpotential to generate deformations that have direct analogues in four-dimensional N = 4 super-Yang-Mills and ABJM. In the ABJM case, this cubic superpotential deformation is relevant [69, 76] (see also [57]) and generates RG flow. In the present massive type IIA case, the cubic deformation is instead classically marginal according to tables 2, 4 and 7. This is exactly like the analogue N = 1 deformation of N = 4 super-Yang-Mills. As is well known, out of the classically marginal N = 1 deformations of N = 4 super-Yang-Mills, only the so-called β- and cubic deformations are exactly marginal. It would be interesting to determine the exactly marginal deformations in the N = 2 massive IIA case, and engineer their gravity duals following [77, 78]. The exactly marginal deformations of the N = 3 theory that preserve N = 2 have been determined in [73] at weak coupling. These are expected to be non-renormalised, and should thus be also included in table 13 above.
On a different note, it was discussed in [37, 79] that the KK graviton mass matrix for the D = 11 and massive type IIA AdS_4 solutions that uplift from SO(8) and ISO(7) supergravity displays some universality behaviour. Specifically, the graviton traces match for solutions in both gaugings with the same residual (super)symmetry, at fixed SO(8) KK level n and combined SO(7) KK levels k = 0, 1, ..., n (in order to trace over the same number of states, through [n, 0, 0, 0] → ⊕_{k=0}^{n} [k, 0, 0]). I have checked that the same holds for the KK vector mass matrix (2.15) with (2.13), (2.14). However, the traces now involve the unphysical states that are eaten by massive gravitons, discussed in section 2.2. Thus, these mass matrix traces carry no physical significance. Matchings still occur for some solutions when tracing over physical states only. For example, the vector mass matrix traces taken over physical states for the SO(7)_v and SO(7)_c AdS_4 solutions of D = 11 supergravity match. Also, while they do not share the same symmetry, there is a similar match when tracing over physical KK vector states for the N = 0 G_2 and N = 3 SO(4) AdS_4 solutions in massive IIA. These solutions have the same cosmological constant [44], and the KK graviton traces also match [37].
The SO(8) and ISO(7) gaugings considered in this paper also have interesting N = 1, or even non-supersymmetric, AdS_4 solutions. In these cases with low or no supersymmetry, the KK scalar spectra are not implied by the KK graviton and vector spectra, and will need to be computed independently. The scalar spectrum of some D = 11 and type IIB AdS solutions has already been computed using ExFT techniques in [16, 80].
As for the tensors (T_{\bar M})_Λ{}^Σ = (T_{\bar{AB}})_Λ{}^Σ, (T_{\bar{AB}})_Λ{}^Σ ≡ 0, depending on how they are defined they encode the SO(8) or SO(7) generators in the infinite-dimensional, reducible representations (2.7). As discussed in section 2.2, the mass matrix (2.15) is block diagonal, and can be diagonalised KK level by KK level. This implies that each individual block can be treated independently. One can then focus on the generators in just the symmetric traceless representations of SO(8) and SO(7). More concretely, breaking out the Λ, Σ indices as in (2.6), one has [26]. For D = 4 N = 8 ISO(7) supergravity, the latest classification of critical points can be found in [27], although the latter should admit improvements from the machine learning methods employed in [26].
For definiteness, I have focused on the D = 11 solutions that preserve at least an SU(3) subgroup of SO(8), following the conventions of [83]. The D = 4 critical points in this sector were classified in [58], and their D = 11 uplift is also known [10, 30, 84-87]. The entire KK spectrum of the N = 8 SO(8)-invariant solution [10] has long been known [7-9], and my results reproduce their KK vector spectrum. For the N = 2 SU(3) × U(1) solution [30, 58], the spectrum for all KK fields at levels n = 0 [59] and n = 1 [16] is known, along with the entire KK graviton spectrum [32]. My results for the KK vectors again agree with [16, 59] and extend them to higher KK levels. The KK graviton spectra of the other solutions have been computed in [79], and the KK vector spectra reported below are new. The results are summarised for levels n = 0, 1, 2 in table 14. In the table, the mass M^2 eigenvalues have been normalised to the corresponding AdS_4 radius squared, L^2 = -6/V, where V < 0 is the cosmological constant of that point. The eigenvalues are given as (M^2 L^2)^{(p)}, where p is a positive integer that denotes the multiplicity. Recall that the scaling dimension ∆ of a vector of mass M^2 L^2 is given by

$$M^2 L^2 = (\Delta - 1)(\Delta - 2)\,, \qquad \text{i.e.} \quad \Delta = \tfrac{1}{2}\Big(3 + \sqrt{1 + 4\,M^2 L^2}\Big)\,.$$

This formula has been used throughout to convert the KK vector mass eigenvalues to the conformal dimensions reported in the main text. For completeness, recall that for gravitons and scalars the analogue relation is

$$M^2 L^2 = \Delta(\Delta - 3)\,, \qquad \text{i.e.} \quad \Delta = \tfrac{1}{2}\Big(3 + \sqrt{9 + 4\,M^2 L^2}\Big)$$

(a quick numerical check is given after this paragraph). Turning to massive IIA, I have again focused for concreteness on the solutions that preserve the SU(3) subgroup of SO(7). These solutions were classified as critical points of D = 4 N = 8 ISO(7) supergravity in [44]. The latter reference also contains the spectrum for all bosonic fields at KK level k = 0. Due to its particular interest, I have also looked at the N = 3 solution with SO(4) residual symmetry, which lies outside of this class. This solution was first found as a critical point of the D = 4 supergravity in [60], where the k = 0 KK spectrum was also determined. The type IIA uplift of all these solutions is known [34-36, 67]. The KK spectrum of gravitons at all levels is also known [37, 38]. The present results reproduce the vector spectrum at the k = 0 level and extend it to higher KK levels. The results for levels k = 0, 1, 2 are summarised in table 15.
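As a quick sanity check of the mass-dimension relations above (the standard AdS_4/CFT_3 dictionary, under the usual unit conventions): a massless vector should be dual to a conserved flavour current of dimension 2, and a massless graviton to the stress tensor of dimension 3. Indeed,

$$M^2 L^2 = 0:\qquad \Delta_{s=1} = \tfrac{1}{2}\big(3 + \sqrt{1}\big) = 2\,, \qquad \Delta_{s=2} = \tfrac{1}{2}\big(3 + \sqrt{9}\big) = 3\,.$$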
The state content of the supermultiplets of OSp(4|2) has been conveniently tabulated in appendix A of [31]. These tables are very useful to allocate the spectrum of the N = 2 AdS_4 solution discussed in section 4.2 into OSp(4|2) supermultiplets. For OSp(4|3), similar tables do not seem to be available in the literature: reference [14] tabulates the field content for representations of integer isospin only. In the spectrum of the N = 3 AdS_4 solution of section 4.3, multiplets of both integer and half-integer isospin arise. In this appendix, the state content of the OSp(4|3) representations that appear in the spectrum of this solution is presented for convenience in tables 16-37 below. The unitary representations of OSp(4|3) are characterised by three quantum numbers: the Dynkin labels of the superconformal primary under the bosonic subalgebra SO(3, 2) × SO(3)_R. These are the SO(3, 2) spin s_0 and energy E_0, and the Dynkin label j_0 (usually referred to as isospin) of the R-symmetry group SO(3)_R. In the main text, I have denoted the isospin simply by j. Alternatively, the superconformal primary spin s_0 can be traded for the maximum SO(3, 2) spin s_max present in the multiplet. This is the convention that I will use, thereby labelling OSp(4|3) multiplets with (s_max, E_0, j_0). Using this convention, the multiplets can be given names according to the value of s_max: one can thus speak of graviton, gravitino, or vector multiplets, if s_max = 2, s_max = 3/2, or s_max = 1. There are no hypermultiplets (which would have s_max = 1/2). As for the range of the quantum numbers, in supergravity applications one only needs to consider the above three values of s_max. I use conventions in which the isospin is a non-negative half-integer: j_0 = 0, 1/2, 1, 3/2, 2, 5/2, 3, ..., so that the isospin-j_0 representation of SO(3)_R is (2j_0 + 1)-dimensional. Finally, E_0 is a real number subject to the unitarity bound E_0 ≥ j_0 + 3/2 if s_max = 2, or E_0 ≥ j_0 + 1 if s_max = 3/2, or to the equality E_0 = j_0 if s_max = 1. In the first two cases, if the bound is saturated the multiplets are short (or massless), and long otherwise. Vector multiplets are always short (or massless).
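Collecting the bounds just stated into a single display (purely a summary of the labelling above):

$$s_{\max}=2:\ E_0 \ge j_0 + \tfrac{3}{2}\,;\qquad s_{\max}=\tfrac{3}{2}:\ E_0 \ge j_0 + 1\,;\qquad s_{\max}=1:\ E_0 = j_0\,,$$

with saturation of the first two bounds giving the short (or massless) multiplets.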
A massless gravitino multiplet exists, but it cannot arise in supergravity spectra as its presence would indicate an enhancement N > 3 of supersymmetry. A subindex 3 has been attached to the above acronyms in order to emphasise that they are N = 3. | 2023-01-21T15:08:08.880Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "fed4998683a2dc9a17d8b0e8dbaba8a3c9b4377e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP04(2021)283.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "fed4998683a2dc9a17d8b0e8dbaba8a3c9b4377e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
264953874 | pes2o/s2orc | v3-fos-license | Echocardiographic predictors of outcomes in hypertrophic cardiomyopathy
The use of echocardiography, a straightforward and widely available technique, allows for a comprehensive assessment of the patient with hypertrophic cardiomyopathy.
Introduction
Hypertrophic cardiomyopathy (HCM) is a common condition, characterized by a heterogeneous spectrum of phenotypes and clinical trajectories, which can vary significantly even within the same family. While the disease may have a favorable course, heart failure (HF)-related complications are common, and potentially lethal arrhythmias may occur [1]. The identification of high-risk subsets remains an important challenge, which goes hand-in-hand with the daily efforts to provide answers to patients on their state of health and risk profile. These clinical needs have inspired a large body of studies based on imaging techniques, first and foremost echocardiography, which provides diagnostic and prognostic information and allows the examination of HCM patients both at rest and during provocation. Based on over 3 decades of intense research, this review aims to provide an overview of well-established as well as innovative echocardiographic predictors of outcome in patients with HCM (Table 1).
Resting echocardiography
By allowing the evaluation of chamber dimensions and function, and of the presence of left ventricular outflow tract (LVOT) obstruction or valvular disease, transthoracic echocardiography provides crucial data in the assessment of risk for death or sudden cardiac death (SCD), HF, atrial fibrillation (AF), and stroke. The degree of hypertrophy, left ventricular (LV) diastolic and systolic function, left atrial (LA) size and function, and LVOT obstruction have all been identified as major echocardiographic predictors.
Degree of hypertrophy
Over the years, several studies have been conducted to assess the influence of hypertrophy on the disease course, with the earliest evidence focusing on the prediction of SCD. Two important studies [2,3] conducted at the beginning of this millennium helped to pinpoint a maximum wall thickness (MWT) ≥ 30 mm as a major risk factor for SCD (Figure 1). Spirito and colleagues [4] first reported a proportional increase in SCD risk with the magnitude of myocardial wall thickness. Specifically, over a mean follow-up period of 6.5 years, the risk doubled from each subgroup to the next, starting from LV hypertrophy (LVH) ≥ 15 mm [4]. Young adults with MWT ≥ 30 mm appeared to be at high risk, even without outflow obstruction under basal conditions and with few or no symptoms. At the same time, these data reassured clinicians about the management of the HCM population with mild hypertrophy (MWT < 19 mm), which represents about 40% of all cases and for which the rate of sudden death was close to zero 10 years after the initial evaluation and less than 3% at 20 years [4]. In parallel, Elliott et al. [5] confirmed the positive association between increased MWT and higher rates of SCD or implantable cardioverter-defibrillator (ICD) discharge or death from any cause, but emphasized the importance of focusing on a set of risk indicators rather than a single risk factor. This led to the development of two different multiparametric approaches, endorsed by the European Society of Cardiology and the American Heart Association (AHA), to assess the risk of SCD [2,3].
Beyond MWT, these include other echocardiographic indicators, such as LA size, resting LVOT gradient (LVOTG), LV ejection fraction (LVEF) < 50%, and LV apical aneurysm [3,6]. Unlike for MWT, there are no data regarding the relationship between LV mass or hypertrophy distribution and arrhythmic risk [4]. Notably, in children, an inverted U-shaped relationship was found between LVH and predicted SCD risk, indicating that MWT should not be used as a main criterion to guide ICD implantation [7]. While the direct association between LVH and SCD is well demonstrated in adults, LVH does not seem to be directly associated with symptoms of worsening HF [8]. Moreover, distinguishing between hypertrophy patterns (asymmetric, concentric, and apical) has added little to the prediction of outcomes [9], with one distinctive exception: "mixed" apical HCM [10]. The latter, characterized by simultaneous apical and septal hypertrophy, shows a higher risk of arrhythmias, HF, and SCD [11], whether an apical aneurysm is present or not. Of note, conventional echocardiographic techniques do not allow accurate measurement of LV mass in HCM, a parameter shown to be a strong prognostic indicator in cardiac magnetic resonance (CMR) studies [12]. However, 3D echocardiography might overcome this limitation in the future [13,14].
LA size and function
LA enlargement is a common finding in HCM and its aetiology is multifactorial, reflecting a mix of primary atrial myopathy, elevated LV filling pressures, and mitral regurgitation (MR) (Figure 2). It has also been proposed that an intrinsic atrial myopathy may result from abnormalities in sarcomeric genes [15] or in genes involved in collagen formation and the renin-angiotensin-aldosterone pathway [16]. Alternatively, higher filling pressures derived from MR, LVOT obstruction, LV diastolic dysfunction, and myocardial fibrosis have been identified as causes of increased LA size [17]. In turn, patients with HCM and LA enlargement not only appear to show a 4- to 6-fold greater likelihood of developing AF compared to the general population, but also higher rates of all-cause mortality and disabling symptoms [18]. Notably, LA diameter, volume, and strain are all independently associated with new-onset AF [19]. Thus, even in individuals with LA diameter < 45 mm, LA volume and strain enhance the prediction of new-onset AF, and both measures show higher sensitivity, negative predictive value, and positive predictive value than LA diameter [19].
The prediction of outcomes was tested in a nationwide Italian registry of 1,491 HCM patients, showing that each 5 mm increase in LA diameter was associated with a 20% increase in all-cause mortality, and identifying an LA diameter cut-off of > 48 mm as an independent predictor of all-cause mortality and of cardiovascular and HF death [20]. LA volume has also proven to be an independent predictor of adverse cardiovascular outcomes in general [20-22].
Finally, a correlation exists between LA characteristics and arrhythmic events. In fact, both LA size and function were associated with SCD, ventricular arrhythmias, and appropriate ICD discharge in independent studies [6,23,24]. In particular, a reduction of LA reservoir function (LA reservoir strain < 17%) displayed a significant additive predictive value on top of the HCM SCD risk score [24].
LV systolic and diastolic function
In the absence of LVOT obstruction, drug-resistant HF symptoms are usually caused by diastolic dysfunction, which may be challenging to treat pharmacologically and may occasionally require heart transplantation. Subtended by a variety of mechanisms including delayed LV relaxation, diffuse myocardial fibrosis, and abnormal calcium homeostasis [25], severe diastolic dysfunction is a predictor of adverse outcome in HCM [21,26-28]. Tissue Doppler imaging (TDI) has allowed the identification of patients at higher risk not only of developing HF [28] but also of experiencing death, cardiac arrest, sustained ventricular tachycardia (VT), or ICD discharge [26]. Moreover, TDI is useful in the prediction of adverse cardiovascular events even in patients who are asymptomatic or mildly symptomatic [27]. Of note, TDI has also been suggested to have a role in detecting subtle alterations in genotype-positive, phenotype-negative individuals [29].
Systolic dysfunction with LVEF below 50% identifies the so-called end-stage phase of HCM, characterized by severe and diffuse LV fibrosis [30]. However, abnormalities in regional systolic function can be identified at earlier stages, in which LVEF is preserved but fibrosis starts to accumulate, if alternative imaging modalities are used. Reduced systolic strain assessed by speckle tracking echocardiography identifies patients with more severe disease manifestations [31,32] and an increased risk of major cardiac events [23,33]. Specifically, in a study by Haland et al. [32], mechanical dispersion (calculated as the standard deviation of the time from the beginning of the QRS complex on the electrocardiogram to peak longitudinal strain in 16 LV segments, as sketched below) was a strong independent risk predictor of ventricular arrhythmias and was related to the extent of fibrosis at CMR, improving risk stratification when added to the conventional SCD score.
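Since mechanical dispersion reduces to a simple statistic once the per-segment times to peak strain are available, it can be illustrated in a few lines. The following minimal Python sketch uses hypothetical input values in milliseconds; only the 16-segment model and the standard-deviation definition come from the study cited above.

```python
import statistics

def mechanical_dispersion(times_to_peak_ms):
    """Sample standard deviation of time from QRS onset to peak
    longitudinal strain across the 16 LV segments (ms)."""
    if len(times_to_peak_ms) != 16:
        raise ValueError("expected one value per LV segment (16-segment model)")
    return statistics.stdev(times_to_peak_ms)

# Hypothetical example: times to peak strain (ms) for 16 segments
segment_times = [410, 395, 430, 460, 405, 420, 445, 390,
                 415, 470, 400, 425, 455, 435, 410, 440]
print(f"Mechanical dispersion: {mechanical_dispersion(segment_times):.1f} ms")
```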
Resting LVOT obstruction
Resting LVOT obstruction is present in about one-third of HCM patients [34] and is an independent predictor of all-cause or cardiovascular death [34,35], HF hospitalization, stroke [35], and SCD or appropriate ICD discharge (Figure 3) [6,35]. However, the risk does not increase further as the gradient rises above the threshold of 30 mmHg [35]. In addition to the obstruction, the presence and progression of MR during follow-up (more than mild), together with mitral annular calcification, are significant contributors to poor outcomes in patients with HCM [36]. In children, however, recent evidence shows an opposite pattern, as patients with higher gradients seem to be at lower risk of SCD [37,38].
Specific features (apical aneurysm, RV involvement)
While current risk-scoring algorithms successfully identify high-risk individuals, especially in the late HCM stages, their accuracy is limited in patients without overt structural remodeling or advanced symptoms. This led to the investigation of other potential risk factors detectable during baseline echocardiography, such as apical aneurysm (associated with risk of sustained VT) or RV involvement. The presence of apical aneurysms was incorporated into the stratification of SCD risk in the 2020 AHA/American College of Cardiology (ACC) guidelines on HCM. In the evaluation of this morphological alteration, contrast echocardiography has shown sensitivity equal to that of CMR (Figure 4) [39]. Preliminary data, reported in a recent retrospective study by Chang et al. [40], identified tricuspid annular plane systolic excursion and RV free wall strain as independent predictors of hospitalization for HF, sustained VT, or all-cause death.
Stress echocardiographic predictors of outcomes in HCM
Stress echocardiography (SE) has gained an important role in the clinical evaluation of patients with HCM. Exercise is the most widely used modality for stress testing, although pharmacological testing provides invaluable information as well. The latest HCM guidelines recommend exercise SE for patients without LVOT obstruction at rest, with class 1 and 2 indications to rule out exercise-induced gradients [2,3]. The recommendation is stronger (class 1) for symptomatic patients and weaker for asymptomatic individuals [class 2a (US) and 2b (Europe)]. These guidelines consider exercise SE mainly as a tool to uncover and quantify exercise LVOTG, which may be related to symptoms but has scarce and contradictory prognostic relevance. In contrast, emerging data demonstrate the valuable functional and prognostic role of other stress echocardiographic parameters, such as regional WMAs, the E/e' ratio, pulmonary pressures, and CFVR. Since exercise echo is operator-dependent, specific training for this technique, including expertise with HCM patients, is required to accrue meaningful data.
All echocardiographic findings must be considered in conjunction with the other pieces of information that can be derived from exercise testing, such as blood pressure response, heart rate response, and exercise capacity [41-44]. Although technically challenging to perform in the same session, the addition of cardiopulmonary testing may also add prognostically important information [45-47].
Inducible LVOT obstruction
An increased LVOTG during exercise may account for effort-related symptoms in patients with HCM, whereas the relation between provoked LVOTG and clinical outcomes remains uncertain. Inducible LVOT obstruction was associated with worse outcomes in 5 studies [48-52], whereas it showed no association with prognosis in 6 others [41,42,53-56]. In the most recent one, Lu and coworkers [55] examined 705 HCM patients (91% with treadmill SE) and, surprisingly, found that patients with obstruction only on provocation had the lowest event rates and the best event-free survival for the composite cardiovascular outcome compared to nonobstructive and rest obstructive patients. The same group also revealed a bimodal distribution of events across exercise LVOTG values. In detail, latent obstructive patients can be subdivided into a benign latent subgroup (rest LVOTG < 30 mmHg and provoked gradient 30-89 mmHg) and an adverse latent subgroup (rest LVOTG < 30 mmHg and provoked gradient ≥ 90 mmHg). The benign latent HCM subgroup had the best event-free survival among the four categories, followed by the adverse latent, non-obstructive, and rest obstructive groups (a minimal coding of this classification is sketched below) [57]. This pattern is reminiscent of that observed in children based on resting gradients only, and suggests that, in specific scenarios, the capacity to generate gradients with effort is a reflection of preserved contractile reserve, rather than a marker of risk [38].
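For clarity, the four hemodynamic categories described above can be expressed as a simple decision rule on the resting and provoked gradients. The Python sketch below encodes the thresholds reported by Lu and coworkers; the function name and example values are illustrative, not part of the original study.

```python
def lvot_category(rest_lvotg_mmhg: float, provoked_lvotg_mmhg: float) -> str:
    """Classify LVOT obstruction status from rest and provoked gradients (mmHg),
    using the 30 and 90 mmHg thresholds reported by Lu et al."""
    if rest_lvotg_mmhg >= 30:
        return "rest obstructive"
    if provoked_lvotg_mmhg >= 90:
        return "adverse latent"
    if provoked_lvotg_mmhg >= 30:
        return "benign latent"
    return "non-obstructive"

# Illustrative examples
print(lvot_category(20, 55))   # benign latent
print(lvot_category(20, 110))  # adverse latent
print(lvot_category(45, 120))  # rest obstructive
```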
Evidence of ischemia and CFVR
Regional LV WMAs during stress are relatively infrequent findings in patients with HCM, occurring in about 6-13% of patients. They have a complex and multifactorial pathophysiology and are usually not related to epicardial coronary artery disease [53,54,56,58,59]. They are more frequently detected during physical than pharmacological stress and carry a substantial risk of future adverse events [53,56,60,61]. Patients with peak exercise WMAs are characterized by poorer exercise capacity, lower rest and peak stress ejection fraction, lower rest and post-exercise LVOTG, and more extensive myocardial fibrosis on cardiac magnetic resonance imaging [53,58]. In 2015, Peteiro et al. [53] demonstrated that exercise WMAs have an independent prognostic value in predicting follow-up events and that their prognostic value is additive to the presence of extensive myocardial fibrosis.
Another ischemia-related stress parameter of the utmost importance is CFVR of the left anterior descending (LAD) coronary artery during vasodilator SE. CFVR during dipyridamole or adenosine SE is thought to be a marker of microvascular function in HCM, a complex and multifactorial phenomenon [62]. CFVR ≤ 2 proved to be a powerful predictor of unfavorable outcomes in several studies [61,63,64].
Cortigiani and colleagues [61] prospectively evaluated 68 HCM patients undergoing dipyridamole SE with CFVR evaluation. At a median follow-up of 22 months, decreased CFVR was an independent predictor of future adverse events including death, non-fatal myocardial infarction, ICD implantation, hospitalization for HF or unstable angina, syncope, and AF. In their study, reduced CFVR was related to symptoms, greater LV wall thickness, and LVOT obstruction. However, its prognostic value was independent of all these parameters. Furthermore, asymptomatic individuals with decreased CFVR had a 10-fold greater risk of events compared to asymptomatic patients with preserved CFVR. Subsequent studies with larger sample sizes and longer follow-ups confirmed the prognostic value of vasodilator CFVR in HCM [54,64]. Tesic et al. [64] subjected 150 HCM patients to CFVR evaluation with adenosine. During a median follow-up of > 7 years, 52% of patients with reduced CFVR experienced cardiac events compared to 9% of patients with preserved CFVR (P < 0.001). Patients with impaired CFVR were more often female, had a more advanced functional class, a greater need for diuretic therapy, greater LV wall thickness, larger LA, lower septal e', higher E/e', higher RV systolic pressures, higher baseline heart rate, and higher resting coronary flow velocity compared to patients with preserved CFVR. At multivariable Cox analysis, CFVR ≤ 2 was the only parameter that independently predicted poor cardiac outcomes, with an odds ratio of 6.5 (confidence interval 2.8-16.3).
Ejection fraction, MR, E/e', and pulmonary systolic arterial pressure during exercise
During the last few years, many studies have brought attention to other SE imaging metrics that may help to better risk stratify patients with HCM. Peteiro and coworkers [56] examined the feasibility and prognostic value of comprehensive echocardiography at peak and early post-exercise treadmill SE. They found that several variables offered incremental prognostic value, namely peak exercise LVEF, peak stress WMAs, and post-exercise E/e' ≥ 14. Their results showed that the worst outcome corresponded to patients with WMAs at peak exercise (fixed or new) and raised post-exercise E/e'. The annualized hard event rate was 5.9% for patients with elevated post-exercise E/e' and peak WMAs, compared to 4.0% for patients with elevated post-exercise E/e' and normal wall motion. Notably, they also showed that patients with exercise WMAs and elevated E/e' (≥ 14) during treadmill exercise had an almost 5-fold higher annual hard event rate compared to patients with exercise WMAs but low E/e' (5.9% vs. 1.2%). This is an important message since, even in the presence of WMAs during stress, the operator should not refrain from searching for additional predictors such as E/e' (Figure 5). In the same study, a blunted ejection fraction (≤ 65%) and MR (moderate or severe) on exertion also identified HCM patients at higher risk [56]. It is important to evaluate exercise-induced MR, which is a dynamic phenomenon and is frequently associated with LVOT obstruction and systolic anterior motion in HCM patients [52]. Data on its prognostic value are limited, since only two small studies were available before Peteiro's work in 2021 [56]. Exercise-induced MR was associated with adverse cardiovascular events in both studies [50,52], although in one of them [52] the authors questioned its real independent prognostic value. They stated that it may define a group of patients with either severe LVOT obstruction or morphologic abnormalities who have a higher risk of cardiovascular events. In addition, similarly to LVOT obstruction, the different methods and definitions used for the evaluation of exercise MR limit our understanding of its functional and prognostic role in HCM. During exercise, approximately 1 out of 5 HCM patients with normal resting systolic pulmonary artery pressure (SPAP) develop PHT which, according to recent studies, may play a role in risk stratification (Figure 6) [65,66]. Hamatani et al. [65] found that HCM patients with PHT during semi-supine exercise (stress SPAP ≥ 60 mmHg) have a higher cumulative incidence of HCM-related events. In a recent study, Re et al. [66] used a lower cut-off (stress SPAP > 40 mmHg) and showed that exercise PHT is linked to a higher risk of a composite endpoint comprising death, heart transplantation, aborted SCD, nonfatal myocardial infarction, stroke, or HF hospitalization.
Future perspectives
The ABCD protocol is a new and promising SE methodology that provides comprehensive information on the different vulnerabilities of HCM patients. This approach makes it possible to uncover concealed myocardial ischemia, pulmonary congestion due to diastolic dysfunction, preload reserve and contractile reserve impairment, coronary microcirculatory dysfunction, and cardiac autonomic dysfunction easily and systematically [67]. The same approach proved to be feasible and effective in refining phenotypes and risk classification in patients with chronic coronary syndromes [68]. Large-scale prognostic validation of the protocol and other additive steps in HCM has started recently as a specific subproject of the SE 2030 study endorsed by the Italian Society of Echocardiography [67].
Conclusion
The precise prognostic evaluation of HCM patients requires a detailed echocardiographic structural and functional assessment. Several echocardiographic variables contribute to patients' risk stratification and have been incorporated into clinical algorithms. Resting parameters with a well-established prognostic value are the degree of LVH, LV systolic and diastolic function, LA size and function, and LVOT obstruction. Some of these may worsen during exercise, such as diastolic dysfunction and MR, thus providing further information on patient outcomes. Other echocardiographic parameters, such as WMAs or PHT, have no prognostic relevance at rest, but their presence on effort provides meaningful information on future adverse events. Impaired coronary microvascular function is a powerful prognostic indicator in patients with HCM and can only be assessed echocardiographically by CFVR measurement in the LAD during dipyridamole or adenosine stress.
Figure 1. A 22-year-old HCM patient with extreme LVH. Parasternal long-axis (panel A), parasternal short-axis (panel B), and apical four-chamber (panel C) end-diastolic still-frame images demonstrate asymmetrical hypertrophy with a maximal LV wall thickness of 37 mm (green lines). The interventricular septum (IVS) shows a typical reversed curvature septal morphology (panels A and B). Ao: aorta; RA: right atrial chamber
Figure 2. Severe LA dilation in a 29-year-old HCM patient with a pathogenic intronic variant in the cardiac myosin binding protein C gene. LA anteroposterior diameter in the parasternal short-axis view is 57 mm (panel A); indexed LA volume after biplane measurement in 4-chamber (panel B) and 2-chamber (panel C) views is 96 mL/m2
Figure 3. A 66-year-old man with symptomatic obstructive hypertrophic cardiomyopathy and a pathogenic variant in the cardiac troponin I gene. End-systolic frame in apical 3-chamber view showing LVOT turbulence and functional MR (panel A, upper row, white arrow). Continuous-wave Doppler recording of the LVOT in apical 5-chamber view revealing a late-peaking, concave-to-the-left Doppler spectrum with midsystolic flow acceleration peaking at 4.9 m/s, yielding a peak dynamic LVOTG of 97 mmHg (panel A, lower row). Tissue velocity imaging shows reduced septal and lateral e' velocities indicating severe diastolic dysfunction (panel B)
Figure 4. A 60-year-old woman with midventricular hypertrophy and apical aneurysm (asterisks). Baseline echocardiography was ineffective in defining the contours of the aneurysm (panel A). Echocardiography with a contrast agent provided better bedside visualization of the apical aneurysm (panel B, white arrows)
Figure 5. A 66-year-old female patient with nonobstructive HCM, dyspnea and chest pain on effort, and negative coronary angiography. Exercise SE revealed worsening diastolic dysfunction, indicated by an increased E/e' ratio (panel A) and a falling LV end-diastolic volume (EDV) at peak exercise (panel B). bpm: heart beats per minute
Figure 6. Increasing MR (panel A) is associated with worsening pulmonary hypertension in a 59-year-old woman with nonobstructive HCM during exercise SE. Estimated SPAP rose from 38 mmHg at rest to 55 mmHg at a low peak workload (panel B). TR: tricuspid regurgitation
Abbreviations
AF: atrial fibrillation; CMR: cardiac magnetic resonance; E/e': early LV inflow velocity to early tissue Doppler annulus velocity
Table 1. Rest and stress echocardiographic predictors of outcomes in patients with HCM | 2023-11-03T15:09:34.953Z | 2023-10-31T00:00:00.000 | {
"year": 2023,
"sha1": "a39faa54ec3c84b88c50ff3264b9e79134901e5b",
"oa_license": "CCBY",
"oa_url": "https://www.explorationpub.com/uploads/Article/A101210/101210.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d44c9037bb0bf3da27091b23c34ec86eb6f38e95",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
218488953 | pes2o/s2orc | v3-fos-license | Ligand and Solvent Selection for Enhanced Separation of Palladium Catalysts by Organic Solvent Nanofiltration
Organic solvent nanofiltration (OSN) has been widely applied to separate and recycle homogeneous catalysts, but the influence of ligand and solvent selection on the performance of OSN is not fully understood. Here we prepared four palladium (Pd) catalysts by combining palladium acetate with four ligands of different molecular weights. Morphological and functional properties of the Pd catalysts were characterized by TEM, FTIR, and NMR. OSN experiments were conducted in a lab-scale dead-end filtration rig. Two commercial OSN membranes, PuraMem S600 (PS600) and DuraMem 500 (D500), were used to separate the Pd catalysts from the organic solvents (toluene, isopropanol, butanol/water, and methanol) that each membrane is specified to be compatible with. For both membranes, the pure solvent permeance was positively related to the degree of membrane swelling induced by the solvent. The solvent permeance decreased significantly after the addition of a solute, as a result of membrane fouling and concentration polarization. For the PS600 membrane, the Pd rejection in any solvent was closely correlated with the molecular weight of the ligand, which agrees with the pore-flow model. For the D500 membrane, on the other hand, there was no conclusive link between the Pd rejection and the type of ligand. One-way analysis of variance (ANOVA) confirmed that the separation processes in the PS600 and D500 membranes were controlled by different transport models. The findings shed light on the selection of ligand and solvent in OSN in order to enhance the separation of homogeneous catalysts.
INTRODUCTION
Homogeneous catalysis by transition metal complexes offers many advantages over heterogeneous catalysis, such as high catalytic activity, high selectivity, and negligible mass transfer limitations (De Smet et al., 2001; Van Leeuwen, 2004). Such superior performance is due to the ability of transition metals to complex with a wide range of ligands such as dendrimers, polymers and polyhedral oligomeric silsesquioxanes (POSS) (Erkey, 2011). However, the application of homogeneous catalysis in the chemical industry is still scarce compared to its heterogeneous counterpart. One major reason is the difficulty in separating and recycling the catalyst from the reaction products. Conventional downstream separation processes such as distillation, precipitation, and extraction require intensive energy input, deactivate the expensive catalyst and generate metal-rich waste streams, which are unfavorable from both economic and environmental perspectives (Cole-Hamilton, 2003; Vural Gürsel et al., 2015).
Organic solvent nanofiltration (OSN) has been developed as an attractive approach for the separation and recycling of homogeneous catalysts over the past decades (Janssen et al., 2011). OSN uses solvent-resistant membranes to separate molecules based on size exclusion, charge interaction, and solute-membrane affinity (Shen et al., 2018). Since the catalyst particles are usually larger than the products, OSN allows the catalyst to be retained in the retentate. OSN can be operated without any additives, phase transition or thermal input, allowing for a direct recycling of the active catalyst from the reaction mixture (Dreimann et al., 2016). OSN can combine a continuous catalytic process with catalyst recovery, achieving significant economic efficiency and process intensification (Marchetti et al., 2014). A typical example of OSN applications in homogeneous catalysis is the recovery and reuse of the high-value palladium (Pd) catalyst in carbon-carbon cross-coupling reactions (e.g., Heck, Sonogashira, and Suzuki reactions) (Nair et al., 2002; Datta et al., 2003; Pink et al., 2008; Tsoukala et al., 2012; Peeva et al., 2013; Ormerod et al., 2016).
However, in some cases, the catalyst itself is about the same size as the product, making it difficult to separate by OSN. The ligands, which are used to stabilize the catalyst, have the added benefit of changing the size and shape of the catalyst (Janssen et al., 2011; Vural Gürsel et al., 2015). Increased retention of a catalyst by adding ligands has been reported for numerous homogeneous reactions (Brinkmann et al., 1999; Dijkstra et al., 2002; Fang et al., 2011; Kajetanowicz et al., 2013) and the solvent-solute-membrane interactions are found to play a fundamental role in determining the flux and catalyst rejection characteristics (Ormerod et al., 2013, 2016). Nevertheless, the influence of ligand properties (such as structure and molecular weight) on the transport mechanisms of homogeneous catalysts in OSN is still not fully understood.
This study aims to fill this knowledge gap by attaching a homogeneous Pd catalyst to four ligands with different molecular weights and geometries and investigating their separation from four solvents by two OSN membranes. Performance indicators including permeance and solute rejection were measured to determine the effect of ligands and solvents on catalyst rejection and to understand the mechanisms underlying these effects. This work will serve as a benchmark for evaluating OSN performance in dealing with complex homogeneous catalysis systems. The data will help identify the suitable combination of ligand and solvent for use in order to achieve effective catalyst separation and recycling. It will also shed some light on the transfer mechanisms of homogeneous catalysts through OSN membranes at the molecular level.
Materials
Palladium(II) acetate (Pd(OAc)2) was used as the precursor for preparing active Pd catalysts. Four ligands were chosen for use in this study, namely 1,3-Bis(diphenylphosphino)propane (dppp); 1,2-Bis(diphenylphosphino)benzene (dppBz); Tri(o-tolyl)phosphine (P(o-tol)3); and 2-Dicyclohexylphosphino-2′,4′,6′-triisopropylbiphenyl (XPhos). Their structures and molecular weights are listed in Table 1. Each of the ligands has well-documented applications in Pd-catalyzed coupling reactions (Sigma, 2013; Li et al., 2014). All chemicals were purchased from Sigma-Aldrich, UK, and were used without further purification. Four solvents were chosen for use in this study, namely toluene, butanol, isopropanol, and methanol. The physicochemical properties of the solvents are summarized in Table 2. HPLC grade solvents were obtained from Fisher Scientific, UK. Deionized water was produced by an ELGA deionizer from PURELAB Option, USA. A butanol/water mixture (ratio 5:1) was used as a representative of aqueous solvent mixtures. Two commercially available OSN membranes, PuraMem S600 (PS600) and DuraMem 500 (D500), were purchased from Evonik, UK. Specifications of the membranes are summarized in Table 3. The PS600 membrane has a molecular weight cut-off (MWCO) of 600 Da and is compatible with non-polar solvents. The D500 membrane has an MWCO of 500 Da and is compatible with polar solvents and aqueous solvent mixtures. Considering the membrane-solvent compatibility (Tables 2, 3), PS600 was used for OSN experiments in toluene, isopropanol, and methanol, whereas D500 was used for OSN experiments in isopropanol, butanol/water, and methanol.
Characterization
Physicochemical properties of the catalysts were investigated by Fourier transform infrared (FTIR) spectroscopy. 0.08 mmol of Pd(OAc) 2 and 0.16 mmol of a given ligand were dissolved in 40 mL of isopropanol. The mixtures were stirred in a Carousel 12 Plus Reaction Station (Radleys, UK) under a nitrogen atmosphere at room temperature until they were visibly homogeneous. Then the solvents were removed from the mixtures by using a rotary evaporator (Buchi Rotavapor R-215, Switzerland). The resulting powders were stored in sealed glass vials. FTIR spectra of the powders were obtained by a Spectrum 100 spectrometer with a universal ATR sampling accessory (PerkinElmer, USA). Each FTIR spectrum had 32 scans with 4 cm −1 resolution in the region of 650-2,000 cm −1 .
Morphologies of the catalysts were characterized using a JEM-2100Plus transmission electron microscope (TEM) (JEOL, USA). The TEM samples were prepared by dispersing a small amount of the catalyst in ethanol, sonicating for 15 min, and placing one drop of the suspension onto 400 mesh copper grids.
The formation of the Pd-ligand complexes was further determined by nuclear magnetic resonance (NMR) spectroscopy. Standard solutions of the free ligands were prepared by dissolving 12 mg of each ligand in 0.5 mL of deuterated chloroform. Standard solutions of each Pd-ligand complex (0.08 mmol of Pd(OAc)2 and 0.16 mmol of a given ligand) were prepared by mixing them in 5 mL of butanol. Phosphorus-31 (31P) NMR spectra were recorded by an Avance III 500 MHz NMR spectrometer (Bruker, UK) using the solvent suppression technique.

Solvent

The Hansen solubility parameters of the pure solvents were taken from the literature (Hansen, 2007). The overall solubility parameter of the butanol/water mixture (ratio 5:1) was calculated using the lever rule, i.e., as the volume-fraction-weighted average of the pure-component parameters: δmix = φbutanol·δbutanol + φwater·δwater.
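As a quick sanity check on the lever rule, the short Python sketch below combines literature Hansen parameters for the two pure components for a 5:1 volume ratio. The pure-component values (roughly 23.1 MPa^0.5 for butanol and 47.8 MPa^0.5 for water) are assumptions taken from standard tables and are not stated in the paper.

```python
def lever_rule(delta_a, delta_b, ratio_a, ratio_b):
    """Volume-fraction-weighted average of two Hansen solubility parameters."""
    total = ratio_a + ratio_b
    return (ratio_a * delta_a + ratio_b * delta_b) / total

# Assumed literature values (MPa^0.5); 5:1 butanol/water by volume
delta_mix = lever_rule(delta_a=23.1, delta_b=47.8, ratio_a=5, ratio_b=1)
print(f"delta(butanol/water 5:1) ~ {delta_mix:.1f} MPa^0.5")  # ~27.2
```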
The interactions between different solvent-membrane pairings were determined by swelling experiments. The membranes were cut into small pieces (∼2 × 2 cm), weighed for an initial dry mass, and soaked in different pure solvents. The membrane samples were periodically removed from the solvent to measure the wet mass until no further increase in membrane mass was observed. The mass swelling degree (S) was calculated using Equation 1, S = (m_wet − m_dry)/m_dry, where m_dry is the initial dry mass of the membrane and m_wet is the swollen membrane mass. The swelling test was repeated three times for each membrane and the average was calculated (Evonik, 2017).
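The swelling degree in Equation 1 is a one-line computation; the following Python sketch, using hypothetical masses in grams, shows the calculation together with the averaging over the three repeats.

```python
def swelling_degree(m_dry_g, m_wet_g):
    """Equation 1: mass swelling degree S = (m_wet - m_dry) / m_dry."""
    return (m_wet_g - m_dry_g) / m_dry_g

# Hypothetical triplicate (dry, wet) masses for one membrane/solvent pairing
repeats = [(0.102, 0.145), (0.098, 0.141), (0.105, 0.149)]
values = [swelling_degree(d, w) for d, w in repeats]
print(f"S = {sum(values) / len(values):.2f} (mean of {len(values)} repeats)")
```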
OSN Experiments
OSN experiments were conducted in the dead-end mode using a stainless-steel pressure cell (Sterlitech Corporation HP4750, USA). The cell has a membrane active area of 14.6 cm2, a maximum processing volume of 300 mL, and a maximum pressure of 69 bar. The preconditioning procedure was implemented by allowing 200 mL of a pure solvent to permeate through the membrane until a steady solvent flux was achieved. The membrane was preconditioned in a water bath at 25 °C, pressurized with nitrogen at 40 bar, and stirred by a magnetic stirrer at 300 rpm. The permeate was collected in a measuring cylinder and was continuously weighed by an electronic balance (EK-300i, A&D Weighing, USA) placed under the measuring cylinder. The permeance (P) was determined using Equation 2, P = Q/(A·p), where Q is the permeate flow, p is the applied pressure and A is the membrane active area.
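In the same spirit, Equation 2 can be evaluated directly from the logged balance readings. The Python sketch below uses hypothetical numbers and reports permeance in the LMH/bar units used throughout the paper (L m−2 h−1 bar−1).

```python
def permeance_lmh_bar(permeate_volume_l, time_h, area_cm2, pressure_bar):
    """Equation 2: P = Q / (A * p), with Q the permeate volume per unit time.
    Area is converted from cm^2 to m^2 so that P is in L m^-2 h^-1 bar^-1."""
    area_m2 = area_cm2 / 1e4
    flow_l_per_h = permeate_volume_l / time_h
    return flow_l_per_h / (area_m2 * pressure_bar)

# Hypothetical run: 20 mL collected in 30 min on the 14.6 cm^2 cell at 40 bar
print(f"P = {permeance_lmh_bar(0.020, 0.5, 14.6, 40):.2f} LMH/bar")  # ~0.68
```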
Once preconditioned in a solvent, the membrane was used to filter 40 mL of the same solvent containing a given catalyst. The experimental conditions were the same as in the preconditioning procedure. The Pd concentrations in the feed and the permeate were measured by atomic absorption spectrometry (AAS) (PerkinElmer AAnalyst 100, USA). Samples were diluted with 4-methylpentan-2-one until the Pd concentration in the samples fell within the calibration curve drawn from the standard solutions. Each AAS measurement was carried out in triplicate and the average and the standard deviation were calculated afterwards. Experimental data were statistically analyzed by one-way analysis of variance (ANOVA) using the OriginPro 2020 software (OriginLab, USA). The significance level was set at 0.05.
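Rejection is conventionally computed from the feed and permeate concentrations as R = (1 − Cp/Cf) × 100; the paper does not spell out the formula, so this is the standard assumption. The Python sketch below applies it to hypothetical triplicate AAS readings for three ligand groups and then runs the same one-way ANOVA at the 0.05 level using SciPy rather than OriginPro.

```python
from scipy.stats import f_oneway

def rejection_percent(c_feed, c_permeate):
    """Observed rejection R = (1 - Cp/Cf) * 100, from AAS concentrations."""
    return (1 - c_permeate / c_feed) * 100

# Hypothetical triplicate (feed, permeate) Pd concentrations (mg/L) per group
groups = {
    "dppp":      [(10.0, 0.50), (10.1, 0.48), (9.9, 0.52)],
    "dppBz":     [(10.0, 0.05), (10.2, 0.04), (10.1, 0.06)],
    "P(o-tol)3": [(10.0, 0.70), (9.8, 0.74), (10.1, 0.66)],
}
rejections = {k: [rejection_percent(f, p) for f, p in v] for k, v in groups.items()}

f_stat, p_value = f_oneway(*rejections.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, "
      f"{'significant' if p_value < 0.05 else 'not significant'} at alpha = 0.05")
```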
FTIR
The FTIR spectra of the Pd precursor, the ligands, and the Pd-ligand complexes are illustrated in Figure 1A. Pure Pd(OAc)2 has characteristic peaks at 1,600 and 1,400 cm−1 corresponding to the C=O and C−O stretching (Pretsch et al., 2009). The Pd-ligand complexes exhibit characteristic peaks at wavenumbers comparable to those of the pure substances (Daasch and Smith, 1951; Jayamurugan et al., 2009). As for the Pd(OAc)2 + dppp complex, the phosphorus-aryl bond at 1,590 cm−1 becomes more discrete and exhibits a larger peak when mixed with Pd(OAc)2. This can be explained both as overlap with the more intense Pd stretching and as a change in the local electron densities of the phosphorus bond; hence it can be concluded that the Pd and the ligand interact on a molecular scale (Pretsch et al., 2009). As for the Pd(OAc)2 + dppBz complex, the peak of the ligand's phosphorus-aryl bonds became more distributed, suggesting that interactions between the Pd and the ligand have resulted in increased stretching of these aryl bonds. As for the Pd(OAc)2 + P(o-tol)3 complex, transmittance peaks occur at 1,360 and 1,318 cm−1 that are attributed to a phosphorus-oxygen double bond stretch, which confirms the ability of P(o-tol)3 to bond to Pd(OAc)2 in situ (Pretsch et al., 2009). The spectra of the Pd(OAc)2 + XPhos complex again allow for the determination of Pd-ligand bonding. The phosphorus-aryl bond stretches become more widely distributed when XPhos is mixed in solution with Pd(OAc)2, showing significant changes in the electron distribution and thus confirming interactions between XPhos and Pd(OAc)2. Hence, there appears to be sufficient evidence that each ligand utilized in this study bonds to the Pd precursor in situ. Figures 1B-F are the TEM images of the Pd precursor and the Pd-ligand complexes, illustrating how the ligand molecules affect the size and shape of the Pd catalyst. Usually, the ligand binds to the metal catalyst and activates it by changing its oxidation state. The TEM results show that the ligand also affects the crystallinity and the tendency of the Pd catalyst to form clusters and aggregates. The TEM images of the ligands alone can be found in Figure S1. Pure Pd(OAc)2 has the smallest particles, with an average diameter of 2 nm. The Pd(OAc)2 particles tend to be evenly distributed in a solvent. However, when ligated with dppp and dppBz, the particles appear to agglomerate and form large clusters, significantly increasing the size of the catalyst. P(o-tol)3 and XPhos interact with the Pd(OAc)2 particles in a different way than dppp and dppBz. Instead of forming irregular clusters, they form large spherical particles with Pd(OAc)2, which are more than 100 times larger than the pure Pd(OAc)2 particles. Hence, the different ligands result in different sizes and shapes of the Pd-ligand complex, and therefore allow a comparison between their separation performances. Figure 2 reports the 31P NMR spectra for the four free ligands and their palladium complexes. Phosphine ligands in their free form are characterized by a single peak in the low-frequency area of the spectrum between −10 and −50 ppm. For the Pd(OAc)2 + dppp complex, the 31P NMR spectrum presents two additional peaks in the downfield region at 30.66 ppm and 54.86 ppm. The presence of these peaks could be interpreted as an interaction of the free ligand with both the palladium and the solvent.
As reported in the literature, a Pd(0) complex is spontaneously formed from Pd(OAc)2 and a bidentate phosphine such as dppp. dppp is then oxidized to the hemioxide dppp(O), which can appear as a peak at around 50 ppm (Amatore et al., 2001). For the Pd(OAc)2 + dppBz complex, two different peaks at 9.2 and 32.4 ppm are shown in the spectrum, indicating that the two phosphorus atoms, although symmetrical in their free form, experience different magnetic fields as they interact differently with palladium. For the monodentate P(o-tol)3, the free ligand peak at −29.21 ppm is shifted downfield to 57.23 ppm when the Pd(OAc)2 + P(o-tol)3 complex is formed. Another small peak at 11.9 ppm appears in the spectrum, which could be attributed to a conformational change of the ligand in the system. It has been reported that the P−C bonds of P(o-tol)3 can rotate and rearrange in two different conformations, which then affect their interaction with the metal center (Widenhoefer et al., 1996).
XPhos is a monodentate bulky biaryl ligand which has an intense singlet at −13 ppm. This peak is shifted to a broad singlet at 45 ppm in the spectrum of the Pd(OAc)2 + XPhos complex. This resonance can be assigned to the XPhos-ligated Pd(II) species Pd(II)(OAc)2(XPhos) (Wagschal et al., 2019). The NMR results confirm that all four ligands can form Pd-ligand complexes with the Pd precursor in situ.
Swelling
The structure and stability of a membrane can be significantly affected by swelling (Razali et al., 2017). Therefore, membrane swelling in a solvent was studied before the membrane was used for OSN. Figure 3 illustrates the mass swelling degree and the pure solvent permeance of the PS600 and D500 membranes as a function of the Hansen solubility parameters listed in Table 2. For the same type of membrane, the order of pure solvent permeance mirrored the order of mass swelling degree. Specifically, the mass swelling degree of PS600 decreased from 0.43 to 0.30 as the Hansen solubility parameter increased from 18.2 to 29.6 MPa^0.5, while its pure solvent permeance showed a similar decreasing trend from 2.28 to 0.36 LMH/bar. The mass swelling degree and the pure solvent permeance of D500 both decreased as the Hansen solubility parameter increased from 23.6 to 27.3 MPa^0.5 and then increased sharply as the Hansen solubility parameter increased further to 29.6 MPa^0.5. These results suggest a strong correlation between the degree of swelling and the solvent permeance of OSN membranes, which is consistent with previous studies that found swelling caused the formation of larger channels in the polymer matrix and thus increased solvent permeance (Dijkstra et al., 2006; Marchetti et al., 2014; Shen et al., 2018).

FIGURE 2 | 31P NMR spectra of (A) dppp and Pd(OAc)2 + dppp, (B) dppBz and Pd(OAc)2 + dppBz, (C) P(o-tol)3 and Pd(OAc)2 + P(o-tol)3, and (D) XPhos and Pd(OAc)2 + XPhos.

Figures 4A,B plot the solvent permeance of the PS600 and D500 membranes against the different ligands. When comparing to Figure 3, both membranes showed that the pure solvent permeance was greater than the solvent permeance of the Pd catalyst solutions, indicating a negative impact of the solute on the permeance. This is likely caused by fouling of the membrane by the solute particles in conjunction with concentration polarization and an increase in osmotic pressure (Davey et al., 2016). According to the ANOVA results shown in Tables S1, S2, for both membranes the effect of the ligand on the solvent permeance of the Pd catalyst solutions was not statistically significant (P > 0.05). Regardless of the ligand type, the solvent permeance of the Pd catalyst solutions was in the same order as that of the pure solvent permeance. This suggests that, unlike the ligand, the solvent had a significant effect on the permeance. This conclusion was further supported by the ANOVA results (Tables S3, S4), where the P-values were far below 0.05. For the PS600 membrane, toluene had the highest permeance (0.97-1.98 LMH/bar), while isopropanol and methanol had similarly low permeance (0.13-0.22 LMH/bar). For the D500 membrane, by contrast, methanol exhibited the highest permeance (2.96-5.87 LMH/bar), isopropanol had much lower permeance (0.41-0.69 LMH/bar) and the butanol/water mixture had the lowest permeance (0.11-0.16 LMH/bar). In summary, both isopropanol and methanol appeared unsuitable for use with the PS600 membrane due to the low permeance, while the butanol/water mixture was not suitable for the D500 membrane.
Pd Rejection
Figures 4C,D display the Pd rejection of the PS600 and D500 membranes as a function of the ligands. For PS600, the addition of a ligand in all cases resulted in greater Pd rejection than the use of pure Pd(OAc)2 alone. When dissolved in isopropanol and toluene, the Pd rejection of PS600 showed a clear positive correlation with the molecular weight of the solute particles. The Pd rejection in toluene, particularly, increased from 59.6% using pure Pd(OAc)2 to >99.5% using the Pd(OAc)2 + dppBz complex. When dissolved in methanol, the results again showed that the use of a ligand can be beneficial; however, there was no linear correlation between Pd rejection and molecular weight, with rejection peaking at 93.1% using the Pd(OAc)2 + P(o-tol)3 complex. This may be because the molecular weight is not the only factor that impacts Pd rejection; other factors, such as the shape of the particles and the degree of agglomeration, should also be considered. The P-values from the ANOVA results (Tables S5, S7) demonstrated that the effect of the ligand on Pd rejection was significant, while the effect of the solvent was insignificant. Based on the above observations, it can be deduced that Pd separation by PS600 was governed by a pore-flow model based on size exclusion (Geens et al., 2006; Stawikowska and Livingston, 2012; Marchetti et al., 2014). The catalyst clusters observed in the TEM images could explain the high rejections observed with dppp and dppBz, since the larger clusters are rejected more readily than the smaller particles in the pore-flow model. Conversely, D500 yielded little conclusive evidence for or against the use of ligands to increase catalyst rejection. In each solvent, high rejections (>90%) were achieved with the pure Pd(OAc)2 solution, whereas the lowest rejections in isopropanol and methanol were observed when a ligand was used, suggesting that the use of a ligand does not increase the rejection by this membrane. The P-values in Tables S6, S8 showed that for D500, the effect of the ligand on Pd rejection was not significant, while the effect of the solvent on Pd rejection was significant. This is the exact opposite of PS600. Therefore, solute transport across D500 appears to be governed more by the solution-diffusion model, which is mainly determined by solvent polarity and polymer swelling (Silva et al., 2005; Ben Soltane et al., 2013; Marchetti et al., 2014). Transport across the membrane is hence determined by the solubility of the Pd-ligand complexes in the membrane surface, and it is therefore likely that the molecular weight of these complexes has little bearing on their rejection profiles.
It is clear from the rejection data that the rejection of an OSN membrane is not determined solely by the difference between the solute molecular weight and the manufacturer's stated MWCO, since otherwise each Pd-ligand complex should achieve >90% rejection in both membranes, regardless of the solvent. The results support previous studies which found that the MWCO of a membrane may not give sufficient information on its separation performance (Toh et al., 2007; Marchetti et al., 2014; Xu et al., 2017). Hence, it should be considered that the ligation of the Pd catalyst changes its chemical composition such that its transport across the membrane is altered.
CONCLUSIONS
Firstly, this study found through TEM that the addition of ligand molecules to the Pd catalyst increases the molecular weight and produces "clusters" of molecules due to the agglomeration of organo-Pd molecules. FTIR and NMR data show evidence for the formation of Pd-ligand bonds through changes in the spectra of the ligands in the presence of the catalyst.
Secondly, this study tested the permeance of two commercially available OSN membranes, namely PS600 and D500, in various pairings of ligand and solvent. The pure solvent permeance of both membranes was positively correlated with the swelling degree because swelling enlarged the channels in the polymer matrix. The addition of a solute, regardless of its type, decreased the solvent permeance due to membrane fouling and concentration polarization. The ANOVA results revealed that the ligand had an insignificant effect on the permeance, while the solvent had a significant effect on the permeance.
Finally, this study evaluated the Pd rejection of the PS600 and D500 membranes with different ligands and solvents. The PS600 membrane exhibited a strong positive correlation between the Pd rejection and the molecular weight of the solute, with a maximum rejection of 99.5% observed for the Pd(OAc)2 + dppBz complex dissolved in toluene, much higher than the rejection of pure Pd(OAc)2. This suggests that solute transport across PS600 was consistent with a pore-flow model, since particle agglomeration contributed to the high rejection. By contrast, the D500 membrane showed no conclusive link between the rejection and the molecular weight. In fact, the lowest rejections in isopropanol and methanol were observed when a ligand was used. Hence, it is believed that the transport mechanisms of the D500 membrane align more closely with the solution-diffusion model than with the pore-flow model. The ANOVA results showed that the effect of the ligand on Pd rejection was significant for the PS600 membrane, while the effect of the solvent on Pd rejection was significant for the D500 membrane. These observations support our argument that the PS600 and D500 membranes are governed by different transport models.
Overall, our results confirmed the utility of OSN in the separation of homogeneous catalysts and suggested a positive correlation between rejection and solute molecular weight for membranes that follow a pore-flow model. In the future, it would be important to investigate the impact of other properties of the ligand (e.g., electronic structure and conformational properties) and the solvent (e.g., surface tension and viscosity), in order to draw a conclusive link between the rejection and the type of ligand and solvent. It would also be important to investigate the use of other common organometallic catalysts, such as rhodium-based molecules, to characterize the impact of ligand addition on the rejection of these catalysts by OSN.
DATA AVAILABILITY STATEMENT
The raw data of this work were deposited to FigShare for permanent storage (https://doi.org/10.6084/m9.figshare.10299047.v1). Readers can download and reuse the data for research purposes with an acknowledgment to the authors.
AUTHOR CONTRIBUTIONS
JS and KB performed the experiments and analyzed the data with help from IA. JS wrote the manuscript with input from all authors. JS and EE conceived the study. All authors read and approved the manuscript. The open-access publication fee was paid by the Bath Open Access Fund. | 2020-05-05T13:04:18.796Z | 2020-05-05T00:00:00.000 | {
"year": 2020,
"sha1": "4302643a18de715327815a18be4578aad809eb9f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fchem.2020.00375/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4302643a18de715327815a18be4578aad809eb9f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
24250760 | pes2o/s2orc | v3-fos-license | In-vitro analysis of Quantum Molecular Resonance effects on human mesenchymal stromal cells
Electromagnetic fields play an essential role in cellular functions, interfering with cellular pathways and tissue physiology. In this context, Quantum Molecular Resonance (QMR) produces waves with a specific form at high frequencies (4–64 MHz) and low intensity through electric fields. We evaluated the effects of QMR stimulation on bone marrow derived mesenchymal stromal cells (MSC). MSC were treated with QMR for 10 minutes a day on 4 consecutive days per week, for 2 weeks, at different nominal powers. Cell morphology, phenotype, multilineage differentiation, viability and proliferation were investigated. QMR effects were further investigated by cDNA microarray validated by real-time PCR. After 1 and 2 weeks of QMR treatment, morphology, phenotype and multilineage differentiation were maintained and no alterations of cellular viability and proliferation were observed between treated MSC samples and controls. cDNA microarray analysis evidenced more transcriptional changes in cells treated at 40 nominal power than at 80. The main enrichment lists belonged to developmental processes, regulation of phosphorylation, and regulation of cellular pathways including metabolism, kinase activity and cellular organization. Real-time PCR confirmed significantly increased expression of the MMP1, PLAT and ARHGAP22 genes, while the A2M gene showed decreased expression in treated cells compared to controls. Interestingly, the differentially regulated MMP1, PLAT and A2M genes are involved in extracellular matrix (ECM) remodelling through the fibrinolytic system, which is also implicated in embryogenesis, wound healing and angiogenesis. In our model, QMR-treated MSC maintained unaltered cell phenotype, viability and proliferation, and the ability to differentiate into bone, cartilage and adipose tissue. Microarray analysis may suggest an involvement of QMR treatment in angiogenesis and in tissue regeneration, probably through ECM remodelling.
Introduction

Cells interact with the surrounding environment through receptors and ion channels which transmit chemical, mechanical and electrical signals. In this context, electromagnetic fields (EMF) interfere with cellular pathways and tissue physiology [1]. Cell-EMF interaction can occur through charged molecules and proteins in the cell membrane that alter the flow of ions or rearrange the distribution of the membrane receptors, or via direct field penetration inside the cell [2].
There is evidence that the manipulation of the electromagnetic environment of biological systems favours the wound healing process, reduction of the inflammatory state, angiogenesis and extracellular matrix (ECM) synthesis [3]. In fact, EMF regulate a variety of cell functions including promotion and inhibition of cellular proliferation [4,5], cellular viability [6,7], differentiation [8-10], cellular migration and motility [11-13], inflammatory response [14,15] and gene expression profiles [16,17]. As a consequence, the therapeutic application of EMF has attracted rising interest in medicine. By contrast, the mechanisms of action of EMF in biological tissues are only partially known [18].
Quantum Molecular Resonance (QMR) stimulation is a technology already applied for surgical and medical purposes. QMR creates quanta of energy able to break molecular bonds without increasing the kinetic energy of the hit molecules, thus without a rise in temperature, limiting the damage to the surrounding tissue. QMR technology exploits non-ionizing high-frequency waves in the range between 4 and 64 MHz at low intensity, delivered through alternating electric fields. The effect of QMR stimulation relies on the induction of several frequencies at the same time, where the fundamental wave is at 4 MHz and the subsequent harmonics extend up to 64 MHz with decreasing amplitudes.
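To make this spectral description concrete, the Python sketch below synthesizes a toy QMR-like waveform as a sum of harmonics of a 4 MHz fundamental up to 64 MHz. The 1/n amplitude roll-off is an illustrative assumption; the actual harmonic amplitudes of the generator are not reported.

```python
import numpy as np

F0 = 4e6          # fundamental frequency (Hz)
N_HARMONICS = 16  # harmonics at 4, 8, ..., 64 MHz
FS = 1e9          # sampling rate (Hz), well above the highest harmonic

t = np.arange(0, 2e-6, 1 / FS)  # 2 microseconds of signal
# Assumed amplitude roll-off: the n-th harmonic scaled by 1/n
signal = sum((1 / n) * np.sin(2 * np.pi * n * F0 * t)
             for n in range(1, N_HARMONICS + 1))
print(f"harmonics: {[int(n * F0 / 1e6) for n in range(1, N_HARMONICS + 1)]} MHz")
print(f"peak amplitude (arbitrary units): {signal.max():.2f}")
```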
QMR finds clinical application in bipolar coagulators and electrosurgery devices [19,20]. For this kind of application, the molecular resonance generator works on the combination of four frequencies in the range of 4-16 MHz. The first experimental study testing QMR effects described, in a rat model of thoracotomy, less severe tissue damage than with standard electrocautery [21].
To date, few data are available on the mechanism of interaction between QMR and cells. Dal Maschio and colleagues [22] described the behavior of muscle fibers exposed to QMR, where the changes in membrane potential and the variations in free calcium concentration strictly followed the time course of electric field application and removal. Moreover, the effectiveness of quantum molecular resonance in reducing edema after total knee arthroplasty has been reported in a clinical trial [23].
Our work aimed at understanding how QMR acts on human bone marrow-derived mesenchymal stromal cells (MSC).
The use of MSC for tissue healing and in regenerative medicine has expanded in the last decade [24], but current research on MSC aims not only at the development of clinical protocols for cellular therapy or regenerative medicine but also at providing experimental models that can inform about molecular mechanisms such as inflammation, angiogenesis and apoptosis [25].
Three main criteria were proposed by the International Society for Cellular Therapy (ISCT) for MSC definition [26]: adherence to plastic under standard culture conditions; expression of CD105, CD73 and CD90 and lack of expression of HLA-DR, together with the lack of the hematopoietic and endothelial surface markers CD14, CD45, CD34, CD11b and CD31 [27]; and in vitro differentiation potential into osteocytes, chondrocytes and adipocytes under appropriate culture conditions [28]. Despite attempts to establish generally acceptable minimal criteria for defining human MSC by immunophenotyping, the functional capability to differentiate along the classical tri-lineage mesodermal pathways remains one fundamental characteristic of this cell type.
MSC represent an ideal model to study the effects of high-frequency EMF and electric current. MSC exhibit remarkable plasticity given their ability to transdifferentiate or undergo rapid alterations in phenotype, thereby giving rise to cells possessing the characteristics of different lineages. Moreover, there is evidence that endogenous bone marrow-derived MSC can be recruited and mobilized to sites of injury [29]. As a consequence, MSC can be used in various clinical conditions in which tissue repair is needed, or in which these cells are believed to act through their anti-inflammatory and immunomodulatory activities.
In the present study, we used a broad evaluation approach to study the effects of QMR on human MSC at different levels of investigation. Cell cultures were exposed to distinct QMR settings and times of treatment. We assessed the maintenance of MSC identity and then performed viability and cellular proliferation assays to obtain additional functional information. Finally, we investigated the transcriptional profile of MSC after QMR stimulation.
MSC isolation and ex-vivo expansion
MSC were isolated from cells obtained through the washouts of discarded bone marrow collection bags and filters of healthy donors, 2 male and 4 female (median age: 34.5 years). After two washing steps with 200 ml saline solution and centrifugation at 2,000 rpm for 10 min, the collected nucleated cells were seeded in toto at a density of 1x10^5 cells/cm^2 in low-glucose Dulbecco's modified Eagle's medium (DMEM) with GlutaMAX and pyruvate (Gibco, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS, Qualified Australian, Gibco, Thermo Fisher Scientific) and 1% penicillin/streptomycin (P/S, Sigma-Aldrich, St Louis, MO, USA). Cultures were incubated at 37°C in a humidified atmosphere with 5% CO2. Non-adherent cells were removed after 72 hours and fresh medium was added; the culture medium was then changed every 3-4 days. At 80% confluence, MSC were washed with Dulbecco's phosphate-buffered saline (D-PBS, Sigma-Aldrich), harvested using 10X TrypLE Select (Gibco, Thermo Fisher Scientific) and sub-cultured at a density of 1,500 cells/cm^2. The cultures were observed with an inverted light microscope Axiovert 40 CFL (Carl Zeiss, Oberkochen, Germany) and the images were acquired using an AxioCam Mrm camera system (Carl Zeiss).
Cellular model and QMR stimulation protocol
MSC cultures were exposed to QMR using an experimental QMR generator supplied by Telea (Telea Electronic Engineering, Sandrigo, VI, Italy). The QMR generator setup was the following: power supply: 230 V, 50/60 Hz; maximum power input: 250 VA; power output: 45 W/400 Ω. The prototype delivered alternating electric currents characterized by high-frequency waves and low intensity. The fundamental wave was at 4 MHz and the subsequent harmonics extended up to 64 MHz with decreasing amplitudes. The stimulations were delivered by raising the effective output power (4-45 W), which corresponded to the nominal powers employed as QMR settings.
The cellular model and QMR delivery system were composed of a pair of custom-made spheroidal electrodes (anodes) of 35 mm diameter and an electrode (cathode) consisting of a metallic plate. The anodes were placed inside two Petri dishes and supported by a polyvinyl chloride component to allow direct contact of the electrode with the surface of the culture medium. The cathode was positioned below the Petri dishes (Fig 1A).
The experimental setup was planned in order to reproduce in vitro the therapeutic conditions in terms of timing and powers. Based on medical reports and on long company experience, the most effective settings with positive follow-up were selected for experimentation on MSC cultures.
MSC at passages 4-6 were seeded in 35 mm-diameter Petri dishes, in duplicate per condition (Greiner Bio-One, Kremsmünster, Austria), and after 72 hours from initial seeding the complete medium was changed. Cells were subjected to 10 minutes/day of QMR stimulation for 4 consecutive days, with a rest period of 24 hours between treatments. The same MSC cultures exposed to the first QMR cycle of treatment were reseeded for the second one and treated under identical conditions (Fig 1B). Two different QMR settings corresponding to 40 and 80 nominal powers were applied. Controls were kept in parallel as sham-exposed controls, with electrodes present in the cell media but without QMR exposure.
MSC phenotype characterization
MSC phenotype was characterized by flow cytometry before and after QMR stimulation. Briefly, 1x10^5 cells were incubated with the following monoclonal antibodies: CD90-FITC, CD105-PE, CD45-ECD, HLA-DR-APC (all purchased from Beckman Coulter, Brea, CA, USA) and CD73-PC7 (Becton Dickinson, Franklin Lakes, NJ, USA) for 15 minutes at room temperature, protected from light. At least 20,000 events were acquired on a CYTOMICS FC500 flow cytometer and data were analysed by Kaluza software (both Beckman Coulter). The expression of each marker was assessed as the percentage (%) of positive cells and as the relative median fluorescence intensity (rMFI), the latter defined as the ratio between the median fluorescence intensity of the marker and that of its specific negative control.
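Since rMFI is simply a ratio of medians, it can be computed directly from exported event data. The Python sketch below uses hypothetical fluorescence values and is not tied to the Kaluza software.

```python
import statistics

def rmfi(marker_intensities, control_intensities):
    """Relative MFI: median fluorescence of the marker divided by
    the median fluorescence of its negative control."""
    return statistics.median(marker_intensities) / statistics.median(control_intensities)

# Hypothetical per-event fluorescence values (arbitrary units)
cd90 = [1250, 1310, 1190, 1405, 1280]
negative_control = [42, 38, 45, 40, 39]
print(f"CD90 rMFI = {rmfi(cd90, negative_control):.1f}")
```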
Multilineage differentiation
After two cycles of consecutive stimulations, MSC differentiation potential was tested. Samples were harvested and reseeded in 24-well plates (Falcon, Corning Life Sciences, NY, USA), in the presence of circular 13 mm-diameter and 0.2 mm-thickness coverslips (Nunc, Thermo Fisher Scientific), at a density of 4,000 cells/cm^2. Differentiation was induced at semi-confluence with specific differentiation media for 21 days with the StemPro Adipogenesis, Osteogenesis and Chondrogenesis kits (Gibco, Thermo Fisher Scientific). Fresh medium was added every 3 days and the respective controls were maintained in parallel with standard expansion medium.

Figure 1. The first cycle of treatment started after media renewal on day 3 (black arrows) and the second one on day 10 (blue arrows). Cultures were stimulated 10 minutes/day for 4 consecutive days at 40 or 80 nominal powers. Sham-exposed controls were kept in parallel.

To detect the formation of lipid droplets, cells were fixed in 10% formalin for 5 minutes and stained with Oil Red O (Diapath, Martinengo, BG, Italy) according to the manufacturer's instructions.
The presence of calcium deposits, as an indication of osteogenic induction, was analysed with Alizarin Red staining. The samples were washed with D-PBS and fixed in ice-cold 70% ethanol at 4˚C for 1 hour. They were then incubated for 15 minutes with 0.02 g/ml Alizarin Red solution (Sigma-Aldrich) at room temperature. Finally, several washes were performed with deionized water.
To verify chondrogenic differentiation, cells were fixed in 10% formalin for 5 minutes and stained with Alcian blue (1 g/l in 0.1 M HCl) for 2 hours at room temperature. At the end of this staining, which is specific for acidic polysaccharides, the coverslips were rinsed extensively with deionized water.
After each staining, the coverslips were mounted on microscope slides using Kaiser's glycerol gelatine pre-warmed at 37˚C. Images were acquired with an AxioCam ERc 5s camera system (Carl Zeiss).
Quantification of cellular proliferation
Cellular proliferation was determined by the WST-1 assay (Sigma-Aldrich). At the end of two consecutive cycles of QMR treatment, cells were harvested and seeded in 96-well plates (Falcon, Corning Life Sciences) at a density of 2,000 cells/well. After 72 hours, WST-1 was added and incubated for 3 hours at 37˚C. Finally, the plates were read at 450 nm with a spectrophotometer (SpectraCount, Packard Instrument Company Inc, Meriden, CT, USA). Data were expressed as the percentage (%) of proliferation relative to the control.
cDNA microarray analysis
The RNA of five different MSC samples exposed to one cycle of QMR stimulation and of their corresponding MSC controls was extracted using the RNeasy Plus Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. Total RNA was quantified with a NanoDrop UV-VIS spectrophotometer (Thermo Fisher Scientific). RNA quality was determined using an Agilent 2100 Bioanalyzer system with the Eukaryote Total RNA Nano kit (Agilent Technologies, Santa Clara, CA, USA). The samples were processed according to the protocol "Agilent One-Color Microarray-based Gene Expression Analysis (Low Input Quick Amp Labeling)" with the Human GE 4x44K V2 Microarray Kit (Agilent Technologies).
Microarray slides were scanned with an Agilent scanner through the ScanControl software. Raw data were extracted from the microarray images with the Agilent Feature Extraction software. The data were then subjected to a pre-processing step using the open-source program Bioconductor, which employs the Limma package in the R language [30].
Quantitative real-time PCR
MSC cultures were either exposed to QMR at 40 nominal power for one cycle or left unexposed, and total RNA was extracted using the RNeasy Plus Mini Kit (Qiagen) following the manufacturer's instructions. Quality and quantity were determined using a NanoDrop UV-VIS spectrophotometer (Thermo Fisher Scientific). cDNA was synthesized starting from 800 ng of total RNA, using the iScript cDNA synthesis kit (Bio-Rad, Hercules, CA, USA) according to the manufacturer's instructions.
The obtained cDNA was diluted 1:10 and the quantitative real-time PCR experiments were performed using SsoFast EvaGreen Supermix with Low Rox (Bio-Rad) on an ABI 7500 Real-Time PCR System (Applied Biosystems, Thermo Fisher Scientific). Primers used for the amplification were validated and purchased from Bio-Rad. The protocol consisted of 30 seconds at 95˚C, 40 cycles of 5 seconds at 95˚C and an elongation step of 32 seconds at 60˚C, followed by a final melting step to evaluate the quality of the product. Each gene was tested in three replicates and six independent experiments were performed. Data acquisition was performed with SDS v1.2 software (Applied Biosystems, Thermo Fisher Scientific) and relative expression was determined using the 2^−ΔΔCt method [31], with TBP and YWHAZ as reference genes.
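As a concrete illustration of this analysis step, the following is a minimal Python sketch of the 2^−ΔΔCt calculation and of the paired t-test on ΔCt values described in the statistics section below; all Ct values, and the simple arithmetic averaging of the two reference genes (TBP, YWHAZ), are hypothetical illustrations rather than measured data.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical Ct values for one target gene in six paired experiments.
ct_target_ctrl = np.array([24.1, 23.8, 24.5, 24.0, 23.9, 24.3])
ct_target_qmr  = np.array([22.9, 22.7, 23.4, 23.1, 22.8, 23.2])

# Mean Ct of the two reference genes (TBP, YWHAZ) per sample.
ct_ref_ctrl = np.array([23.0, 22.8, 23.2, 22.9, 23.1, 23.0])
ct_ref_qmr  = np.array([23.1, 22.9, 23.1, 23.0, 23.0, 23.1])

# Delta Ct: target normalized to the reference genes.
dct_ctrl = ct_target_ctrl - ct_ref_ctrl
dct_qmr  = ct_target_qmr  - ct_ref_qmr

# Delta-delta Ct and fold change relative to the sham-exposed control.
ddct = dct_qmr - dct_ctrl
fold_change = 2.0 ** (-ddct)
print("mean fold change:", fold_change.mean())

# Paired t-test on the delta-Ct values (treated vs. control per batch).
t_stat, p_value = ttest_rel(dct_qmr, dct_ctrl)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```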
Statistical analysis
To assess the differences between the experimental settings and the sham-exposed controls after both QMR cycles, data were analysed by one-way ANOVA followed by Dunnett's multiple comparison post-hoc test.
Quantitative real-time PCR data were analysed by paired t-test comparing the ΔCt values. Statistical analysis was performed using GraphPad Prism version 5.01 (GraphPad Software Inc, La Jolla, CA, USA). Differences between samples were considered statistically significant at p<0.05.
For the cDNA microarray analysis, differentially expressed genes (DEG) between treated (40 or 80 nominal power) cells and control cells were identified using the Limma package with the empirical Bayes method, taking into account the batch provenience (paired test). Differences between conditions (treated cells versus control cells) were considered significant after Benjamini & Hochberg correction at p<0.05. To analyse the enrichment of the gene lists, the ToppGene Suite and Ingenuity Pathway Analysis (IPA) computational tools were applied, considering a q-value<0.01 with Benjamini & Hochberg's false discovery rate (FDR) correction for ToppGene Suite and a p-value<0.01 with Benjamini-Hochberg correction for the IPA analysis.
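For reference, the Benjamini-Hochberg step-up procedure used above can be implemented in a few lines; this is a generic sketch, and the p-values in the example are hypothetical.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # BH critical values: (i/m) * alpha for the i-th smallest p-value.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest i with p_(i) <= (i/m)*alpha
        reject[order[:k + 1]] = True      # reject the k+1 smallest p-values
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5]
print(benjamini_hochberg(pvals, alpha=0.05))
```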
Evaluation of MSC identity after QMR stimulation
In order to evaluate modifications in MSC identity after the first and second cycles of QMR treatment, we analysed morphology, surface marker expression and multi-differentiation potential. MSC morphology was observed daily before and after QMR treatments at the different settings. The cells conserved their canonical fibroblast-like, spindle-shaped aspect throughout the experiments (Fig 2A and 2B). Typical phenotypic MSC expression of CD90, CD105 and CD73 was constantly >95%, while that of CD45 and HLA-DR was constantly lower than 2% (Fig 2C). Moreover, each marker showed similar rMFI of expression, without statistically significant differences between treated and untreated samples at the different settings (S2 Fig). To investigate whether cell cultures lost their in vitro mesenchymal differentiation potential after two cycles of QMR stimulation, we induced cells to differentiate into osteogenic, adipogenic and chondrogenic lineages using defined media components and conditions (Fig 3). QMR-treated and sham-exposed MSC samples were able to multi-differentiate after 21 days of induction, staining positive with Alizarin Red, Oil Red O and Alcian blue, specific for osteogenesis (Fig 3A and 3B), adipogenesis (Fig 3C and 3D) and chondrogenesis (Fig 3E and 3F), respectively. No qualitative differences were observed between the different QMR treatments and the controls at this level. Similar results were obtained after one (Fig 3A, 3C and 3E) and two cycles (Fig 3B, 3D and 3F) of stimulation.
MSC viability and proliferation after QMR stimulation
Cellular viability was quantified by flow cytometry at the end of each cycle (Fig 4A). Viability was not affected by QMR; indeed, more than 95% of cells were alive, similar to the controls, with low variability between the different MSC batches and settings. These results confirmed the morphological observations. MSC proliferation was likewise not affected by QMR, showing no significant differences between controls and QMR-treated samples at the different settings and times (Fig 4B).
Microarray gene expression analysis after QMR treatment
Based on the previous results, we studied the effect of QMR on MSC at the transcriptional level by performing cDNA microarray experiments after one cycle of treatment (Day 7). The cDNA microarray pre-processing step reduced the initial number of transcripts from 28,000 to about 12,600. After that, samples were grouped by the similarity of their gene expression profiles (doi: 10.6084/m9.figshare.5702137). Not surprisingly, the clustering results showed that samples grouped mainly according to donor provenience and not by QMR treatment, as a result of the inherent biological variability between the analysed MSC batches.
DEG analysis was applied to identify the differences between QMR-treated and sham-exposed MSC samples. Three of the 16 samples (15 samples + 1 technical replicate) did not meet the quality control criteria and were therefore discarded from the subsequent analysis. More transcriptional changes were identified at 40 nominal power than at 80. According to a cut-off corrected p-value<0.05, 411 up-regulated and 987 down-regulated genes were found at 40 nominal power (Fig 5A). At 80 nominal power, 163 genes were found up-regulated and 199 down-regulated (Fig 5B). In both cases, most of the DEGs showed a very low fold change.
Fig 4. Cellular viability and proliferation after QMR treatment.
A) Histograms represent the % of cellular viability after two cycles of QMR treatment at the different settings compared to the sham-exposed controls, determined by flow cytometry. Data are shown as mean ± SD of three independent experiments; B) Percentages of cellular proliferation relative to the controls were obtained by WST-1 assay after 72 hours. Data are represented as mean ± SD of n = 6 independent experiments. No statistical differences were found between conditions. https://doi.org/10.1371/journal.pone.0190082.g004
To investigate the biological processes and biofunctions modulated in response to QMR stimulation, gene enrichment analysis was performed using the ToppGene Suite and IPA tools (Fig 6). The main biological processes up-regulated at 40 nominal power were related to cellular and tissue development, cellular differentiation and vascular system development (Fig 6A and 6C). Positive regulation of protein phosphorylation, vesicle-mediated transport, positive regulation of metabolic processes (Fig 6A), cellular morphology and cell-to-cell interaction biofunctions (Fig 6C) were found down-regulated by the QMR stimulation. Cellular proliferation and movement processes were equally significantly enriched in both gene datasets. The treatment at 80 nominal power showed an enrichment of up-regulated genes related to extracellular matrix organization and of down-regulated genes corresponding to membrane protein intracellular domain proteolysis. The latter were identified using ToppGene Suite because IPA did not evidence relevant enrichment lists (S1 Fig).
Assessment of gene expression after 40 QMR stimulation by quantitative real-time PCR
To confirm the gene expression modulation revealed by the cDNA microarray analysis, quantitative real-time PCR was carried out in MSC cultures treated at 40 nominal power for one QMR cycle. As shown in Table 1, the genes were involved in pathways related to cellular and tissue development, such as ECM remodelling, angiogenesis, cellular migration and regulation of actin filaments. Differentially expressed genes obtained with the 80 QMR treatment were not further investigated, due to their lower fold changes and significance compared to the 40 setting.
Our results from six independent experiments revealed significantly increased expression of MMP1, PLAT and ARHGAP22, while the A2M gene showed significantly decreased expression compared to the controls. By contrast, SLIT2, CORO1B, SHC1 and FN1 were not modulated by QMR treatment, partially confirming the cDNA microarray data (Fig 7).
Discussion
We analysed the effects of QMR treatment on MSC in vitro at the cellular and molecular levels.
At the cellular level, we observed that QMR treatment left cell morphology, viability, proliferation and phenotype (at least based on surface marker analysis) unchanged.
It has been shown that EMF and electric fields have the capacity to modify cell physiology and signalling pathways by altering ion channels, transport protein activation and intracellular ionic concentrations [1,32]. In particular, some authors suggested that EMF affect the early stages of differentiation and reduce the time of differentiation [33,34]. Moreover, Teven and colleagues [35] demonstrated that high-frequency pulsed EMF stimulation augmented osteogenic differentiation. We observed that the ability of MSC exposed to QMR to generate mesodermal tissues in vitro was unaltered by the treatment.
To investigate a possible effect of QMR at the transcriptional level, we performed gene expression analysis. As expected for donor-derived cells, cDNA microarray analysis revealed high variability between the different MSC batches. This observation explains the low number of highly significant DEGs between the different QMR conditions and the controls. DEG analysis also revealed that MSC exposed to the 40 QMR setting underwent more transcriptional changes, suggesting that the treatment at this nominal power is more effective than 80 QMR. In both cases, the relatively low amplitude of the changes confirms the phenotypic observations. The reason why, in our experimental setting, the 40 stimulation was more effective than 80 remains unclear. In the literature there are open questions regarding the mechanism of action of EMF [36]. Since a possible mechanism of action could be related to structural vibrations of electrically polar molecules or larger structures, it is likely that a molecule or a biological system could be more responsive to a particular intensity of stimulation as a function of its polarity, but further studies are necessary to clarify this issue. The gene set enrichment analysis of the DEGs, performed to understand which main biological processes were involved, revealed that QMR stimulation of MSC cultures affected many different biofunctions. In fact, we found transcriptionally modulated genes related to developmental processes, regulation of phosphorylation, and regulation of cellular pathways including metabolism, kinase activity and cellular organization. The most represented enrichment lists among the up-regulated genes were related to cardiovascular system development, as also observed by Serena et al [8] with the electrical stimulation of human embryonic stem cells. Sheikh and colleagues [37] showed that electric fields induced the regulation of the endothelial antigenic response via MAPK/ERK pathway activation. In particular, our gene-by-gene analysis also revealed that the 40 up- and down-regulated genes were involved in cellular and tissue development processes such as ECM remodelling, angiogenesis, cellular migration and regulation of actin filaments.
The most representative genes for each category were further validated by quantitative real-time PCR on MSC exposed to 40 nominal power after a single QMR cycle. Overall, 50% of them, comprising ARHGAP22, MMP1, PLAT and A2M, were found significantly modulated compared to the controls. ARHGAP22 is a gene encoding a RhoGAP cytoplasmic protein involved in angiogenesis and in the negative regulation of the rearrangement of actin filaments through the inhibition of Rac1 [38,39]. This finding is interesting since some frequencies produced by the QMR treatment lie inside the endogenous range that affects actin and microtubule filaments [40].
Interestingly, the differentially regulated MMP1, PLAT and A2M genes are involved in ECM remodelling through the fibrinolytic system, which is also implicated in embryogenesis, wound healing and angiogenesis [41].
PLAT encodes a serine protease that converts plasminogen into plasmin; the latter activates other proteases, including MMP1 [41]. Neuss and collaborators [42] demonstrated that MSC are able to secrete enzymes involved in this biological pathway, and our results showed its promotion by stimulated MSC. In particular, the positive regulation of the two enzymes PLAT (upstream protein) and MMP1 (downstream protein) was in agreement with the negative regulation of the protease inhibitor A2M.
Fig 6 caption (fragment). (B&H q-value <0.01); C) Comparative analysis of up-regulated (green bar) and down-regulated (red bar) functional gene enrichments using IPA software, with significant enrichment (dotted line) for -log2 (B-H p-value) >2.
Proteases participate in the regulation of angiogenesis through the modulation of an extremely complex process [43], as extracellular proteolysis is a requirement for new blood vessel formation. Therefore, matrix metalloproteinases as well as plasminogen activator-plasmin systems play an important role during angiogenesis [44,45]. Their release increases the bioavailability of factors stored in the ECM reservoir [46][47][48], and PLAT is able to activate PDGF-C [49]. Other studies demonstrated a direct induction of angiogenic factors by electric current [50][51][52].
In conclusion, our data suggest that, in our model, QMR-treated MSC maintained an unaltered phenotype, viability and proliferation, as well as the ability to differentiate into bone, cartilage and adipose tissue. The cDNA microarray analysis suggests an involvement of some genes after treatment in angiogenesis and in tissue regeneration, probably through ECM remodelling. In the present study, donor-to-donor variability may have limited the power of the microarray data to detect subtle modulations of the gene expression profile. However, real-time PCR data validated the changes detected in the most highly regulated genes in QMR-treated MSC at the lower setting tested. Further studies are necessary to confirm our findings at the protein and functional levels and in different cellular models. | 2018-04-03T06:13:54.232Z | 2018-01-02T00:00:00.000 | {
"year": 2018,
"sha1": "f702133483cf8db6b8089cde3038683bc4de330c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0190082",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f702133483cf8db6b8089cde3038683bc4de330c",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
220686540 | pes2o/s2orc | v3-fos-license | Wasserstein Routed Capsule Networks
Capsule networks offer interesting properties and provide an alternative to today's deep neural network architectures. However, recent approaches have failed to consistently achieve competitive results across different image datasets. We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks by using neural networks trained with an approximate Wasserstein objective to dynamically select capsules throughout the entire architecture. This approach focuses on implementing a robust routing scheme, which can deliver improved results with little overhead. We perform several ablation studies verifying the proposed concepts and show that our network is able to substantially outperform other capsule approaches by over 1.2 % on CIFAR-10, while using fewer parameters.
I. INTRODUCTION
Today's computer vision systems mostly rely on large deep neural networks (DNNs). Sophisticated methods have been proposed to train structures hundreds of layers deep, achieving superhuman performance on speech and image processing tasks [1]-[3]. All of today's DNN architectures use convolutional layers (CNNs) [4], which have the advantage of local connectivity: the filter kernels are shifted over the image, implementing translational invariance of features with respect to the feature positions. However, the networks still need to learn different filters for various object orientations and sizes, which also means that all of these variations need to be included in the dataset. This issue is often tackled with data augmentation techniques such as rotating, flipping and resizing the image. Since most of the objects in image datasets are 2D projections of 3D objects, data augmentation is limited to a small set of possible augmentations if no 3D model of the underlying object is available. Capsule networks (CapsNets) try to solve this by learning equivariant representations on a part or object level, i.e. the networks try to learn an object representation independent of its orientation and size [5], [6]. CapsNets fundamentally rely on routing schemes to select and combine different capsules for classification. These routing schemes assess a capsule according to a pre-defined criterion and assign a weighting factor to each capsule to indicate the strength of its presence in the routing result. This principle allows for the specialization of capsules, but also introduces the problem of incorrect routings, leading to wrong classification results. Recent CapsNet approaches perform well on simple datasets, where the objects are clearly separable from the background, but have difficulties if the images also contain background information [7], [8]. To a certain extent, this can be solved by using a DNN as a pre-processing stage for the CapsNet [9], [10]. Unfortunately, CapsNets still fail to achieve competitive results for large and complex datasets, partly due to the bad scalability of the capsule architecture to many classes. Therefore, fundamental changes in the used architectures need to be introduced to make CapsNets applicable to a larger set of problems. In this paper, we propose a new Wasserstein Capsule Network architecture (WCapsNet), which focuses on efficiency and scalability, making CapsNets applicable to a wide class of computer vision problems. We propose an architecture that uses a critic CNN trained with a Wasserstein objective to solve the problem of capsule routing. This routing joins the multiple levels of the WCapsNet architecture [11], and enables the specialization of the feature detectors across multiple abstraction levels. To train the critic networks, we propose an approximation scheme for the Wasserstein objective, suitable for capsule routing. This highly dynamic WCapsNet architecture implements a parameter-efficient classification network. Furthermore, we introduce a vector non-linearity suitable for the WCapsNet architecture. The non-linearity acts on the direction of the capsule vectors and tilts them toward strong components. To validate the proposed Wasserstein routing and the vector non-linearity, we perform several ablation studies presented in Section V-C.
Our proposed WCapsNet architecture offers an efficient and scalable approach to image classification and improves the interpretability of DNNs by offering possibilities to identify the most relevant parts of the network for specific input classes. We substantially outperform other capsule approaches by over 1.2 % on CIFAR-10, and show that the architecture is able to deliver good performance on a more complex dataset like CIFAR-100, without large computational overhead.
II. RELATED WORK
The first capsule architecture used for classification [7] works well for relatively simple datasets, but fails to achieve competitive results for more complex data [8]. Improvements in terms of classification performance have been achieved by using additional DNN architectures as a pre-processing stage for the CapsNets [9], [10]. Several papers proposed improvements to the routing using unsupervised routing algorithms, but failed to consistently achieve good performance across datasets [12]. Recently, other approaches for solving the dynamic routing problem have been proposed. In particular, supervised methods such as neural networks are used for improved weight assignment [13], or to generate attention maps which are combined with a binary gating function trained with the Straight-Through estimator [14], [15]. Less classification-focused papers have shown the usefulness of capsules as parts for object reconstruction in 2D and also for 3D point clouds, with stacked capsule autoencoders achieving state-of-the-art results for unsupervised classification [6], [16]. A different approach to finding equivariant representations is to explicitly include the invariances in the convolutions [17]. This approach generalizes the translation equivariance of the standard convolutions used in computer vision to convolutions invariant with respect to any transformation from a specific symmetry group, leading to equivariance on a feature, rather than a part or object, level.
III. WASSERSTEIN CAPSULE NETWORK (WCAPSNET)
We propose a Wasserstein Capsule Network (WCapsNet) using a Wasserstein-critic network to dynamically select features from specialized capsules. We subdivide the network into different levels, each comprised of several capsules. After each level, a critic network assesses the capsules and passes the result of the routing to the next level. This allows the network to dynamically adapt to an input image across multiple levels of depth and abstraction. The levels of WCapsNet can be grouped into two parts, the (i) feature extraction levels and the (ii) final prediction level, as shown in Figure 1. Each feature extraction level consists of N independent capsule blocks c_nijk, where i and j are the x and y positions of a capsule vector with elements k, and n is the index of the capsule block. The routing scheme connecting the levels relies on the weighting factors produced by a Wasserstein critic and performs a weighted sum over the different capsules. For the feature extraction levels, the critic assesses each block n of capsule vectors jointly and assigns a single weight b_n to the whole capsule block c_nijk, sharing the same weight across all vectors i and j of the 2D map. The capsule blocks consist of a Dense Block, containing several Dense Layers [3], followed by a capsule transition layer (CapsTrans). The CapsTrans layer consists of a batch normalization operation, a ReLU activation function and a 1×1 convolution reducing the vector dimension after the Dense Blocks [18], [19], followed by a vector non-linearity. We propose a vector non-linearity designed to improve the learning behavior of the WCapsNet architecture. The non-linearity tilts the vector in the direction of the strongest vector components and suppresses weak ones. It is presented in more detail in Section III-A. For the final prediction, in the last level, a critic assigns a separate weight b_nij to every capsule vector c_nijk. Furthermore, a projection matrix W is used to project the capsule vectors to the one-hot encoded class basis. The weights assigned by the Wasserstein critic are then combined with the projections, using a weighted sum to create the final class prediction of the network. The capsule vector of the last level with the largest weight is passed to the decoder network (see Fig. 1) to reconstruct the input image. The loss of the decoder network, consisting of a single fully connected layer and several transposed convolution layers, is propagated through the whole network and can therefore modify the capsule vectors to achieve improved reconstruction performance.
A. Capsule transition
The capsule transition layer (CapsTrans) consists of a transition layer applied to the output of the Dense Blocks and a vector non-linearity. The transition layer uses a batch normalization operation, a ReLU activation function and a 1×1 convolution, which we refer to as a combined conv+ operation (see Fig 1). It produces the vectors x_k, where k is the vector dimension. The transition layer is followed by a batch normalization operation and the vector non-linearity, creating the capsule vectors c_k. For the batch normalization before the non-linearity, the parameters are shared among all CapsTrans layers of the level. In the case of the squash non-linearity [7], the function shrinks short vectors to close to zero length and bounds long vectors to a length close to one. Since the WCapsNet architecture uses a vector basis projection to recover the class of the input image, we propose an alternative vector non-linearity that improves the learning behavior of the network. The non-linearity rotates the capsule vectors in the direction of their largest positive components, suppressing weak and accentuating strong elements. We use a softmax function combined with an element-wise multiplication (⊙) to change the direction of the vector x, which we refer to as the tilt operation. Both non-linearities are empirically compared in Section V.
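Since the closed-form expression of the tilt operation is not reproduced above, the following numpy sketch shows one plausible form consistent with the description: each component is weighted by its softmax score via the element-wise multiplication ⊙, and the result is rescaled to the original vector length so that only the direction changes; the renormalization step is an assumption.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tilt(x, eps=1e-8):
    """Tilt the vector toward its largest positive components: weight each
    element by its softmax score (element-wise product), then rescale to
    the original length so only the direction is modified."""
    t = x * softmax(x)
    norm_x = np.linalg.norm(x, axis=-1, keepdims=True)
    norm_t = np.linalg.norm(t, axis=-1, keepdims=True) + eps
    return t * (norm_x / norm_t)

v = np.array([0.1, 2.0, -0.5, 0.8])
print(tilt(v))  # direction tilted toward the dominant 2.0 component
```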
B. Wasserstein Objective
The Wasserstein or Earth-Mover's distance is an optimal transport distance that is used to compare distributions. It is defined as
$$W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(x,y)\sim\gamma}\big[\,\|x - y\|\,\big],$$
where $\Pi(\mathbb{P}_r, \mathbb{P}_g)$ denotes the set of all joint distributions γ(x, y) with the marginals P_r and P_g. By the Kantorovich-Rubinstein duality, this equals the supremum of $\mathbb{E}_{x\sim\mathbb{P}_r}[f(x)] - \mathbb{E}_{x\sim\mathbb{P}_g}[f(x)]$ over all 1-Lipschitz functions f. Since finding the supremum is an intractable problem in most cases, an approximate solution is used. Therefore, a neural network representing a Lipschitz function f(x) is trained to maximize the difference between the expectations for samples from both distributions. Approximating the supremum, we use a neural network called critic or discriminator. Here the critic has the task of distinguishing samples from the original distribution of real images and the inferred distribution of fake images.
Fig. 1 caption. The CapsTrans layer consists of the combined conv+ operation and the proposed tilt vector non-linearity, followed by a batch normalization operation bn (see Section III-A). Each level is followed by a critic network assessing the different capsules and predicting weights b for the capsules. The input for the next level is constructed by performing a weighted sum using b. The weights produced by the last critic serve as weighting factors for the prediction vectors and are used to extract the best capsule from the last level.
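As a toy illustration of the dual objective, the sketch below estimates the Wasserstein-1 distance between two location-shifted 1-D Gaussians; for this family the identity map f(x) = x is a 1-Lipschitz function attaining the supremum, so the dual estimate equals the mean shift. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(2.0, 1.0, size=100_000)   # samples from P_r
fake = rng.normal(0.0, 1.0, size=100_000)   # samples from P_g

# A linear critic f(x) = w*x is 1-Lipschitz whenever |w| <= 1; choosing
# w = sign(E[real] - E[fake]) maximizes the dual objective within this
# family and attains the supremum for pure location shifts.
w = np.sign(real.mean() - fake.mean())
estimate = w * real.mean() - w * fake.mean()
print("dual estimate of W(P_r, P_g):", float(estimate))  # ~2.0
```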
To use the Wasserstein distance for a different task such as routing, we first need to find a way to select samples from the distributions we want to distinguish. This requires the occurrence of a specific result or property when samples from at least one of the distributions are present, i.e. a correct or an incorrect classifier prediction. If one can define such a property and thereby distinguish the samples, the corresponding task is defined by the way the critic can influence this property. In our case this means how the routing affects the classification result.
C. Wasserstein-Routing
For routing, the task of the Wasserstein critic f is to identify the best capsules c for the given input sample m. This means that we group the capsules into two distinct distributions, the "good" capsules p(c^(m)) and the "bad" capsules h(c^(m)). Since we do not want to assign a specific input to a capsule, the distributions p(c^(m)) and h(c^(m)) are not known a priori. This makes it hard to define a Wasserstein loss for this objective. However, we can approximate the loss by distinguishing successful routings from failed ones, using the approximate distributions p̃(c^(m)) and h̃(c^(m)). In our approximation, a successful routing is marked by a correct prediction for which the capsules were selected from p̃(c^(m)), and a failed one by a wrong prediction of the network with capsules selected from h̃(c^(m)). The critic can influence the outcome of the predictions by assigning correct or incorrect weighting factors to the different capsules. If the correct capsules are selected, the prediction is more likely to be correct. This means the critic decides whether a capsule belongs to p(c^(m)) or h(c^(m)) by assigning a value f(c^(m)) to the capsule, which we refer to as its fitness. The fitness score of capsule block n, f^(n)(c^(m)), relative to the fitness of the other capsule blocks then reflects the probability of the capsule belonging to the distribution p(c^(m)) of the correct capsules. Capsules with a low fitness can then be assigned to h(c^(m)). According to the Wasserstein framework, the critic has to be able to assess single capsules, without comparing the capsule blocks among each other. This constraint limits the amount of information available to the critic, but also comes with the advantage of being less prone to overfitting and having less computational overhead for the routing.
Loss approximation: The critic network f produces a fitness value for each capsule sample c^(m). Over several samples in a mini-batch, the approximate Wasserstein loss L̃_WS for a single class, N capsules, M input samples and one critic can be defined as
$$\tilde{L}_{WS} = \mathbb{E}_{c\sim\tilde{h}}[f(c)] - \mathbb{E}_{c\sim\tilde{p}}[f(c)], \qquad (4)$$
where p̃ and h̃ are the approximated distributions.
To construct these expectation values, we first collect the fitness value for each capsule block c_n^(m) and input sample m,
$$a_n^{(m)} = f\big(c_n^{(m)}\big). \qquad (5)$$
Then a weighting function is applied to the fitness values to create the actual capsule weights b_n. The weights b_n are calculated either by applying a softmax function to a_n^(m) along the capsule dimension,
$$b_n = \frac{\exp\big(a_n^{(m)}\big)}{\sum_{n'} \exp\big(a_{n'}^{(m)}\big)}, \qquad (6)$$
or by normalizing the votes according to
$$b_n = \frac{a_n^{(m)}}{\sum_{n'} a_{n'}^{(m)}}, \qquad (7)$$
where $\sum_n b_n = 1$. The value of F_s should be maximal in the case of a correct prediction, while F_ns should be minimal, magnifying the difference between the fitness values of correct and incorrect capsules. Since both the target t and the prediction vector p lie in [0, 1], we can define the correctness of a classification via the cosine distance between the one-hot target vector t^(m) and our prediction vector p^(m),
$$\cos(\theta)^{(m)} = \frac{t^{(m)} \cdot p^{(m)}}{\|t^{(m)}\|\,\|p^{(m)}\\|},$$
where θ is the angle between both vectors. We can now weight the contributions to the objective using cos(θ)^(m).
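A minimal numpy sketch of the two weighting functions (Eqns. 6 and 7) and of the cosine-based correctness measure; the fitness values below are hypothetical critic outputs in [0, 1].

```python
import numpy as np

def weights_softmax(a):
    """Eqn. 6: softmax over the fitness values along the capsule axis."""
    e = np.exp(a - a.max())
    return e / e.sum()

def weights_normalize(a):
    """Eqn. 7: normalize the (non-negative) fitness values so sum(b) = 1."""
    return a / a.sum()

def correctness(t, p):
    """cos(theta) between the one-hot target t and the prediction p."""
    return float(t @ p / (np.linalg.norm(t) * np.linalg.norm(p)))

a = np.array([0.9, 0.2, 0.7, 0.1])   # fitness per capsule block
print(weights_softmax(a), weights_normalize(a))

t = np.array([0.0, 1.0, 0.0])        # one-hot target
p = np.array([0.1, 0.8, 0.1])        # network prediction in [0, 1]
print("cos(theta) =", correctness(t, p))
```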
For correct predictions, we assume that the contribution from the selected capsules, F_s, belongs to the "good" capsules p̃(c^(m)) and the contribution from the not-selected capsules, F_ns, belongs to the "bad" capsules h̃(c^(m)). For the case of an incorrect prediction, the only valid assumption is to assign the contribution from the selected capsules F_s to the "bad" capsules h̃(c^(m)).
To normalize the loss contributions to be invariant with respect to the number of correct and incorrect predictions, and to retrieve expectation values, we calculate normalization factors for a mini-batch of size M. Finally, we can construct the expectation values of Eqn. 4 for a single level using the approximated distributions. For E_{c∼h̃}[f(c)], we divide the contributions by a factor of two to balance the expectation losses. This imbalance is rooted in the unknown correct capsule assignment for incorrect predictions. For the critic in the last layer, the x and y positions are treated as independent capsules, i.e. ñ = n × i × j. This leads to n × i × j values a_n^(m) in Eqn. 5.
D. Routing
The routing relies on the weighting factors b_n produced by the critic network. The input c̃ for the next level l + 1 is then calculated by performing a weighted sum over the capsules c_n^l of level l,
$$\tilde{c}_{ijk}^{\,l+1} = \sum_{n} b_n\, c_{nijk}^{\,l},$$
where n indexes the capsule, i and j the location, and k the dimension of the capsule vector.
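A compact numpy sketch of this weighted sum over capsule blocks; the tensor shapes are illustrative.

```python
import numpy as np

def route(capsules, b):
    """Weighted sum over capsule blocks: capsules has shape (N, H, W, K)
    for N blocks of K-dim vectors on an H x W grid, and b holds one
    critic weight per block; the result is the input to level l + 1."""
    return np.einsum("n,nijk->ijk", b, capsules)

caps = np.random.default_rng(1).normal(size=(4, 8, 8, 16))  # N = 4 blocks
b = np.array([0.5, 0.3, 0.15, 0.05])                        # critic weights
print(route(caps, b).shape)  # (8, 8, 16)
```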
E. Prediction
In the last layer the critic generates weights for each x and y position, resulting in a weight vector b_nij. To create the prediction, we first project the capsule vectors c_nijk, with vector elements k, to the one-hot basis with elements r = 1, ..., N_Classes + 1, using the transformation matrix W_kr. The weighted sum of all projected vectors then provides the final prediction of the network,
$$p_r = \sum_{n,i,j} b_{nij} \sum_{k} c_{nijk} W_{kr}.$$
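A numpy sketch of this projection-and-weighting step; the capsule count, grid size, vector dimension and class count below are illustrative.

```python
import numpy as np

def predict(capsules, b, W):
    """Project every capsule vector c_nijk to the one-hot class basis
    with W (shape K x R), then combine the projections with the
    per-position weights b_nij produced by the last critic."""
    projected = np.einsum("nijk,kr->nijr", capsules, W)
    return np.einsum("nij,nijr->r", b, projected)

rng = np.random.default_rng(2)
caps = rng.normal(size=(2, 4, 4, 12))   # last-level capsules, K = 12
b = rng.random((2, 4, 4))
b /= b.sum()                            # normalized routing weights
W = rng.normal(size=(12, 11))           # K = 12 -> N_classes + 1 = 11
print(predict(caps, b, W).shape)        # (11,)
```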
F. Regularization and loss function
Since the optimization of a WCapsNet is prone to falling into local optima, we employ noise injection and dropout to regularize the training. When a capsule block is selected, the backpropagated gradient through that block is larger than for the other blocks. This leads to better representations within this block, consequently leading to the block being selected more frequently, and the routing may collapse. To counteract this issue of always selecting the same capsule, we add Gaussian noise from N(0, 0.5) to the fitness values. We scale the noise with the maximum of the fitness values, max(a_n^(m)), such that the noise is always of the same order of magnitude as a_n^(m). Since applying this noise to all values impairs the convergence of the critics, we apply it to 5 % of the fitness values. This provides a good trade-off between sampling the distributions p̃ and h̃ and sufficient convergence of the critics, and prevents the routing from collapsing. To further regularize the training, we employ an additional dropout of 0.1 on our weighting factors b_n and a dropout of 0.3 before the projection matrix W. The training has multiple objectives; therefore the total loss L_tot for the network consists of multiple contributions,
$$L_{tot} = L_{CE} + \lambda_{WS}\,\tilde{L}_{WS} + \lambda_{R}\,L_{R} + \lambda_{WD}\,L_{2},$$
where L_CE is the cross-entropy loss for the prediction of the network, L̃_WS is the Wasserstein loss from the routing process, L_R is the reconstruction loss for the decoder, and L_2 is the regularization loss, with corresponding weighting factors λ_WS, λ_R and λ_WD. We apply the L_2 weight-decay loss to all convolution layers except those used in the CNNs of the Wasserstein critics.
IV. NETWORK ARCHITECTURE
The proposed WCapsNet architecture has an exponentially decreasing number of capsules per level, reflecting that complex objects are composed of many different, less complex parts. This is also reflected in the dimensions of the capsule vectors: the dimension is increased over the first levels and decreased again in the last level. Decreasing the dimension in the last level avoids overfitting, since the network needs to generalize to objects. A decoder structure is used to reconstruct the input image, using the best capsules of the last level as input.
A. WCapsNet architecture details
The WCapsNet uses an initial 3 × 3 convolution with 24 channels (InitConv in Figure 1). The result is then passed to the first level of independent Dense Blocks. Contrary to the usual DenseNet architecture as presented in [3], we reduce the spatial dimension of the input within the first layer of the Dense Blocks, rather than in the transition layers. Since the Dense Blocks need the input of the block for concatenation, we downsample the input using a shortcut convolution layer with a kernel size equal to its stride (see Figure 2). This decreases the computational complexity of the model and did not show significant performance drops in our experiments. For the experiments we use a 4-level WCapsNet with N = 16-8-4-2 capsule blocks. The number of Dense Layers per capsule was fixed to n_D = 6 for all networks. Other parameters used in the WCapsNets are shown in Table I.
B. Critic CNN
Since the critic in the last level needs to provide a separate weight for each individual capsule vector, whereas the other critics perform a block-wise weighting, two different architectures are implemented. (i) The feature-extraction critic architecture is used for all levels except the last one. It consists of 3 × 3 convolutions with a stride of s = 2, followed by a ReLU activation function and a dropout layer with a dropout rate of r = 0.3. We increase the number of channels per layer as the height and width decrease. For layer j, the number of channels is n_ch = j · k_critic. In our experiments we use k_critic = 32 for the convolutions. The number of layers of each critic depends on the size of the input: layers are added until the input is downsampled to a spatial size of one and we obtain a single value as output.
(ii) The critic for the last level has the same structure, but uses 4 layers of 1 × 1 convolutions with a stride of 1; therefore the output has the same size as the input, providing height × width fitness values. To limit the critic outputs and restrict the values to the interval [0, 1], we apply a batch normalization followed by a sigmoid function to all output values. The convolution kernels of the critic CNNs use spectral normalization on the weights, ensuring the Lipschitz criterion for f [20]. The gradient from the critic to the capsules is stopped during training, so the critic cannot modify the capsule blocks.
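The layer-count rule of the feature-extraction critic can be made concrete with a small helper; the padding convention assumed below (spatial size halved per stride-2 layer, rounding up) is one plausible choice, not stated explicitly above.

```python
def critic_plan(input_size, k_critic=32):
    """List the stride-2 3x3 conv layers of a feature-extraction critic:
    layers are stacked until the spatial size reaches 1, and layer j
    uses n_ch = j * k_critic channels."""
    layers, size, j = [], input_size, 1
    while size > 1:
        layers.append({"kernel": 3, "stride": 2, "channels": j * k_critic})
        size = (size + 1) // 2  # stride-2 conv with "same"-style padding
        j += 1
    return layers

for layer in critic_plan(32):  # e.g. a 32 x 32 feature map needs 5 layers
    print(layer)
```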
C. Decoder and reconstruction loss
The decoder network has the task of reconstructing the input based on the selected capsule vector. The gradients from the reconstruction are propagated through the whole network and can therefore influence the capsule vectors, leading to better representations. For our experiments we use the best vector of the last level in the decoder structure. We add the position of the extracted capsule vector by concatenating the vector with its x and y coordinates, normalized to [-1, 1]. The decoder structure consists of one fully connected layer, creating a 2D patch whose side length is a quarter of the original input size. We then apply two transposed convolution layers with a stride of two to create the decoder output. The convolution layers use 32 channels for the first and 64 channels for the second convolution. Each convolution layer is preceded by a batch normalization operation and a ReLU activation. We use a Mean Squared Error (MSE) loss to train the network to reconstruct the input image based on the best capsule vector.
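A PyTorch sketch of the decoder as described; the kernel size of 4 for the transposed convolutions and the final 1x1 projection to RGB are assumptions, since only the channel counts, the strides and the batch-norm/ReLU placement are specified above.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Fully connected layer producing a patch at a quarter of the input
    side length, then two stride-2 transposed convolutions (32 and 64
    channels), each preceded by batch norm and ReLU."""
    def __init__(self, caps_dim=12, img_size=32):
        super().__init__()
        self.patch = img_size // 4
        # +2 for the concatenated (x, y) position, normalized to [-1, 1]
        self.fc = nn.Linear(caps_dim + 2, self.patch * self.patch)
        self.up1 = nn.Sequential(nn.BatchNorm2d(1), nn.ReLU(),
                                 nn.ConvTranspose2d(1, 32, 4, 2, 1))
        self.up2 = nn.Sequential(nn.BatchNorm2d(32), nn.ReLU(),
                                 nn.ConvTranspose2d(32, 64, 4, 2, 1))
        self.out = nn.Conv2d(64, 3, 1)  # assumed projection to RGB

    def forward(self, caps_vec, xy):
        h = self.fc(torch.cat([caps_vec, xy], dim=-1))
        h = h.view(-1, 1, self.patch, self.patch)
        return self.out(self.up2(self.up1(h)))

dec = Decoder()
img = dec(torch.randn(4, 12), torch.rand(4, 2) * 2 - 1)
print(img.shape)  # torch.Size([4, 3, 32, 32])
```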
V. EXPERIMENTS
We conduct several experiments evaluating the WCapsNet architecture, performing ablation studies for the proposed routing scheme and the tilt vector non-linearity. To this end, we train WCapsNets on image classification tasks using several standard image datasets. Furthermore, we investigate the voting in more detail for the MNIST dataset, analyzing the capsule weighting factors b_n (see Equations 6 and 7) for different classes across multiple levels. The networks use the parametrization of Table I.
A. Datasets and training setup
We select 5 benchmark datasets to evaluate our WCapsNet architecture.
B. Training settings
Since the architecture uses Dense Blocks, we use the training setup of DenseNet as presented in [3]. We use a stochastic gradient descent optimizer with a Nesterov momentum of 0.9, using a batch size of 64. For the CIFAR datasets, we use a base learning rate of 0.1 and decay the learning rate after 150, 200 and 250 epochs by a factor of 0.1. The dropout rate in the Dense Blocks is set to zero. For MNIST and SVHN we train the network for a maximum of 40 epochs and decay the learning rate after 20 and 30 epochs by a factor of 0.1. We used a weight decay scaling factor of λ_WD = 10^−4, a scaling factor of λ_WS = 0.2 for the Wasserstein loss, and λ_R = 0.1 for the reconstruction loss.
C. Ablation studies
To verify our WCapsNet architecture, we perform several ablation studies. We investigate different variants of the routing scheme, different weighting functions, and variations of the vector non-linearity of the network. All our results were generated using early stopping with the train/validation splits mentioned in Section V-A. a) Weighting functions: We compare the results of the network using either Eqn. 6 or Eqn. 7 as the weighting function for the routing weights. The results in Table II show that the softmax weighting function achieves slightly better results than a simple normalization of the votes. b) Investigation of different vector non-linearities: We compare the tilt vector non-linearity to the squash non-linearity, using a softmax weighting function for the routing. The first variant represents the baseline using only the squash non-linearity; for the second variant we use the tilt non-linearity. The results in Table III show that the tilt non-linearity outperforms the squash non-linearity, especially for more complex tasks such as CIFAR-100. This indicates that the tilt non-linearity improves the network behavior. c) Comparing different routing variations: We compare different variants of training the routing networks. The first variant does not stop the gradient before the weighting factors b_n, and therefore uses both the cross-entropy (CE) and the Wasserstein (WS) loss to train the critic networks. The second variant stops the gradient from the cross-entropy loss and is trained using only the Wasserstein loss. The third variant does not use a Wasserstein objective to train the routing networks, meaning that the routing weights are adjusted using only the cross-entropy loss. The fourth variant uses random routing weights drawn from a uniform distribution, which are then normalized such that Σ_n b_n = 1. The last variant uses a uniform weight distribution, which means that all weights are set to b_i = 1/N. As we can see in Table IV, the variant using
D. Image Classification
In Table V we compare WCapsNets to other capsule architectures, using the best results of our experiments. The results show that WCapsNets can substantially outperform other capsule approaches on CIFAR-10 while having a fraction of the parameters. The classification performance of the WCapsNet on CIFAR-100 is lower than that of large state-of-the-art CNN architectures, but comes close in performance to older DNN architectures like VGG-19.
E. Network evaluation
We evaluate the routing weights b_n assigned by the critics for each level of the network. The distribution of weighting factors shows the degree and type of specialization of each capsule. The evaluation of the prediction vectors provides information about the assignability of a feature to a specific class, and therefore indicates the complexity of the features in each level. The per-class average weighting factors b_n for MNIST shown in Fig. 3 demonstrate that the network does specialize the capsules to specific classes. Capsules in deeper levels are more likely to specialize to a larger degree, whereas in the first levels only slight changes in the weighting are present. This supports our assumption that capsules in the first levels represent parts of objects which occur across multiple classes. For the third level, which shows substantial specialization, we see that capsule block two is specialized to detect the digit one, whereas capsule three has a large weighting factor if a five or a nine is present. The routing weights of the last level contain a periodicity related to the x and y positions, but also contain a lot of inter-class variation between the weighting factors for the same position. However, the specialization of the capsules is not as large as one might expect. This might be caused by the optimization process, since high routing-weight specialization can cause temporary drops in performance during training.
VI. CONCLUSION AND OUTLOOK
We propose a capsule network architecture (WCapsNet) which can dynamically adapt to the input image. The dynamic routing procedure relies on a neural network, called the critic, that is trained with an approximate Wasserstein objective. We propose an approximation scheme for the Wasserstein loss suitable for solving the task of routing. Furthermore, we propose a direction-dependent vector non-linearity suited to the proposed capsule architecture. WCapsNets offer a new and scalable approach to image classification and improve the interpretability of classification results by offering a possibility to analyze capsule weights at multiple levels. The classification results show that WCapsNets are able to achieve an error of less than 6.6 % on CIFAR-10, outperforming other capsule approaches. Furthermore, WCapsNets are able to achieve good performance on CIFAR-100, which was not feasible with previous capsule architectures that relied on vector-length-based classification rather than vector projections. We analyze the routing weights for the proposed Wasserstein routing and visualize the capsule specializations after each level. For future research we would like to explore different methods for training the WCapsNet architecture to achieve a higher degree of specialization within the capsules, and to apply WCapsNets to supervised segmentation tasks, leveraging their benefits in more realistic applications. | 2020-07-23T01:01:22.482Z | 2020-07-22T00:00:00.000 | {
"year": 2020,
"sha1": "5399c9426d0421386feca82618c6c4c6a3332c78",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "5399c9426d0421386feca82618c6c4c6a3332c78",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
119294737 | pes2o/s2orc | v3-fos-license | The new model of fitting the spectral energy distributions of Mkn 421 and Mkn 501
The spectral energy distribution (SED) of TeV blazars has a double-humped shape that is usually interpreted with the Synchrotron Self-Compton (SSC) model. The one-zone SSC model is widely used but cannot fit the high-energy tail of the SED very well; it requires a bulk Lorentz factor that conflicts with observations, and it cannot explain the entire spectrum. In this paper, we propose a new model in which the high-energy emission is produced by accelerated protons in a blob of small size and high magnetic field, while the low-energy radiation comes from electrons in the expanded blob. Because the high- and low-energy photons are not produced at the same time, the requirement of a large Doppler factor from pair production is relaxed. We present fits to the SEDs of Mkn 501 during April 1997 and Mkn 421 during March 2001, respectively.
Introduction
Blazars are a subclass of active galactic nuclei (AGNs) oriented at a small angle with respect to the line of sight.
They emit non-thermal radiation from radio to gamma-rays, even up to TeV energies [1]. The SED of blazars has two peaks: the first in the radio/UV range, extending up to the soft X-ray range, and the second in the X-ray/gamma-ray range.
The first peak component of the SED is usually attributed to synchrotron emission from relativistic electrons. Leptonic and hadronic models are the two main scenarios for explaining the second peak component.
In the leptonic model, gamma-rays are produced by electron inverse-Compton (IC) scattering of photons that come from internal or external emission regions.
Hadronic models assume that ultra-high-energy protons lead to gamma-ray emission after interactions and the decay of secondary particles [2][3][4]. Two subclasses of hadronic models are the Proton-Initiated Cascade (PIC) model and the Synchrotron Proton Blazar (SPB) model. In the PIC model, pion photoproduction by energetic protons is followed by synchrotron-pair cascades initiated by the decay products (photons and e±) of the mesons [5][6]. The SPB model is another attractive possibility for the production of high-energy γ-rays, in which high-energy protons are accelerated and emit synchrotron radiation.
The SPB model requires ultra-high-energy protons and strong magnetic fields in a small emission region [7].
The discovery of strong TeV variability in two blazars, PKS 2155-304 [8] and Mrk 501 [9], on timescales as short as a few minutes, implies a compact emission region that moves with a large bulk Lorentz factor of Γ > 50 towards the observer, assuming a homogeneous one-zone model. However, such high values of the bulk Lorentz factor are in contradiction with constraints derived from other observational evidence [10,11]. Furthermore, one-zone models are unable to fit the entire spectrum: the low-energy synchrotron component shows a long variability timescale, which is generally attributed to a large emitting region.
In this paper, we present a new model unifying small- and large-scale emission regions: we consider that the high-energy emission is produced by accelerated protons in a compact blob with a strong magnetic field, while the low-energy radiation comes from electrons in the blob after it has expanded.
The Model
We assume the high-energy emission region to be a blob near the black hole containing relativistic protons, relativistic electrons and a uniform magnetic field. Three parameters describe the blob: the radius R_i ≤ δcΔt_i, where Δt_i is the timescale of the high-energy variability, the uniform magnetic field B_i, and the Doppler factor δ.
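A one-line numerical illustration of the causality constraint R_i ≤ δcΔt_i; the Doppler factor and variability timescale below are hypothetical, not fitted values.

```python
c = 2.998e10          # speed of light [cm/s]
delta = 20.0          # Doppler factor (hypothetical)
dt_i = 3600.0         # high-energy variability timescale [s] (hypothetical)
R_i_max = delta * c * dt_i
print(f"R_i <= {R_i_max:.2e} cm")  # ~2.2e15 cm
```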
We assume that the electrons and protons in the blob are accelerated into standard energy distributions, with γ_{p,max} and γ_{e,max} denoting the cutoff energies. In order to produce TeV emission dominated by proton synchrotron radiation, the magnetic field is constrained as in [7]. The cutoff energies γ_{p,max} and γ_{e,max} are given in [7], where η is the so-called gyro-factor characterizing the rate of proton acceleration, which remains a rather uncertain model parameter. In the case of diffusive shock acceleration in blazar jets, the parameter η is expected to be larger than 10 [12].
The coefficient of proton synchrotron emission is written in terms of the primed (blob-frame) quantities, with ν'_p = δ^{-1}ν_p, where P(ν'_p, γ) is the spectral distribution of the synchrotron radiation emitted by a proton of energy γ. The emission intensity in a uniform field is given in [11][12], where k'(ν'_p) is the absorption coefficient of gamma-ray photons by pair production and τ_γγ = 2k'(ν'_p)R is the optical depth in the blob. There are two possible soft-photon fields for pair production: the radiation intrinsic to the blob and the external ambient radiation. We assume that N_{e0} << N_{p0}, so the absorption of gamma-rays on the photons produced by the electrons can be ignored, keeping τ_γγ ≤ 1.
The observed SED of the proton synchrotron emission follows from the above.
We assume that the blob moves away from the black hole with a constant δ and expands to a radius R_f. As the magnetic field in the blob decreases, the proton synchrotron emission is suppressed, and the high-energy electrons produced by acceleration mechanisms or proton-photon cascades dominate the radiation of the blob. The electrons emit synchrotron radiation, contributing the low-energy part of the SED, and inverse Compton (SSC) radiation, supplying the high-energy part. We adopt the standard electron energy distribution, and take the variability timescale Δt_f of the low-energy radiation to limit the radius, R_f ≤ δcΔt_f. We then calculate the emission spectra of the electrons to fit the observed SED of the low-energy component.
In calculating the gamma-ray spectrum we have considered the absorption of gamma-ray emission by the Intergalactic Infrared Background (IIB). We adopt the absorption coefficient of the infrared intergalactic radiation from the new empirical calculations derived by Stecker & De Jager (1998) [13]. [14] modelled the SED of Mrk 421 in March 2001 using an IC model, while Pian et al. [15] fitted the SED of Mrk 501 with an inhomogeneous SSC model, but neither fits the high-energy spectra very well; there is always a turn-up tail. We use the new model to fit their SEDs. The final fitted spectra for the two epochs are shown in the figures.
Fig 1 caption (fragment). X-ray (RXTE) data [11]; the solid line in the lower energy band is due to electron synchrotron emission.
Fig 2 caption (fragment). April 16 (high state), 1997; the solid line in the lower energy band is due to electron synchrotron emission; the simultaneous data were taken by the Beppo-SAX instrument [12].
The values of the magnetic field B, emitting radius R and Doppler factor δ are shown in Table 1. We can also estimate the ratio of proton to electron numbers from the two peak luminosities of the observed SED. The peak luminosities scale as $L_h \propto N_{p0} B_i^2 R_i^3 m_p^{-2}$ for the protons and $L_l \propto N_{e0} B_f^2 R_f^3 m_e^{-2}$ for the electrons, respectively, so that
$$\frac{N_{p0}}{N_{e0}} = \frac{L_h}{L_l}\,\frac{B_f^2 R_f^3}{B_i^2 R_i^3}\left(\frac{m_p}{m_e}\right)^2.$$
We take into account the SSC contribution of the electrons to the gamma-rays in the expanded blob. From the fitting results we think that the tail of the high-energy band could be the SSC contribution of the electrons. We also estimate the ratio between protons and electrons.
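The proton-to-electron number ratio implied by the two peak luminosities (equation above) can be evaluated directly; all parameter values in this sketch are illustrative placeholders, not the fitted values of Table 1.

```python
# N_p0 / N_e0 = (L_h / L_l) * (B_f^2 R_f^3) / (B_i^2 R_i^3) * (m_p / m_e)^2
m_ratio = 1836.15            # proton-to-electron mass ratio
L_h_over_L_l = 1.0           # comparable peak luminosities (illustrative)
B_i, R_i = 100.0, 1e15       # compact blob: field [G], radius [cm]
B_f, R_f = 0.1, 1e17         # expanded blob: field [G], radius [cm]
ratio = L_h_over_L_l * (B_f**2 * R_f**3) / (B_i**2 * R_i**3) * m_ratio**2
print(f"N_p0 / N_e0 ~ {ratio:.3e}")
```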
Conclusions and Discussion
The low-state SED in the X-ray band shown in Fig 2 is not fitted very well. In fact, these data points have large error bars due to the fast flux variations, so they carry little weight in fitting the SED.
Our model does raise some questions concerning the observed, nearly simultaneous X-ray/γ-ray variations of Mrk 421 [16,17]. However, the regular multiwavelength campaigns carried out for Mrk 421 over the last several years reveal a rather loose correlation between the X-ray and TeV γ-ray fluxes [18,19].
For Mrk 501, the multiwavelength observations lack a sufficiently long baseline to make a quantitative assertion about the statistical significance of an X-ray/TeV correlation [20]. There has even been evidence of an "orphan" TeV flare of the blazar 1ES 1959+650 [21], a transient γ-ray event that was not accompanied by an X-ray flare. | 2019-04-12T19:33:18.779Z | 2009-07-09T00:00:00.000 | {
"year": 2009,
"sha1": "4d5fe76f8ca04dc74e71b6ce6857f672b3fac4c2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0907.1537",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4d5fe76f8ca04dc74e71b6ce6857f672b3fac4c2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52067951 | pes2o/s2orc | v3-fos-license | Millimeter-wave Extended NYUSIM Channel Model for Spatial Consistency
I. INTRODUCTION
Recent work reveals that global mobile data consumption will experience a vast increase over the next few years [3], [4]. MmWave communication is regarded as a promising technique to support the unprecedented capacity demand because of the availability of ultra-wide bandwidths. Accurate channel modeling for mmWave frequencies has been an important area of study recently, since the mmWave channel, when combined with directional antennas, has vastly different characteristics from omnidirectional microwave channels [5]- [7]. Many statistical and deterministic channel models such as METIS [8], NYUSIM [6], [9], MiWEBA [10], 3GPP [11], [12], 5GCM [13], and mmMAGIC [14], have been proposed over the past few years.
Most of the existing statistical channel models are drop-based, where all parameters used in one channel realization are generated and used for a single placement of a particular user. Then, a subsequent simulation run of the drop-based channel model results in an independent sample function for a different user, and at a completely different, arbitrary location, even if the same distance between the transmitter (TX) and receiver (RX) is considered [1], [6], [15]. Drop-based models are popular because of their simplicity in Monte Carlo simulations [16]. The NYUSIM channel model generates static channel impulse responses (CIRs) at a particular distance/location, or across the manifold of a 2-D antenna structure, but cannot generate dynamic CIRs with spatial or temporal correlation based on a user's motion within a local area [1], [6], [17]. In other words, CIRs of two closely spaced locations are generated independently, although one would expect the CIRs to be highly correlated if the users were truly close to one another [15]. It stands to reason, and is borne out by measurements, that two close users, or a user moving in a small area, should experience a somewhat consistent scattering environment [2]. Thus, spatial consistency has become a critical modeling component in 3GPP Release 14 [11]. Challenges exist for drop-based models to be spatially consistent, since nearly all temporal and spatial parameters would need to vary in a continuous and realistic manner as a function of small changes in the user's location.
Lack of measurements poses a challenge to accurate spatially consistent channel modeling, especially for mmWave frequencies. Using field measurements to create and validate the mathematical channel models is one way to ensure accuracy and to gain theoretical insights. The NYUSIM channel model uses realistic large-scale and small-scale parameters for various types of scenarios, environments, and antenna patterns based on massive datasets from measurements at 28, 38, and 73 GHz in urban, rural, and indoor environments [4], [6], [15]. Local area measurements were conducted in a street canyon at 73 GHz over a path length of 75 m, where the receiver moved from a non-line-of-sight (NLOS) environment to a line-of-sight (LOS) environment [18]. The measurements [18] provide a basis for the proposed model with spatial consistency. The NYUSIM channel model simulator operates over a wide range of carrier frequencies from 800 MHz to 100 GHz [1], [6], provides temporal and 3-D spatial parameters for each MPC, and generates accurate CIRs, power delay profiles (PDPs) and 3-D angular power spectra. The spatial consistency extension proposed here allows the simulator to use additional parameters such as the velocity, location, and moving direction of a user to reproduce realistic CIRs received by the moving user with spatial consistency. This paper presents a modified channel coefficient generation procedure for spatial consistency under the framework of the NYUSIM channel model [2], and compares the simulation results with 73 GHz measured data from the street canyon measurements [18]. The paper is organized as follows. Section II overviews existing models that consider spatial consistency, and provides current approaches for channel tracking. Section III describes the impact of spatial consistency on the NYUSIM channel model, and describes the modified generation procedure for spatial consistency. Section IV presents the actual channel transitions and resulting CIRs when a user moved in a street canyon based on the measurements. Conclusions are presented in Section V.
II. EARLY RESEARCH ON SPATIAL CONSISTENCY
Due to the requirements of mmWave communications in mobile and vehicle-to-vehicle (V2V) communications [19], modern channel models and simulation techniques must adequately characterize changing environments, and generate continuous channel realizations with statistics that are lifelike and usable for accurate simulation for beamforming and other MAC and PHY level design. Channels can be categorized as stationary or non-stationary based on the rate of change of the propagation scenario. Channel modeling and simulations for non-stationary channels, where the scattering environment changes significantly, are studied in [20]. This channel model and simulation method emulated the time-variant nature of a real channel, and realized channel variations in a single channel realization. The channel modeling approach in [20] can also be extended to stationary channels where the channel parameters are renewed over time while still fulfilling the stationary condition [20]. Spatial consistency represents the smooth variation of stationary channels when a user moves, or when multiple users are closely spaced, in a local area of 5-10 m.
At microwave frequencies, early statistical CIR models for correlated multipath component amplitudes over one meter local areas due to the small-scale movement were developed from 1.3 GHz measurements, and associated channel simulators, SIRCIM/SMRCIM, were developed based on this model considering spatial and temporal correlation [21]. Specifically, the simulators considered the motion, the corresponding Doppler spread, and the resulting phase shift on individual multipath components over a local area [22], [23]. SIRCIM/SMRCIM simulators were implementations of spatial consistency, before the term was even coined.
Generally, the small-scale spatial autocorrelation coefficient of the received signal voltage amplitude decreases rapidly over distance, and the correlation distance of individual MPC amplitude is only a few to a few tens of wavelengths. The correlation distance of the received signal voltage amplitude in a wideband (1 GHz) transmission is only 0.67-33.3 wavelengths (0.27-13.6 cm) at 73 GHz, depending on antenna pointing angle with respect to scattering objects [1], [24]. Furthermore, the amplitudes of individual MPCs in an 800 MHz bandwidth decorrelate over 2 and 5 wavelengths (2.14 and 5.35 cm) at 28 GHz in LOS and NLOS environments, respectively [25], [26]. Spatial consistency, however, is different from small-scale spatial correlation. Spatial consistency refers to the similar and correlated scattering environments that are characterized by large-scale and small-scale parameters in the channel model [2], [15]. The large-scale parameters have a much longer correlation distance of 12-15 m [11] since the scattering environment does not change dramatically in a local area. [Fig. 1 caption: TX and RX locations from [18]. The yellow star is the TX location, blue dots represent LOS RX locations, and red squares indicate NLOS RX locations. North represents 0°.] Small-scale fading measurements [24] support the hypothesis of spatial consistency extending well beyond one meter since the amplitude of individual MPCs, and the total received power, varied smoothly and continuously over 0.35 m (the longest distance measured) [25].
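As a quick consistency check of the wavelength-based figures above, the conversion between wavelengths and centimetres at the two carrier frequencies can be reproduced in a few lines (a minimal sketch; the frequencies and wavelength counts are taken from the text):

```python
# Convert the quoted correlation distances from wavelengths to centimetres.
C = 299_792_458.0  # speed of light, m/s

for f_ghz, n_wavelengths in [(73.0, (0.67, 33.3)), (28.0, (2.0, 5.0))]:
    lam_cm = C / (f_ghz * 1e9) * 100.0          # wavelength in cm
    lo, hi = (n * lam_cm for n in n_wavelengths)
    print(f"{f_ghz} GHz: lambda = {lam_cm:.2f} cm -> {lo:.2f}-{hi:.2f} cm")
# 73 GHz: 0.67-33.3 wavelengths -> about 0.27-13.7 cm
# 28 GHz: 2-5 wavelengths       -> about 2.14-5.35 cm
```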
Both statistical and deterministic channel models need to be spatially consistent for use in studying adaptive signal processing for mobile scenarios. Statistical channel models rely on large-scale parameters (shadow fading, the number of time clusters, the number of spatial lobes, delay spread, and angular spread) and small-scale parameters (time excess delay, power, AOA and AOD for each MPC) from measurements [15], whereas deterministic channel models rely on geometry and ray-tracing techniques to acquire the channel information [8], [10].
A. Deterministic Channel Models with Spatial Consistency
Spatial consistency is more easily defined and maintained in deterministic channel models, since the locations of the scatterers in the environment are identified in the site-specific channel models [13]. The powers, angles, and delays of MPCs can be easily calculated from the relative change of locations of the RX and scatterers based on geometry, generally through the use of ray tracing [27].
The MiWEBA channel model [10] is quasi-deterministic at 60 GHz, and uses a few strong MPCs obtained from ray-tracing techniques. Several relatively weak statistical MPCs are added to the ray-tracing results to maintain some randomness in the channel. The METIS map-based channel model [8] also uses ray-tracing techniques to acquire large-scale parameters for a specific environment, and combines the map-based large-scale parameters with measurement-based statistical small-scale parameters [8].
B. Statistical Channel Models with Spatial Consistency
For statistical (e.g. stochastic) channel models, spatial consistency is a challenge since they tend to be drop-based, and cannot generate time-evolved CIRs in a local area. Thus, geometric information and correlation statistics are necessary for these models to obtain properly correlated values of large-scale and small-scale parameters for closely spaced locations. 5GCM [13] proposed three approaches for spatial consistency. The first approach uses spatially correlated random variables to generate small-scale parameters such as excess delays, powers, and angles. Users located nearby share correlated values of small-scale parameters. Four complex Gaussian independent and identically distributed (i.i.d.) random variables on the four vertices of a grid having a side length equal to the correlation distance are generated first. Then, spatially consistent uniform random variables at any location within the grid are formed by interpolating from these four Gaussian random variables [13]. The problem with this method of ensuring spatial consistency is that the system needs to store the values of the random variables for the grids around the user in advance, which requires a large storage space [14].
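The interpolation step can be sketched in a few lines. The snippet below is a minimal illustration rather than the 5GCM specification: it uses real (rather than complex) Gaussians, bilinear weights, and a normalisation that keeps the interpolated variable standard normal before mapping it to a uniform variate through the Gaussian CDF.

```python
import numpy as np
from scipy.stats import norm

def consistent_uniform(x, y, d_corr, vertex_gaussians):
    """Spatially consistent U(0,1) variable at (x, y) inside one grid cell.

    vertex_gaussians holds the four i.i.d. N(0,1) draws fixed at the cell
    vertices (order: lower-left, lower-right, upper-left, upper-right);
    d_corr is the cell side, i.e. the correlation distance.
    """
    u, v = (x % d_corr) / d_corr, (y % d_corr) / d_corr
    # Bilinear weights for the four vertices.
    w = np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])
    # Normalise so the interpolated variable stays N(0,1), then map it
    # through the Gaussian CDF to obtain a uniform variate.
    g = np.dot(w, vertex_gaussians) / np.sqrt(np.sum(w ** 2))
    return norm.cdf(g)

rng = np.random.default_rng(1)
vertex_draws = rng.standard_normal(4)   # shared by all users in the cell
print(consistent_uniform(3.0, 4.0, 12.0, vertex_draws))
print(consistent_uniform(3.5, 4.0, 12.0, vertex_draws))  # nearby -> similar
```

Two nearby evaluation points share the same vertex draws and therefore receive similar uniform values, which is exactly the correlation that purely drop-based models lack.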
The second approach is the geometric stochastic approach [13]. In this approach, large-scale parameters are precomputed for each grid having a side length equal to the correlation distance of the corresponding large-scale parameter. The small-scale parameters are dynamically evolved both in the temporal and spatial domain, based on the time-variant angle of arrivals (AOAs) and angle of departures (AODs), and cluster birth and death [28].
The third approach, the grid-based geometric stochastic channel model (GGSCM) [13], uses the geometric locations of scatterers (i.e., clusters). A cluster is defined as a group of rays coming from the same scatterer, and these rays have similar angles and delays. The angles and delays of the cluster and multipath components in the cluster can be translated into the geometrical positions of the corresponding scatterers. Thus, the time evolution of angles and delays can be straightforwardly computed from the relative changes of the user position, and have very realistic variations.
The mmMAGIC channel models have adopted the three aforementioned spatial consistency approaches, and set the first approach as the default since it provides a more accurate realization of the mmWave channel [14].
The COST 2100 model is also a geometry-based stochastic channel model [29], and introduces a critical concept, the visibility region. The visibility region refers to a region both in time and space where a group of multipath components is visible to the user. The multipath components in the visibility region constitute the CIRs experienced by the user.
III. SPATIAL CONSISTENCY EXTENSION FOR NYUSIM CHANNEL MODEL
As discussed earlier and in [2], [6], [15], the large-scale and small-scale parameters should vary continuously as a function of the user location in a channel realization over a local area. Under the framework of the NYUSIM channel model [9], a spatial consistency extension is proposed for the NYUSIM channel model and associated simulator [2]. A spatial exponential filter is applied to make large-scale parameters spatially correlated within the correlation distance of these parameters. The modeling of time-variant small-scale parameters is motivated by the stochastic geometry approach [28] and the CIR generation procedure in 3GPP Release 14 [11]. The large-scale path loss is made time-variant, and the shadow fading is made spatially consistent over a local area. Thus, the NYUSIM channel model is extended from a static drop-based to a dynamic time-variant channel model, which fits well with the natural evolution of NYUSIM and other drop-based statistical models.
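One common realisation of such a spatial exponential filter is a first-order Gauss-Markov (AR(1)) process whose one-step correlation decays as exp(-Δd/d_corr). The sketch below is an assumed, simplified form of the filter described above, not the exact NYUSIM implementation; the step size, correlation distance, and shadow-fading standard deviation are illustrative.

```python
import numpy as np

def correlated_shadow_fading(n_steps, step_m, d_corr_m, sigma_db, rng):
    """Shadow-fading samples (dB) along a track with exp(-d/d_corr)
    autocorrelation, generated as an AR(1) process."""
    rho = np.exp(-step_m / d_corr_m)               # one-step correlation
    sf = np.empty(n_steps)
    sf[0] = rng.normal(0.0, sigma_db)
    for i in range(1, n_steps):
        # The innovation variance keeps the marginal variance at sigma_db^2.
        sf[i] = rho * sf[i - 1] + rng.normal(0.0, sigma_db * np.sqrt(1 - rho**2))
    return sf

rng = np.random.default_rng(0)
track_sf = correlated_shadow_fading(100, 1.0, 12.0, 8.0, rng)  # 1 m updates
```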
Two distances should be clarified first: the correlation distance and the update distance. The correlation distance determines the size of the grid that maintains spatial consistency of channel conditions. The CIRs of a user moving beyond the correlation distance, or of multiple users separated by more than the correlation distance, can be regarded as independent. Each large-scale parameter has its own particular correlation distance, and the correlation distance varies according to scenarios and frequencies. For example, the correlation distance of a large-scale parameter in the UMi scenario is shorter than the one in the RMa scenario because of the higher building density. Thus, extensive propagation measurements for various scenarios and frequencies are necessary to provide accurate values of the correlation distances of large-scale parameters. 3GPP Release 14 [11] specifies that the correlation distances of large-scale parameters in the LOS and NLOS UMi scenarios are 12 m and 15 m, respectively. Some 73 GHz measurements in a LOS street canyon scenario suggest that the correlation distance of large-scale parameters at 73 GHz is 3-5 m [28]. From the local area measurements at 73 GHz that will be introduced in Sec. IV, the correlation distance of the number of time clusters is 5-10 m. The update distance is the distance interval over which the model updates the small-scale parameters for MPCs and renews the CIR. Since the small-scale parameters are time-variant and not grid-based, the update distance should be much shorter than the correlation distance of large-scale parameters to ensure accurate modeling while sampling at an arbitrary time or distance within a local area [22]. 3GPP Release 14 suggested that the update distance should be within 1 m. During each update, the channel can be considered static. That is to say, the update period is 2 s when the user moves at 0.5 m/s, and 0.2 s when the user moves at 5 m/s. An update distance of 1 m is set in the NYUSIM channel model for simplicity.
The details for the generation method of each large-scale and small-scale parameter are described below.
• Time-variant path loss: The path loss varies smoothly as the user moves in a local area since the shadow fading is spatially consistent. The path loss is obtained from the close-in (CI) path loss model with a 1 m free space reference distance [6], and is calculated in every update period based on the locations of the moving user. The path loss and shadow fading are critical to the evaluation of massive MIMO and multi-user MIMO system performance, and have a large impact on the received power.
• LOS/NLOS transition: The LOS/NLOS condition (LOS probability) determines the values of the path loss exponent and shadow fading; thus, the path losses in the LOS and NLOS scenarios differ considerably. The NYUSIM channel model, as a stochastic model, models the LOS probability with a distance-squared model [24]. The conventional NYUSIM channel model generates the LOS/NLOS condition independently in each simulation, and the condition does not change during a simulation. A spatial exponential filter is applied to make the LOS/NLOS condition spatially correlated based on the correlation distance of the LOS probability [2]. Furthermore, when the LOS/NLOS condition changes, the values of the corresponding parameters will change during the simulation. For Monte Carlo simulations, a statistical spatially correlated LOS/NLOS condition map for a local area is sufficient to evaluate the system capacity. However, in real-world transmission, information on the LOS/NLOS condition in the channel state information (CSI) would be very important for the base station to decide the transmission scheme.
• The number of time clusters, the number of spatial lobes, the number of MPCs in each time cluster: These three parameters are large-scale parameters that are precomputed for each grid since the surrounding scatterers do not change rapidly within the correlation distance of large-scale parameters.
• Cluster birth and death: This concept, first presented in [28], describes the time evolution of time clusters. When the user moves across grids from location A to location B in the real world, the time clusters that appear at location A may disappear at location B since the clusters observed at A become very weak. The extension for the NYUSIM channel model generates grid-based large-scale parameters including the number of time clusters. Thus, the clusters of A should be discarded, and the clusters of B should be generated gradually during the movement. This procedure can be modeled as a Poisson process with a rate of cluster birth and death, so that the probability of the occurrence of cluster birth and death is $P = 1 - \exp[-\lambda_c (t - t_0)]$, where $t_0$ is the most recent update time, and $\lambda_c$ is the mean rate of cluster birth and death per second. This rate varies according to the scenario, and can only be obtained from field measurements. The birth and death always happen to the weakest cluster at the location. If the numbers of clusters in two grids, A and B, are the same, only the replacement of an old cluster by a new cluster will occur: the weakest cluster of A will be replaced by the weakest cluster of B as one moves from grid A to grid B. Note that when cluster birth and death occurs, only one cluster of A and one cluster of B will be involved. If the numbers of clusters in the two grids are not the same, the cluster birth or death will occur alone. This gradual replacement of time clusters ensures spatial consistency in the NYUSIM channel model (a minimal sketch of the update rule is given after this list).
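The sketch below illustrates one birth/death update; the occurrence probability 1 - exp(-λ_c Δt) is the standard Poisson form implied by the text, and the representation of a cluster as a bare power value is purely illustrative.

```python
import numpy as np

def step_clusters(clusters_a, clusters_b, dt_s, lam_c, rng):
    """One birth/death update while moving from grid A towards grid B.

    clusters_a, clusters_b: lists of cluster powers (linear units) drawn
    for the two grids; lam_c: mean birth/death rate (1/s); dt_s: time
    elapsed since the last update.
    """
    p_event = 1.0 - np.exp(-lam_c * dt_s)   # P(at least one Poisson event)
    if rng.random() < p_event and clusters_b:
        clusters_a = sorted(clusters_a)[1:]      # death of A's weakest cluster
        clusters_a.append(min(clusters_b))       # birth of B's weakest cluster
    return clusters_a

rng = np.random.default_rng(7)
a = [1.0, 0.3, 0.05]                 # grid-A cluster powers
b = [0.8, 0.2, 0.02, 0.4]            # grid-B cluster powers
a = step_clusters(a, b, dt_s=2.0, lam_c=0.5, rng=rng)
```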
A. Measurement Environment and Procedure
Local area measurements were conducted at 73 GHz using a null-to-null RF bandwidth of 1 GHz [18] to study spatial consistency, and provided the reference values of several parameters such as the correlation distance of large-scale parameters. Table I provides the specifications of the measurement system [18]. The measurements were conducted in a street canyon (18 m wide) between 2 and 3 MetroTech Center in downtown Brooklyn, NY. During the measurements, the TX and RX antenna heights were set to 4.0 m and 1.5 m, to emulate the heights of an access point and a user terminal, respectively. TX and RX locations are shown in Fig. 1, where the RX moved from location RX81 to location RX96 (NLOS to LOS). The T-R separation distance varied from 81.5 m to 29.6 m. Specifically, the T-R separation distance of the NLOS locations (RX81 to RX91) varied from 81.5 m to 50.8 m, and that of the LOS locations (RX92 to RX96) varied from 49.1 m to 29.6 m. The distance between two adjacent RX locations was 5 m. Note that the TX antenna pointing angle was the direction that resulted in the strongest received power at the starting location, RX81, and was fixed during the measurements. For each TX-RX combination, the RX swept five times in the azimuth plane. Each sweep took 3 min, and the interval between sweeps was 2 min. The RX antenna swept in half-power beamwidth (HPBW) step increments (15°). A power delay profile (PDP) was recorded at each RX azimuth pointing angle, and the measurements at each location resulted in at most 120 PDPs (some angles did not have a detectable signal above the noise floor). The best RX pointing angle (the direction in which the RX received the maximum power) in the azimuth plane was selected as the starting direction for the RX azimuth sweeps (elevation remained fixed for all RXs) at each RX location measured [18].
B. Measurement Data Processing and Analysis
24 directional PDPs (HPBW step increments in the azimuth plane, 360/15 = 24) [18] from one sweep at each location were combined to form one omnidirectional PDP to better illustrate spatial consistency. Denoising was applied before this synthesis with a threshold of 20 dB below the peak power of each directional PDP. All 16 omnidirectional PDPs were aligned for illustration purposes only, and the time excess delays of these PDPs are shown in Fig. 2. As the RX moved towards the TX, the received power increased, and the number of time clusters also increased from 1 up to 6.
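The synthesis step can be sketched as follows; this is a minimal illustration under the stated 20 dB threshold, with assumed array shapes and linear power units.

```python
import numpy as np

def omni_pdp(directional_pdps_mw, threshold_db=20.0):
    """Combine directional PDPs (linear mW, shape [n_angles, n_bins])
    into one omnidirectional PDP after per-angle denoising."""
    pdps = np.array(directional_pdps_mw, dtype=float)
    out = np.zeros(pdps.shape[1])
    for pdp in pdps:
        peak = pdp.max()
        if peak <= 0:
            continue                     # no detectable signal at this angle
        # Zero everything more than threshold_db below the per-angle peak.
        pdp = np.where(pdp >= peak * 10 ** (-threshold_db / 10), pdp, 0.0)
        out += pdp                       # powers add in the linear domain
    return out

rng = np.random.default_rng(5)
pdps = rng.exponential(1e-6, size=(24, 500))   # 24 angles, 500 delay bins
omni = omni_pdp(pdps)
```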
To study the correlation distance of large-scale parameters, the PDPs of the first six NLOS RX locations were studied. The PDPs are shown in Fig. 3. The number of time clusters is summarized in Table II based on the time-clustering algorithm described in [26]. Thus, the correlation distance of the number of time clusters is about 5-10 m, and the correlation distance of the delay spread is also about 5-10 m. The same correlation distances can be found from the remaining PDPs at the other LOS and NLOS locations in Fig. 2.
These local area measurements also showed the impact of LOS/NLOS condition on resulting PDPs. When the RX moved from RX91 to RX92, the visibility condition changed from NLOS to LOS. The PDPs at RX91, RX92, RX93 are shown in Fig. 4. The received power of RX92 was much stronger than that of RX91, and there were more MPCs at RX92 than at RX91. These results indicate that the LOS/NLOS condition is particularly critical to CIRs and cannot be generated independently for nearby locations as is currently done in conventional statistical channel models. The spatially consistent LOS/NLOS condition would help to predict the CIRs more accurately.
V. CONCLUSION
The spatial consistency extension for the outdoor NYUSIM channel model has been presented in this paper. The generation procedure of both large-scale parameters and small-scale parameters was modified to make these parameters spatially consistent and time-variant. Spatially correlated random variables were applied to characterize the grid-based large-scale parameters; a geometry-based approach was applied to obtain the time-variant small-scale parameters such as time-variant AODs and AOAs, and time cluster birth and death. The static large-scale path loss of drop-based simulations was transformed into a time-variant parameter. The local area measurements in a street canyon were also presented and analyzed in this paper, which indicated that the correlation distance of the number of time clusters and of the delay spread is about 5-10 m in a UMi street canyon scenario. More field measurements should be conducted to obtain parameters for spatial consistency in various scenarios. Modern channel models with spatial consistency will help in the design of beam tracking and beamforming at the system level, and will allow the channel to be estimated more accurately in transient simulations. | 2018-08-21T19:32:03.000Z | 2018-08-21T00:00:00.000 | {
"year": 2018,
"sha1": "44550c4057f59238e88b58014a9c2f5a284c08c9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1808.07099",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aff1acd27da771d243cf0d30b1e27fc627c63969",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
211084985 | pes2o/s2orc | v3-fos-license | TULIP: a randomised controlled trial of surgical versus non-surgical treatment of lateral compression injuries of the pelvis with complete sacral fractures (LC1) in the non-fragility fracture patient—a feasibility study protocol
Introduction Lateral compression type 1 (LC1) pelvic fractures are the most common type of pelvic fracture. The majority of LC1 fractures are considered stable. Fractures where a complete sacral fracture is present increase the degree of potential instability and have the potential to displace over time. Non-operative management of these unstable fractures may involve restricted weight bearing and significant rehabilitation. Frequent monitoring with X-rays is also necessary to detect displacement of the fracture. Operative stabilisation of these fractures may be appropriate to prevent displacement of the fracture. This may allow patients to mobilise pain-free more quickly. Methods and analysis The study is a feasibility study to inform the design of a full definitive randomised controlled trial to guide the most appropriate management of these injuries. Participants will be recruited from major trauma centres and randomly allocated to either operative or non-operative management of their injuries. A variety of outcome instruments, measuring health-related quality of life, functional outcome and pain, will be completed at several time points up to 12 months post injury. Qualitative interviews will be undertaken with participants to explore their views of the treatments under investigation and trial processes. Eligibility and recruitment to the study will be analysed to inform the feasibility of a definitive trial. Completion rates of the measurement instruments will be assessed, as well as their sensitivity to change and the presence of floor or ceiling effects in this population, to inform the choice of the primary outcome for a definitive trial. Ethics and dissemination Ethical approval for the study was given by the South West—Central Bristol NHS Research Ethics Committee on 2nd July 2018 (Ref: 18/SW/0135). The study will be reported in relevant specialist journals and through presentation at specialist conferences. Trial registration number ISRCTN10649958
Introduction

Background
The Trauma Audit and Research Network (TARN) database indicates increasing numbers of pelvic ring fractures. In the financial year 2015/16, TARN recorded 6407 pelvic ring fractures in England and Wales, of which half were associated with high-energy trauma. Fractures associated with a side or lateral compression force are the most common; a subgroup of these are called lateral compression type 1 (LC1). LC1 fractures make up approximately 60% of pelvic ring fractures, 1 2 which equates to approximately 3800 patients a year within England and Wales. A proportion of pelvic fractures are sustained as a result of simple trips or falls; these occur generally in older people, in whom bone quality is frequently poor. Stabilisation of fractures in elderly patients presents technical problems due to the difficulty in achieving adequate fixation in osteoporotic bone.
Strengths and limitations of this study
► This is the first randomised multicentre study to investigate the treatment of high-energy unstable LC1 fractures.
► We are collecting a range of outcome measures at several time points to identify the most appropriate primary outcome for a definitive study.
► Qualitative interviews will provide valuable insights to identify challenges with recruitment and follow-up and inform the future definitive study design.
► Results of the TULIP feasibility study will inform the design and conduct of a future multicentre randomised controlled trial.
Box 1 Detailed study objectives
1. To produce a Consolidated Standards of Reporting Trials diagram, reporting screening, recruitment and randomisation compliance, and including allocation proportions by centre.
2. To confirm the recruitment rates and percentage of eligible patients who agree to take part.
3. To collect outcome data at fixed time points post injury to collate the completeness and spread of the data at different time points post injury.
4. To identify the outcome measure to be used as the primary outcome on the basis of completeness of data, sensitivity to change over time, the presence of floor or ceiling effects and patient acceptability.
5. To develop and refine methods for the collection of resource use data relating to both management pathways.
6. To explore patient and staff views of randomisation, treatment and trial processes using qualitative interviews.
The mortality during index hospital admission associated with LC1 fractures ranges from 5.1% to 8.6%. 1 2 LC1 fracture patterns are a heterogeneous group of injuries, divided into those involving a complete or an incomplete fracture of the sacrum with or without an injury to the anterior pelvic ring.
The majority of LC1 fractures are considered stable enough to allow rehabilitation without later displacement. Numerous studies have shown complete sacral fractures to be present in 32%-50% of LC1 fractures. [3][4][5] The combination of a complete sacral fracture and either unilateral or bilateral pubic rami fractures increases the degree of potential instability. Unstable LC1 fractures of the pelvis have a tendency to displace significantly over time. 6 Bruce et al 5 reported 32% of patients with a complete sacral fracture and unilateral pubic rami fractures, and 68% of patients with a complete sacral fracture and bilateral rami fractures, went on to have significant displacement.
This, potentially unstable, subgroup of LC1 fractures may still be managed non-operatively. Patients would usually be allowed to mobilise as able although they may be advised to restrict the amount of weight they put through the injured side and will require walking aids provided by a physiotherapist. They also require frequent X-rays to monitor for any progression in fracture displacement. Patients with LC1 fractures are reported to spend up to 16 days in hospital following their injury 7 and require significant rehabilitation following their discharge from acute care. 8 These injuries can have significant implications for patients. Hoffmann et al 9 showed that even at 24 months postinjury, patients had not returned to their preinjury functional abilities. Aprato et al 8 found that 60% of the costs following pelvic injury were attributed to health-related work absence.
It may therefore be appropriate to surgically stabilise this subgroup of more severe, potentially unstable, LC1 fractures. This involves the insertion of metalwork to prevent displacement of the fractures. While patients will still require walking aids, their ability to mobilise may be improved. Tosounidis et al 7 carried out a non-randomised study comparing surgical versus non-surgical management of LC1 fractures. They found that patients had significantly decreased pain at 72 hours and were able to mobilise pain-free more quickly following surgery. They also demonstrated a shorter length of stay in patients undergoing surgical fixation. However, Hagen et al, 10 in a retrospective study looking at patients' pain, narcotic use and mobility following surgical stabilisation of lateral compression fractures, found no significant difference in these parameters between surgically and non-surgically treated groups.
Other advantages of treating these fractures surgically include a lower risk of fracture displacement and avoiding the risks associated with immobility, including chest or urinary tract infection, thrombosis and pressure sores. The disadvantages of treating LC1 fractures surgically are the risk of general anaesthesia, the physiological impact of surgery, the small risk of surgical site infection and of damage to the nerves that supply the bladder, bowel or leg muscles. As well as improving patients' pain levels and functional abilities, surgery has been shown to provide economic benefits by reducing length of hospital stay and input required from healthcare professionals, which may outweigh the additional costs of surgery.
Rationale
A survey on the management of LC1 fractures, 11 although not specific to unstable LC1 fractures, indicated significant variation of practice in managing these fractures and agreement between surgeons was only achieved for one third of case studies.
Both Hagen et al 10 and Tosounidis et al 7 concluded that a randomised controlled trial of surgical versus non-surgical management of LC1 fractures was needed. Currently there is no level 1 evidence available to guide clinicians as to the optimum management of these patients.
Aim and objectives
The overarching aim is to perform a definitive trial to establish whether surgical or non-surgical management of unstable LC1 fractures is most appropriate. The aim of this feasibility study is to allow us to plan a full definitive trial by measuring recruitment, retention and follow-up rates and explore participant and staff views of the trial processes. Study objectives are shown in box 1.
Methods and analysis

Trial setting
This multicentre trial will take place over 33 months in 9 NHS Major Trauma Centres (MTCs) which specialise in the treatment of pelvic injuries. There are currently 22 MTCs across the UK, to which all patients with unstable pelvic injuries will be referred for assessment.
This multicentre trial will take place in 9 NHS Major Trauma Centres (MTC) which specialise in the treatment of pelvic injuries over 33 months. There are 22 MTCs across the UK currently where all patients with unstable pelvic injuries will be referred and assessed.
Eligibility
All patients over 16 years of age presenting with an LC1 fracture including a complete sacral fracture will be assessed for inclusion in the study. A log of all patients meeting these criteria will be maintained. Patients will be excluded if they meet one of the following criteria:
► Unable to be randomised within 72 hours of having capacity to comprehend the study information following arrival at the major trauma centre.
► Fragility fractures resulting from low-energy trauma (fall from less than standing height).
► Presenting medical condition which precludes surgical intervention.
► Unable to provide informed consent.
Recruitment
Patients eligible for inclusion in the study will be identified by their surgeon who will make the patient aware of the study and seek their agreement to consider participating. The study will then be fully discussed with the patient by a member of the research team at each site. Patients will be provided with a written information sheet explaining the purpose of the study and the treatments under investigation. They will be allowed sufficient time to consider the information provided and patients who agree to participate in the study will be asked to provide written consent. Patients who decline to participate in the study will be recorded on the screening log together with reasons for declining where provided.
To understand patient perceptions of the recruitment process, all patients that are approached regarding their potential participation in the study will be asked to complete a short questionnaire regardless of whether they consent to participate in the feasibility study. Patients will be asked to complete these questionnaires immediately following confirmation of their decision on participating in the study. Where this is not possible a copy of the questionnaire will be sent in the post by the local research team. Responses to these questionnaires will be confidential and patients will be identified only by their screening ID. The results of this questionnaire will be analysed as an ongoing process to help inform and develop the approach of further patients. Figure 1 shows the flow of participants through the trial.
Allocation and blinding
Patients will be randomly allocated to the treatments on a 1:1 basis using a web-based randomisation procedure hosted by the Bristol Randomised Trials Collaboration (a registered Clinical Trials Unit), with concealment prior to consent but no blinding of participants or clinical staff to the allocation of treatment pathway. The trial statistician is responsible for producing the allocation sequence, stratified by recruiting centre and minimised on Injury Severity Score as an indicator of multiple injuries (<16 or ≥16).
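For illustration only, a minimisation scheme of this kind can be sketched as below. The actual allocation algorithm is implemented by the Bristol Randomised Trials Collaboration and is not described in this protocol, so the arm labels, the biased-coin probability and the book-keeping here are assumptions.

```python
import random
from collections import defaultdict

# counts[(centre, iss_band)] -> allocations so far within that stratum
counts = defaultdict(lambda: {"surgical": 0, "non-surgical": 0})
rng = random.Random(2018)

def allocate(centre, iss, p_bias=0.8):
    """Assign the currently under-represented arm with probability p_bias."""
    stratum = (centre, "ISS>=16" if iss >= 16 else "ISS<16")
    tally = counts[stratum]
    if tally["surgical"] == tally["non-surgical"]:
        arm = rng.choice(["surgical", "non-surgical"])
    else:
        lagging = min(tally, key=tally.get)          # arm with fewer patients
        other = ({"surgical", "non-surgical"} - {lagging}).pop()
        arm = lagging if rng.random() < p_bias else other
    tally[arm] += 1
    return arm

print(allocate("MTC-01", iss=9))
print(allocate("MTC-01", iss=9))   # tends to balance within the stratum
```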
Surgical management
Surgical management will involve fixation of the pelvic fracture by a specialist pelvic surgeon at the earliest opportunity. As surgical fixation of these fractures is performed regularly in all participating centres the method of fixation and choice of implant will be left to the operating surgeon. Postoperative management and rehabilitation will be left to the discretion of the treating surgeon. Details on the surgery and subsequent rehabilitation will be collected as part of the study.
Non-surgical management
Non-surgical management will be left to the discretion of the treating surgeon. Any decision on restricted weightbearing will be left to the treating surgeon. Rehabilitation including Physiotherapy and Occupational Therapy will follow usual practices. Details on the rehabilitation will be collected as part of the study.
Outcomes
To assess the feasibility of the study design we will assess participant numbers, including the recruitment rate, the numbers of patients meeting the inclusion criteria and reasons for exclusion or declining where appropriate, and compliance rates with the allocated treatment together with any reasons for not being able to comply. We will also look at follow-up rates and withdrawals, including reasons for withdrawal where appropriate, in accordance with the Consolidated Standards of Reporting Trials diagram. We will examine the outcome measures that are expected to be used in the full trial, with particular interest in data completion rates, evidence of sensitivity to change (whether the scores change over time) and whether the outcomes have ceiling or floor effects. The following patient reported outcomes will be tested for use in a definitive study.
Measures at baseline and follow-up
Iowa Pelvic Score: A measure specific to outcomes following pelvic injury. 12 It shows good construct validity when compared with the physical component of the SF-36. This is also the pelvic-specific outcome measure preferred by patients 13 and the study patient advisory group.
Oxford Hip Score 14 : A functional score for patients following hip injury and/or surgery. While not pelvic specific, the activities and symptoms included were felt to be relevant by our patient group.
EQ-5D-5L 15 : A standardised instrument of health status. ICECAP-A 16 : A measure of capability for the general adult population for use in economic evaluation. It focuses on well-being in the broader sense, not just health status.
Brief Pain Inventory 17 : Originally developed to measure pain in patients suffering from cancer, it has since been used in a variety of conditions. It allows patients to rate the severity of their pain as well as its influence on their psychological health and activity. All participants will complete these questionnaires at baseline, 2 and 6 weeks, and 3 and 6 months following randomisation. Participants recruited in the first 12 months of the study will also complete questionnaires at 9 and 12 months following randomisation. Baseline data will be collected at recruitment. Participants will be able to complete their follow-up questionnaires in person, when attending an outpatient appointment, online or by post. Standard care for participants with these injuries would be clinical review in an outpatient clinic at 6 weeks, 3 months and 12 months (see table 1).
At these time points, in addition to the questionnaires, participants will complete a Timed Up and Go assessment. 18 This is an assessment of a participant's physical walking ability and involves being timed to stand from a chair, walk a distance of 3 m and return to sit in the chair. Where possible this will be completed by an assessor blinded to the participant's treatment allocation. Completeness of this assessment will be recorded to inform the appropriateness of its use in a definitive trial.
Data obtained as part of the study will be entered on to a secure password protected online REDCap database.
Study duration
Recruitment will continue for 18 months, followed by 6 months of follow-up and 6 months for analysis.
Economic evaluation
The economic feasibility will focus on data collection to inform the economic evaluation to be done alongside the definitive trial. As well as the EQ-5D-5L and ICECAP-A we will record length of stay in both study arms, along with time spent in theatre and implants used (surgical arm only). Use of specific primary, community and social care services will be assessed by patient reported resource use questionnaires at 6 weeks, 3, 6, 9 and 12 months.
Qualitative study
To inform the conduct of the definitive trial, we will invite up to 20 consented participants (10 from each treatment arm and across all sites) to take part in a semi-structured telephone interview by the qualitative researcher after they have completed the 6-month follow-up questionnaire. The interviews will explore their experience of the trial, their treatment and recovery, and acceptability of the outcome measures. A purposive sample will be selected to reflect maximum variation in socio-demographics, age and ethnicity. Topic guides for the interviews will be developed from the literature, team discussions and input from the PAG. Ten participating healthcare professionals (surgeons, research nurses and clinical nurse specialists) will be invited to take part in a telephone interview evaluating their experiences of treatment and views of trial processes.
Safety reporting
Only serious adverse events will be reported for this study comparing two treatments in common clinical practice. A serious adverse event is any untoward medical occurrence that:
► Results in death.
► Is life-threatening.
► Requires inpatient hospitalisation or prolongation of existing hospitalisation.
► Results in persistent or significant disability/incapacity.
► Consists of a congenital anomaly or birth defect.
Serious adverse events which are expected with these injuries are:
► Wound complications/infections.
► Neurovascular injury.
► Thromboembolic events.
► Chest infection.
► Metalwork/implant failure/loosening and non-union/mal-union.
Secondary operations to prevent infection, mal-union or non-union, or for symptoms related to the metalwork, may also be expected.
Any unexpected serious adverse events will be recorded and reported to the Sponsor and Ethics Committee.
Sample size
This feasibility study is designed to produce estimates of the parameters required to plan a definitive trial, together with enough data on outcome measures to show whether or not the ceiling effect on the Iowa instrument is likely to be a problem in the definitive trial. If 120 patients are screened as eligible and 40% agree to take part, then this will allow us to estimate the recruitment rate of 40% with a 95% CI of 31%-49%, which is within 10% in either direction (the exact binomial interval is illustrated in the sketch below). Forty complete sets of data should be enough to show when a ceiling effect starts to occur, although this will rely on a visual inspection of the data at each time point. If 60 sets of data are collected this will allow greater precision.

Data analyses

Quantitative data analysis
As this is a feasibility trial no formal statistical testing will be carried out. Instead the analysis will focus on reporting data that will be used for planning and for assessing the feasibility of the definitive trial.
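The quoted interval can be reproduced with the exact (Clopper-Pearson) binomial method; a minimal sketch, assuming SciPy is available:

```python
from scipy.stats import beta

def exact_binomial_ci(k, n, level=0.95):
    """Clopper-Pearson (exact binomial) confidence interval for k/n."""
    a = (1.0 - level) / 2.0
    lower = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1.0 - a, k + 1, n - k) if k < n else 1.0
    return lower, upper

# 48 of 120 eligible patients recruited -> about (0.31, 0.49), as quoted.
print(exact_binomial_ci(48, 120))
```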
Feasibility parameters with 95% CIs will be provided using the exact binomial method. The spread of the data and ceiling effects will be documented for all outcome variables using histograms for single time points and box plots to compare over time. Calculation of the area under the curve over time is the likely primary method of analysis for the definitive trial, and the feasibility analysis will investigate whether this would produce a sufficiently complete data set or whether it would be better to focus on a particular time point. The 95% CI for the effect sizes for all potential outcome measures will be calculated to ensure that a future trial can be planned appropriately.
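As a minimal sketch of the area-under-the-curve summary mentioned above (the follow-up schedule and scores below are hypothetical; the trapezoidal rule is one standard choice):

```python
import numpy as np

weeks  = np.array([0.0, 2.0, 6.0, 13.0, 26.0])     # baseline to 6 months
scores = np.array([20.0, 35.0, 52.0, 70.0, 78.0])  # e.g. an outcome score

# Trapezoidal area under the outcome-versus-time curve, then a
# time-averaged score that is comparable across participants.
auc = np.sum(0.5 * (scores[1:] + scores[:-1]) * np.diff(weeks))
mean_score = auc / (weeks[-1] - weeks[0])
print(auc, mean_score)
```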
The future economic evaluation is likely to present results in cost per Quality Adjusted Life Year (QALY) terms, reporting both within-trial and lifetime horizons. The economic feasibility work will focus on establishing the appropriate methods for collecting the outcomes, both costs and utilities, which will be of interest in the future economic evaluation, with analysis therefore limited to assessment of completeness and descriptive statistics.
Qualitative data analysis
With informed consent, all interviews will be digitally recorded, transcribed, anonymised and analysed using thematic methods, building codes into themes and sub-themes through the process of constant comparison (facilitated by NVIVO software: QSR International). This aspect is important to understand the acceptability of trial processes, including randomisation, treatment pathways and other outcome questionnaires for the definitive trial.
Patient and public involvement/patient advisory group
A patient advisory group (PAG) has been involved in the development of the study and in advising on study design. The PAG has been particularly involved in the selection of appropriate outcome measures and in reviewing patient-facing materials including the information sheets. The group will continue to provide advice throughout the study, and its advice on any changes which may improve recruitment and the study will be actively sought. A representative of the group will sit on the Trial Steering Committee (TSC) to feed back the advice of the group to the committee. The PAG will also be actively involved in any publication and dissemination of results at the end of the study.
Dissemination
The findings of the study will be presented locally at each participating site and to the general orthopaedic community at national orthopaedic conferences. The findings will also be submitted for publication in an open access | 2020-02-13T09:23:47.293Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "70a7866044fe6da2b528e0e67797ac06cf5dbe03",
"oa_license": "CCBY",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/2/e036588.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d33b7550e2ff521c1db0430f6598e3e92427723f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231698441 | pes2o/s2orc | v3-fos-license | The HI-halo mass relation at redshift $z \sim 1$ from the Minkowski functionals of 21 cm intensity maps
The mean and the scatter of the HI content of a dark-matter halo as a function of the halo mass are useful statistics that can be used to test models of structure and galaxy formation. We investigate the possibility of constraining this HI-halo mass relation (HIHMR) from intensity maps of the redshifted 21 cm line. In particular, we use the geometry and topology of the brightness-temperature isocontours in a single frequency channel as quantified by the Minkowski functionals. First, we generate mock maps from a large N-body simulation considering the impact of thermal noise and foreground removal. We then use the Fisher information formalism to forecast constraints on a parametric model for the HIHMR. We consider a 20,000 deg$^2$ survey (originally proposed for dark-energy science) conducted with the Square Kilometre Array Phase 1 (SKA-1) MID observatory operating in single-dish mode. For a channel bandwidth of 2 MHz, we show that an integration time of a few$\,\times\,10^4$ s per pointing is sufficient to image the smoothed HI distribution at redshift $z \simeq 1$ and to measure the HIHMR in a nearly optimal way from the Minkowski functionals. Tighter constraints on some of the parameters can be obtained by using also an independent measurement of the mean HI density. Combining the results from different frequency channels provides exquisite constraints on the evolution of the HIHMR, especially in the central frequency range of the data cube.
INTRODUCTION
Cold-gas reservoirs in galaxies provide the raw fuel for star formation. Assessing how they vary across different galaxy populations and environments is of paramount importance to constrain models of galaxy assembly and evolution.
Neutral hydrogen (HI) makes up the bulk of the cold gas. In the post-reionisation Universe, HI can be found almost exclusively within self-shielded clouds inside galaxies and galaxy clusters. The total HI content of dark-matter (DM) haloes is thus a simple descriptive statistic that can be used to compare theoretical models with observations. It gives the HI mass in a halo by summing up the contributions from central and satellite galaxies (as well as from diffuse gas). A strong theoretical prejudice is that this quantity should mainly depend on the halo mass. In fact, gas-accretion rates (both from the intergalactic medium and via galaxy mergers) are expected to be regulated by the halo masses and so are also several processes that deplete the HI content (like the efficiency of galactic winds or photoionisation). Extra dependencies on top of the halo mass will appear as scatter around the mean relation. Measuring the amplitude of this scatter provides an empirical test of the assumption that halo mass is the main driver of the HI content.
The HI-halo mass relation (HIHMR) has been the subject of many studies based on (post-processed) hydrodynamical simulations (Davé et al. 2013; Villaescusa-Navarro et al. 2014; Crain et al. 2017; Villaescusa-Navarro et al. 2018; Ando et al. 2019) and semi-analytical models of galaxy formation (Kim et al. 2017; Zoldan et al. 2017; Baugh et al. 2019; Spinelli et al. 2020). In parallel, observational constraints at redshift $2.2 \lesssim z \lesssim 5$ have been obtained for damped Lyman-$\alpha$ absorbers (Barnes & Haehnelt 2014). On the other hand, in the local Universe, extensive investigations have been carried out to elucidate the HIHMR of HI-rich galaxies from the Arecibo Legacy Fast ALFA Survey (ALFALFA, Giovanelli et al. 2005). For instance, Paul et al. (2018) have predicted the HIHMR by assuming that there exists a scaling relation between the HI content and the optical properties of a galaxy. By cross-matching the ALFALFA HI sources with the optical group catalog from the Sloan Digital Sky Survey, Ai & Zhu (2018) have estimated the total HI mass for rich galaxy groups containing eight members or more. Combining a similar technique with information about the HI-weighted clustering of the ALFALFA sources, Obuljen et al. (2019) have constrained the parameters of an analytical model for the HIHMR. A direct measurement of the relation has been recently performed by Guo et al. (2020), who stacked the ALFALFA spectra of the members of the SDSS groups.
At intermediate redshifts ($0.1 \lesssim z \lesssim 2.2$), we have very little information on the HI content of DM haloes as the 21 cm line is too faint to observe individual galaxies with reasonable integration times. A way out is to detect the collective emission from galaxies that occupy large 'voxels' in our past light cone defined by the angular resolution and the bandwidth of the observations (21 cm tomography). The principal challenge for these intensity-mapping experiments is the subtraction of Galactic and extra-galactic foregrounds that are orders of magnitude brighter than the 21 cm signal. In order to filter them out, their properties and the instrumental systematics need to be characterized to extremely precise levels.
There is increasing interest in developing large-scale surveys that map the intensity of the 21 cm emission from neutral hydrogen to probe unprecedented volumes. Such experiments are expected to provide pivotal contributions to cosmology (e.g. Barkana & Loeb 2005;Loeb & Wyithe 2008;Monsalve et al. 2019) and, at higher redshifts, to our understanding of the epoch of reionisation (e.g. Yue & Ferrara 2019).
Information about the HIHMR is key for predicting the signal of 21 cm intensity-mapping experiments. In full analogy with the halo model for dark matter (e.g. Cooray & Sheth 2002) and galaxy clustering (e.g. Scoccimarro et al. 2001), the HI distribution on large scales can be described in terms of its halo-occupation properties. By using the halo model to fit a compilation of data at low and high redshift, Padmanabhan et al. (2015) have estimated that the amplitude of the intensity-mapping signal at intermediate redshifts is between 50 and 100 per cent uncertain. Switzer et al. (2013) have obtained the first observational constraint on the amplitude of 21 cm fluctuations at $z \simeq 0.8$. There are some difficulties, however, in fitting this result together with the low-redshift studies and the damped Lyman-$\alpha$ systems (DLAs) data into a consistent picture.
In this work, we take the opposite step and investigate the possibility of inferring the HIHMR from the 21 cm intensity maps. However, in order not to weaken the constraining power of these experiments for cosmology, we do not use the power spectrum and focus on the geometrical and topological properties of the highly non-Gaussian maps as quantified by the Minkowski functionals (MFs) of their isocontours. Several morphological indicators have been already discussed in the 21 cm literature to characterise the growth of HII bubbles during the epoch of reionisation, namely, the genus curve (Hong et al. 2010), the MFs (Gleser et al. 2006;Chen et al. 2019), and the Minkowski tensor (Kapahtia et al. 2018). Here, we apply one of these methods to the post-reionisation Universe in order to constrain a parametric model for the HIHMR (mean and scatter). We use mock data based on a large N-body simulation and the Fisher information matrix to derive the constraints on the model parameters. The rationale behind our method is rather simple: since DM haloes of different masses trace the underlying mass-density distribution differently, the detailed morphology of the brightnesstemperature maps should reflect the HIHMR.
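For two-dimensional maps, the three MFs of an excursion set (area, boundary length, Euler characteristic) can be estimated per pixel with standard image-analysis tools. The sketch below is a minimal pixel-based illustration, assuming scikit-image is available; it is not the estimator adopted later in the paper.

```python
import numpy as np
from skimage.measure import euler_number, perimeter

def minkowski_functionals_2d(field, thresholds):
    """Estimate the 2-D MFs of the excursion sets {field >= nu}."""
    npix = field.size
    v0, v1, v2 = [], [], []
    for nu in thresholds:
        mask = field >= nu
        v0.append(mask.mean())                        # area fraction
        v1.append(perimeter(mask) / npix)             # boundary-length density
        v2.append(euler_number(mask, connectivity=2) / npix)  # Euler density
    return np.array(v0), np.array(v1), np.array(v2)

rng = np.random.default_rng(42)
test_map = rng.standard_normal((256, 256))            # stand-in for a map
nus = np.linspace(-2.0, 2.0, 9)
V0, V1, V2 = minkowski_functionals_2d(test_map, nus)
```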
We analyse here two-dimensional maps corresponding to individual frequency channels in a tomographic data cube. The main reason for this choice is that we can robustly estimate the covariance matrix of the data without having to run hundreds of high-resolution N-body simulations. Our method, however, can be straightforwardly generalised to three dimensions by directly measuring the MFs in the full data cube.
The proposed approach requires rather long radio integration times, as it is necessary to image the redshifted 21 cm signal with a sufficient signal-to-noise ratio to measure the MFs reliably. In order to reduce the observing time, we smooth the maps with an isotropic Gaussian filter before measuring the MFs. We compute how the constraints on the HIHMR vary with the observing time for an array of radio telescopes used in single-dish mode. As a reference case, we consider the Square Kilometre Array Phase 1 (SKA-1) MID observatory. We provide a proof of concept of the method by only considering mock data at z = 1 but, of course, there is no particular difficulty in repeating the investigation at other redshifts.
The paper is organised as follows. In section 2, we introduce the HIHMR and explain how we use it to generate mock intensity maps starting from a large N-body simulation. The definitions of the MFs and their application to the 21 cm intensity maps are presented in section 3. Our implementation of the Fisher matrix formalism is described in section 4. Our results are presented in section 5 and discussed in section 6. Finally, in section 7, we briefly summarise the philosophy behind our approach and our main findings.
MOCK 21 CM INTENSITY MAPS
In the post-reionisation Universe, HI is mostly found within the DM haloes that host galaxies and galaxy clusters. At the same time, HI intensity-mapping experiments will survey large fractions of the sky. Therefore, in order to forecast the constraining power of the forthcoming 21 cm experiments for cosmology and astrophysics, we need to simulate the large-scale structure of the Universe over sizeable volumes while resolving low-mass structures and accounting for the physics that regulates hydrodynamics, feedback, and radiative transfer in the interstellar medium. Fulfilling all these requirements together is prohibitive for state-of-the-art software and computing facilities. A possible way forward is to operate at a simpler level of understanding by combining high-resolution N-body simulations with a statistical description of the HI content of DM haloes. In this section, we describe how we construct mock HI intensity maps following this approach. Finally, we illustrate how we account for the different sources of noise in the intensity maps.
N-body simulation
We use the MultiDark Planck (MDPL) simulation (Klypin et al. 2016) that assumes a ΛCDM background and considers a set of cosmological parameters which is compatible with the fit in Planck Collaboration et al. (2014). The simulation evolves N_p = 3840^3 DM particles of mass m_p = 1.51 × 10^9 h^-1 M_⊙ (where the Hubble constant is written as H_0 = 100 h km s^-1 Mpc^-1) in a (periodic) cubic box of comoving side L = 1 h^-1 Gpc. DM haloes are identified using a standard 'friends-of-friends' algorithm with a linking length of ℓ = 0.2 L/N_p^(1/3). Only haloes containing at least 20 particles are considered in this work.
HI-halo mass relation
A common working hypothesis is that mass is the main driver for the HI content of a DM halo. In this case, it makes sense to introduce the HIHMR M_HI(M) that gives the mean HI mass found in a DM halo of mass M (Pontzen et al. 2008),

M_HI(M) = M_0 (M/M_min)^α exp[−(M_min/M)^β].  (1)

The HIHMR is a scale-invariant power law of slope α > 0 with an exponential cutoff at small halo masses (i.e. for M ≲ M_min). This suppression reflects the fact that low-mass haloes cannot self-shield from the UV background and gas cooling is inhibited in them (e.g. Rees 1986; Efstathiou 1992). The parameter β > 0 determines how sharp the cutoff is while M_0 fixes the overall normalisation of the HIHMR (note that M_HI(M_min) = M_0/e). For our forecasts at z = 1, we assume the fiducial values M_0 = 1.5 × 10^10 h^-1 M_⊙, M_min = 6.0 × 10^11 h^-1 M_⊙, α = 0.53, and β = 0.35, which provide an accurate description of the IllustrisTNG simulation (Villaescusa-Navarro et al. 2018, where the local HI density is obtained by subtracting the molecular fraction estimated with a chemical-equilibrium model from the total hydrogen abundance) and are also in agreement with the current observational estimates for the HI density (see e.g. Crighton et al. 2015; Hu et al. 2019).
In reality, many other factors influence the HI content of a DM halo, for instance, hydrodynamic processes as well as radiative and mechanical feedback from star formation and accretion onto compact objects. The clustering properties of HI-rich galaxies in the present-day Universe suggest that halo spin also plays a role in determining the cold-gas content (Papastergis et al. 2013). We treat these secondary dependencies beyond halo mass in a statistical way, as scatter around the HIHMR. Therefore, we assume that, at fixed halo mass, M_HI is a random variable that follows a lognormal distribution whose mean is given by equation (1). This is equivalent to saying that ln[M_HI/(1 h^-1 M_⊙)] (at fixed M) is a Gaussian random variable with mean ln[M_HI(M)/(1 h^-1 M_⊙)] − σ^2/2 and standard deviation σ. We use the fiducial value σ = 1 as it approximately matches the scatter measured in the semi-analytical models analysed in Spinelli et al. (2020).
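For illustration, this painting step can be condensed into a few lines of code. The following Python sketch is our own toy implementation (the halo catalogue, random seed and variable names are hypothetical); it draws one lognormal HI mass per halo around the mean relation of equation (1):

import numpy as np

M0, Mmin, alpha, beta = 1.5e10, 6.0e11, 0.53, 0.35   # fiducial values [Msun/h]
sigma = 1.0                                          # lognormal scatter

def mean_mhi(M):
    """Mean HIHMR of equation (1); note mean_mhi(Mmin) = M0/e."""
    return M0 * (M / Mmin)**alpha * np.exp(-(Mmin / M)**beta)

def sample_mhi(M, rng):
    """One lognormal draw per halo with linear mean equal to equation (1)."""
    mu = np.log(mean_mhi(M)) - 0.5 * sigma**2
    return np.exp(rng.normal(mu, sigma))

rng = np.random.default_rng(42)
halo_masses = 10.0**rng.uniform(10.5, 14.0, size=100_000)   # toy catalogue
m_hi = sample_mhi(halo_masses, rng)

The −σ^2/2 shift in the log-mean is what guarantees that the linear average of the sampled masses reproduces equation (1).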
From haloes to brightness temperature
We associate an HI mass to each DM halo by randomly sampling the corresponding lognormal distribution. We then use the Cloud-In-Cell mass-assignment scheme to build a map of the HI density on a regular Cartesian grid with 210^3 cells. In order to account for redshift-space distortions, we deposit the HI at the location

s = x + [(1 + z) v_∥ / H(z)] ê_∥,  (2)

where x is the actual comoving position of the halo, v_∥ is its peculiar velocity along the line of sight (with unit vector ê_∥), and H(z) denotes the Hubble parameter in the background. Note that this neglects both the relative position and the relative motion of the neutral hydrogen with respect to the halo centre of mass. Eventually, we compute the brightness temperature using

T_b(x, z) = 189 h [H_0 (1 + z)^2 / H(z)] [ρ_HI(x, z)/ρ_c,0] mK,  (3)

where ρ_c,0 is the critical density of the Universe at redshift z = 0 and H_0 denotes the Hubble constant.
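A minimal sketch of the corresponding map-making step follows, assuming the MDPL cosmology (Ω_m = 0.307) and a line of sight along the third axis; the function names are ours, and the conversion of equation (3) would then be applied to the resulting grid:

import numpy as np

L, Ngrid = 1000.0, 210
z, Om = 1.0, 0.307
H = 100.0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)   # H(z) in km/s per (Mpc/h)

def redshift_space(pos, v_los):
    """s = x + (1 + z) v_par / H(z) along the line of sight (axis 2)."""
    s = pos.copy()
    s[:, 2] = (s[:, 2] + (1.0 + z) * v_los / H) % L   # periodic wrap
    return s

def cic_deposit(pos, mass, ngrid, box):
    """Deposit `mass` at `pos` on a periodic ngrid^3 mesh with CIC weights."""
    grid = np.zeros((ngrid,) * 3)
    u = pos / box * ngrid - 0.5         # coordinates in units of cell centres
    i0 = np.floor(u).astype(int)
    f = u - i0                          # fractional offsets in [0, 1)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.abs(1 - dx - f[:, 0]) * np.abs(1 - dy - f[:, 1])
                     * np.abs(1 - dz - f[:, 2]))
                idx = (i0 + np.array([dx, dy, dz])) % ngrid
                np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), w * mass)
    return grid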
Frequency bandwidth and angular resolution
We now perform a mock 21 cm tomography of the simulated data. For each pointing of the radio telescope, spectroscopic information is collected by partitioning the total receiver bandwidth into a number of frequency channels. The signal-to-noise ratio of radio images depends critically on the channel bandwidth (see section 2.5). By indicating with χ(z) the radial comoving distance in the background cosmological model, a channel bandwidth Δν at redshift z corresponds to the radial separation

Δχ = c (1 + z)^2 Δν / [ν_rest H(z)].

For a rest-frame frequency of ν_rest = 1420.406 MHz, the cell size of the Cartesian grid we use to sample the HI density corresponds to Δν ≈ 1 MHz at z = 1. In order to produce synthetic maps with a larger Δν, we average the signal at a fixed position on the sky over the corresponding length scale Δχ. This way, we can produce 210 (1 MHz/Δν) non-overlapping intensity maps of the 21 cm signal at z = 1. One example with Δν = 2 MHz is shown in the top-left panel of figure 1. Wherever some HI is found along the line of sight, a brightness temperature fluctuation of a few mK is recorded.
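As a consistency check, the relation above can be evaluated numerically (flat ΛCDM with Ω_m = 0.307 assumed); at z = 1 it returns ≈ 4.8 h^-1 Mpc per MHz, i.e. one 1 MHz channel per cell of the 210^3 mesh:

import numpy as np

c = 299792.458          # speed of light [km/s]
nu_rest = 1420.406      # [MHz]
Om, z = 0.307, 1.0
Hz = 100.0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)   # [km/s per (Mpc/h)]

def radial_thickness(dnu_mhz):
    """Comoving thickness [Mpc/h] of a channel of width dnu_mhz at redshift z."""
    return c * (1 + z)**2 * (dnu_mhz / nu_rest) / Hz

print(radial_thickness(1.0))   # ~4.8 Mpc/h: one cell of the 210^3 grid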
In order to account for the finite angular resolution of the instrument, we convolve the two-dimensional maps with the telescope beam. For a single dish of diameter D, we assume a Gaussian beam with full width at half maximum θ_FWHM = 1.2 λ_rest (1 + z)/D and perform the convolution in Fourier space. After setting D = 15 m (for SKA-1 MID) and λ_rest = 21.16 cm, this corresponds to an isotropic Gaussian smoothing in the plane of the sky with standard deviation

Σ = θ_FWHM (1 + z) d_a / [2 √(2 ln 2)],

where d_a is the angular-diameter distance to redshift z in the background (in a flat universe, (1 + z) d_a = χ). It follows that, at z = 1, θ_FWHM ≈ 1.94 deg and Σ ≈ 33.1 h^-1 Mpc. Once convolved with the beam, the typical 21 cm signal from z = 1 assumes values of the order of 0.1 mK (see the top-right panel of figure 1 for an example).
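The beam convolution itself is a one-kernel operation in Fourier space. A hedged sketch follows (toy map, flat-sky geometry, with Σ taken from the numbers quoted above):

import numpy as np

def smooth_map(delta_t, sigma, box):
    """Convolve a square map (side `box`, Mpc/h) with a Gaussian of std `sigma`."""
    n = delta_t.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)          # [h/Mpc]
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kernel = np.exp(-0.5 * (kx**2 + ky**2) * sigma**2)    # FT of the Gaussian
    return np.fft.ifft2(np.fft.fft2(delta_t) * kernel).real

rng = np.random.default_rng(1)
beamed = smooth_map(rng.normal(size=(210, 210)), sigma=33.1, box=1000.0)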
Thermal noise
The output of a radio telescope is contaminated by thermal noise. For single-dish observations, the noise is Gaussian to good approximation. Based on the radiometer equation, the rms noise fluctuation associated with an integration time (per pointing) t_pix is

σ_N = T_sys / √(Δν t_pix),

where T_sys is the total system temperature. For an antenna of SKA-1 MID, the system temperature can be obtained by summing up several components (Square Kilometre Array Cosmology Science Working Group et al. 2018),

T_sys = T_spl + T_CMB + T_gal + T_rcvr,

where T_spl ≈ 3 K and T_CMB ≈ 2.73 K denote the spill-over and the cosmic-microwave-background contributions, respectively. The Galaxy contribution can be modelled as T_gal ≈ 25 K (408 MHz/ν)^2.75 and the receiver-noise temperature as T_rcvr ≈ 15 K + 30 K (ν/GHz − 0.75)^2. For the 21 cm line emitted at z = 1, we obtain T_sys = 26.22 K. The rms noise is thus much larger than the expected signal, and integration times of a few days per pointing are needed to see the HI signal emerge above the noise at the pixel level (for a narrow bandwidth of Δν = 2 MHz).

[From the caption of figure 1: thermal noise is generated at the pixel level (central panel, see section 2.5) and added to the signal (centre-right); some smoothing is then applied as a data-processing technique to enhance the signal-to-noise ratio (bottom-left); the smoothed noise map and the signal-to-noise map are shown in the bottom-central and bottom-right panels, respectively.]
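For concreteness, here is a short numerical check of the radiometer equation and of the system-temperature model as reconstructed above (the component formulas reproduce the quoted T_sys = 26.22 K at z = 1; treat them as our reading of the SKA-1 specification):

import numpy as np

def t_sys(nu_ghz):
    """SKA-1 MID system temperature [K] (our reading of the components)."""
    t_spl, t_cmb = 3.0, 2.73
    t_gal = 25.0 * (0.408 / nu_ghz)**2.75          # Galactic synchrotron
    t_rcvr = 15.0 + 30.0 * (nu_ghz - 0.75)**2      # receiver noise
    return t_spl + t_cmb + t_gal + t_rcvr

def sigma_noise(nu_ghz, dnu_hz, t_pix_s):
    """rms thermal noise per pixel [K] from the radiometer equation."""
    return t_sys(nu_ghz) / np.sqrt(dnu_hz * t_pix_s)

nu_obs = 1.420406 / 2.0                        # observed frequency at z = 1 [GHz]
print(t_sys(nu_obs))                           # ~26.2 K, as quoted in the text
print(sigma_noise(nu_obs, 2e6, 2.46e5) * 1e3)  # ~0.037 mK for t_pix = 2.46e5 s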
We now imagine building a pointed map. Assuming that multiple antennas can be used simultaneously, the total time required to map the large-scale structure of the Universe over the solid angle Ω_surv is then

t_obs = (Ω_surv/Ω_pix) t_pix / (N_dish n_beam),

where N_dish is the number of available dishes (each with n_beam feedhorns, typically 1 or 2) and Ω_pix ≈ θ_pix^2 is the solid angle covered by a single pixel. In order to satisfy the Nyquist-Shannon theorem and produce a properly sampled map, θ_pix must be smaller than θ_FWHM/2. In practice, it is usually chosen to be between θ_FWHM/7 and θ_FWHM/3 (Marr et al. 2015).
As an example, we estimate the time that would be needed to map the HI distribution in the MDPL box at z = 1. The simulation box subtends a solid angle Ω_surv ≈ 591 deg^2 and we use 42^2 pixels to cover the whole area. Assuming we observe with all 197 antennas of the SKA-1 MID telescope, we find t_obs ≈ 9 t_pix (for n_beam = 1), i.e. of the order of a month for Δν = 2 MHz.
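The bookkeeping of the previous two paragraphs fits in a few lines; the helper below (hypothetical names) reproduces the quoted t_obs ≈ 9 t_pix for the MDPL example:

def t_obs(t_pix, omega_surv, omega_pix, n_dish=197, n_beam=1):
    """Total survey time [same units as t_pix]; solid angles in deg^2."""
    return t_pix * (omega_surv / omega_pix) / (n_dish * n_beam)

# MDPL example: 591 deg^2 tiled by 42^2 pixels, all 197 SKA-1 MID dishes.
print(t_obs(1.0, 591.0, 591.0 / 42**2))   # ~8.95, i.e. t_obs ~ 9 t_pix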
Foreground removal
Reaching the necessary sensitivity is only one of the difficulties that we need to face in order to map the large-scale structure of the Universe in 21 cm. Actually, the major challenge is the presence of Galactic and extra-galactic foregrounds that are several orders of magnitude brighter than the HI signal. In particular, extra-galactic point sources and synchrotron emission from the Milky Way give the largest contributions. At first, it might seem impossible to separate these components. However, foregrounds should vary smoothly as a function of frequency (at a given position on the sky) while the fluctuating 21 cm signal is expected to be poorly correlated in frequency space. It has been suggested that this difference could be exploited to separate the 21 cm signal from the foregrounds (e.g. Shaver et al. 1999; Di Matteo et al. 2002; Oh & Mack 2003; Zaldarriaga et al. 2004). Although the concept is very promising, complex details need to be taken into account in practical implementations, for instance, dealing with frequency-dependent beam shapes.
In spite of the difficulties, several foreground-cleaning techniques have been proposed and tested against simulated data (see e.g. Wolz et al. 2015, for a recent review). In addition to separating the foregrounds, these methods usually introduce unintended consequences such as removing large-scale power from the 21 cm signal. Following Alonso et al. (2017) and Cunnington et al. (2019), we assume that the foreground subtraction erases all the Fourier modes of T_b with comoving radial wavenumber k_∥ ≤ k_FG, where k_FG depends on a dimensionless parameter ξ of order unity. Basically, k_FG denotes the minimum wavenumber for which foregrounds are separable from the signal. It is difficult to assign a precise value to ξ (which is method dependent); however, assuming ξ ≈ 0.1 appears to be a reasonable estimate (Cunnington et al. 2019). For the cut-off scale at z = 1, we therefore adopt k_FG ≈ 0.01 h Mpc^-1. This value is only slightly larger than the fundamental wavenumber k_f = 2π/L associated with our simulation box. We thus only erase the Fourier modes with k_∥ = 0 and k_∥ = k_f. Note that subtracting the k = 0 mode has the important consequence of setting the mean value of T_b (over the whole simulation box) to zero (from the original value of approximately 0.2 mK in the beam-convolved maps, cf. the top-right and centre-left panels in figure 1). This, of course, makes the sensitivity requirement for detecting the signal even more demanding.
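Operationally, this idealised foreground cut amounts to zeroing a handful of line-of-sight Fourier modes. A minimal sketch, assuming the line of sight along the third axis of the data cube:

import numpy as np

def remove_foreground_modes(cube, box, k_fg):
    """Zero the LOS Fourier modes with k_par <= k_fg (LOS along axis 2)."""
    nz = cube.shape[2]
    ft = np.fft.rfft(cube, axis=2)
    k_par = 2 * np.pi * np.fft.rfftfreq(nz, d=box / nz)   # [h/Mpc], >= 0
    ft[:, :, k_par <= k_fg] = 0.0     # removes the mean (k = 0) as well
    return np.fft.irfft(ft, n=nz, axis=2)

# With box = 1000 Mpc/h and k_fg = 0.01 h/Mpc only k_par = 0 and
# k_par = k_f = 2*pi/box ~ 0.0063 h/Mpc are erased, as stated in the text.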
Smoothed maps
The middle-left panel in figure 1 shows the 21 cm signal that one would obtain after the foreground removal but in the absence of thermal noise. In this particular map, T_b has a mean value of −0.0065 mK and an rms scatter of 0.035 mK. The adjacent panel on the right-hand side shows a realisation of the thermal noise (assuming t_pix = 2.46 × 10^5 s) with an rms scatter of 0.038 mK. Comparing these two maps at face value gives the wrong impression that 68 hours of integration time might not be enough to image the line emission coming from z = 1. This is because the 21 cm signal has been smoothed by the telescope beam while we have generated the thermal noise at the pixel level (as we are assuming a different pointing per pixel). The observed map (i.e. the sum of the signal and the noise) is shown in the middle-right panel of figure 1. In order to reduce the impact of the fine-grained noise on the measurement of the MFs, we further smooth the observed map with a two-dimensional Gaussian filter that has the same width as the telescope beam (we adopt Gaussian smoothing for simplicity although Wiener filtering would be a nearly optimal choice given that the signal is only weakly non-Gaussian, see section 6). In the bottom panels, we show, from left to right, the smoothed total map, the smoothed noise map (in which the rms scatter drops to 0.007 mK), and the corresponding signal-to-noise ratio (which reaches a peak value of 2.3). It is evident that the smoothed observed map (bottom-left) presents the same characteristic features as the original one (middle-left) and can be used to study the morphological properties of the 21 cm signal. This is the subject of the next section.
MINKOWSKI FUNCTIONALS

Basics
A digital image can be thought of as a polyconvex set formed by the finite union of compact and convex subsets of R^2 (its pixels). For such a system, the field of integral geometry provides a family of morphological descriptors known as MFs (or quermassintegrals or intrinsic volumes). Basically, these functionals (Minkowski 1903) measure the size and the connectivity of subsets of R^d in terms of different quantities. In d dimensions, there exist d + 1 MFs, which we denote with the symbols V_0, . . . , V_d (as frequently done in the mathematical literature for the intrinsic volumes). Hadwiger's completeness (or characterisation) theorem (Hadwiger 1957) states that the linear combinations of the MFs, F = Σ_i c_i V_i (where c_i ∈ R and the index i runs from 0 to d), are the only functionals that satisfy the following properties: i) additivity, F(A ∪ B) = F(A) + F(B) − F(A ∩ B); ii) invariance under rigid motions, F(gA) = F(A), where g is an element of the group of translations and rotations in R^d; iii) conditional continuity under the Hausdorff measure. In this sense, the MFs completely characterise the morphology of a compact subset of R^d. All the notions above can be generalised to non-Euclidean spaces of constant curvature, like the celestial sphere (see e.g. Schmalzing & Gorski 1998, and references therein).
The MFs were first introduced in cosmology by Mecke et al. (1994) and Schmalzing & Buchert (1997, see also Gott et al. 1990) in order to characterise the large-scale distribution of galaxies. Subsequently, they have been used to measure the morphology of temperature anisotropies in the cosmic microwave background (e.g. Schmalzing & Gorski 1998; Novikov et al. 1999). More recently, MFs have been employed to study the morphology and characterise the different stages of cosmic reionisation (Gleser et al. 2006; Chen et al. 2019).
Application to 21 cm maps
Although it would be possible to compute the MFs of T_b in the three-dimensional data cube, we perform here a simpler experiment and compute the MFs of two-dimensional maps corresponding to one frequency channel. The main motivation for this choice is that we can more easily estimate the covariance matrix for the measurements without having to generate hundreds of high-resolution simulation boxes (see also section 4). To this end, we do not build a mock light cone by stitching together different snapshots of the MDPL simulation but only analyse the simulation output at z = 1. As already mentioned in section 2.4, this provides us with 210 (1 MHz/Δν) non-overlapping intensity maps of the 21 cm signal at z = 1. Although these maps are not fully independent (due to correlations along the line of sight), we use them to derive the expected signal in one channel (i.e. the average over the different maps) and the corresponding covariance matrix (see section 4 for further details).
Let us consider the brightness-temperature map corresponding to a frequency channel and choose a threshold value λ. The excursion set Q_λ = {x ∈ Ω : T_b(x) > λ} collects all the points where T_b exceeds the threshold. We define the MFs of the excursion set as

V_0(λ) = ∫_{Q_λ} dΩ,   V_1(λ) = ∮_{∂Q_λ} dℓ,   V_2(λ) = (1/2π) ∮_{∂Q_λ} κ dℓ,

[Figure 2. In order to investigate the impact of thermal noise on the MFs, we consider here three intensity maps at z = 1 (with Δν = 2 MHz) covering the same region of the sky presented in figure 1 but obtained varying the integration time per pixel (from top to bottom, t_pix = 9.82 × 10^3 s, 2.46 × 10^5 s, ∞).
The left set of panels shows the regions above (dark blue) and below (light blue) three brightness-temperature thresholds (from left to right, λ = −0.04 mK, 0 mK, 0.04 mK). The right set of panels shows the corresponding MFs for the selected threshold values (triangle, star, and circle symbols) and as a function of λ (black solid lines). As a reference, we also show the mean (coloured solid lines) and the rms scatter (shaded regions) of the MFs extracted from the 105 volume slices (with transverse size L^2 and line-of-sight thickness corresponding to Δν = 2 MHz) that fill the MDPL simulation box at z = 1.]
where dΩ is the surface element of Q_λ, dℓ is the length element of its (smooth) boundary ∂Q_λ = {x ∈ Ω : T_b(x) = λ}, and κ is the local geodesic curvature of ∂Q_λ. In simple words, V_0 gives the surface area covered by the excursion set while V_1 is the perimeter of its boundary. Finally, in the flat-sky approximation used here, V_2 coincides with the Euler characteristic of Q_λ, i.e. the number of connected regions in the excursion set, n_+, minus the number of holes, n_−.
There exist efficient algorithms to compute the MFs of digitised images (e.g. Schmalzing & Buchert 1997; Schmalzing & Gorski 1998). We use the public code minkfncts2d (https://github.com/moutazhaq/minkfncts2d), which only exploits information within the image and does not require boundary conditions, as described in Mantz et al. (2008). (Although our synthetic maps are extracted from simulations with periodic boundary conditions, we analyse them as one would do for observational data.) Isolines are built with the 'marching square' algorithm, which uses contouring cells obtained by combining 2 × 2 blocks of pixels. In consequence, given an input image with N^2 pixels, the code analyses the central (N − 1)^2 pixels and neglects a narrow 'frame' with half-pixel thickness lying along the boundary (corresponding to an area of 2N − 1 pixels).
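For readers who want to experiment without the external package, the three MFs of a digitised excursion set can also be estimated directly from pixel counts. The sketch below builds the pixel complex (faces, edges, vertices) and uses V − E + F for the Euler characteristic; it is cruder than the marching-square algorithm used in our analysis, especially for the perimeter:

import numpy as np

def minkowski_2d(T, threshold, pixel_size=1.0):
    """Area, perimeter and Euler characteristic of the set T > threshold."""
    B = T > threshold
    Bp = np.pad(B, 1)                               # pad with background
    faces = B.sum()
    # lattice edges/vertices touched by at least one excursion pixel
    e_h = (Bp[:-1, 1:-1] | Bp[1:, 1:-1]).sum()
    e_v = (Bp[1:-1, :-1] | Bp[1:-1, 1:]).sum()
    verts = (Bp[:-1, :-1] | Bp[:-1, 1:] | Bp[1:, :-1] | Bp[1:, 1:]).sum()
    v0 = faces * pixel_size**2
    # boundary edges separate an excursion pixel from a background pixel
    per = ((Bp[:-1, 1:-1] ^ Bp[1:, 1:-1]).sum()
           + (Bp[1:-1, :-1] ^ Bp[1:-1, 1:]).sum()) * pixel_size
    chi = int(verts) - int(e_h + e_v) + int(faces)  # V - E + F
    return v0, per, chi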
Some illustrative examples are provided in the left half of figure 2. We use the slice with Δν = 2 MHz presented in figure 1, observe it for three different integration times t_pix (rows, increasing from top to bottom), and consider three different temperature thresholds (columns, increasing from left to right). The dark regions in each sub-panel represent the excursion set (above threshold). The right half of the figure shows the MFs as a function of the threshold temperature. The triangle, star and circle highlight the values for the images displayed in the panels on the left. The black solid line, instead, shows how the MFs change when many more thresholds are considered. This trend is compared with the mean and the standard error obtained by averaging over the 105 slices contained in our simulation box (coloured lines and shaded regions). The usefulness of the MFs can be more clearly understood by connecting the different plots. When λ is low (left column), the whole observed domain is included in the excursion set with the exception of a few isolated holes. Therefore, V_0 is large while V_1 is small (as only the boundaries of the holes contribute to it) and V_2 is negative as in 'Swiss-cheese' sets dominated by the holes. On the other hand, when λ approaches the median value of the brightness temperature (middle panels), V_0 assumes intermediate values, V_1 reaches a local maximum, and V_2 is close to zero as in 'sponge-like' structures where the regions above and below threshold are interconnected. Finally, when λ is large (right panels), only a few isolated regions form the excursion set. In this case, both V_0 and V_1 are low while V_2 is large and positive as in 'meatball' topologies dominated by isolated connected regions. Although this trend holds true for every smooth temperature distribution, the morphological properties of T_b are encoded in the precise shape of the V_i(λ) curves.
[Figure 3. Dependence of the MFs on the parameters of the HIHMR in the absence of thermal noise (i.e. assuming t_pix → ∞). We vary each parameter by ±50 per cent with respect to the fiducial value at z = 1. Shown are the corresponding MFs averaged over the 105 volume slices that cover the MDPL simulation box (dotted and dashed lines). As a reference, we also show the results for the fiducial case (black solid lines) and their rms scatter (shaded region). The vertical dash-dotted lines indicate the set of thresholds we use in the final analysis presented in section 4. We make use of slightly different values when we account for thermal noise.]

It is interesting to investigate how the general pattern is altered
by thermal noise. As t_pix is reduced (moving from the bottom up), more and more small-scale structures appear in the excursion sets and substantially modify the values of the MFs. In particular, their presence reduces the range of variability of V_0, shifts V_1 towards larger values, and increases the extreme positive and negative values of V_2. The sensitivity of the MFs to the HIHMR is investigated in figure 3, in which we individually vary the parameters of the relation given in equation (1) by ±50 per cent and compare the results to the fiducial case and its standard error. Notice that each parameter modifies differently, and to a varying extent, the shape of the MFs and the location of their extremal points. In particular, increasing M_0 has a similar effect as decreasing M_min and vice versa. The figure clearly shows that information on the HIHMR is encoded in the MFs. Modifying the parameters of the HIHMR also changes the mean HI density of the Universe and, through equation (3), the amplitude of T_b. However, this systematic shift is lost due to the procedure of foreground subtraction.
The bottom panel of figure 3 shows that the MFs are basically unaffected by relatively large variations of the parameter σ. This reflects the fact that the comoving volume associated with a cylinder of radius Σ and height Δχ is large enough to wash out any stochasticity in the HIHMR. For this reason, we do not consider σ any further in this work and only provide forecasts for the mean HI mass as a function of halo mass.
Selected observables
In practice, we need to select a finite number of threshold values with which to perform the measurements. Although, in principle, one might want to consider a large number of thresholds, there is a practical limitation due to the fact that we need to estimate the covariance matrix of the measurement errors. Since we can only use N_s = 105 slices for this purpose, we need to limit the size of the data set. The smoothness of the black solid curves in figure 2 reveals that the MFs measured from the same map using different values of λ are strongly correlated. This effect is particularly evident for V_0 and V_1. After long experimentation aimed at minimising these correlations (and at getting nearly optimal constraints on the HIHMR using a small number of thresholds), we end up picking two values of λ for V_0, two for V_1 and five for V_2 (as indicated in figure 3).
In order to simplify the notation, we combine all the measurements into a single nine-dimensional data vector M.
The parameters in equation (1) also determine the mean HI density, ρ̄_HI(z), and thus the overall level of the fluctuations, T_b. However, due to the foreground subtraction, we cannot measure the mean brightness temperature of the 21 cm fluctuations. This fact might weaken the constraints that the MFs can impose on the HIHMR. In fact, the differences between the curves in figure 3 would be larger if one could measure the shift in brightness temperature associated with a change in the mean HI density. In order to account for this missing piece of information, we combine the measurements of the MFs with observational constraints on the cosmic abundance of HI, conventionally parameterised as Ω_HI(z) = ρ̄_HI(z)/ρ_c,0 (for a recent review see e.g. Péroux & Howk 2020). Current estimates at z ∼ 1 are not fully consistent. In fact, while some studies of DLAs at z ∼ 1 find Ω_HI ≈ (6 ± 2) × 10^-4 (Rao et al. 2006, 2017), other estimates are approximately a factor of three lower (Neeleman et al. 2016). Similarly, stacking the 21 cm emission from star-forming galaxies gives Ω_HI < 3.7 × 10^-4. Attempts to combine data from low to high redshifts and fit the evolution of the HI abundance with a smooth curve give Ω_HI(z = 1) = (6.1 ± 0.4) × 10^-4 (Crighton et al. 2015; Hu et al. 2019). We use this result to set constraints on the parameters of the HIHMR.
FISHER MATRIX FORMALISM

Minkowski functionals
Let us now imagine fitting the measurements of the MFs with a theoretical model M_mod(θ) that depends on a set of tunable parameters, θ. Assuming that the measurement errors follow a multivariate Gaussian distribution, we can write the likelihood function L of the model parameters as

L(θ) ∝ exp{−[M − M_mod(θ)]^T C^-1 [M − M_mod(θ)]/2},

where C denotes the covariance matrix of the data. In order to forecast the model constraints that can be set by future measurements, we use the Fisher-information formalism. We thus compute the Fisher matrix with elements

F_αβ = (∂M_mod/∂θ_α)^T C^-1 (∂M_mod/∂θ_β).  (18)

This expression assumes that C is computed for the fiducial model and not varied with the model parameters (as routinely done in cosmological studies). We finally use F^-1 as a proxy for the asymptotic covariance matrix of the estimates for θ. This matrix represents the main result of our study. It is not easy to build a theoretical model for the MFs of the 21 cm signal. For a zero-mean Gaussian random field over a two-dimensional space, the MFs can be expressed in closed form in terms of the variance of the field, σ_0^2 = ⟨T_b^2⟩, and the variance of its covariant derivative, σ_1^2 = ⟨T_b;i T_b;i⟩ (where the index i runs from 1 to 2 and we adopt the Einstein summation convention), both of which are integrals of the power spectrum (Adler 1981; Tomita 1990; Novikov et al. 2000); the threshold λ defining the excursion set (see section 3.2) enters only through the combination ν = λ/σ_0. For a weakly non-Gaussian field, the MFs can be perturbatively expanded in terms of 'skewness parameters' (Matsubara 2003). However, in the general case, one must rely on numerical simulations. A convenient method for 21 cm tomography is to paint HI on top of DM haloes extracted from an N-body simulation, as we have done to generate our mock data cubes. In this case, at fixed cosmology, the model parameters coincide with those of the HIHMR. Therefore, we replace M_mod(θ) with the expectation value of our mock observations, M̄(θ), i.e. averaged over sample variance and thermal noise.
We first consider the limit t_pix → ∞, in which thermal noise is irrelevant. In this case, we denote the MFs obtained from the i-th slice with the symbol M_a^(i), where the index a runs from 1 to 9. We compute the expectation values of the MFs as

M̄_a = (1/N_s) Σ_{i=1}^{N_s} M_a^(i).  (22)

These quantities represent the mean measurement that would be obtained by averaging over the sample variance and many realisations of thermal noise. Of course, this is an abstract quantity which is useful to compute the Fisher matrix but cannot be measured in practice as we can only access one data set. Similarly, we approximate C with the sample covariance matrix of the slices,

Ĉ_ab = [1/(N_s − 1)] Σ_{i=1}^{N_s} (M_a^(i) − M̄_a)(M_b^(i) − M̄_b),  (23)

where all quantities are evaluated using the fiducial set of model parameters. The corresponding correlation matrix is shown in figure 4. Particularly strong correlations are noticeable in the V_1 sector and among the first and last thresholds for V_2. Note that, by using equation (23), we are implicitly assuming that the measurements extracted from neighbouring slices are independent while there are certainly some correlations along the line of sight. Anyway, we have checked that considering only one slice every two or three does not change the overall structure of Ĉ. Equation (23) gives an unbiased (but noisy, due to the fact that N_s is finite) estimate of the covariance matrix. However, since matrix inversion is a highly non-linear operation, the resulting Ĉ^-1 is a biased estimate of the precision matrix.
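Schematically, the forecast then reduces to a few linear-algebra operations. In the sketch below (hypothetical array names), M collects one MF vector per slice and dM the numerical derivatives of the mean vector with respect to the model parameters:

import numpy as np

def fisher_matrix(M, dM):
    """M: (n_slices, n_data) MF measurements; dM: (n_params, n_data) derivatives."""
    C = np.cov(M, rowvar=False)          # sample covariance, equation (23)
    Cinv = np.linalg.inv(C)              # biased for a finite number of slices
    return dM @ Cinv @ dM.T              # F_ab, equation (18)

# 1-sigma marginalised forecasts: sqrt of the diagonal of F^-1
# errors = np.sqrt(np.diag(np.linalg.inv(fisher_matrix(M, dM))))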
We make sure that the numerical evaluation of the partial derivatives in equation (18) gives stable results when the increment of the model parameters is changed. Before differentiating, we preventively smooth the function V_2(λ) using a Savitzky-Golay filter that irons out the small-scale fluctuations arising from the fact that the Euler characteristic is integer-valued (see the right panels in figure 2). This is a necessary step that makes the derivatives and the Fisher matrix meaningful. Our results are stable with respect to reasonable changes in the free parameters of the Savitzky-Golay method.
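A hedged sketch of this smoothing-plus-differentiation step, using SciPy's Savitzky-Golay filter (window length, polynomial order and increment are illustrative, not the values used for the paper):

import numpy as np
from scipy.signal import savgol_filter

def smooth_v2(v2, window=11, order=3):
    """Iron out the integer steps of V_2(lambda) before differentiating."""
    return savgol_filter(v2, window_length=window, polyorder=order)

def dmodel_dtheta(model_fn, theta, a, eps):
    """Central finite difference of the mean data vector w.r.t. parameter a."""
    tp, tm = theta.copy(), theta.copy()
    tp[a] += eps
    tm[a] -= eps
    return (model_fn(tp) - model_fn(tm)) / (2.0 * eps)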
Considering the impact of thermal noise adds an extra level of complication. In order to extract the expected signal as a function of t_pix, we proceed as follows. We consider 10 different values of t_pix and, for each of them, we generate N_r = 30 different realisations of the thermal noise over the full MDPL box. We then combine them with the 21 cm signal obtained using a particular set of parameters and measure the MFs in each slice (frequency channel) of the foreground-cleaned data cubes. We denote the MFs obtained from the i-th slice and the j-th noise realisation with the symbol M_a^(i,j). We compute the expectation values of the MFs as

M̄_a = [1/(N_s N_r)] Σ_{i,j} M_a^(i,j),

and substitute them for M_mod(θ) in equation (18). However, a problem arises with the estimate of the covariance matrix. We find, in fact, that Ĉ depends on the noise realisation, in particular when t_pix becomes small. This has important consequences, as the calculation of the Fisher matrix requires inverting Ĉ, which prevents us from averaging the estimates of the covariance matrix over the noise realisations. We thus compute a Fisher matrix and produce forecasts for each noise realisation independently. We present our results in terms of the distribution of the parameter constraints.
Scaling with the survey area
So far, we have always considered a region of the sky with a linear transverse size that matches the comoving side of the MDPL simulation box. By considering only half or a quarter of this region, we have checked that the data covariance matrix defined in equation (23) scales in inverse proportion to the survey area. Therefore, in order to consider a larger survey area, it is enough to rescale the covariance matrix by the multiplicative factor Ω_MDPL/Ω_surv (so that the forecast errors shrink as √(Ω_MDPL/Ω_surv)).
Constraints from Ω HI
In order to compute the constraints coming from the measurements of Ω_HI, we use a second Fisher matrix with elements

G_αβ = (1/σ_{Ω_HI}^2) (∂Ω_HI/∂θ_α)(∂Ω_HI/∂θ_β),

with σ_{Ω_HI} the standard error of the (observed) HI cosmic abundance. The combined constraints from the MFs and Ω_HI are obtained using (F + G)^-1.
RESULTS
In this section, we present the main results of our study: a Fisher forecast for the HIHMR. As a reference, we consider a (future) wide HI intensity-mapping survey conducted with the SKA-1 MID observatory. The experiment was originally designed to measure the power spectrum of the maps and subsequently constrain the dark-energy equation of state at a level competitive with the forthcoming generation of galaxy redshift surveys (Bull et al. 2015).
The SKA-1 MID dark-energy survey
This intensity-mapping survey is expected to cover a sky area of Ω_surv ≈ 20,000 deg^2 (corresponding to ≈ 5.8 sr) and a frequency range of 350 < ν < 1050 MHz (i.e. 0.35 < z < 3.06). The total observation time should be of approximately 10,000 h. Using the same pixel size as in section 2.5, this set-up corresponds to t_pix = 9.5 × 10^4 s, which should be enough to determine the HIHMR in a nearly optimal way.
Constraints from individual channels
Our Fisher forecasts are presented in figure 5. We first discuss the ideal situation corresponding to t_pix → ∞, which we use as a reference and represent with horizontal lines in the figure. By combining measurements of the MFs and Ω_HI, all parameters of the mean HIHMR are constrained to better than 10 per cent (68.3 per cent credibility), as indicated by the orange dotted lines. Of course, these constraints progressively deteriorate with decreasing t_pix (orange curves). However, they change very little for t_pix ≳ 4 × 10^4 s and worsen significantly only for t_pix ≲ 10^4 s. Considering only the MFs (blue curves) deteriorates the constraints on two of the parameters (most notably the slope α) by nearly a factor of 2, while it does not affect the other parameters for large values of t_pix (blue dashed lines). On the other hand, for lower t_pix, the constraints on all parameters become worse.
An example of the joint (68.3 and 95.4 per cent) credible regions for all parameter pairs is shown in figure 6. This plot has been obtained from one particular realisation of thermal noise with t_pix = 3.9 × 10^4 s. The dashed lines show the results obtained from the MFs while the shaded regions refer to the combination with Ω_HI.
In figure 7, we show that the MFs and Ω_HI are very informative regarding the halo-occupation properties of HI. The posterior distribution of the mean HIHMR shows little scatter around the fiducial model. Uncertainties on M_HI are particularly small for halo masses M ≈ 10^12 h^-1 M_⊙ (which corresponds to a few times M_min and identifies the haloes containing most of the HI) and increase as one moves away from this mass scale in both directions.
Constraints from the full data cube
So far, we have focused on a single frequency channel corresponding to a narrow redshift bin centred around z = 1 and constrained the HIHMR using the MFs of the two-dimensional HI intensity map.
We now want to make use of the full data cube that the radio observations will provide. With this aim in mind, we imagine repeating the measurement of the MFs in each frequency channel and then fitting the redshift evolution of the model parameters. In fact, it is reasonable to expect that the parameters of the HIHMR should vary smoothly with redshift. In order to exemplify the power of this technique, we synthesise the data at z ∼ 1 in table 1 of Villaescusa-Navarro et al. (2018).

[Table 1. Fiducial values and forecast uncertainties for the model parameters that regulate the HIHMR. The top section gives the results from the Fisher-information analysis for one channel at z = 1. The bottom sections refer to fits of either second-order polynomials, θ(z) = θ_0 + θ_1 (z − 1) + θ_2 (z − 1)^2, or constants, θ(z) = θ_0, to the measurements in the data cubes for 0.5 < z < 1.5. The optimistic case (opt) treats all channels as independent data points while the pessimistic one (pes) considers only one channel every five.]
We use these values as the fiducial model for the evolution of the HIHMR in the redshift range 0.5 < z < 1.5. This interval contains 189 channels with Δν = 2 MHz. We can measure the MFs in each of them and then fit the marginalised constraints on the individual model parameters with second-order polynomials in z − 1. Results obtained by assuming independent data points are reported in table 1. Since the thickness of each channel along the line of sight is only a few times larger than the correlation length of the density field, and redshift-space distortions can also shift HI by comparable distances, it might be inappropriate to consider the results obtained from consecutive channels as independent. We thus repeat the analysis considering only one channel every five. Of course, the uncertainty on the model parameters slightly increases in this case (see table 1). Note that, if we compare the results obtained for z = 1 from the multi-channel fits to those presented in section 5.2, we find an improvement of the error bars of up to a factor of ten.
DISCUSSION
It has been shown that the MFs are powerful data-analysis tools to characterise, and extract information from, various cosmological data sets. However, they are sub-optimal in the case of Gaussian random fields, for which the whole statistical information is contained in the power spectrum. In fact, many of the practical issues associated with the MFs (dealing with a masked sky, modelling the impact of noise) are solved problems for inferences based on power spectra. A question then naturally arises: do the fluctuations in the 21-cm brightness-temperature distribution depart from a Gaussian random field enough to justify the use of the MFs in our study? This question is particularly relevant given the large beam of the SKA-1 MID telescopes, which corresponds to a transverse size of approximately 50 comoving Mpc at z = 1 (and which we use in combination with frequency channels that extend for nearly 13 comoving Mpc along the line of sight).
In figure 8, we show the probability density function (PDF) of the brightness temperature extracted from our simulation box in the absence of thermal noise (i.e. assuming t_pix → ∞). In order to facilitate visual comparison with the Gaussian case, we also plot a normal distribution with the same mean and standard deviation as the simulated data (solid line). The PDF of the brightness temperature is clearly asymmetric (with a skewness of 1.04 ± 0.01) and has heavier tails than a Gaussian distribution (with a kurtosis of 0.708 ± 0.006). It is also interesting to check how much the MFs in the mock maps depart from the expected values for a Gaussian random field. Analytical formulae have been derived for the MFs of a Gaussian random field on the Euclidean plane or on the two-sphere (see section 4.1). However, important corrections need to be applied when the domain of the random field is a small subset of the above-mentioned spaces. Moreover, the overall normalisation of the V_i(λ) curves depends on the power spectrum of the random field. For these two reasons, we compute the Gaussian MFs by averaging over many realisations obtained by shuffling the phases of the Fourier modes of the simulated data (in the absence of thermal noise, as in figure 3). Our results are shown in figure 9. While the surface area V_0(λ) covered by the excursion set is hardly distinguishable from the Gaussian case, larger differences are noticeable for the perimeter of the boundary V_1(λ) and for the Euler characteristic, in particular for extreme values of the threshold parameter on both the low and high sides.
A related issue is whether the information encoded in the MFs on the large scales probed by the SKA-1 MID is enough to constrain all the parameters that influence the mean HIHMR. In fact, by reasoning in terms of the halo model (see Cooray & Sheth 2002, for a review), one expects the data to be fully in the so-called 'two-halo' regime where all the information about the HIHMR is collapsed into the linear bias parameter of the HI with respect to the mass. Effectively, this would mean that the exercise we are proposing tries to constrain several parameters from the measurement of a single quantity. This line of reasoning assumes that the bias relation between HI and matter fluctuations is linear on scales of 10-50 Mpc. It is well known, however, that this is only true to first approximation. Several non-linear corrections are needed to accurately model the spatial distribution of dark-matter haloes, especially when one considers statistics that are more sensitive to non-Gaussian features than the power spectrum. State-of-the-art models include, at least, extra terms that scale quadratically with the matter density or linearly with the tidal field (see e.g. Desjacques et al. 2018, for a review). Our results can thus be interpreted as providing evidence that the same corrections are needed to describe the non-Gaussian features of the HI distribution. Further evidence has been recently provided by Cunnington et al. (2021) in a study of the bispectrum of 21-cm intensity maps. Nevertheless, some level of degeneracy between the model parameters is indeed present in our results, as shown in figure 6. This is mostly broken when independent measurements of the HI abundance are considered in combination with the MFs.
SUMMARY
Determining the overall content and the spatial distribution of HI in the post-reionisation Universe is pivotal to understanding galaxy formation and evolution. An important step in this direction is the determination of the HIHMR, which gives the mean and scatter of the total HI mass contained within a dark-matter halo of mass M. In this paper, we have investigated the possibility of constraining parametric models of the HIHMR from 21 cm intensity maps at redshift z ≈ 1. In particular, we have used the geometry and topology of the maps as quantified by the MFs of the brightness-temperature isocontours.
For practical reasons, we have considered a specific parameterisation for the HIHMR in which the mean HI mass at fixed halo mass is given by equation (1) and a lognormal scatter of size σ is assumed (see section 2.2). By assuming a set of fiducial values based on previous numerical studies, we have generated mock data from a large N-body simulation and used the Fisher information matrix to derive the forecast constraints on the model parameters. As a reference case, we have considered the SKA-1 MID dark-energy survey at z = 1 (conducted in single-dish mode) and assumed frequency channels of width Δν = 2 MHz.
Our main results can be summarised as follows.
(i) After subtracting the foregrounds, the 21 cm signal is of the order of a few mK, nearly 1000 times lower than the system temperature of the telescopes. In order to beat thermal noise and reach the sensitivity necessary for imaging the HI distribution, long integration times per pixel, t_pix, are thus required. By using the MFs of the 21 cm intensity map in a single frequency channel, we find that the parameters of the HIHMR can be measured with a signal-to-noise ratio of one if t_pix ≈ 9.8 × 10^3 s. Nearly optimal error bars are obtained using t_pix of a few × 10^4 s. This corresponds to a total observing time of 4-5 days.
(ii) Information on the mean HI density, Ω_HI, is lost during the foreground removal from the intensity maps. It can be recovered by combining the MFs with independent measurements of Ω_HI. This addition slightly improves the constraints on some of the parameters that regulate the HIHMR, in particular on the slope α defined in equation (1).
(iii) The mean HIHMR is very tightly constrained for haloes with M ≈ 10^12 h^-1 M_⊙, which contain most of the HI at z = 1. Uncertainties grow as both larger and smaller halo masses are considered.
(iv) Combining measurements of the MFs in different frequency channels provides exquisite constraints on the redshift evolution of the HIHMR (both for the mean and the scatter), especially for redshifts that lie around the center of the data cube along the frequency axis. In this case, we forecast uncertainties on the model parameters that are an order of magnitude smaller with respect to those extracted from a single channel.
In this paper, we have used the SKA-1 MID dark-energy survey as an example of what can be achieved with forthcoming facilities. Observations at higher angular resolution with SKA precursors like the Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX, Newburgh et al. 2016) in the southern hemisphere and the Canadian Hydrogen Intensity Mapping Experiment (CHIME, Bandura et al. 2014) in the northern hemisphere will probe length scales at which the HI-to-mass bias relation is more non-linear. Given long enough integration times, these experiments might be able to partially remove the degeneracy among the model parameters of the HIHMR and even constrain them more tightly. We will investigate this possibility in future work.

ACKNOWLEDGEMENTS

The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064.
DATA AVAILABILITY STATEMENT
The data underlying this article will be shared on reasonable request to the corresponding author. | 2021-01-26T02:15:37.003Z | 2021-01-22T00:00:00.000 | {
"year": 2021,
"sha1": "50270b82bf424b5c41c774fe266230364b075e17",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2101.09288",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e4beae095d73e746f50a709b60f417a8f96c98cf",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
1550353 | pes2o/s2orc | v3-fos-license | Cloning and characterization of the ecto-nucleotidase NTPDase3 from rat brain: Predicted secondary structure and relation to other members of the E-NTPDase family and actin
The protein family of ecto-nucleoside triphosphate diphosphohydrolases (E-NTPDase family) contains multiple members that hydrolyze nucleoside 5’-triphosphates and nucleoside 5’-diphosphates with varying preference for the individual type of nucleotide. We report the cloning and functional expression of rat NTPDase3. The rat brain-derived cDNA has an open reading frame of 1590 bp encoding 529 amino acid residues, a calculated molecular mass of 59.1 kDa and predicted N- and C-terminal hydrophobic sequences. It shares 94.3% and 81.7% amino acid identity with the mouse and human NTPDase3, respectively, and is more closely related to cell surface-located than to the intracellularly located members of the enzyme family. The NTPDase3 gene is allocated to chromosome 8q32 and organized into 11 exons. Rat NTPDase3 expressed in CHO cells hydrolyzed both nucleoside triphosphates and nucleoside diphosphates with hydrolysis ratios of ATP:ADP of 5:1 and UTP:UDP of 8:1. After addition of ATP, ADP is formed as an intermediate product that is further hydrolyzed to AMP. The enzyme is preferentially activated by Ca2+ over Mg2+ and reveals an alkaline pH optimum. Immunocytochemistry confirmed expression of heterologously expressed NTPDase3 to the surface of CHO cells. PC12 cells express endogenous surface-located NTPDase3. An immunoblot analysis detects NTPDase3 in all rat brain regions investigated. An alignment of the secondary structure domains of actin conserved within the actin/HSP70/sugar kinase superfamily to those of all members of the NTPDase family reveals apparent similarity. It infers that NTPDases share the two-domain structure with members of this enzyme superfamily.
Introduction
The protein family of ecto-nucleoside triphosphate diphosphohydrolases (E-NTPDase family) contains multiple members that differ regarding tissue distribution, cellular location and substrate specificity. They hydrolyze nucleoside 5′-triphosphates and nucleoside 5′-diphosphates with varying preference for the individual type of nucleotide. Depending on subtype, their catalytic site may be in ecto-position facing the extracellular medium or it may be located in the lumen of intracellular organelles. Whereas the surface-located members of the protein family are thought to be involved mainly in the control of ligand availability for P2 receptors, the functional role of the intracellular members is less defined [1,2].
The gene family has members within vertebrates, invertebrates, plants, yeast and protozoans but has not been identified in bacteria (references in [1,3-5]). Hallmarks of all E-NTPDases are five highly conserved sequence domains ('apyrase conserved regions' [3,4,6]) that are presumably involved in the catalytic cycle. Interestingly, E-NTPDases share two common sequence motifs with members of the actin/HSP70/sugar kinase superfamily, the actin-HSP70-hexokinase β- and γ-phosphate binding motif [(I/L/V)X(I/L/V/C)DXG(T/S/G)(T/S/G)XX(R/K/C)] [3,7-9], with the DXG sequence strictly conserved.
In contrast to the members of the actin/HSP70/sugar kinase superfamily, mammalian E-NTPDases are membrane-anchored proteins with one or two transmembrane domains. NTPDase5 and NTPDase6 contain a single predicted N-terminal hydrophobic domain. They are located to the endoplasmic reticulum or Golgi apparatus, respectively, but they can also be released in soluble form from transfected cells [10-12]. They preferentially hydrolyze nucleoside diphosphates. The two forms of NTPDase4, which differ by alternative splicing, have predicted N- and C-terminal transmembrane domains and were allocated to the Golgi apparatus (UDPase) [13] and to lysosomal/autophagic vacuoles (LALP70) [14,15], respectively. The Golgi enzyme hydrolyzes a number of nucleoside 5′-di- and triphosphates but not ATP and ADP. NTPDase7 [16] is localized to intracellular organelles and hydrolyzes a variety of nucleoside triphosphates with the exception of ATP. In mammals, four different surface-located subtypes of E-NTPDases have been cloned and characterized. They share a membrane topography with N- and C-terminal transmembrane domains: NTPDase1 [17-20], NTPDase2 [8,21], NTPDase3 [9,22,23], and most recently NTPDase8 [24]. They all hydrolyze nucleoside tri- and diphosphates but differ significantly in catalytic properties [25].
Surface-located mammalian E-NTPDases have been cloned and characterized mainly from rat, human and mouse tissues. Since these enzymes vary regarding substrate preference and the pattern of product formation, it is necessary to compare the catalytic properties of individual enzymes within the same species. We have previously cloned and characterized rat NTPDase1 and rat NTPDase2 [8,20]. Here we report the cloning and characterization of rat NTPDase3. We analyze its functional properties and tissue distribution in brain and compare some of the key properties of primary and secondary structure to those of other members of the gene family.
cDNA library screening
Total RNA from rat brain was isolated with Trizol reagent. For isolation of polyadenylated RNA from total RNA, oligo(dT)-cellulose was used according to the manufacturer's instructions. A cDNA library was synthesized with SuperScript II from 0.5 μg of mRNA with an oligo(dT)18 primer in accordance with the manufacturer's instructions. Two mouse sequences from expressed sequence tag (EST) databases (GenBank accession numbers bf302156 and w46136) were used for primer design. As a probe, a 288-bp PCR fragment was amplified using forward primer 5′-CCGTCCCTGCTCCCAAGATTT-3′, reverse primer 5′-CAGGCACAGCAAGGCGATAGC-3′, and the rat brain cDNA library as a template. For library screening, electrocompetent Escherichia coli DH5α were transformed with an amplified rat brain pCMV-SPORT 2 cDNA library and plated on Luria-Bertani/ampicillin agar plates. The resulting transformants were screened by colony hybridization with the 288-bp cDNA fragment labeled with [α-32P]dCTP by PCR. Positive signal areas were amplified and rescreened for single positive colonies.
cDNA sequencing and computational sequence analysis

DNA sequencing was performed by Scientific Research and Development GmbH (Oberursel, Germany). Primer walking in both directions was employed for obtaining the complete full-length sequence of the cDNA clone 3.1.1.1. The Omiga 2.0 sequence analysis program (Oxford Molecular Ltd., Oxford, UK) was used for assembling sequence fragments, translating DNA into amino acid sequences, generating hydrophobicity plots and amino acid alignments (CLUSTAL W algorithm). To align the amino acid sequences for the dendrogram, ClustalX 1.81 was used, and for the graphic depiction, BoxShade v3.31c. For prediction of transmembrane domains, the software TMHMM 2.0 (www.cbs.dtu.dk/services/TMHMM-2.0) was employed. For signal peptide and sorting analysis, SignalP 3.0 (www.cbs.dtu.dk/services/SingalP/) and PSORT II (http://psort.nibb.ac.jp/form2.html) were used. The DNA and deduced amino acid sequences were analyzed for similarity to known sequences with the NCBI BLAST network service (www.ncbi.nlm.nih.gov/BLAST/). Protein motif searches were performed using the PROSITE database (www.expasy.org/prosite/). Secondary structure prediction of the amino acid sequences was performed with the SSpro tool (www.igb.uci.edu/tools/scratch/). The genomic library was screened using BLAST and the splice analysis of the genomic sequence was performed using the splice site analysis tool www.fruitfly.org/seq-tools/splice.html.
Expression of recombinant proteins
For recombinant expression, the EcoRI/NotI cDNA fragment of clone 3.1.1.1 was cloned into the EcoRI/NotI sites of pcDNA3. Chinese hamster ovary (CHO) cells were cultured as previously described [8]. Cells were transfected by electroporation with the rat NTPDase3-pcDNA3 plasmid or with plasmids expressing rat NTPDase1 [20] or NTPDase2 [8] in electroporation buffer (in mM: 137 NaCl, 5 KCl, 0.7 Na2HPO4, 6 dextrose, 20 Hepes, pH 7.0) using a BTX Electrocell Manipulator 600. In control experiments, cells were transfected with empty vector alone. Twenty-four hours after electroporation, the culture medium of transfected CHO cells was exchanged to remove dead cells and debris.
Preparation of membrane fractions
Forty-eight hours after electroporation, the culture medium was removed, cells were washed twice with isotonic buffer (in mM: 140 NaCl, 5 KCl, 0.5 EDTA, 20 MOPS, pH 7.4) and scraped from the plates in ice-cold homogenization buffer (in mM: 250 sucrose, 2 EDTA, 2 iodoacetamide, 30 MOPS, pH 7.4) containing a mixture of protease inhibitors (in μg/ml: 2 chymostatin, 2 aprotinin, 1 pepstatin, 150 benzamidine, 2 antipain, 2 leupeptin). After centrifugation at 300 g_av, cells were resuspended in homogenization buffer, homogenized, and centrifuged for 10 min at 300 g_av at 4 °C. The resulting supernatant was sonicated and subsequently centrifuged at 100,000 g_av for 60 min at 4 °C; pellets were resuspended in storage buffer (in mM: 2 iodoacetamide, 25 Hepes, pH 7.4) containing the protease inhibitor mixture and 50% (v/v) glycerol, and stored at −20 °C until further processing. For preparation of membrane fractions from various brain tissues, Wistar rats obtained from Charles River Wiga (Sulzfeld, Germany) were used. Animals were anaesthetized with CO2 and decapitated; the brain was dissected, homogenized in five volumes of homogenization buffer containing the protease inhibitor mixture and further processed as described above.
Measurement of nucleotidase activities
Nucleotidase activity was determined by measuring the formation of Pi liberated from nucleotides [26]. Membrane fractions were incubated at 37 °C in phosphate-free solution containing 500 μM CaCl2, 25 mM Hepes (pH 7.4), and 500 μM nucleoside tri- or diphosphates. Samples were heat-inactivated for 2 min at 95 °C prior to determination of inorganic phosphate. To investigate the dependence of enzyme activity on metal ions, 50-400 μM CaCl2 or MgCl2 was added, or CaCl2 and MgCl2 were replaced by 1 mM EDTA. pH dependency was determined using a combined buffer (25 mM Hepes and 50 mM glycine) ranging from pH 3 to 10, containing 500 μM ATP and 500 μM CaCl2 or 500 μM MgCl2. Catalytic activity of membrane fractions derived from cells transfected with the empty plasmid was subtracted from that obtained with cDNA-transfected cells. It was verified for each experimental condition that hydrolysis rates were constant over time (10-30 min). At the end of the reaction with nucleoside triphosphate it was ensured that less than 10% of the initial substrate had been hydrolyzed.
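The rate computation behind such measurements is straightforward; the following Python sketch (all numbers invented for illustration) converts absorbance readings into released Pi via a standard curve and extracts a specific activity from the linear portion of the time course:

import numpy as np

# Phosphate standard curve (absorbance -> nmol Pi); numbers are invented.
std_pi = np.array([0.0, 5.0, 10.0, 20.0, 40.0])          # nmol Pi
std_abs = np.array([0.01, 0.09, 0.17, 0.34, 0.66])       # absorbance units
slope, offset = np.polyfit(std_abs, std_pi, 1)

# Time course within the verified linear range (10-30 min in the text).
t_min = np.array([0.0, 10.0, 20.0, 30.0])
sample_abs = np.array([0.02, 0.15, 0.28, 0.41])
pi_nmol = slope * sample_abs + offset
rate = np.polyfit(t_min, pi_nmol, 1)[0]                  # nmol Pi per min

protein_mg = 0.02                                        # protein per assay
print(rate / protein_mg, "nmol Pi min^-1 mg^-1")         # specific activity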
For determination of product formation during ATP hydrolysis (250 μM ATP, 250 μM CaCl2), aliquots were collected at various time points and subjected to HPLC analysis. Following heat inactivation, samples were centrifuged at 20,000 g_av for 15 min (4 °C). ATP, ADP, AMP, and adenosine were separated on a Sepsil C18 reverse-phase column (Jasco, Groß-Umstadt, Germany) and eluted with the mobile phase, consisting of 50 mM potassium phosphate buffer (pH 6.5), 6% methanol and 5 mM tetrabutylammonium hydrogen sulfate [20]. The absorbance at 260 nm was continuously monitored and nucleotide concentrations were determined from the area under each absorbance peak.
Immunoblotting
The anti-rat NTPDase3 antibody (N3-3i4) was raised in rabbits by direct injection of the encoding NTPDase3-pcDNA3 plasmid [27]. Prior to Western blot analysis of membrane fractions from transfected CHO cells or rat brain tissues, NTPDase3 was enriched using ConA-Sepharose. The 100,000 g av pellet was solubilized in ConA buffer (in mM: 150 NaCl, 2 MgCl2, 2 CaCl2, 2 MnCl2, 20 Tris/HCl, pH 7.4) containing 0.1% Triton X-100. After an overnight incubation with ConA-Sepharose at 4 °C, the ConA-Sepharose was washed several times with ConA buffer containing 0.1% Triton X-100, and protein was eluted with the same buffer containing 200 mM methyl-α-D-mannopyranoside. For immunoblotting, the ConA eluate was precipitated with 10% trichloroacetic acid and the pellet was resuspended in sample buffer without reducing agents. Polyacrylamide gel electrophoresis was carried out on minigels (10% acrylamide). Immunoblotting using the polyclonal NTPDase3 antibody (dilution 1:4000) was performed using the Amersham enhanced chemiluminescence system according to the manufacturer's instructions.
Immunofluorescence
Transfected CHO cells (15,000 cells per well) or nontransfected PC12 cells (15,000 cells per well) were seeded onto poly-D-lysine-coated (10 µg/ml) glass cover slips (10 mm diameter) and cultured for 2 days (CHO cells) or up to 14 days (PC12 cells) [28]. An aliquot of 50 ng/ml of NGF was added to PC12 cells every 3–4 days. For surface labeling, the anti-rat NTPDase3 antibody (1:1000) was applied to viable cells for 15 min at 37 °C, followed by repeated washing with phosphate-buffered saline (PBS, in mM: 137 NaCl, 3 KCl, 15 Na+/K+ phosphate buffer, pH 7.4) at room temperature, methanol fixation at −20 °C, and application of a Cy3-conjugated anti-rabbit IgG antibody together with 2-(4-amidinophenyl)-6-indolecarbamidine dihydrochloride (DAPI, 1 µg/ml). After immunolabeling, cells were mounted and investigated with an epifluorescence microscope equipped with an MCID 4 imaging analysis system (Imaging Research, St. Catharines, Canada).

Figure 1. DNA sequence and predicted protein sequence of rat NTPDase3. The five "apyrase conserved regions" (ACRs) are indicated by boxes and numbered. Cysteine residues and potential N-glycosylation sites are indicated by arrowheads and filled circles, respectively. Predicted N- and C-terminal hydrophobic sequences are shaded. Amino acid residues conserved between NTPDase1 and NTPDase8 are in bold; in ACR1 and ACR4 these include the DXG-containing phosphate-binding motif, and in ACR5 a conserved glycine. (GenBank accession number AJ437217.)
Cloning and sequencing of rat NTPDase3
Two mouse EST sequences (GenBank accession numbers bf302156 and w46136) homologous to human NTPDase3 (GenBank accession number AF034840) were used for primer design. With these primers, a 288-bp fragment was amplified from rat brain cDNA by RT-PCR. Using colony hybridization, six positive cDNA clones were isolated from a rat brain pCMV-SPORT 2 cDNA library using the 288-bp radiolabeled PCR fragment as a probe. One of the clones to the extracellular domain, five potential protein kinase C phosphorylation sites and six potential casein kinase II phosphorylation sites. The hydrophobicity analysis predicts two strong hydrophobic stretches in the polypeptide chain, close to the N- and C-terminal ends (Figure 1). These regions represent predicted transmembrane domains with the N- and C-terminus at the cytosolic side, separated by a large extracellular loop. This corresponds to the membrane topography of the known plasma membrane-located NTPDases [1].
A homology search using the rat genomic sequence database localized the NTPDase3 gene (Entpd3) to chromosome 8q32 (GenBank Acc.
Relation to other members of the E-NTPDase family
The deduced amino acid sequence of the cloned rat NTPDase3 shares 94.3% and 81.7% amino acid identity, respectively, with the mouse [23] and human [22] orthologs. Compared to other rat members of the E-NTPDase family, NTPDase3 is most closely related to the cell surface-located NTPDase2, NTPDase8, and NTPDase1 (36.9%, 36.3%, and 35.1% identity, respectively). It is more distantly related to the intracellularly located rat NTPDase5 and rat NTPDase6 (17.8% and 15.2% identity, respectively). The graphic depiction of a multiple sequence alignment of 22 mammalian members of the E-NTPDase family illustrates the subdivision of the family into intracellular and surface-located enzymes (Figure 3). Whereas the origins of the branches for the four surface-located members (NTPDase1, NTPDase2, NTPDase3, NTPDase8) are located close to each other, the intracellular members are divided into two sub-branches. One sub-branch consists of NTPDase4 and NTPDase7, which share their membrane topology with the surface-located enzymes. The other sub-branch, consisting of NTPDase5 and NTPDase6, is characterized by a single N-terminal hydrophobic domain, which can be cleaved, resulting in the formation of a soluble enzyme [2].
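As a rough illustration of how such pairwise identity percentages are computed from a fixed alignment, here is a minimal Python sketch; the sequences shown are toy fragments, not the actual NTPDase sequences, and published figures depend on the alignment program and the identity convention used:

```python
def percent_identity(aln1: str, aln2: str) -> float:
    """Percent identity over aligned columns, skipping double-gap columns.
    Identity is counted relative to the number of retained columns; other
    conventions (e.g., relative to the shorter sequence) give other numbers."""
    assert len(aln1) == len(aln2), "sequences must come from the same alignment"
    cols = [(a, b) for a, b in zip(aln1, aln2) if not (a == "-" and b == "-")]
    identical = sum(a == b and a != "-" for a, b in cols)
    return 100.0 * identical / len(cols)

# Toy aligned fragments (hypothetical):
print(percent_identity("MKT-LLVAGL", "MKSALLV-GL"))  # -> 70.0
```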
The surface-located members of the E-NTPDase family share 10 cysteine residues in comparable locations within the sequence (Figure 4). Of these, two residues are situated between ACR1 and ACR2 and eight residues between ACR4 and ACR5. The closely related NTPDase1 and NTPDase2 possess an additional cysteine residue located in the N-terminal transmembrane domain. In contrast, rat NTPDase3 has one additional cysteine residue in the N-terminal intracellular domain and two cysteine residues in the C-terminal transmembrane domain. The plasma membrane-located E-NTPDases possess seven to eight predicted N-linked glycosylation sites. Their distribution within the protein sequence is similar but not identical. Only the N-glycosylation site between ACR1 and ACR2 (N81 in rat NTPDase3) is conserved between NTPDase1, NTPDase2, and NTPDase3 (rat, mouse, and human sequences). This site is essential for full enzymatic activity of human NTPDase3 [29], but it does not exist in NTPDase8 (mouse, human). The majority of the potential N-glycosylation sites are situated between ACR4 and ACR5.
Secondary structure prediction and relation to the actin/HSP70/sugar kinase superfamily
The similarities in the distribution of the ACRs between the members of the E-NTPDase family imply structural conservation relevant for catalytic activity. It has previously been emphasized that NTPDases share two nucleotide binding motifs with the actin/HSP70/sugar kinase superfamily [3,8,9]. Members of this superfamily share little overall sequence identity except for the strictly conserved DXG motifs in the consensus sequence. These motifs can be identified in ACR1 and ACR4 of all E-NTPDases.
There are, however, apparent similarities in the secondary structure of members of the actin/HSP70/sugar kinase superfamily. Its members possess two major domains with similar topology (β1β2β3α1β4α2β5α3) that fold into a pocket for substrate binding [30–32]. The two domains presumably are the result of gene duplication. Each of these domains is composed of two subdomains (a and b). The opposing subdomains Ia and IIa share the same basic topology (β1β2β3α1β4α2β5α3) that is involved in nucleotide binding. The nucleotide binding site bridges both domains. The two subdomains Ib and IIb (formed by sequences in between β3 and α1) are different from one subfamily to another [30]. Figure 5 shows an alignment of the secondary structure conserved between members of the actin/HSP70/sugar kinase superfamily, as depicted for actin, with the predicted secondary structure of NTPDase1 to NTPDase8. The atomic structure of actin has previously been resolved [33,34]. At the N-terminus, the alignment is oriented around the DXG motif in ACR1 of NTPDases and the corresponding DXG motif in actin. The two β strands around ACR1 of E-NTPDases would thus correspond to β1 and β2 of actin. The overall pattern of conserved actin secondary structure (α1β4α2β5α3) is very similar to that of NTPDases. This β1β2β3α1β4α2β5α3 topology is repeated in the second domain of actin. Similar to actin, the conserved ACR4 of the NTPDases (second DXG motif) is embedded between two β strands. Taking further into consideration the distance between β3 and α1 in the second domain of actin, and assuming that the glycine residue conserved between E-NTPDases in ACR5 (Gly462 in NTPDase3) corresponds to the conserved glycine residue in α3 of the actin/HSP70/sugar kinase superfamily (Gly342 in actin), the secondary structure of the C-terminal half of E-NTPDases may also be aligned.
The degree of similarity in secondary structure mirrors the phylogenetic distance between sequences (cf. Figure 3). In all NTPDases, the sequences ACR1 and ACR4 are situated between two β strands. The alignment not only implies considerable structural conservation between E-NTPDases and members of the actin/HSP70/sugar kinase superfamily. It also suggests that NTPDases, in spite of their membrane anchorage, may consist of two major domains that repeat basic topology and key conserved sequence domains.
Catalytic properties of heterologously expressed rat NTPDase3
Preliminary experiments revealed that the ATPase activity at the surface of viable mock-transfected CHO cells [20] was 8.9 ± 0.4% (n = 2, triplicate determinations in each) of the activity obtained after transfection with the NTPDase3-encoding plasmid (500 µM ATP). In the isolated membrane fraction, ATPase activity after mock transfection was 3.1 ± 2.2% (n = 7) of the activity obtained after NTPDase3 transfection. No significant ADP hydrolysis could be determined in membrane fractions from mock-transfected cells. In the following, catalytic properties of rat NTPDase3 were analyzed in membrane fractions, whereby the catalytic activity derived from cells transfected with the empty plasmid was subtracted from that obtained with cDNA-transfected cells. Activity of rat NTPDase3 strongly depended on the presence of Ca2+ or Mg2+. ATP hydrolysis was stimulated by Ca2+ to a larger extent than by Mg2+, particularly at low concentrations (Figure 6). The difference is less prominent for ADP hydrolysis. In the presence of EDTA (1 mM), no significant nucleotidase activity could be measured (not shown).
ATPase activity revealed an activity optimum between pH 7.5 and 8.5 for both Ca2+ and Mg2+ activation (Figure 7). At pH 5, the rates were approximately 45% of maximal activity. The following experiments were performed at pH 7.4 in the presence of 500 µM Mg2+ and 500 µM nucleotide substrate. An HPLC analysis of product formation using ATP as a substrate showed that ADP is formed in a time-dependent manner and accumulates in the medium. It is further hydrolyzed to AMP. No adenosine is formed (Figure 8).
In addition, we compared the hydrolysis rates for ATP, GTP, CTP, ITP, and UTP as well as the respective diphosphates. NTPDase3 hydrolyzed nucleoside triphosphates, purine or pyrimidine nucleotides alike, at similar rates (Table 1). Nucleoside diphosphates were hydrolyzed at considerably lower rates. Under these conditions, the ratio of hydrolysis rates (triphospho- to diphosphonucleotides) varied between 4.1 and 9.5. ATP was hydrolyzed five times faster than ADP, and UTP eight times faster than UDP.
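The quoted ratios are straightforward quotients of the measured rates; a sketch with invented rates chosen only so the printed ratios match the 5× and 8× figures quoted above (the measured values are in Table 1 of the paper):

```python
# Hypothetical hydrolysis rates in nmol Pi/min/mg, for illustration only:
rates = {"ATP": 520.0, "ADP": 104.0, "UTP": 480.0, "UDP": 60.0}

for tri, di in [("ATP", "ADP"), ("UTP", "UDP")]:
    # Triphospho- to diphosphonucleotide hydrolysis ratio:
    print(f"{tri}/{di} hydrolysis ratio: {rates[tri] / rates[di]:.1f}")
```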
Antibody production and enzyme distribution
A rabbit polyclonal antibody was produced by genetic immunization. The specificity of the antibody was first investigated in CHO cells transfected with the cDNA encoding NTPDase1, NTPDase2, or NTPDase3 (Figure 9).
When applied to the surface of viable cells, the anti-NTPDase3 antibody revealed a significant surface staining of NTPDase3-transfected cells, demonstrating surface localization of the expressed protein. No immunofluorescence was obtained after application of the anti-NTPDase3 antibody to NTPDase1- and NTPDase2-expressing cells. Control experiments verified that antibodies against NTPDase1 or NTPDase2 yielded a cell surface immunosignal with CHO cells transfected with the respective cDNA (not shown).
Surface staining revealed endogenous expression of NTPDase3 by the rat-derived PC12 cells (Figure 9e). Immunofluorescence was observed over the entire cell surface. The formation of small immunofluorescent dots implies a partial clustering of the protein. In addition, immunostaining was enhanced at growth cones. Application of preimmune serum either to CHO or PC12 cells yielded negative results (not shown).
NTPDase3 immunoreactivity could be detected by immunoblotting in all brain regions analyzed. In membrane fractions obtained from tissue extracts, NTPDase3 was hardly detectable (not shown). However, when the protein was enriched by binding to concanavalin A, clear bands could be visualized at 80 kDa (Figure 10). These protein bands corresponded to those obtained from membrane fractions of CHO cells heterologously expressing NTPDase3.
No immunosignal was obtained with CHO cells transfected with the empty vector or with transfected CHO cells expressing rat NTPDase1 or rat NTPDase2 (not shown).
Discussion
Rat NTPDase3 differs significantly in its functional properties from the two other surface-located NTPDases previously isolated from rat, namely NTPDase1 and NTPDase2. Whereas rat NTPDase1 expressed in CHO cells exhibits the typical catalytic properties of an apyrase, with a ratio of hydrolysis rates for ATP and ADP of 1:0.8, rat NTPDase2 has dominant ATPase activity with a ratio of 1:0.05–1:0.03 [8,20,35]. When expressed in CHO cells, rat NTPDase3 reveals an ATP to ADP hydrolysis ratio of 1:0.2 and thus represents a functional intermediate between the two other NTPDases. This difference is further underlined when the product formation following hydrolysis of ATP is analyzed. Whereas NTPDase1 hydrolyzes ATP directly to AMP with minimal formation of free ADP, ADP accumulates in the reaction medium following hydrolysis of ATP by NTPDase2 [20]. NTPDase3 reveals intermediate properties.
In the presence of ATP, NTPDase3 accumulates extracellular ADP, which is finally converted to AMP. Rat NTPDase3 differs significantly from rat NTPDase1 and NTPDase2 regarding cation dependence. Whereas rat NTPDase1 and NTPDase2 are equally activated by Ca2+ or Mg2+ [20], rat NTPDase3 shows a clear preference for activation by Ca2+. In addition, ATPase and ADPase activities are differentially affected by Ca2+ or Mg2+. This differential activation remains unexplained and may depend on the difference in phosphate chain length between ATP and ADP and thus the potential coordination with the respective metal cation within the protein. Removal of divalent cations abolishes the catalytic activity of all three enzymes. None of these enzymes hydrolyzes AMP. NTPDase3 shares, however, with NTPDase1 and NTPDase2 its broad substrate specificity towards purine and pyrimidine nucleoside triphosphates. It can be expected that the differences in catalytic properties between individual subtypes of NTPDases differentially affect P2 receptor signaling, either by activating or inactivating P2 receptors [36]. The principal functional properties of rat NTPDase3 are similar to those of human NTPDase3 (HB6, [22]) and mouse NTPDase3 [23]. Interestingly, the pH dependence of ATP hydrolysis by rat NTPDase3 clearly differs from that of mouse NTPDase3. Whereas mouse NTPDase3 expresses a considerably higher activity at pH 5 than at pH 7–8 [23], rat NTPDase3 reveals its maximal activity at alkaline pH. Interestingly, the recently cloned NTPDase8 [24] shares principal functional properties with NTPDase3 rather than with NTPDase1 or NTPDase2. Mouse NTPDase8 is preferentially activated by Ca2+ over Mg2+, has an ATP to ADP hydrolysis ratio of approximately 1:0.5, and accumulates ADP that is effectively further hydrolyzed to AMP [24]. Human NTPDase3 forms a dimer [37], and glycosylation is essential for functional expression [38]. The length of the ORF of rat, mouse [23], and human [22] NTPDase3 is identical (529 aa). The three enzymes share 13 cysteine residues and reveal 7 (rat, human) or 8 (mouse) potential N-glycosylation sites. The rat gene (31.0 kb) is slightly longer than the mouse gene (26.9 kb), but it shares its general organization into 11 exons, of which exons 2 to 11 contain the ORF.
Rat NTPDase1 and 2 have previously been immunolocalized in the brain. NTPDase1 is associated with the endothelium of blood vessels and smooth muscle as well as with microglia [39,40]. NTPDase2 is expressed by neural stem cells in the subventricular zone of the lateral ventricles [41]. The immunoblot analysis identified NTPDase3 in all brain regions investigated. NTPDase3 could be detected by immunocytochemistry at the surface of viable PC12 cells. The corresponding RNA has previously been identified in PC12 cells by RT-PCR, together with that of NTPDase1 and NTPDase2 [42]. Interestingly, the hydrolysis ratio of ATP:ADP (1:0.28) and the product formation following application of ATP to viable PC12 cells are much closer to those of NTPDase3 than of NTPDase2, for which a weak immunostaining was also obtained [42]. This suggests that NTPDase3 is the predominant ecto-nucleotidase of PC12 cells. In addition, we observed an enhancement of NTPDase3 immunoreactivity at growth cones of PC12 cells, suggesting that the enzyme may be preferentially associated with sites of active membrane incorporation.
The availability of all NTPDase isoforms expected from genomic analysis opens the possibility of structural and functional comparison. To date, structural data for this enzyme family are not available. However, the atomic structure of a considerable number of enzymes belonging to the actin/HSP70/sugar kinase superfamily, including glycerol kinase, has been derived. All these proteins are soluble, have ATP phosphotransferase or hydrolase activity, depend on divalent metal ions, and tend to form oligomeric structures [32]. Individual enzyme families lack global sequence identity. They share, however, the principal structure of two major domains (I and II) of similar folds on either side of a large cleft, with an ATP binding site at the bottom of the cleft [30]. These two domains are expected to undergo conformational changes involving movement relative to each other. Both domains I and II are divided into subdomains (Ia, Ib, IIa, IIb). Subdomains Ia and IIa have the same basic fold with conserved secondary structure elements that share considerable similarity with those of NTPDases.
A comparison of the conserved secondary structure (Figure 5) reveals duplicate conservation of DXG motifs between β strands (ACR1 and ACR4) of NTPDases that correspond to the β- and γ-phosphate binding motifs in subdomains Ia and IIa of actin, as well as a conserved glycine in ACR5 that can be identified among all members of the actin/HSP70/sugar kinase superfamily (α3, [32]). This further supports the notion that E-NTPDases are members of the actin/HSP70/sugar kinase superfamily. It implies in addition that E-NTPDases, like the other members of this superfamily, consist of two major domains with one phosphate binding motif in each domain and the binding of the nucleotide in a cleft between the two opposing domains [31]. Interestingly, some members of the E-NTPDase superfamily are entirely soluble (e.g., potato apyrase or the nucleoside triphosphatases of the protozoan parasite Toxoplasma gondii, references in [35]), others have one transmembrane domain and can be cleaved to form catalytically active soluble enzymes (NTPDase5, NTPDase6), and yet others (NTPDase1, 2, 3, 4, 8) are firmly anchored to the membrane via two transmembrane domains. The transmembrane domains of NTPDase1 were found to be important for maintaining catalytic activity and substrate specificity [6,43], presumably by affecting tertiary and/or quaternary structure. The two transmembrane domains of NTPDase1 interact both within and between monomers and may undergo coordinated motions during the process of nucleotide binding and hydrolysis [44].
NTPDases differ from members of the actin/HSP70/sugar kinase superfamily by additional conserved short sequence domains (ACR2, ACR3). Differences in sequence, secondary and tertiary structure are believed to account for differences in catalytic properties between related NTPDases [35]. The essential role of the ACRs for catalytic activity has been underpinned by a considerable number of studies using point mutations in the ACRs or ACR deletions [2, 29, 45–48].
Our present study shows that rat NTPDase3 displays catalytic properties distinctly different from those of rat NTPDase1 and rat NTPDase2. Rat NTPDase3 differs from mouse NTPDase3 in its pH dependence. The enzyme is expressed in multiple brain regions and at the surface of PC12 cells. A comparison of the conserved secondary structure of actin and of NTPDases reveals apparent similarities, suggesting that basic tertiary structure may also be conserved between members of the actin/HSP70/sugar kinase superfamily and NTPDases.
"year": 2005,
"sha1": "b85e02b5f7abdc491e2ccb662a4e0ca1257aae93",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11302-005-6314-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b85e02b5f7abdc491e2ccb662a4e0ca1257aae93",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Cognitive Benefits From a Musical Activity in Older Adults
The aging population is growing rapidly. Proposing interventions that enhance cognitive functions, or strategies that delay the onset of disabilities associated with age, is a topic of capital interest for the biopsychosocial health of our species. In this work, we employed musical improvisation as a focal environmental activity to explore its ability to improve memory in older adults. We present two studies: the first one evaluated neutral memory using the Rey Complex Figure (RCF), and the second one evaluated emotional memory using the International Affective Picture System (IAPS). A group of 132 volunteers, between the ages of 60 and 90, participated in this investigation. Fifty-one of them were musicians with more than 5 years of formal musical training. After acquisition of neutral (Study 1) or emotional (Study 2) information, the groups of older adults were exposed to music improvisation (experimental intervention) or music imitation (control intervention) for 3 min. We then evaluated memory through two tasks (free recall and recognition), by means of immediate and deferred measures (after a week). We found a significant improvement in memory among participants involved in music improvisation, who remembered more items of the RCF and more images from the IAPS than the imitation group, both in the immediate and the deferred evaluation. On the other hand, participants who had musical knowledge performed better in neutral visual memory than non-musicians. Our results suggest that a focal musical activity can be a useful intervention in older adults to promote an enhancement in memory.
INTRODUCTION
Nowadays, there is an increase in life expectancy, which is highly positive for human beings, although it brings with it a decline in our cognitive functions (Christie et al., 2017). It is estimated that by 2050 there will be 114 million people with dementia, this condition being one of the major causes of disability and dependence in the older adult population (World Health Organization, 2012; Iuliano et al., 2015). For this reason, proposing interventions that enhance cognitive functions, or strategies that delay the onset of disabilities associated with age, is a topic of capital interest for the biopsychosocial health of our species (Kramer et al., 2004). For example, treatments that enhance cognitive abilities could be promoted in each life stage, from childhood to old age.
Memory is one of the cognitive skills most affected by aging (Nyberg et al., 2003; Park and Festini, 2017). This function could be defined as the capacity to learn, store, and retrieve information (Tulving, 2002; Squire and Wixted, 2011). There are several memory subsystems; the one most affected by aging is episodic memory (Friedman, 2013). At the same time, emotional memory could be considered a part of episodic memory, and it is defined as better storage and recall of events associated with emotional factors, i.e., those events that have an emotional load are better remembered than neutral ones (Cahill and McGaugh, 1995; Bermúdez-Rattoni and Prado-Alcalá, 2001). Evidence showed that older adults had a decrease in episodic memory, but emotions could work as enhancers and compensate for this deficit (Moayeri et al., 2010).
Several strategies or environmental interventions, in addition to lifestyles, have been investigated mainly to improve cognitive functions and to prevent and/or delay cognitive deficits. Such interventions include learning other languages (Abutalebi et al., 2015), physical activity (Loprinzi et al., 2018), and music (Schneider et al., 2018). In particular, music makes unique demands on our nervous system (Justel and Diaz Abrahan, 2012), and therefore, over the last years, music and each of its components have been used as a tool to investigate human cognition and its underlying brain mechanisms, because music affects cortical and subcortical areas (Pantev and Herholz, 2011; Koelsch et al., 2018). Some studies show that listening to music improves cognitive skills such as fluency (Thompson et al., 2006), working memory (Mammarella et al., 2007), and recognition memory (Ferreri et al., 2013), among others. For example, background music was investigated as a focal and acute strategy that could improve cognitive skills. This technique refers to any music that is played while the listener's primary attention is focused on another task or activity (Bottiroli et al., 2014). Different studies about the effect of background music have shown some improvements in cognitive abilities. For example, Judde and Rickard (2010) performed a study in which participants listened to 3 min of music after the acquisition of information and showed better recognition memory 1 week later. However, there is also some evidence of reduced cognitive performance when music is present (Kämpfe et al., 2010; Rickard et al., 2012).
Furthermore, other investigations indicate that musical production could have even more beneficial effects than musical perception (Lappe et al., 2008; Fancourt et al., 2014).
There is some research about music production, as a focal intervention, in the field of neurologic music therapy (Thaut et al., 2009; Thaut and Hoemberg, 2014), but none of it focused on the effects of music production on memory. Besides, studies distinguish how music and its components affect people with and without formal musical knowledge (Zuk et al., 2014; Schlaug, 2015; Zhao et al., 2017). In general, because of their extensive training affecting the anatomical and functional organization of their brains, musicians have been shown to have a greater cognitive reserve than non-musicians (Hanna-Pladdy and Gajewski, 2012), and hence, their memory would be less compromised over the years (Talamini et al., 2018). In addition, the protective effect of playing an instrument is greater than that of other leisure activities (Amer et al., 2013). For example, some studies indicated that musical training improves cognitive functions in older musicians compared with non-musicians, such as memory, naming, and executive functions, among others (Hanna-Pladdy and MacKay, 2011).
Among the interventions that involve musical production, musical training is the one that has received the most attention. Training includes learning how to play an instrument, and most studies evaluate the effect of moderate or long-term learning (Barrett et al., 2013), leaving a gap as far as focal interventions are concerned. Another intervention that involves musical production is musical improvisation, which is defined as an example of musically creative behavior, conceived as an original and novel process requiring divergent thinking (Bengtsson et al., 2007; Manzano and Ullén, 2012; Diaz Abrahan and Justel, 2015). Research is scarce in this area, and most studies emphasize the use of improvisation in musicians (Limb and Braun, 2008), assuming that improvising musically implies having some degree of expertise in music. However, it is also used with people without musical training as a technique for the patient population (e.g., neurological music therapy, Thaut et al., 2009). In this perspective, music improvisation is conceived as the combination of sounds created in a specific framework inside an environment of trust, which is established to address the needs of the participant or patient (Wigram, 2004). In this sense, music improvisation is not only performed by musicians, but it is also a real-time ability that every person has (Wigram, 2004). Still, research on the use of the musical improvisation technique in people without a pathology and in non-musicians is infrequent. In addition, older people are unlikely to begin learning an instrument at an advanced age. Therefore, providing the opportunity of a focal intervention where the participants play instruments and create something novel in groups, without long-term demands, could result in low dropout rates.
The main goal of this work was to investigate the effect of a focal environmental activity as a possible memory improvement technique in older adults. We evaluated whether there were differences between neutral and emotional memory and between participants with and without formal musical knowledge. The intervention employed was musical improvisation, because it involves a musically creative behavior that may be implemented in musicians or non-musicians and because this focal/acute technique is used with older adults. We expected musical improvisation to improve memory and musicians to perform better than non-musicians in the memory evaluations. Finally, we hypothesized that information with emotional content would be better remembered than neutral information.
Participants
Sixty-nine volunteers (75% female participants) between the ages of 60 and 90 (M = 74.16; SD = 1.1) participated in this study. Twenty-six were musicians (M) with more than 5 years of formal musical training (schools, institutes, music conservatories). Forty-three were considered non-musicians (NM). An a priori power analysis suggested that N = 57 would be adequate to provide 0.60 power (software G*power, Faul et al., 2007). They were recruited from different senior cultural centers through online announcements. Participant exclusion criteria included visual or hearing impairment, amusia, or any music-related pathology, cognitive impairment, and depression. Each participant signed a written informed consent form and completed a questionnaire where sociodemographic and musical expertise information was requested. The procedure was approved by the University of Buenos Aires Ethics Committee.
General Cognitive State Evaluation
As depressive symptomatology may affect memory, we administered the Yesavage Geriatric Depression Scale (GDS, Sheikh and Yesavage, 1986; Martinez de la Iglesia et al., 2002), which measures depression specifically in older adults by assessing anhedonia, sadness, loss of interest, etc. Scores between 0 and 10 are considered to be within the normal range, scores of 11–14 indicate sensitivity to depression, and scores over 14 indicate depression. Participants with a score of 11 or more were excluded. The Mini Mental State Examination (MMSE, Folstein et al., 1975) was used to rule out cognitive impairment. The MMSE is a screening test that measures dementia symptoms. Scores between 9 and 11 are considered to be within the dementia range, scores between 12 and 24 signal cognitive impairment, and scores between 24 and 26 suggest sensitivity to dementia. For schooled participants under 75 years of age, 27 points was the cut score; for schooled participants over 75 years old, 26 was the score selected to exclude participants (Butman et al., 2001). Both the GDS and the MMSE were administered individually.
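A minimal Python sketch of this inclusion rule as we read it (how participants aged exactly 75, and unschooled participants, were handled is not specified in the text, so those branches are our assumption):

```python
def passes_screening(gds: int, mmse: int, age: int, schooled: bool = True) -> bool:
    """Inclusion rule sketched from the text: GDS of 11 or more excludes the
    participant; the MMSE cutoff is 27 for schooled participants under 75 and
    26 otherwise (the age == 75 boundary and the unschooled case are assumed)."""
    if gds >= 11:  # sensitive to depression or depressed
        return False
    mmse_cutoff = 27 if (schooled and age < 75) else 26
    return mmse >= mmse_cutoff

print(passes_screening(gds=4, mmse=28, age=68))   # True
print(passes_screening(gds=12, mmse=30, age=70))  # False (GDS exclusion)
```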
Neutral Memory Evaluation
The material for the neutral memory task was the Rey Complex Figure (RCF; Meyers and Meyers, 1995). It is a figure with 18 different items that compose a larger image.
Instrumental Setting
For the musical experiences (imitation or improvisation), participants were allowed to choose percussion instruments (e.g., drums, maracas, bells, wood blocks, shakers, tambourine) or melodic/harmonic instruments (e.g., guitar, melodica, xylophone, flutes). These instruments were included because they were easy to handle.
Musical Interventions
Music Improvisation (Experimental Condition, EXP). The first author (a music therapist) performed a rhythmic pattern repeatedly for 3 min as a base for an improvised performance by the participants playing their instruments. This pattern was performed with a percussion instrument at a medium volume (Figure 1; Berkowitz and Ansari, 2008, 2010; Manzano and Ullén, 2012; Pinho et al., 2016). Participants chose any instrument and improvised musical patterns with instruments or their voices or bodies, spontaneously creating some musical feature according to the context provided by the base-pattern. The instructions included playing without restrictions: the researcher proposed a free improvisation based on the same rhythmical pattern used in the imitation intervention (Figure 1). Such a rhythmical baseline was introduced in order to guide non-musician participants, because pilot studies had shown that without such guidance participants could not follow the improvisation directions.
Imitation (Control Condition, CTRL). The same researcher who conducted the musical improvisation performed the same rhythmic pattern repeatedly for 3 min as a model to be imitated by the participants with their instruments. This pattern was performed on the same percussion instrument at a medium volume. In this intervention, the participants imitated the pattern for 3 min (Gilbertson, 2013). The instructions included imitating the pattern heard as faithfully as possible, avoiding variations or new musical materials. This intervention was meant to control for possible effects of movement, music perception, musical instruments, among others, that could otherwise explain the results.
Experimental Design
Because there were two interventions (EXP vs. CTRL) and the participants had different musical expertise (M and NM), a 2(Intervention) × 2(Training) experimental design was run, with four groups with the following number of subjects: (1) M/EXP: musicians' improvisation group (n = 15); (2) M/CTRL: musicians' imitation group (n = 11); (3) NM/ EXP: non-musicians' improvisation group (n = 22); and (4) NM/CTRL: non-musicians' imitation group (n = 21). Participants were randomly and blindly assigned to the different groups, and they were always tested in groups, with a minimum of four and a maximum of 10 participants, in order to control the involvement of each participant in the music performance.
Procedure
The study was divided into two sessions with a one-week intersession interval. The first session consisted of four immediately consecutive phases. In the first phase (information phase, about 15 min), the participants signed the informed consent form and completed the socio-demographic and musical expertise questionnaire. In this step, we also evaluated the general cognitive state with the MMSE and GDS. In the second phase (acquisition, 9 min), the participants watched the RCF and they were asked to copy it (they were supplied with pencil and paper).

Figure 1. Rhythmic base-pattern presented by the researcher to guide both music imitation and improvisation, which participants were asked to perform with a set of basic instruments.
In the third phase (treatment phase, about 3 min), the participants were exposed to the musical interventions (improvisation or imitation). The following directions were given during the music improvisation intervention: "We will listen to a rhythmic base, from which you have to create something musical as a group. This rhythmic base will help you to start the improvisation at any time you want. You can use instruments, your voice or your body. It is important to listen not only to the base but also to your own group." In the imitation intervention (control condition), the following directions were given: "We will listen to a rhythmic base and, anytime you want, you can start to imitate me. You can use instruments, your voice or your body." Before starting, the researcher confirmed that all the participants had understood the instructions. Then, they freely chose the musical instrument that they wanted to play, and they performed the improvisation or imitation task in groups for 3 min.
Soon afterwards, in the fourth phase (test phase, about 11 min), a two-task test was run. Participants were given paper and pencil to draw the RCF from memory (Immediate Free Recall task), and then 12 target items of the RCF were mixed with 12 new items and participants were asked to indicate whether they had seen each item before or not (Immediate Recognition task).
The second session (11 min) was held a week later, when the two-task test was run again (Deferred Free Recall task and Deferred Recognition task; see Figure 2 for a schematic design of the procedure).
Data Analysis
Age, years of formal education, and years of musical education were analyzed independently via univariate analysis of variance (ANOVA), where Intervention (improvisation vs. imitation) and Training (musicians vs. non-musicians) were the between-factors.
Copy and free recall (immediate and deferred) of the RCF were evaluated by means of the following procedure: Each of the 18 components of the RCF was evaluated according to whether it was well-drawn and correctly located (2 points), well-drawn but incorrectly located (1 point), badly drawn but correctly located (1 point), badly drawn but recognizable (0.5 points), and badly drawn and incorrectly located (0 points). The maximum final score could amount to 36. Because musicians had more years of education than non-musicians and because there were differences in the copy of the RCF (data shown in Results section), recall and recognition (immediate and deferred) were independently analyzed via ANCOVA with Intervention (improvisation vs. imitation) and Training (musicians vs. non-musicians) as the between-factors and Education and Copy as the co-variables.
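A compact Python encoding of this per-item scoring rule (the precedence for an item that is badly drawn but recognizable yet incorrectly located is not fully specified in the text, so the ordering below is our assumption):

```python
def score_rcf_item(well_drawn: bool, correctly_located: bool,
                   recognizable: bool = True) -> float:
    """Per-item score for the 18 RCF components (maximum total 2 * 18 = 36)."""
    if well_drawn and correctly_located:
        return 2.0
    if well_drawn or correctly_located:   # exactly one of the two criteria met
        return 1.0
    return 0.5 if recognizable else 0.0

def total_rcf_score(items) -> float:
    """items: iterable of (well_drawn, correctly_located, recognizable) triples."""
    return sum(score_rcf_item(*item) for item in items)

print(total_rcf_score([(True, True, True)] * 18))  # -> 36.0
```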
Post hoc least-significant difference (LSD) pairwise comparisons were conducted to analyze significant interactions. The partial eta squared (ηp²) was utilized to estimate effect size. The alpha value was set at 0.05, and the SPSS software package was used to compute descriptive and inferential statistics.
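The analyses were run in SPSS; a rough Python analogue of the between-subjects ANCOVA and the partial eta squared computation, using statsmodels, might look as follows (Type II sums of squares are used here for simplicity, whereas SPSS defaults to Type III; the column names are illustrative, not from the authors' dataset):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova_with_eta(df: pd.DataFrame, dv: str = "recall") -> pd.DataFrame:
    """2(Intervention) x 2(Training) ANCOVA with Education and Copy as covariates.
    Partial eta squared: SS_effect / (SS_effect + SS_error)."""
    model = smf.ols(f"{dv} ~ C(intervention) * C(training) + education + copy",
                    data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)
    ss_error = table.loc["Residual", "sum_sq"]
    table["eta_p2"] = table["sum_sq"] / (table["sum_sq"] + ss_error)
    table.loc["Residual", "eta_p2"] = float("nan")  # not meaningful for the error term
    return table
```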
Socio-Demographic Characteristics and General Cognitive State
The final sample consisted of 64 participants, because five evaluations were discarded due to cognitive deficit and/or depression; the final number of participants per group is shown in Table 1.
Regarding the socio-demographic information (Table 1), no differences were found between the groups in terms of age, p > 0.05. Nonetheless, depending on the educational level, there were differences in the Intervention factor, F(1, 68) = 5.95, p = 0.017, ηp² = 0.084, where the improvisation groups had a higher educational level than the imitation groups. For this reason, educational level was a co-variable in the statistical analyses performed for the memory evaluations of the RCF. As regards musical expertise, there were differences in the Training factor, F(1, 68) = 61.26, p < 0.0001, ηp² = 0.485, as expected, since we selected musicians and non-musicians for the samples. The average musical experience in the musicians' group was 15.24 ± 2.4 years. Non-musicians had an average musical experience of 0.96 ± 0.3 years.
Copy of the RCF
The acquisition of neutral visual information was evaluated through the copy of the RCF. The results are depicted in Figure 3. The ANCOVA indicated a main effect of the Intervention factor, F(1, 64) = 9.98, p = 0.002, ηp² = 0.135. The post hoc test showed that the improvisation groups had higher copy scores than the imitation groups. Due to this result, copy was implemented as an additional co-variable in the subsequent memory analyses (immediate and deferred).
Immediate Measures
After being exposed to the different musical interventions, the participants were instructed to draw from memory the RCF that they had seen in the acquisition phase. The ANCOVA yielded a main effect of Training, F(1, 63) = 8.68, p = 0.005, ηp² = 0.121. The post hoc analysis indicated that musicians had a better recall of the RCF than non-musicians (Figure 4A).
Recognition was the second task employed to evaluate memory. The participants watched 24 items, and they had to decide which ones were part of the RCF and which were new. False recognitions were subtracted from the total recognition score. The results are depicted in Figure 4B. The ANCOVA indicated a significant effect of the double interaction Training × Intervention, F(1, 63) = 4.889, p = 0.031, ηp² = 0.072. The post hoc test showed that the musicians' improvisation group had a better recognition score than the non-musicians' improvisation group. Also, this test indicated that the non-musicians' imitation group had a better recognition score than the musicians' imitation group.
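The corrected recognition score, as we read the description, is simply hits minus false recognitions; a one-line encoding for completeness:

```python
def corrected_recognition(hits: int, false_recognitions: int) -> int:
    """Recognition score as described: correct 'old' judgments on the 12 target
    items minus false recognitions among the 12 new items (Study 1)."""
    return hits - false_recognitions

print(corrected_recognition(hits=10, false_recognitions=2))  # -> 8
```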
Deferred Measures
After 7 days, free recall and recognition were again evaluated (deferred measures). Regarding free recall, the ANCOVA indicated a main effect of Intervention, F(1, 57) = 8.36, p = 0.005, ηp² = 0.128. The post hoc test showed that the improvisation groups had a better recall of the RCF than the imitation groups (Figure 5A).
After the free recall evaluation, participants watched 24 items, and they had to decide which ones were part of the RCF and which were new. False recognitions were subtracted from the total recognition score (Figure 5B). The ANCOVA yielded a main effect of Training, F(1, 57) = 4.696, p = 0.034, ηp² = 0.076. The corresponding post hoc indicated that the participants with formal musical knowledge had a better recognition score than the non-musicians.
Participants
Sixty-three new volunteers (76% female) between the ages of 60 and 90 (M = 71.94; SD = 0.91) participated in this study. Twenty-five were musicians (M) with more than 5 years of formal musical training (schools, institutes, music conservatories). Thirty-eight participants were considered non-musicians (NM). An a priori power analysis suggested that N = 57 would be adequate to provide 0.60 power (Faul et al., 2007). They were recruited from different senior cultural centers through online announcements. The participant exclusion criteria were the same as those used in Study 1. Each participant signed a written informed consent form and completed a questionnaire where socio-demographic and musical expertise information was requested. The procedure was approved by the University of Buenos Aires Ethics Committee.
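The authors report their a priori power analysis from G*Power; a comparable calculation can be sketched in Python with statsmodels, though the effect size they assumed is not stated, so the value below (Cohen's f = 0.4) is purely our placeholder and the resulting N depends entirely on it:

```python
from statsmodels.stats.power import FTestAnovaPower

# Total N needed for power 0.60 at alpha = .05 in a four-group one-way design,
# under an assumed effect size f = 0.4 (our assumption, not the authors' input):
n_needed = FTestAnovaPower().solve_power(effect_size=0.4, alpha=0.05,
                                         power=0.60, k_groups=4)
print(f"total N needed: {n_needed:.0f}")
```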
General Cognitive State Evaluation
This evaluation was conducted in the same way as in Study 1.
Emotional Memory Evaluation
The material for the emotional memory task consisted of thirty-six pictures selected from the International Affective Picture System (IAPS; Lang et al., 1995). Twenty-four pictures were emotionally arousing (12 with a positive valence and 12 with a negative valence) and 12 were non-arousing, neutral images. Following guidelines from previous works (Cahill et al., 2003), we selected pictures covering a wide range of arousal (from 2.95 to 6.36) and valence (from 1.97 to 4.93), in line with the manual by Lang et al. (1995).
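A sketch of the selection step, assuming a normative catalog of per-picture arousal and valence ratings (the field names and entries below are invented for illustration):

```python
def select_pictures(catalog, arousal_range=(2.95, 6.36), valence_range=(1.97, 4.93)):
    """Keep pictures whose normative ratings fall inside the reported ranges."""
    (lo_a, hi_a), (lo_v, hi_v) = arousal_range, valence_range
    return [p for p in catalog
            if lo_a <= p["arousal"] <= hi_a and lo_v <= p["valence"] <= hi_v]

catalog = [{"id": 7010, "arousal": 3.0, "valence": 4.9},   # made-up entries
           {"id": 9999, "arousal": 7.2, "valence": 1.5}]
print([p["id"] for p in select_pictures(catalog)])          # -> [7010]
```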
Instrumental Setting
The setting was the same as the one used in Study 1.
Musical Interventions
The musical interventions were the same as the ones used in Study 1.
Experimental Design
Because there were two interventions (EXP vs. CTRL) and the participants had different levels of musical expertise (M and NM), a 2(Intervention) × 2(Training) experimental design was run, with four groups with the following number of subjects: (1) M/EXP: musicians' improvisation group (n = 13); (2) M/CTRL: musicians' imitation group (n = 12); Participants were randomly and blindly assigned to the different groups, and they were always tested in groups, with a minimum of four and a maximum of 10 participants in order for the researchers to control the involvement of each participant in the music performance.
Procedure
This study was also divided into two sessions with a one-week intersession interval. The first session consisted of four immediately consecutive phases. The first phase was identical to the one used in Study 1.
In the second phase (acquisition phase, about 7 min), the participants watched the 36 selected pictures for 7 s each. The pictures were presented in random order, except for the first and last locations in the series, which had to meet the condition of being a neutral picture (Cahill et al., 2003). Simultaneously, the participants were asked to rate on a 0–10 scale "how emotional" or "activating" they felt each image was (from 0 = not arousing at all to 10 = highly arousing). This behavioral task (Arousal task) was included in order to (1) ensure that the participants paid attention to each image; (2) validate the selection of IAPS images for this research context; and (3) compare the emotional impact of the images between the M and NM groups prior to the musical intervention.
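A sketch of a presentation order satisfying the stated constraint, i.e., a random shuffle with neutral pictures forced into the first and last positions (the picture labels are placeholders):

```python
import random

def order_pictures(neutral, positive, negative, seed=None):
    """Random presentation order with a neutral picture in the first and last
    positions, following Cahill et al. (2003)."""
    rng = random.Random(seed)
    neutral = list(neutral)
    first, last = rng.sample(neutral, 2)            # two distinct neutral endpoints
    middle = [p for p in neutral if p not in (first, last)]
    middle += list(positive) + list(negative)
    rng.shuffle(middle)
    return [first] + middle + [last]

order = order_pictures([f"neu{i}" for i in range(12)],
                       [f"pos{i}" for i in range(12)],
                       [f"neg{i}" for i in range(12)], seed=1)
assert len(order) == 36 and order[0].startswith("neu") and order[-1].startswith("neu")
```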
The third phase (intervention) was identical to the one employed in Study 1. Soon afterwards, in the fourth phase (test phase, about 11 min), a two-task test was run. The participants were asked to describe in one word or short phrase as many pictures as they could remember (Immediate Free Recall task). Next, they observed the 36 original pictures mixed with 36 new pictures in a random order and they had to mark on a sheet of paper if they had seen the image before or not (Immediate Recognition task).
The second session (11 min) was held a week later, when the two-task test was run again (Deferred Free Recall task and Deferred Recognition task; see Figure 2 for a schematic design of the procedure).
Data Analysis
Age, years of formal education, and years of musical education were analyzed independently via univariate analysis of variance (ANOVA), where Intervention (improvisation vs. imitation) and Training (musicians vs. non-musicians) were the between-factors. Because musicians had more years of education than non-musicians (data shown in Results), arousal, recall, and recognition (immediate and deferred) were independently analyzed via repeated measures (RM) ANCOVA with Intervention (improvisation vs. imitation) and Training (musicians vs. non-musicians) as the between-factors, Picture (neutral, positive, and negative) as the RM, and Education as the co-variable.
Post hoc least-significant difference (LSD) pairwise comparisons were conducted to analyze significant interactions. The partial eta squared (ηp²) was utilized to estimate effect size. The alpha value was set at 0.05, and the SPSS software package was used to compute descriptive and inferential statistics.
Socio-Demographic Characteristics and General Cognitive State
The final sample was composed of 52 participants, because 11 evaluations were discarded due to cognitive impairment or depression; the final number of participants per group is shown in Table 2.
Regarding socio-demographic information (Table 2), there were no differences between groups related to age, p > 0.05. Nonetheless, there were differences depending on the educational level related to the Training factor, F(1, 44) = 5.79, p = 0.02, ηp² = 0.116. The musicians had a higher academic level than the non-musicians, and therefore, this variable was considered a co-variable in the statistical analyses that were performed for memory. There were differences in musical level related to the Training factor, F(1, 45) = 29.53, p < 0.0001, ηp² = 0.39, as expected, since we selected musicians and non-musicians for the samples. The average musical experience in the musicians' group was 16.05 ± 3.43 years. Non-musicians had an average musical experience of 0.53 ± 0.23 years.
Arousal
Arousal was the first dependent variable analyzed. Participants watched neutral, positive, and negative images, and simultaneously rated, from 0 to 10, how arousing the pictures were for them. The emotional pictures were rated as more activating than the neutral ones, and the rating of neutral images was affected by Training and Intervention (Figure 6). These impressions were corroborated by the statistical analysis, since the ANCOVA yielded a main effect of Picture, F(2, 86) = 12.817, p < 0.0001, ηp² = 0.230, while the corresponding post hoc indicated that the emotional images were considered more activating than the neutral ones. Besides, the effect of the Picture × Intervention interaction was significant, F(1, 43) = 5.23, p = 0.027, ηp² = 0.108, and the triple interaction Picture × Intervention × Training was also significant, F(2, 86) = 4.27, p = 0.017, ηp² = 0.09. The analysis of the triple interaction indicated that the M/EXP group rated the neutral images as more activating than did the M/CTRL group, while the opposite pattern was observed in non-musicians, since the NM/CTRL group rated the neutral images as more activating than did the NM/EXP group. In addition, the NM/CTRL group rated the neutral images as more activating than did the M/CTRL group.
Immediate Measures
After participants were exposed to the intervention (imitation or improvisation), they were asked to recall as many pictures as they could. The ANCOVA indicated a significant effect of Intervention, F(1, 43) = 6.93, p = 0.012, ηp² = 0.139, where the post hoc showed that the improvisation group remembered more images than the imitation group. Also, the double interaction Picture × Intervention achieved significance, F(2, 86) = 5.22, p = 0.007, ηp² = 0.108. The post hoc indicated that the improvisation group remembered more negative images than the imitation group. The results are depicted in Figure 7.
After the free recall, the participants observed the 36 original pictures randomly intermixed with 36 new ones. They had to discriminate the new images from the old ones. The ANCOVA indicated no significant differences in Training, Picture, or Intervention, or any of their interactions, p > 0.05 (data not shown).
Deferred Measures
The test of free recall and recognition tasks was repeated a week later. Figure 8A illustrates the results of the free recall task. The ANCOVA indicated a main effect of Intervention, F(1, 43) = 18.27, p < 0.0001, ηp² = 0.29; the post hoc showed that the improvisation groups remembered more images than the imitation groups. The double interaction Picture × Intervention also achieved significance, F(2, 86) = 5.59, p < 0.005, ηp² = 0.115, and the corresponding post hoc indicated that for positive and negative images the improvisation groups remembered more images than the imitation groups.
To evaluate recognition, the 36 target pictures were mixed with 36 new pictures and participants had to indicate whether the images were new or old (Figure 8B). False recognitions were subtracted from the total recognition score (for each type of picture). The ANCOVA showed a significant main effect of Intervention, F(1, 43) = 9.76, p = 0.003, ηp² = 0.185, where the improvisation groups recognized more images than the imitation groups. In addition, there was a main effect of Picture, F(2, 86) = 3.17, p = 0.047, ηp² = 0.069, and the post hoc indicated that the neutral pictures had a better recognition score than the positive and negative ones, and also that the positive images were better recognized than the negative ones. Finally, the double interaction Picture × Intervention achieved significance, F(2, 86) = 3.29, p = 0.042, ηp² = 0.071. This interaction indicated that for the three types of images, the improvisation groups had a better recognition score than the imitation groups.
DISCUSSION
The goal of this work was to evaluate if a musical intervention could improve neutral or emotional memory in older adults with or without formal musical knowledge. Our control group was not a passive one; instead, it participated in a group musical activity, allowing us to detect specific parameters in each type of intervention that could explain the possible benefits of improvisation. The main results indicated that musical improvisation enhanced memory especially when the information to be consolidated was emotional, indicating that the intervention is more linked to the emotional content than to the neutral one. In addition, musicians performed better than non-musicians. In the following paragraphs, each of the findings is explained in detail.
In both studies, the improvisation groups had a better mnemonic performance than the imitation groups. Nonetheless, this effect was larger in Study 2, where memory with emotional content was evaluated. The improvisation groups performed better than the imitation groups at immediate and deferred free recall and also at deferred recognition. Furthermore, in the immediate free recall, the negative images were better remembered; in the deferred free recall, both positive and negative images; and in the deferred recognition, the three types of images were better recognized. In other words, over time, the information had a better consolidation and the participants remembered or recognized more information. By contrast, in the complex figure, better performance was achieved in the improvisation condition only for the deferred free recall. These results would indicate that there was an interaction between musical improvisation and visual memory, and the greatest effect was found for the emotion-laden information.

A possible explanation for these findings is that during the experience of musical improvisation a melody and a rhythm are spontaneously created, integrating the emotional with the different cognitive levels (Bruscia, 1998, 1999). In this musical technique, the whole body is used to express intentions, emotions, and memories. For this reason, musical improvisation is defined as a special self-expression technique (Gilboa et al., 2006; Punkanen, 2011; Godman, 2012; McPherson et al., 2014). Besides, it has been shown that sound is a potent elicitor of emotions and that musical experiences activate specific pathways in several brain areas associated with emotional content, such as the cingulate and insular cortices, hypothalamus, hippocampus, amygdala, and prefrontal cortex (Boso et al., 2006; Koelsch, 2012, 2014). A study conducted by Koelsch et al. (2018) demonstrated that the auditory cortex, activated during music perception, hosts regions that are influential within networks underlying the affective processing of auditory information. The emotional state induced by the musical improvisation may have enhanced the emotions produced by the affective pictures, thus strengthening the memory process. Some studies indicate that music, because of the emotional state that it generates (Koelsch, 2012), will work as an enhancer of visual elements loaded with emotion (Logeswaran and Bhattacharya, 2009; Kamiyama et al., 2013), causing a synergy between both emotional states. In the first study, this synergic effect between the emotion aroused by the improvisation and the emotion aroused by the task was not observed, probably because the stimuli lacked emotional content.
Musical improvisation, as opposed to imitation where a pattern is replicated, is characterized by the presence of creative elements. This characteristic would indicate that it is not the music itself that modulates memory, since in the imitation condition, participants also perceive and produce musical components but rather the creation of a novel musical product in groups. In future studies, a creative non-musical group could be added to address this topic. Besides, spontaneous improvisation, as opposed to the performance of learned sequences (as in the imitation), is characterized by an extensive deactivation of the medial dorsolateral prefrontal cortex and lateral orbital regions with a focal activation of the medial prefrontal cortex (Limb and Braun, 2008). In addition, there is a relation between musical improvisation and autobiographic memories, since independently of the level of complexity used in the improvisation, the prefrontal and medial temporal cortices are activated, and these areas are involved in memory (Limb and Braun, 2008).
Imitation, by contrast, could interfere with memory. When there are restrictions, especially attentional ones, in which the participant is asked to replicate and repeat a pattern, adjust to its intensity, and synchronize with it, the intervention could diminish cognitive resources and lead to mnemonic deterioration (Miendlarzewska et al., 2013). This is relevant because most musical activities designed for older adults are repetitive (the typical case is the choir, where the participant has to memorize his or her part, pay attention to the tuning, rhythm, etc.). Even though these activities reinforce musical content per se, they are less efficient when the goal is to improve cognitive skills such as memory.
In the first study, an effect of musicianship was found, which is in line with previous studies on the effect of musical training on visual memory (Hanna-Pladdy and MacKay, 2011). Musicians outperformed non-musicians in immediate free recall and recognition and in deferred recognition. A plausible explanation for the better performance of musicians is that there are structural and functional brain differences between musicians and non-musicians (Zatorre, 1998; Gaser and Schlaug, 2003; Lotze et al., 2003; Bermúdez and Zatorre, 2005; Zatorre et al., 2007; Justel and Diaz Abrahan, 2012; Barrett et al., 2013; Strait and Kraus, 2014; Schlaug, 2015; Herrero and Carriedo, 2018; Li et al., 2018). Becoming a skilled musician requires extensive training, and the type of learning involved entails the development of several abilities (e.g., perception, cognitive control, memory, and motor skills, among others). The abilities developed by musicians induce connections and interactions between several brain areas. The structural brain differences between musicians and non-musicians involve the enlargement or thickening of numerous areas in people with musical training. Some of these differences were associated with the anteromedial portion of Heschl's gyrus, the corpus callosum, and the planum temporale, and with changes in gray matter that implied greater plasticity (Luders et al., 2004; Bermúdez et al., 2009; Anaya et al., 2016).
At the same time, the structural differences are accompanied by functional and behavioral divergences in several domains (Herrero and Carriedo, 2018). The label depends on the extent of the effect of musical training: the term near transfer is used when the cognitive functions affected by training are those closely related to music, such as the recognition of melodic contour or intervallic sequences (Fujioka et al., 2004). By contrast, when musical training transfers cognitive advantages that go beyond musical areas, that is, when the functional change is observed in non-musical skills such as language (Schlaug et al., 2005), mathematical reasoning (Vaughn, 2000), or attentional functions (Wang et al., 2015), the process is named far transfer. In the present work, we contribute evidence to the far-transfer literature, since the benefits for musicians were observed in a cognitive skill not strictly related to musical training.
The fact that we found no differences in terms of musical training in the second study could be explained by non-musicians benefiting from the information with emotional content; accordingly, the greater effect was observed for the intervention factor (improvisation vs. imitation). Because the emotional components were not present in the first study, the prevailing factor there was musicianship (training). Therefore, the emotionality effect associated with the intervention (improvisation) could have overshadowed the effects of the training factor in the second study.
Nonetheless, it is not necessary to be a professional musician with a lifetime of musical experience to benefit from musical training. Some studies indicated that only 1 week of stimulation in musical perception and production resulted in functional changes in the participants (Bangert and Altenmüller, 2003).
In addition, it has been demonstrated that older adults who began their musical training in old age showed benefits in several cognitive domains (Bugos et al., 2007). Thus, focal musical interventions (such as the one proposed in the present work), as well as short- and long-term interventions, induced benefits in the cognitive functions of older participants.
Studies about the effect of music on visual memory are scarce. As far as we know, no research has so far focused on memory with emotional content, and it is in this topic that the novelty of our study lies. Besides, the relation between musical experience and neutral visual memory has been the topic of few studies, with conflicting results. Fauvel et al. (2014) found no enhancement of neutral memory in older adults. However, in agreement with our results, Hanna-Pladdy and MacKay (2011) found an improvement in the visual memory of musicians compared to non-musicians. Notably, different tests are used to evaluate learning and memory, and it is precisely this issue that differentiates the mentioned studies. The methodologies used for measuring memory could account for the divergences between the studies.
The limitations of our study involve the inclusion criterion used to classify a participant as a musician. The criterion was to have more than 5 years of musical training, and although the participants were asked what musical instrument they played, they were not asked whether they were currently active, how many hours a week they devoted to musical training, or how old they were when they started learning music. These questions will be included in future research. In addition, although we found differences regarding educational level, this variable was used as a covariate in the statistical analyses so as not to bias the results. Another limitation of our studies is the sampling. In both studies, more than half of the participants were women, and it is possible that the effects might vary across genders, given that some studies show female participants to be more receptive to emotional cues (Andreano et al., 2008; Nielsen et al., 2011, 2013; Felmingham et al., 2012). We intend to improve this point in future research.
A key challenge for successful aging is to discover cognitive treatments or interventions with the ability to integrate multiple neural systems and thereby alleviate or prevent age-related cognitive decline (Bugos et al., 2007). Making music is an optimal cognitive intervention in that it includes multimodal sensorimotor integration, the creation of novel elements, motivation, and difficulty. It is relevant to highlight the difference between improvisation and imitation, since standard musical activities for older adults involve repetitive tasks with no novelty component. By endorsing the advantages of improvisation, group activities could be designed for the purpose of creating something musically novel in a context of social interaction. In addition, because improvisation is a social practice, it increases adherence to the treatment and diminishes dropout rates. Besides, this musical intervention had the benefit of being pleasant, a motivational factor for the participants to perform this kind of activity, as opposed to other types of training. As a result, despite being a focal intervention, it could be delivered on a regular schedule, since the core component is the creation of something musical and always novel. As musical improvisation modulates memory, music treatment may provide a simple, safe, and effective method of preventing the potentially harmful physiological concomitants of memory impairment, with great potential for clinical application.
ETHICS STATEMENT
The participants of the studies gave voluntary written consent to take part without receiving any type of remuneration, in accordance with the requirements of the Declaration of Helsinki.
AUTHOR CONTRIBUTIONS
VDA and NJ contributed to the conception and design of the studies. VDA conducted the studies. VDA and NJ contributed to data analysis. VDA, FS, and NJ participated in the writing of the paper and interpretation of the data. FS and NJ supervised and integrated the information.
FUNDING
This study was funded by Fondo para la Investigación Científica y Tecnológica. | 2019-03-28T13:08:48.434Z | 2019-03-28T00:00:00.000 | {
"year": 2019,
"sha1": "c1fa0b9aa880df0c8d7b555a6513cb3e172fd625",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00652/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c1fa0b9aa880df0c8d7b555a6513cb3e172fd625",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
14013744 | pes2o/s2orc | v3-fos-license | Coadministration of FTY720 and rt-PA in an experimental model of large hemispheric stroke–no influence on functional outcome and blood–brain barrier disruption
Background Systemic thrombolysis with recombinant tissue plasminogen activator (rt-PA) is the standard of acute stroke care. Its potential to increase the risk of secondary intracerebral hemorrhage, especially if administered late, has been ascribed to its proteolytic activity, which has detrimental effects on blood–brain barrier (BBB) integrity after stroke. FTY720 has been shown to protect endothelial barriers in several disease models, such as endotoxin-induced pulmonary edema, and is therefore a promising candidate to counteract the deleterious effects of rt-PA. Besides that, every putative neuroprotectant that will eventually be forwarded into clinical trials should be tested in conjunction with rt-PA. Methods We subjected C57Bl/6 mice to 3 h filament-induced tMCAO and postoperatively randomized them into four groups (n = 18/group) that received the following treatments directly prior to reperfusion: 1) vehicle treatment, 2) FTY720 1 mg/kg i.p., 3) rt-PA 10 mg/kg i.v. or 4) FTY720 and rt-PA as a combination therapy. We measured functional neurological outcome, BBB disruption by quantification of EB extravasation, and MMP-9 activity by gelatin zymography. Results We observed a noticeable increase in mortality in the rt-PA/FTY720 cotreatment group (61%) as compared to the vehicle (33%), FTY720 (39%) and rt-PA (44%) groups. Overall, functional neurological outcome did not differ significantly between groups, and FTY720 had no effect on rt-PA- and stroke-induced BBB disruption and MMP-9 activation. Conclusions Our data show that FTY720 does not improve functional outcome and BBB integrity in large hemispheric infarctions, neither alone nor in conjunction with rt-PA. These findings stand in contrast to a recently published study that showed beneficial effects of FTY720 in combination with thrombolysis in a thrombotic model of MCAO leading to circumscribed cortical infarctions. They might therefore represent a caveat that the coadministration of these two drugs might lead to excess mortality in the setting of a severe stroke.
Background
Thrombolysis with recombinant human tissue plasminogen activator (rt-PA) is the only approved, evidence-based medical therapy for ischemic stroke. Within the narrow therapeutic time window of 4.5 h, it confers a clear net benefit to stroke patients who receive thrombolysis, which comes at the cost of an increased risk of secondary intracerebral hemorrhage for the individual patient, mostly caused by hemorrhagic transformation (HT) of the infarcted brain tissue [1].
Magnetic resonance imaging studies in acute stroke patients have shown that blood-brain barrier (BBB) disruption is significantly more prevalent in stroke patients who received thrombolysis than in untreated stroke patients [2]. HARM (hyperintense acute reperfusion marker) is defined as gadolinium extravasation into the cerebrospinal fluid (CSF) space adjacent to the infarction, visible on the fluid-attenuated inversion recovery (FLAIR) sequence of a follow-up scan when gadolinium had been injected for a scan a few hours earlier. Based on the description of HARM, it became evident that preceding BBB disruption was present in 73% of patients who developed hemorrhagic transformation within the next hours [3].
These observations were supported by animal studies showing that cerebral ischemia leads to an increase in matrix metalloproteinase (MMP) activity, especially of MMP-9, which follows the same time course as BBB disruption after experimental stroke, and that both are aggravated by treatment with rt-PA [4] and correlate with HT [5]. Therefore, combination therapies with drugs that protect endothelial barrier function seem a reasonable approach to limit the risks of rt-PA treatment.
One of these substances is the sphingosine 1-phosphate analog FTY720, an immunomodulator that has been marketed since 2010 for the treatment of relapsing-remitting multiple sclerosis (MS). The presumed mechanism of this drug is the induction of peripheral lymphocytopenia, caused by agonist-induced internalization of the S1P1 receptor on B and T lymphocytes, which limits lymphocyte egress from primary lymphoid organs [6]. Besides its effects on immune cell trafficking, FTY720 also acts on neurons, glia, and progenitor cells in the brain, and an effect on the BBB can be presumed [7]. Interestingly, FTY720 treatment led to a downregulation of inflammatory genes including MMP-9 and to an increased expression of its counterpart, tissue inhibitor of metalloproteinases (TIMP), in experimental autoimmune encephalomyelitis (EAE) [8]. Given as a rescue therapy, FTY720 reduced BBB leakiness after disease onset [8]. Studies in other vascular beds have shown that FTY720 can protect endothelial cells from apoptotic cell death [9]. However, there are conflicting data on the net effect of S1P receptor agonism on endothelial barrier function, with FTY720 being protective in a model of pulmonary edema caused by systemic LPS injection [10], while it has deleterious effects in a model of bleomycin-induced pulmonary fibrosis [11].
A neuroprotective effect of FTY720 in the acute phase of stroke has been shown in several experimental studies [12][13][14]. From a translational point of view, every neuroprotectant aimed at ameliorating acute brain damage within the first hours of stroke onset should be tested in conjunction with rt-PA. Therefore, the aim of this study was to assess the effect of FTY720 in conjunction with rt-PA treatment in an experimental model of stroke, and its effect on stroke- and rt-PA-induced BBB disruption and matrix metalloproteinase expression.
Experimental model of middle cerebral artery occlusion
C57Bl/6 mice (Charles River Laboratories, Sulzfeld, Germany) were used at 10-12 weeks of age. All experiments were approved by the local governmental authorities (Regierungspräsidium Darmstadt, Germany, approval number F143/51) and conducted in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Mice were subjected to transient middle cerebral artery occlusion (MCAO) as described previously [15]. Briefly, mice were anesthetized with 1.5-2.5% isoflurane (Forene; Abbott, Wiesbaden, Germany) and 0.1 mg/kg buprenorphine (Temgesic®; Essex Pharma, Munich, Germany) under spontaneous respiration. Focal cerebral ischemia was induced by inserting a custom made filament with a tip diameter of 0.23 mm (Doccol, Sharon, USA) into the middle cerebral artery (MCA). Regional cerebral blood flow was monitored by laser Doppler flowmetry (PF5010, Perimed; Stockholm, Sweden) to confirm vessel occlusion. After an occlusion time of 3 h, the filament was withdrawn to initiate reperfusion. After the operation, mice were allowed to recover from anesthesia with free access to food and water. At the end of a 24 h observation period, mice were lethally anaesthetized and perfused transcardially with PBS. Brains were removed quickly and divided into ischemic and non-ischemic hemispheres before they were frozen and stored at −80°C to await further procedures. Brains from mice that had died within the 24 h observation period were harvested without prior transcardial perfusion.
Sample size calculation, experimental groups and substance application
We based our sample size calculation on the quantifiable parameters Evans Blue extravasation and matrix metalloproteinase-9 activity. Assuming an increase of 25% in these two parameters between the ischemic hemispheres of vehicle-treated and rt-PA-treated mice, as shown in previous studies, and a standard deviation of 33% of the respective mean values (Cohen's d 0.85), a group size of 18 animals was necessary to show this effect with a power of 0.8 and a probability of a type I error of < 0.05 [16]. After the MCAO operation and prior to reperfusion, we randomized the mice into four treatment groups (n = 18/group): 1) vehicle treatment, 2) FTY720 1 mg/kg, 3) rt-PA 10 mg/kg, or 4) FTY720 and rt-PA as a combination therapy. The mice received either FTY720 (1 mg/kg, dissolved in 0.9% NaCl; Fingolimod; Cayman Chemicals, Ann Arbor, USA) or saline i.p., in combination with an i.v. bolus of either rt-PA (10 mg/kg; Actilyse®; Boehringer Ingelheim, Ingelheim, Germany) or aqua ad injectabilia. The operator was blinded to the pharmaceutical treatment during the whole study. Additionally, we assessed whether preconditioning of the rt-PA solution by exposure to a fibrin-containing clot would enhance its deleterious effects on the blood-brain barrier. To this end, blood was drawn from a donor mouse to generate a spontaneously formed blood clot. Rt-PA was incubated with this blood clot for 30 min under gentle shaking, generating "activated" rt-PA (Art-PA). We performed 3 h of MCAO in two sets of mice that received an i.v. bolus of either rt-PA (10 mg/kg) or Art-PA (10 mg/kg).
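To make this calculation reproducible, the following minimal sketch shows how such a power analysis can be carried out in Python with statsmodels; the package choice and the one-sided alternative are our assumptions, not taken from the original study. With d = 0.85, α = 0.05 and a power of 0.8, a one-sided two-sample t-test yields approximately 17.1, i.e. 18 animals per group after rounding up, whereas a two-sided test would require roughly 22 per group.

```python
# Hypothetical re-computation of the sample size; assumes statsmodels is installed.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Effect size (Cohen's d), type I error and power as reported in the paper.
d, alpha, power = 0.85, 0.05, 0.8

# The one-sided alternative ("larger") reproduces the reported n of 18/group;
# alternative="two-sided" would return ~21.7 instead.
n_per_group = analysis.solve_power(effect_size=d, alpha=alpha, power=power,
                                   alternative="larger")
print(f"required sample size per group: {n_per_group:.1f}")  # ~17.1 -> round up to 18
```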
Evaluation of neurological function
Neurological function was evaluated at 3 h, directly prior to reperfusion, and at 24 h post-MCAO, using a 14-point scale modified from Chen et al. [17] that tests hemiparesis, gait, coordination, and sensory function (Additional file 1: Table S1).
Determination of blood-brain barrier leakage
To assess blood-brain barrier leakage, the extravasation of the autofluorescent dye Evans Blue (EB), which binds to plasma albumin, was quantified from brain hemispheres as described before [5]. At 23 h after MCAO, 200 μl of 2% Evans Blue in 0.9% NaCl were injected into the retrobulbar venous plexus and allowed to circulate for 1 h prior to transcardial perfusion and brain removal. Brain samples were homogenized in lysis buffer as described below and additionally treated with ultrasound.
Protein precipitation was obtained by adding 50% trichloroacetic acid. The supernatant was diluted fourfold with ethanol. The amount of Evans Blue dye was measured with a microplate fluorescence reader (excitation 600 nm, emission 650 nm; SpectraMax M5; Molecular Devices, Sunnyvale, USA).
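The conversion of raw fluorescence readings into dye amounts is not spelled out above; a common approach, sketched below purely as an illustration (the standard-curve values and the helper function are hypothetical, not from the study), is linear regression against an EB dilution series measured on the same plate.

```python
# Illustrative only: convert fluorescence readings to EB content via a
# linear standard curve. All numbers are hypothetical placeholders.
import numpy as np

# Fluorescence of an EB dilution series measured on the same plate.
std_conc = np.array([0.0, 62.5, 125.0, 250.0, 500.0, 1000.0])     # ng/ml
std_fluo = np.array([12.0, 180.0, 370.0, 720.0, 1450.0, 2900.0])  # RFU

# Fit fluorescence = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_fluo, deg=1)

def eb_concentration(rfu):
    """Invert the standard curve to obtain EB concentration in ng/ml."""
    return (np.asarray(rfu, dtype=float) - intercept) / slope

samples_rfu = np.array([640.0, 910.0, 1333.0])
print(eb_concentration(samples_rfu))  # ng/ml in the diluted supernatant
# Multiply by the dilution factor and normalize to hemisphere weight to
# report ng EB per mg of brain tissue.
```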
MMP-9 activity
Gelatin zymography was used to measure MMP-9 activity in the brain samples as described previously [5,18]. We analyzed the brains of all mice, including those that did not survive the observation period. Brain hemispheres were homogenized in ice-cold lysis buffer containing protease inhibitors. After centrifugation, the supernatant was collected and the total protein concentration of each sample was determined by the Bradford assay (Nanoquant, Roth, Karlsruhe, Germany). Equal volumes of total protein extracts in sample buffer (4% SDS, 0.005% bromophenol blue, and 20% glycerol) were loaded onto 10% polyacrylamide gels containing 0.1% gelatin as a substrate. After electrophoresis, the gels were incubated in 2.5% Triton X-100 at room temperature for 30 min with gentle agitation and stained with 0.5% Coomassie Blue G250. For densitometry, gels were scanned and inverted, and the integrated density of the bands was quantified with NIH ImageJ 1.44p.
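As a rough illustration of the densitometric step (this is not the authors' ImageJ workflow; the file name and ROI coordinates are invented), the integrated density of a band can be computed by inverting the scanned gel image and summing background-corrected pixel intensities over the band region:

```python
# Illustrative densitometry sketch, assuming an 8-bit grayscale gel scan
# saved as "zymogram.png"; ROI coordinates are hypothetical.
import numpy as np
from imageio.v3 import imread

gel = imread("zymogram.png").astype(float)  # dark bands on light background
inverted = 255.0 - gel                      # bands become bright after inversion

# Band ROI (rows, cols) around the ~84 kDa MMP-9 band of one lane.
band = inverted[120:160, 40:90]

# Local background: median of a band-free region of the same lane.
background = np.median(inverted[200:240, 40:90])

integrated_density = np.sum(band - background)
print(f"integrated density: {integrated_density:.0f}")
```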
MMP-9 protein expression
We evaluated MMP-9 expression at the protein level by Western blotting. Samples of equal total protein content were loaded onto polyacrylamide gels. After migration, proteins were electrotransferred onto a nitrocellulose blotting membrane. To confirm successful transfer, the membrane was stained with Ponceau S. It was then blocked in 5% dry milk and 0.05% Tween-20 for 1 h at room temperature under gentle agitation. After extensive washing, the membrane was incubated with the primary antibody (anti-MMP-9 rabbit polyclonal; Millipore, Billerica, USA) overnight. After incubation with a secondary antibody (goat anti-rabbit; Bio-Rad, Munich, Germany), the blots were developed with a chemiluminescence reagent on hyperfilm.
Statistical analysis
GraphPad Prism 4 (GraphPad Software Inc., La Jolla, CA, USA) was used for statistical analysis. Results are expressed as means ± standard deviation. Statistical significance of the differences between groups was evaluated with one-way ANOVA with Bonferroni's correction for parametric values or the Kruskal-Wallis test with Dunn's correction for nonparametric values.
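For readers without access to Prism, an equivalent battery of tests can be run with open-source tools; the sketch below uses SciPy and statsmodels on placeholder data (four groups of n = 18, mirroring the design) and is our illustration rather than the authors' code. Bonferroni-adjusted pairwise t-tests stand in for the parametric post hoc; Dunn's nonparametric post hoc is available in the separate scikit-posthocs package.

```python
# Hypothetical re-implementation of the group comparisons; data are placeholders.
from itertools import combinations
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
groups = {name: rng.normal(loc, 1.0, size=18)            # n = 18/group
          for name, loc in [("vehicle", 0.0), ("FTY720", 0.2),
                            ("rt-PA", 0.4), ("rt-PA+FTY720", 0.6)]}

# Omnibus tests: one-way ANOVA (parametric) and Kruskal-Wallis (nonparametric).
print(stats.f_oneway(*groups.values()))
print(stats.kruskal(*groups.values()))

# Pairwise post hoc t-tests with Bonferroni correction.
pairs = list(combinations(groups, 2))
pvals = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for (a, b), p, r in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.3f}, significant = {r}")
```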
Intraperitoneal injection of FTY720 leads to a rapid decrease of blood lymphocyte counts in mice
To verify efficient uptake and potency of FTY720 via the route of administration chosen for our experiment, we sampled blood 2 h after i.p. injection of 1 mg/kg FTY720. Fluorescence-activated cell sorting (FACS) analysis of CD4+ and CD8+ T cell populations showed a significant decrease in circulating T lymphocytes, to 22.5% for CD8+ T cells and 4.3% for CD4+ T cells, as compared to vehicle-treated mice (Figure 1).

Figure 1 FTY720 leads to a rapid decrease of blood lymphocyte counts. We sampled blood 2 h after i.p. injection of FTY720 (1 mg/kg). FACS analysis showed a significant reduction of both CD4+ and, to a lesser extent, CD8+ T cells (n = 3/group). Statistical significance between groups was tested with two-tailed Student's t-test for unpaired values. * p < 0.05.

FTY720 in conjunction with rt-PA does not improve survival or functional neurological outcome in large hemispheric infarctions

Mice were randomized into four groups (n = 18/group) to receive either 10 mg/kg rt-PA or vehicle i.v. at the end of the 3 h MCAO period, in combination with either 1 mg/kg FTY720 or vehicle i.p. The vehicle-only group showed a mortality of 33%. Interestingly, the group that received FTY720 in conjunction with rt-PA showed a considerably higher mortality of 61% (Figure 2A). There were no significant differences between groups in the functional neurological examination with the 14-point neuroscore mNSS (Figure 2B).

FTY720 does not enhance blood-brain barrier integrity in large hemispheric infarcts alone or in combination with rt-PA

Brain weight of the ischemic hemisphere was increased by 30-40% in comparison to the non-ischemic hemisphere. Neither rt-PA nor FTY720 treatment alone had an influence on this crude measure of brain swelling, and rt-PA in combination with FTY720 also did not lead to significant changes in wet brain weight (Figure 3A). We only included mice whose ischemic hemisphere showed a weight increase of at least 10%, in order to exclude all mice that were found dead during the observation period and showed postmortal global brain swelling. All mice that survived the 24 h observation period received 200 μl of 2% Evans Blue (EB) i.v. 1 h prior to sacrifice to assess BBB permeability for macromolecules such as albumin. Fluorometric EB quantification from brain homogenates of transcardially perfused mice also did not show an alteration of stroke-induced EB extravasation by rt-PA, FTY720, or the combination of both substances (Figure 3B).
Neither rt-PA nor FTY720 induces significant changes in MMP-9 activity in brain homogenates 24 h after MCAO

Matrix metalloproteinase-9 (MMP-9) is a zinc-dependent protease that becomes activated after ischemic brain injury and has been shown to be a key mediator of blood-brain barrier breakdown in experimental stroke [5]. We performed gelatin zymography to assess MMP-9 activity in the infarcted and non-infarcted hemispheres. While there was a clear, approximately five-fold increase in the infarcted hemisphere, much to our surprise, the administration of FTY720, rt-PA, or the combination of both substances at the end of the 3 h MCAO period did not lead to significant changes in MMP-9 activity (Figure 4A). We performed exemplary Western blots of MMP-9 and found that MMP-9 activity as assessed by gelatin zymography was highly correlated with MMP-9 protein expression (Figure 4B).
Activated rt-PA (Art-PA) does not lead to a further increase in blood-brain barrier breakdown after cerebral ischemia

We aimed to clarify the lack of a clear detrimental effect of rt-PA on BBB integrity and MMP-9 activity in our experimental setting. Since it is conceivable that rt-PA only develops its proteolytic effect when it is activated by a fibrin-rich clot of adequate surface, we preincubated rt-PA with a spontaneously formed autologous blood clot for 30 min at room temperature prior to injection after MCAO, generating activated rt-PA (Art-PA). This preconditioning of rt-PA, however, also did not lead to the 2- to 4-fold increase of cerebral ischemia-induced MMP-9 activity in the ischemic hemisphere that has been reported elsewhere for the filament occlusion model [4] (Figure 5).
Discussion
We found no relevant protective effect of FTY720 when applied in conjunction with rt-PA in an experimental model of large hemispheric stroke, neither on functional neurological outcome nor on markers of BBB disruption. On the contrary, our data raise safety concerns about the coadministration of these two drugs in patients with severe strokes.
Analyzing mortality, we found a noticeable difference between the vehicle group (33%) and the groups receiving FTY720 alone (39%) or rt-PA alone (44%) on the one hand, and the cotreatment group receiving rt-PA in conjunction with FTY720 (61%) on the other hand. When evaluating functional outcome after 24 h on the 14-point neuroscore, with dead animals assigned the maximal score of 14, the differences between groups were not statistically significant.
These data are in some respects reminiscent of the results of the multicentric clinical phase II/III trial of erythropoietin (EPO) published in 2009, which assessed its safety and efficacy in acute stroke [19]. This clinical trial was preceded by promising experimental studies that had shown robust neuroprotective effects, and by a clinical phase I trial of EPO as a monotherapy in the acute phase of stroke [20] that had demonstrated adequate safety. In contrast, the phase II/III trial, which for the first time allowed systemic thrombolysis in conjunction with EPO treatment, failed to show efficacy. Characteristics of this trial population were rather severe strokes (mean NIHSS: 13) and the coadministration of systemic thrombolysis in 60% of patients [19]. There was an excess of mortality in the group of erythropoietin-treated patients compared to the placebo group, and the patients who received erythropoietin as a cotreatment together with thrombolysis fared distinctly worse [19]. Therefore, we interpret our data as a caveat that FTY720, which has shown a beneficial effect on outcome and infarct size in several different stroke models, might not be effective as a treatment for severe strokes, especially in conjunction with systemic thrombolysis.

Figure 3 FTY720 does not enhance blood-brain barrier patency in large hemispheric infarcts, neither alone nor in combination with rt-PA. A) For the determination of wet brain weight, the cerebellum was discarded and brain hemispheres were frozen separately at −80°C. The frozen brain hemispheres were weighed prior to homogenization. Only mice whose ischemic hemisphere (I) showed an increase of wet brain weight of ≥ 10% in comparison to the non-ischemic hemisphere (NI) were included in our analysis. B) EB (200 μl of a 2% solution) was injected 1 h prior to sacrifice and transcardial perfusion. We excluded mice that showed more EB extravasation into the NI hemisphere and mice that died within the EB circulation time of 1 h. Statistical significance of the differences between groups was tested with one-way ANOVA with Bonferroni correction. ns indicates not significant.
Regrettably, our data do not allow a mechanistic explanation of the excess mortality of rt-PA treatment in conjunction with FTY720. Focusing on BBB analyses, we did not perform a quantification of hemorrhagic transformation, e.g. with brain imaging or a hemoglobin assay. Therefore, we have no information on whether the combination therapy led to an increase in hemorrhagic transformation. Concerning alternative extracranial causes of mortality, FTY720 has been shown to induce bradycardia [21], bronchoconstriction, and mild pulmonary edema [22] in mice and humans. Besides that, the paucity of circulating lymphocytes could in principle entail an increased susceptibility to infections, even though we have previously shown that FTY720 does not increase the rate of stroke-associated pneumonia in mice [23], and similar findings have been reported for equally specific immunomodulatory interventions in experimental stroke [24]. However, these symptoms are not common side effects of rt-PA treatment and should have occurred to the same extent in the group receiving FTY720 alone if they were to explain this excess mortality.
Much to our surprise, we did not detect a deleterious effect of rt-PA alone on stroke-induced BBB dysfunction. This is at odds with many experimental studies describing an aggravating effect of rt-PA on BBB disruption after stroke [4,5]. One possible explanation could have been that rt-PA was not biologically active in our experimental system (given as an i.v. bolus injection of 10 mg/kg). We cannot directly prove an effect of rt-PA, but previous studies from our group using the same injection technique showed clear effects of rt-PA on HT after stroke [25]. Interestingly, the increase of BBB disruption after stroke has been shown to be greater in embolic than in mechanical models of stroke, such as the filament occlusion used in the present study [4]. There are even reports that the administration of rt-PA in the filament model per se does not increase BBB damage unless it is "activated" by preincubation with a clot [26,27]. However, in our hands, even after preincubation of the rt-PA solution with an autologous blood clot, we did not find a relevant increase in ischemia-induced MMP-9 activity by activated rt-PA. Therefore, the use of preactivated rt-PA did not serve to improve our experimental model.

Figure 4 Neither rt-PA nor FTY720 induces significant changes in MMP-9 activity in brain homogenates 24 h after MCAO. A) Non-ischemic (NI) and ischemic (I) hemispheres were homogenized separately in ice-cold cell lysis buffer containing protease inhibitors and subjected to acrylamide gel electrophoresis on a gel containing 0.1% gelatin. Photographs of the Coomassie blue-stained gels were inverted and the MMP-9 band at 84 kDa densitometrically evaluated. We excluded all mice that showed greater MMP-9 activity in the NI as compared to the I hemisphere. B) We compared MMP-9 expression on the protein level (Western blot, upper panel) with MMP-9 gelatinase activity (zymography, middle panel) from identical samples of brain tissue. Total protein staining with the azo dye Ponceau S served as a loading control.
Against the backdrop of our own observations and the published studies on FTY720 in experimental stroke, it is somewhat surprising that we did not find a beneficial effect of FTY720 on functional outcome in this experiment. We ascribe this discrepancy to the long MCAO occlusion time, which we chose to induce maximal brain damage in order to sensitively detect HT after rt-PA treatment. We did not measure ischemic lesion size in this study. It is conceivable that even if ischemic lesion size was reduced by FTY720, this no longer translated into a clinical benefit because of the severity of the ischemic insult, which can be explained by a ceiling effect of the functional neuroscore. We ascertained the timely biological efficacy of FTY720 in the chosen mode of application by quantifying blood lymphocytopenia via FACS analysis (Figure 1).
Recently, Campos et al. reported a beneficial effect of FTY720 in thrombolysis which was only manifest when rt-PA was applied late, i.e. 180 min after vessel occlusion as opposed to 30 min [28]. They made use of distal MCAO by direct thrombin injection, which led to circumscribed cortical infarctions with reperfusion upon rt-PA treatment. FTY720 alone led to a reduction of lesion size, but the combination therapy of FTY720 and rt-PA applied at 180 min also reduced lesion size in comparison to vehicle treatment. Interestingly, they were able to demonstrate that FTY720 reduces EB extravasation after stroke and rt-PA treatment, as a marker of BBB dysfunction, even after normalization for the reduction of lesion size. The main discrepancy between our studies is the size of the infarction produced by the respective experimental model, which might have led to a ceiling effect in our case, where a modest protective effect is no longer discernible. We chose large hemispheric infarctions to induce severe BBB damage in order to increase the aggravation of BBB injury by rt-PA.
From a translational point of view, our data point towards the issue that the protective effect of FTY720 in acute stroke may be lost in large hemispheric infarctions. They represent a caveat that the combination therapy of FTY720 and rt-PA might lead to excess mortality.

Figure 5 Activated rt-PA (Art-PA) does not lead to a further increase in blood-brain barrier breakdown after cerebral ischemia. Rt-PA (1 mg/ml) was either injected directly at a dose of 10 mg/kg in 2 mice after MCAO (first four lanes, homogenates of the non-ischemic and ischemic hemisphere of each mouse) or incubated with a spontaneously generated blood clot for 30 minutes under gentle shaking prior to injection in 2 mice (last four lanes). MMP-9 activity was assessed with gelatin zymography. Bands were analyzed densitometrically. | 2016-05-12T22:15:10.714Z | 2013-10-28T00:00:00.000 | {
"year": 2013,
"sha1": "8ae25886b989a58b79b269cb8c78ca188b414f77",
"oa_license": "CCBY",
"oa_url": "https://etsmjournal.biomedcentral.com/track/pdf/10.1186/2040-7378-5-11",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "94d465f890d854cbabfe991773a439fbb2b96096",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3676666 | pes2o/s2orc | v3-fos-license | Development of a framework for the co-production and prototyping of public health interventions
Background Existing guidance for developing public health interventions does not provide information for researchers about how to work with intervention providers to co-produce and prototype the content and delivery of new interventions prior to evaluation. The ASSIST + Frank study aimed to adapt an existing effective peer-led smoking prevention intervention (ASSIST), integrating new content from the UK drug education resource Talk to Frank (www.talktofrank.com) to co-produce two new school-based peer-led drug prevention interventions. A three-stage framework was tested to adapt and develop intervention content and delivery methods in collaboration with key stakeholders to facilitate implementation. Methods The three stages of the framework were: 1) Evidence review and stakeholder consultation; 2) Co-production; 3) Prototyping. During stage 1, six focus groups, 12 consultations, five interviews, and nine observations of intervention delivery were conducted with key stakeholders (e.g. Public Health Wales [PHW] ASSIST delivery team, teachers, school students, health professionals). During stage 2, an intervention development group consisting of members of the research team and the PHW ASSIST delivery team was established to adapt existing, and co-produce new, intervention activities. In stage 3, intervention training and content were iteratively prototyped using process data on fidelity and acceptability to key stakeholders. Stages 2 and 3 took the form of an action-research process involving a series of face-to-face meetings, email exchanges, observations, and training sessions. Results Utilising the three-stage framework, we co-produced and tested intervention content and delivery methods for the two interventions over a period of 18 months involving external partners. New and adapted intervention activities, as well as refinements in content, the format of delivery, timing and sequencing of activities, and training manuals resulted from this process. The involvement of intervention delivery staff, participants and teachers shaped the content and format of the interventions, as well as supporting rapid prototyping in context at the final stage. Conclusions This three-stage framework extends current guidance on intervention development by providing step-by-step instructions for co-producing and prototyping an intervention’s content and delivery processes prior to piloting and formal evaluation. This framework enhances existing guidance and could be transferred to co-produce and prototype other public health interventions. Trial registration ISRCTN14415936, registered retrospectively on 05 November 2014. Electronic supplementary material The online version of this article (10.1186/s12889-017-4695-8) contains supplementary material, which is available to authorized users.
Background
There are a range of approaches to public health intervention development [1][2][3][4][5][6][7][8][9][10][11][12]. The UK's Medical Research Council (MRC) guidance, the most widely cited approach, recommends that intervention development should consist of theory development, identification of an evidence base (typically through a recent or new systematic review), and modelling of processes and outcomes [13]. Other approaches provide more detailed guidance on: developing intervention or program theory [2,6,9]; using mapping techniques to inform the components required in an intervention [1,5,7,10]; cycles of testing and refinement [3,8] and the use of partnerships with individuals, communities, and service providers [4,7,8,12]. These guidelines support development of a theoretical rationale for an intervention, but provide scant pragmatic instruction on how to develop intervention materials and delivery methods.
Theory needs to be translated into intervention design in a way that facilitates adoption across settings and maximises implementation. The RE-AIM framework helped to re-focus away from efficacy to effectiveness, and assess the degree of reach, adoption, implementation and maintenance of effects [14]. As well as identifying reasons for (in)effectiveness, an assumption is that barriers to adoption, implementation and maintenance that are identified in evaluations are addressed in the adaptation of existing or design of new interventions. It is not clear whether this occurs. Even if barriers are addressed, as the policy and practice landscape can change with country, health system and time, some barriers identified may not be relevant in a new system. A method for the rapid identification of potential barriers to effectiveness, possible solutions, testing and re-testing of materials would save the costly implementation of interventions that do not adequately account for variations in context. The involvement of customers in the prototyping of new products has long been used in manufacturing [15], as a method for gaining feedback and improving design. Intervention design may benefit from incorporating the principles of iterative product development and testing intervention components, or prototyping, with those who deliver and receive interventions [16].
The concept of Transdisciplinary Action Research (TDAR) [12] has been developed to support effective collaboration between behavioural researchers, policy makers, frontline public services staff and communities. Building on Lewin's [17] concept of 'action research' that combines scientific and societal value, TDAR is an approach where researchers from multiple disciplines work with a range of stakeholders and intended beneficiaries to jointly understand social problems and identify practical solutions to them, such as through co-producing new public health interventions [5]. A key component of this approach to applied social science is the development of sustainable, replicable processes to support effective collaboration between researcher teams, frontline practitioners and communities in order to harness the latent expertise of key stakeholders (for example, those who deliver health promotion to the target population, gatekeepers within settings such as school teachers, managers, owners) so that the acceptability and feasibility of the intervention is addressed and maximised at the development stage [5,12,[18][19][20].
We present the framework for co-production and prototyping which was used to guide the adaptation of the ASSIST smoking prevention intervention to develop detailed content and delivery processes for two new peer-led drug prevention interventions, one as an adjunct to the ASSIST intervention (+Frank) and the other a standalone drug prevention intervention (Frank friends).
Case study: ASSIST + Frank intervention development study
Informed by the principles of TDAR, we tested a novel, staged approach to adapt and co-produce with stakeholders the content and delivery of two new informal, peer-led interventions to prevent illicit drug use among secondary-school students in the United Kingdom, based on an effective peer-led smoking prevention intervention (ASSIST) [21]. ASSIST is a school-based peer-led intervention that has been shown to be effective in reducing the uptake of smoking in UK secondary schools [21]. It is recommended in the National Institute for Health and Care Excellence (NICE) guidance on school-based smoking prevention [22] and forms part of the Welsh and Scottish Governments' tobacco harm reduction plans [23,24]. In contrast, studies of the implementation and effectiveness of peer-led drug prevention interventions report mixed evidence [25][26][27]. For example, very low levels of implementation occurred in the EU-Dap trial, where only 8% of centres implemented all seven peer-led sessions and 71% did not conduct any meetings at all [26]. Moreover, there is some evidence of harmful effects for school students with drug-using friends from the US TND-Network trial [27]. These challenges suggested new approaches were warranted and more careful intervention development was required.
Informed directly by the existing evidence surrounding the effectiveness of the ASSIST intervention [21] and its basis in the theory of Diffusion of Innovations [28], we adapted the ASSIST model of informal peer-led delivery (see Additional file 1: Table S1 for components of ASSIST) to drug prevention using information from the UK national drug education website, Talk to Frank [29]. The theoretical basis and design of the effective ASSIST intervention informed a skeleton structure of core intervention components and processes that underpinned the development of the two new informal, peer-led interventions to prevent illicit drug use. From this, an intervention logic model was constructed for each of the two new peer-led drug prevention interventions, +Frank and Frank friends (see Additional file 1: Figs. S1 and S2), which would be compared at subsequent stages of evaluation.
The intervention "+Frank" is as an informal peer-led drug prevention adjunct to the ASSIST smoking prevention intervention. It is designed to be delivered in secondary schools to year 9 students (aged 13-14) who have previously received ASSIST in year 8. "Frank friends" is a stand-alone, informal drug prevention intervention. It aims to identify and recruit peer opinion leaders in year 9 to be trained as peer supporters. Both interventions involve off-site training to learn the effects and risks associated with specific drugs and potential harms; +Frank involves one day and Frank friends two days of training. Peer supporters are asked to have conversations with their peers on the risks of different drugs and log these interactions over 10-weeks. +Frank peer supporters are visited three times and Frank friends four times by trainers to support them to have conversations.
Methods
A three-stage, multi-method framework was tested to co-produce the content, resources, and delivery methods for the +Frank and Frank friends interventions based on their logic models. The three stages are: 1) Evidence review and stakeholder consultation; 2) Co-production; and 3) Prototyping. The methods used at each stage allowed for integration of the scientific literature with stakeholders' knowledge and expertise. The key stakeholders in intervention development were the Public Health Wales (PHW) ASSIST delivery team, secondary school students, and health professionals working for drug agencies and with young people. The objectives of the methods used and topics explored at each stage are summarised in Additional file 1: Table S2.
Stage one: Evidence review and stakeholder consultation
In stage one, 'evidence review and stakeholder consultation', members of the research team engaged in a process of co-operative enquiry with stakeholders. A variety of consultation methods were offered to groups of stakeholders to enable them to participate in the way that they felt was most appropriate. The overall aim of the stakeholder consultation was to gather multiple perspectives on drug use issues relevant to young people, existing drug education for young people, and ideas for appropriate and acceptable content for the peer-led drug interventions. This involved a range of methods.
Focus groups with young people
Six focus groups were conducted with 47 young people aged 13-15 who were purposively sampled from a range of settings allowing for variation in demographic backgrounds and existing experience of drug use (three schools, a youth centre and a student referral unit). A semi-structured topic guide was used, consisting of broad open-ended questions relating to participatory task-based activities using information and resources from Talk to Frank.
Interviews with the ASSIST intervention delivery team
Interviews were conducted with an opportunity sample of five members of the PHW ASSIST delivery team.
Observations of current practice
Observations of all five stages of ASSIST intervention delivery were conducted (n = 8) as well as one observation of the ASSIST 'Train the Trainers' course.
Stakeholder consultation
A range of unstructured consultations were also conducted with opportunity samples of young people and practitioners: one with five volunteers from a young people's public involvement group (ALPHA) aged 16-19 years old; one with seven young people aged 13-15; one with five recipients of ASSIST aged 12-13; and nine individual consultations with health professionals working for drug agencies (n = 4) or with young people (n = 4) or both (n = 1).
Audio recordings of the focus groups and interviews were transcribed verbatim and analysed using thematic analysis. An a priori coding framework focused on the objectives of the interviews with the delivery team was applied to the interview transcripts to organise data for subsequent searches for recurring patterns and themes. However, an element of flexibility was maintained such that codes which did not fit the framework were also captured. This analysis approach has been described in detail elsewhere [30].
Researcher field notes from observations and informal consultations were collated and combined with the outcomes from the analysis of interview and focus group data in order to identify similarities and differences across the various stakeholder perspectives emerging from the consultation process. These outcomes were then taken forward to feed into the co-production of intervention content during stage 2.
Stage two: Co-production
In stage 2, 'co-production', an intervention development group consisting of members of the research team and key stakeholders was established to co-produce the intervention materials and resources. The key stakeholders identified for adapting the ASSIST intervention to deliver information from Frank were members of the Public Health Wales ASSIST delivery team. The PHW team had delivered ASSIST to over 350 schools over a period of seven years, so they had extensive experience of intervention delivery within schools and were well placed to consider the potential feasibility of adapting intervention content for use with an older age group and for drug prevention. The team had also been identified to deliver the new drug prevention interventions that were being developed.
Co-production of intervention content took the form of an action research cycle over a series of meetings of the intervention development group in which findings from stage 1 were considered, ideas were presented by all members, feedback on ideas sought, refinements made and presented again, until final content was agreed. Five face-to-face meetings were held over the course of a four month period. These were supplemented by communications via email where face-to-face meetings were not possible, or when matters arose that required discussion between meetings.
Stage three: Prototyping
In stage 3, 'prototyping', the draft intervention manuals and associated resources underwent expert review by the lead author of the ASSIST randomised controlled trial [21] and the lead trainer of DECIPHer Impact, the company that licenses ASSIST. Reviewers examined the adaptations made to ASSIST intervention content and resources, as well as newly developed content, and were asked to provide feedback on key uncertainties identified during development (for example, fit with the Diffusion of Innovations theory, age-appropriateness of activities, suitability of timings and sequencing).
In order to gain preliminary feedback regarding acceptability and feasibility of the intervention content, intervention delivery was tested with an opportunity sample of the ALPHA group (n = 5), as well as during two training sessions with the intervention delivery team. Independent observations of intervention delivery in two test schools were made by two members of the research team using a structured observation tool to check whether the learning outcomes for each activity were met and to record any deviations that were made.

Results

Figure 1 shows the framework and activities that were completed in the ASSIST + Frank intervention development study. The process of co-production and prototyping took 18 months and comprised 42 activities (Fig. S3 shows the frequency and timeline of each activity). The process was iterative and cumulative, with refinements occurring prior to the next stage.
Stage one: Evidence review and stakeholder consultation
In line with the MRC guidance on developing complex interventions [13], we reviewed the existing literature to identify the distribution of illegal drug use amongst young people and whether there were any existing effective school-based drug prevention interventions. A non-systematic review of population-based prevalence studies with secondary school-aged children in the United Kingdom (aged 11 to 16 years) showed that the lifetime prevalence of any illegal drug use roughly doubled with each year of age, from 6.8% to 12.4% and then 23.1% at the ages of 13, 14, and 15 years respectively [31]. This informed our decision to deliver the intervention to UK year 9 students (13 to 14 years of age), as it is an age of rapidly increasing drug experimentation. A systematic review of school-based drug prevention found small effects on cannabis use in the short term and poor implementation of interventions that were peer delivered [25,26]. The development of the +Frank and Frank friends interventions was informed by the effectiveness of the ASSIST intervention [21] and its basis in the theory of Diffusion of Innovations [28], an approach not previously used in relation to youth drug prevention.
During stage one we consulted with key stakeholders with the aim of gathering information to tailor the interventions to the target context and population in order to maximise acceptability and reduce problems with implementation. Key stakeholders were identified as people with direct experience or knowledge of youth drug taking, recipients of the existing ASSIST smoking prevention intervention, intended recipients of the newly developed interventions, and those who delivered any existing drug prevention interventions within the setting (i.e. schools) or provided intervention resources (e.g. financing, staffing). Table 1 summarises the results from the stage one focus groups, interviews, consultations and observations. Several narratives were replicated across the different stakeholders. With regard to which drugs the intervention should prioritise, the young people aged 12-18, practitioners working in drug support agencies with young people, and the review of prevalence studies all highlighted the same eight drugs which had over a 1% prevalence in 13-15 year olds [31]. The consultations with young people and practitioners also noted a local issue with steroid use in older age groups, which was not apparent in prevalence data as these were gathered in England and did not sample from Welsh schools. These consultations led us to tailor the intervention to the local context by including information on steroids in the interventions.
The consultations and focus groups with young people suggested that 13 to 14 year olds were relatively familiar with the potentially harmful effects of drugs on health. This familiarity prompted us to add a focus on the harms associated with drugs being illegal and therefore unregulated, such as unexpected effects brought about by consuming an unknown compound at an unknown dose. Other concerns that young people voiced included the potential harms of drug use for family relationships, future education, and employment.
"I mean that's your mum, that's one of your parents, they put a roof over your head. If you get drove away from them you don't get food for yourself, you don't get a roof over your head, you're out on the streets. You don't have anyone to get you a meal or look after you 'cause you're on your own." [P3, male] "'Cause then you're getting a criminal record that's stopping you from getting a job and loads of stuff." [P4, female] A number of factors that might influence the engagement of students during peer supporter training were found in both the interviews with the ASSIST delivery team and independent observations by the research team of delivery of the intervention. In particular, the importance of flexibility in delivery of intervention activities to different groups and the need for engaging, interactive content.
"We work to the same objectives, but in terms of how we run some activities, we might change them a bit … with different groups you know, how they react to a certain activity you might change it round to help the running of it." [T1] "I think it's important that whatever we do that it's quite engaging and [students] get an input as well, you know, not just sitting there watching us, listening to us, I think it's important that it's interactive as well." [T3] "Making sure that they're interactive … so they're up and about, they get moving around, break off activities, um, just making it as interactive as possible." [T4] Fig. 1 Framework for intervention co-production and prototyping. a Stakeholders comprise those within or external to the delivery setting (e.g. school-based: school teachers, head teacher, contact teacher, head of Personal, Social, Health and Economic (PSHE) education, head of year, receptionist; national and local policy leads; parents/ guardians/ caregivers) Table 1 Results from application of the 3-stage framework for co-production and prototyping in the ASSIST + Frank study Activity Objectives Results • Drug education is typically didactic and should be more interactive; • Discussions with peers about drugs are frequent; • Commonly used drugs at their age are alcohol, cannabis, poppers, mephedrone, ketamine and cocaine; • Talk to Frank was viewed positively, but should be accompanied by other visual resources.
Consultation with Year 9 students
Explore views about drug use in their age group and ideas about content for a drug prevention intervention.
• Content suggested included effects of drugs on the body, and the legal consequences of drug possession; • Specific drugs to focus content on included cannabis, alcohol, steroids, magic mushrooms and legal highs.
Activity: Focus groups with Year 9 students
Objectives: Explore knowledge and risk perceptions of drug use and perceptions of drug use prevalence in their age group; explore acceptability and age-appropriateness of drug education messages on the Talk to Frank website.
Results: • Health risks of cannabis are known; • Legal consequences of cannabis use are less well known; • Content needed on the impact of drug use on educational achievement directly, or through school exclusions if caught in possession; • Content on the impact of drug use on parents worrying about harms (to health, criminal sanctions, school exclusions), shame brought to the family, and increasing stress would be welcomed; • Attention required to the potential iatrogenic effect of Talk to Frank messages on amphetamine use promoting weight loss.
Activity: Consultations with stakeholders (drug agencies and professionals who work with young people)
Objectives: Explore awareness of drug education resources and support, and views on appropriate content for a drug prevention intervention.
Results: • Cannabis and alcohol are the most commonly used drugs by 13 to 14 year olds; • New Psychoactive Substances (NPS) are an increasing concern, particularly synthetic cannabinoids, though not among 13 to 14 year olds; • Staff from drug agencies noted a local problem with anabolic steroids, reflected in attendance at needle exchange programs; use is not among 13-14 year olds; • Existing drug education for 13-14 year olds is either provided in classroom-based sessions, or in one-off workshops delivered by a specialist agency or a community police officer; • There are limited drug education resources available, and existing resources such as 'drugs box displays' are expensive; resources require regular updates in response to emerging NPS and changing trends.
Activity: Consultations with Year 8 recipients of ASSIST
Objectives: Explore ideas about peer supporter training and content for a drug prevention intervention.
Results: • Content suggested included effects of drugs on the body, how drugs cause 'highs', health risks, legal consequences, and harm minimisation; • Specific drugs to focus content on included cannabis, solvents, magic mushrooms, cocaine, speed, mephedrone, legal highs, and steroids.
Activity: Observations of current ASSIST practice
Objectives: Identify aspects of the intervention that work well and could be adapted to deliver a drug prevention intervention with a Year 9 population.
Results: • Flexibility in adapting timings and delivery modes to respond to student engagement is key for successful delivery of training; • Need for clear objectives noting which are essential to deliver.
Activity: Interviews with intervention delivery team
Objectives: Identify possible influences on intervention feasibility and acceptability; for example, explore aspects of ASSIST that could be adapted to deliver a drug education intervention with 13-14 year olds, as well as those which might not lend themselves to adaptation.
Results: • Intervention activities need to be interactive; • Successful implementation of the intervention requires flexibility in delivery to meet the needs of different groups; • Some intervention activities required updating (e.g. an ASSIST activity using postcards, because peer supporters did not know what they were); • Some intervention activities might be too immature for use with 13-14 year olds; • Delivery of messages about the harms of drug use is much more complex than for the harms of smoking (more compounds with different effects); • Concerns around the amount of knowledge required to deliver a drug prevention intervention.
Stage two: Co-production
During the co-production process, the intervention development group reflected on findings from stage one and used these to inform the adaptation of content from ASSIST and the development of new content. The group was participatory and collaborative, and all members were provided with opportunities to contribute. This process drew on the intervention delivery team's experience with the setting, target population, and intervention content. For example, it was noted during the stage one interviews with the ASSIST team that an important aspect of the intervention for them was providing the peer supporters with interesting and memorable facts about smoking that they could use in conversations with their peers.
"There's key facts within ASSIST … four thousand chemicals [in a cigarette], um, sixty to seventy chemicals cause cancer, and we always get the impotence one as well. So the boys always remember the erectile dysfunction." [T3] "So if we can give them facts that sort of link into what they could be talking about with their friends, it makes it easier for these conversations to happen. In ASSIST, one of the facts they always remember, is that smoking could affect your ability to get an erection. That is the one that sticks with them, and you might not have done the training for ten weeks, and they will still remember that." [T1] "In ASSIST, we know that young people will leave knowing the ingredients of a cigarette, long-term, short-term health effects, is it guaranteed. We know that you'd go up to any young person that had done the training and you'd ask them how many ingredients are in a cigarette and they'd be able to tell you." [T1] This was also observed in field notes of the observations of delivery of the ASSIST intervention made by the research team. During stage two, the intervention development group considered these findings and decided to adapt information from the Talk to Frank website [29] about the risks of drug use into memorable factual Table 1 Results from application of the 3-stage framework for co-production and prototyping in the ASSIST + Frank study (Continued) • Concerns around amount of knowledge required to deliver drug prevention intervention.
Table 1 Results from application of the 3-stage framework for co-production and prototyping in the ASSIST + Frank study (Continued)
Stage 2: Co-production
Activity: Meetings of the intervention development group
Objectives: Action research cycle of assessment, analysis, feedback and agreement on the core components of the intervention required to educate peer supporters on the harms of drug use and the skills required to communicate these to their peers.
Results: • Findings from stage one suggested that the long-term harms to health of low-level cannabis use are less definitive than those of smoking; • Include content on concerns expressed by young people and on harms associated with drug use that they did not know about; • Shift focus towards these concerns and away from harms to health of the most commonly used drug, cannabis; • Highlight the potential immediate harms to health from use of glues, gases and aerosols (i.e. sudden sniffing death); • Harms associated with drugs being unregulated and illegal: unknown compound and dose, thus unexpected effects are likely; • Potential consequences of sanctions imposed by schools (temporary or permanent exclusion) and poorer educational achievement; • Potential consequences of criminal sanctions on travel and future career options; • Mention harms including increasing parental anxiety, stress and shame; • Draft intervention manuals and associated resources detailing intervention activities and how these should be delivered were produced.
Stage 3: Prototyping
Activity: Expert review of intervention materials
Objectives: Identify potential problems or weaknesses in intervention materials prior to piloting.
Results: • Updating of some intervention activities was welcomed; • More detail needed in instructions for the delivery team; • Refining of timings for some intervention activities.
Activity: Testing of intervention materials with young people
Objectives: Delivery of the intervention; identification of issues around feasibility and acceptability of newly developed intervention content.
Results: • Intervention activities were well received; • Refinements included amending wording, providing more detailed instructions and objectives, and using smaller groups.
Activity: Training of intervention delivery team
Objectives: Simulation of intervention delivery; identify issues around feasibility and acceptability of intervention content.
Results: • Need for additional drug education training; • Refinements included amending timings, clarifying ambiguities in instructions, changing the format of delivery, adding extra content and removing content.
These key statements were then used across several activities within the peer supporter training and added to the peer supporter diaries as a reminder. Examples of the statements include: "Cannabis contains some of the same chemicals as tobacco", "A drugs-related conviction can stop you from travelling to some countries, such as the USA" and "Giving cannabis to your mates is considered 'supplying' under the law". Both the research and ASSIST teams independently developed adaptations and new content, which were shared amongst the group. For example, a member of the ASSIST team had already developed a new mode of delivery for one of the training day activities in ASSIST in order to address an existing feasibility issue. This was incorporated into the adapted activity for the new interventions.
Stage three: Prototyping
A period of prototyping of the intervention content, materials and delivery methods is a necessary next step for identifying early issues with acceptability, feasibility and other potential teething problems so that these can be addressed prior to formal piloting and evaluation.
Expert peer review of intervention content or components is useful for examining key uncertainties that have been identified during development. Expert reviewers should be selected based on the areas of greatest uncertainty and be independent of the intervention development group. There were two areas of uncertainty identified during the development of +Frank and Frank friends: how newly developed activities fit with the diffusion of innovations theory, and whether the format of activities was age-appropriate and followed suitable timings and sequencing.
We sought expert feedback from the lead author of the ASSIST RCT [21] to examine fit with theory, and from the lead trainer at DECIPHer Impact, who delivers all training to new ASSIST teams, to advise on timings and sequencing. The feedback received included possible minor refinements to the timing of some intervention activities and to the presentation of instructions in the intervention manual. In addition, it was suggested that consideration be given to 'future proofing' intervention resources by identifying content that may require regular review and updating.
Testing delivery of the draft intervention content or components on a small scale is also recommended. Where possible, the intervention should be delivered to a sample of the target population; if not, it is advisable to make use of opportunities for simulated delivery. During testing, data should be collected to explore the experiences of those delivering the intervention as well as those receiving it, in order to inform refinements. Table 2 shows an example of how intervention content was co-produced over each stage of the framework, including how the iterative process of gathering feedback and making refinements unfolded during the prototyping stage in response to delivery of an activity from the peer supporter training. The objective of the activity (titled 'What is a drug?') is to define, name and categorise the effects of drugs. A series of insights were generated from testing out delivery with a group of young people, as well as during training of the intervention delivery team, where the trainees practised delivery of intervention activities with each other. Without this period of testing, these issues would not have emerged until formal piloting. The insights included: trainers being anxious that they would need encyclopaedic knowledge about drugs after young people generated over thirty names of drugs in the test phase; underestimating the time taken to list drugs during the activity; and confusion over drugs with a dual effect. Refinements were made to the training manuals, activities were amended to address these findings, and the content was tested again.
Reflections on co-production
Interviews with the ASSIST team at the end of stage three suggested they believed co-production created a sense of ownership of, and buy-in to, the intervention they were going to be delivering as part of the study:
'Oh I really enjoyed it, I think it was very beneficial, especially because if, we're the ones that'll be ending up delivering it' [T1]
'It's good that you know, I can say that I've kind of contributed towards developing something new' [T2]
'. . . The team appreciate being asked as well because you know in the future if we are expected to deliver, knowing that we've been part of it from the start really does help' [T4]
Throughout co-production the intervention delivery team had been able to convey the realities of delivering interventions to young people and had highlighted important potential barriers to implementation, which were addressed at an early stage.
'I think it's helped to have us involved, just because we've got the hands-on experience of working with young people' [T5]
Independent observations by the research team of delivery of the finalised intervention identified that some trainers continued to adapt activities during delivery, after co-production had ended. In the +Frank intervention, across the 15 activities, five were delivered in full, eight had minor deviations from the manual and two were not delivered at all. In the Frank friends intervention, across the 25 activities, 13 were delivered in full, nine had minor deviations and three were not delivered at all. Field notes suggested the delivery team struggled to switch off from an intervention development mindset even after co-production had ended. If carried through to formal piloting, the interventions may not be delivered entirely as intended, which could be a barrier to implementation with high fidelity.
Discussion
The three-stage framework presented here extends current guidance by providing pragmatic direction on how to co-produce and prototype public health intervention content and delivery methods before formal piloting. It provides a structure for guiding co-production with stakeholders so that intervention content is tailored to the population and setting, addressing implementation issues at the design stage. This is complementary to existing intervention development guidelines, which provide information about the use of mapping techniques [1,5,7,10], intervention theory development [2,6,9] and testing [3,8]. Our framework offers insight into how collaboration and co-production with stakeholders can be incorporated into these stages of intervention development.
The incorporation of stages of co-production and prototyping builds on existing literature on Transdisciplinary Action Research [5,12], as well as theories of capacity building noted in community psychology [32], participatory action research [33], plan-do-study-act cycles in clinical settings [34,35], and the use of quality improvement replications to improve systems [36]. The involvement of key stakeholders in the co-production of intervention content provides a mechanism for tailoring intervention content to the context and target population to maximise acceptability and reduce the likelihood of problems with implementation. A variety of stakeholders should be engaged to ensure that a range of expertise and perspectives relevant to the realities of the intervention problem, target population, and intended delivery setting is represented.
The case study presented here provides an example of co-production with key stakeholders throughout the lifecycle of intervention development to adapt content from an existing effective peer-led smoking prevention intervention to co-produce two new peer-led drug prevention interventions. Based on this experience we offer some reflections on the benefits and potential weaknesses of such an approach.
Benefits of co-production
The involvement of stakeholders with knowledge and experience of existing interventions, the target population, and the delivery setting has the purpose of maximising the acceptability, feasibility and quality of the intervention being developed and its fit with the implementation context. For example, frontline practitioners know the delivery setting, as well as issues that have affected the implementation of previous interventions. In addition, co-production engenders an element of 'buy-in' to the intervention and creates a sense of ownership amongst those involved in its development. This can be particularly useful where the intended intervention deliverers can be identified at the development stage and invited to be involved in the intervention development process. In addition, the involvement of the intended intervention recipients during co-production can help to ensure that intervention content meets their needs and is acceptable and credible.
Table 2 Example of co-production and prototyping of intervention content in ASSIST + Frank
Weaknesses of co-production
The co-production process is both iterative and fluid. However, there must come a stage in the process where intervention content is consolidated and put to the test. Observations of delivery found that some staff made amendments to activities after it was agreed that co-production had ended and the intervention manual had been finalised. This meant that, out of 40 activities, 17 (42.5%) were delivered with a minor deviation from the instructions in the manual and five (12.5%) were not delivered at all. This suggests it may be difficult for stakeholders to demarcate when the co-production process has ended, which may be a threat to fidelity if carried through to formal delivery outside of piloting.
There are several potential barriers to co-production, including competing priorities and goals and interdisciplinary conflict between the stakeholders involved in the intervention development process. This is more likely when the stakeholders involved are from a range of backgrounds, bridging both professional and lay perspectives [12]. Another potential barrier is the time-consuming nature of co-production, which requires active engagement from those involved over an indeterminate amount of time to allow the process to unfold and evolve. Some stakeholders may not have the flexibility within their roles that the PHW ASSIST team had, and so may not be able to be as heavily involved. There may be some limits to the transferability of this approach for the development of other public health interventions. The framework was used to adapt an existing intervention with a strong evidence base and a well-established delivery structure. In addition, the PHW ASSIST team were highly experienced in terms of knowledge and delivery of ASSIST to the target population. These conditions may have contributed to the successful application of the framework within this study.
Conclusions
The framework presented here provides pragmatic instruction on how to co-produce and prototype public health interventions. It complements other intervention development guidance by providing more detail on the early stages of intervention development and co-production, which receive limited attention in existing guidance [1][2][3][4][5][6][7][8][9][10][11][12]. Future studies should explore its utility in guiding the process of co-production of interventions with different target behaviours, populations and stakeholder groups.
Additional file
Additional file 1: Table S1. Core components of the ASSIST intervention. Table S2. A checklist for the key components of the framework for coproduction and prototyping. Figure S1. ASSIST +Frank logic model. Figure S2. Frank friends logic model. Figure S3. | 2017-09-13T01:30:37.991Z | 2017-09-04T00:00:00.000 | {
"year": 2017,
"sha1": "0fd3739788ac6d814d8ccccf1ab98cb2d22c2e53",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/s12889-017-4695-8",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fd3739788ac6d814d8ccccf1ab98cb2d22c2e53",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221471049 | pes2o/s2orc | v3-fos-license | Hippocampal alterations in glutamatergic signaling during amyloid progression in AβPP/PS1 mice
Our previous research demonstrated that soluble amyloid-β42 (Aβ42) elicits presynaptic glutamate release. We hypothesized that accumulation and deposition of Aβ altered glutamatergic neurotransmission in a temporally and spatially dependent manner. To test this hypothesis, a glutamate-selective microelectrode array (MEA) was used to monitor dentate gyrus (DG), CA3, and CA1 hippocampal extracellular glutamate levels in 2–4, 6–8, and 18–20 month-old male AβPP/PS1 and age-matched C57BL/6J control mice. Starting at 6 months of age, AβPP/PS1 basal glutamate levels are elevated in all three hippocampal subregions, an effect that becomes more pronounced in the oldest age group. Evoked glutamate release was elevated in all three age groups in the DG, but temporally delayed to 18–20 months in the CA3 of AβPP/PS1 mice. However, CA1 evoked glutamate release in AβPP/PS1 mice was elevated at 2–4 months of age and declined with age. Plaque deposition was anatomically aligned (but temporally delayed) with elevated glutamate levels, whereby accumulation was first observed in the CA1 and DG starting at 6–8 months and progressed throughout all hippocampal subregions by 18–20 months of age. The temporal hippocampal glutamate changes observed in this study may serve as a biomarker allowing for time point-specific therapeutic interventions in Alzheimer's disease patients.
Results
Learning and memory retrieval. Cognitive performance was assessed using the Morris water maze (MWM) learning and memory recall behavioral paradigm. No differences in learning were observed between 2-4 month and 6-8 month old AβPP/PS1 and age-matched C57BL/6J mice (Fig. 1A-F). At 18-20 months of age, AβPP/PS1 mice were slower to learn the location of the hidden escape platform than age-matched C57BL/6J mice, as indicated by the cumulative distance from the platform (F1,24 = 5.111, P = 0.033) and the area under the curve (AUC) of this parameter (t24 = 2.488, p = 0.02), as shown in Fig. 1G. This age group of AβPP/PS1 mice also spent less time searching the target quadrant for the escape platform (F1,24 = 13.10, P = 0.0014) over the five training sessions (t24 = 3.824, p = 0.0008), as shown in Fig. 1H. Additionally, AβPP/PS1 mice spent more time navigating the periphery of the maze during the training sessions, as indicated by the percentage of time in the thigmotaxic zone (F1,24 = 7.284, P = 0.01) and the AUC of this parameter (t24 = 3.016, p = 0.006).
In vivo DG glutamate dynamics. Representative glutamate release traces from the DG are presented in the corresponding figure.
In vivo CA1 glutamate dynamics. Figure 4A depicts CA1 representative glutamate release traces for both C57BL/6J and AβPP/PS1 mice across the three age groups tested. At 2-4 months of age no differences in CA1 basal glutamate are observed (Fig. 4B). AβPP/PS1 CA1 basal glutamate increases with disease progression and is elevated at 6-8 (t20 = 2.618, p = 0.02) and 18-20 (t22 = 3.946, p = 0.001) months of age compared to age-matched C57BL/6J mice. An opposite effect with disease progression is observed for CA1 stimulus-evoked glutamate release (Fig. 4C). The younger age groups studied, 2-4 (t20 = 4.834, p = 0.0001) and 6-8 (t20 = 2.142, p = 0.04) months of age, have elevated glutamate release, which becomes similar to C57BL/6J mice at 18-20 months of age. AβPP/PS1 CA1 release rate (Fig. 4D) tended to be elevated in the youngest age group only (t20 = 1.800, p = 0.08). No differences between genotypes are observed at the other age groups. The clearance of CA1 evoked glutamate (Fig. 4E) is elevated in AβPP/PS1 mice early in disease progression at the 2-4 (t20 = 2.078, p = 0.05) and 6-8 (t20 = 1.612, p = 0.12) month time points, but no difference is observed at 18-20 months of age. CA1 glutamate dynamics are thus inverted, with basal levels increasing and evoked release decreasing with disease progression.
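The AUC comparisons reported above reduce to integrating each animal's daily learning-curve values over the five training days and running an unpaired t-test on the resulting areas. A minimal Python sketch with hypothetical data follows (the array values, group sizes and names are ours for illustration only; the published analysis was performed in Prism):

import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Hypothetical cumulative distance from the platform (rows = mice, cols = days 1-5).
wt = np.array([[900., 700., 520., 430., 380.],
               [880., 650., 500., 400., 350.],
               [910., 690., 540., 420., 390.]])  # C57BL/6J controls
tg = np.array([[950., 860., 800., 720., 690.],
               [940., 880., 790., 750., 700.],
               [960., 850., 810., 740., 710.]])  # AbPP/PS1

days = np.arange(1, 6)

# One AUC value per animal summarizes learning across the training sessions.
auc_wt = trapezoid(wt, days, axis=1)
auc_tg = trapezoid(tg, days, axis=1)

# Unpaired two-tailed t-test on the AUCs, analogous to the reported t-statistics.
t_stat, p_val = stats.ttest_ind(auc_tg, auc_wt)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}")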
Amyloid plaque staining. Amylo-Glo RTD plaque staining reagent was used to measure changes in amyloid plaque pathology throughout the hippocampus. Representative 10× magnification images from the DG and CA1/CA3 hippocampal subregions from all mice studied are shown in the corresponding figure.
Discussion
One of the first preclinical signs associated with AD is hyperactivity of hippocampal neuronal networks. Hippocampal hyperactivity is observed in Aβ-positive MCI patients and persists with disease progression despite increasing rates of hippocampal atrophy and dementia scale ratings 9. Elevated hippocampal activity has also been reported in several mouse models of AD with progressive amyloidosis [24][25][26][27][28]. These changes in neuronal networks are initially observed in mice before plaque deposition 29,30, supporting a role for soluble Aβ in neuronal hyperactivity. Endogenous Aβ was shown to increase the release probability at excitatory synapses 29,31,32 as well as to stimulate synaptic glutamate release 20,21. However, as Aβ42 accumulation leads to aggregation, the neuronal proximity to the surrounding plaques determines their activity states. Neurons closest to the plaques develop hyperactive phenotypes, while those further from plaques are markedly silenced 33. This may partially be explained by the accumulation of soluble Aβ isoforms surrounding plaques that colocalize with postsynaptic densities and cause synapse loss 34. Furthermore, this would create a localized area of intense synaptic activity that can propagate Aβ pathology 35, whereby intensified glutamatergic activity and amyloid accumulation extend throughout cortical areas, contribute to seizure activity 38, and cascade into the degenerative processes observed in AD 39.
In the present study, learning and memory recall deficits in AβPP/PS1 mice were only observed in the oldest age group studied. Discordant results exist in the literature regarding when AβPP/PS1 mice begin to experience MWM cognitive deficits. Differences in cognition can begin by 6 months of age 40, while others report a first appearance at 10 months that progresses with age 41; the latter is in line with the present study and our prior work 22. Increased thigmotaxic behavior was also observed in the oldest AβPP/PS1 age group studied, similar to previous reports in this transgenic AD model 42. When first navigating the MWM, mice tend to remain close to the wall until they learn to search the middle of the pool for an escape route. While learning the MWM, the thigmotaxic behavior in 18-20 month-old AβPP/PS1 mice subsided more slowly and was more prevalent during the probe challenge compared to age-matched C57BL/6J mice. This may be indicative of sensorimotor impairments and anxiety that affected learning and memory in AD mice at 18-20 months of age. AβPP/PS1 mice progressively accumulate soluble Aβ42 starting as early as 3 months of age 43. This accumulation eventually determines amyloid burden, which becomes visible by 6 months of age and increases throughout the lifespan of these transgenic mice 44,45. Likewise, the present study indicated no plaque deposition at 3 months of age; but by 6 months of age, plaque accumulation was most prominent in the CA1, followed by the DG, with little to none observed in the CA3. As AβPP/PS1 mice reach 18 months of age, the magnitude of plaque deposition increases throughout the hippocampus, and this subregion distribution pattern continues, including observable plaque pathology in the CA3.
Because plaque burden is a result of Aβ42 accumulation in AβPP/PS1 mice 44, the subregion distribution pattern of plaque deposition is indicative of soluble Aβ42 concentration. This disease stage-dependent progression of amyloid accumulation and deposition explains the basal and evoked glutamate release data reported in this manuscript. Elevated evoked glutamate release was prominent in the CA1 and DG at 2-4 and 6-8 months of age, but was not observed until 18-20 months of age in the CA3. Elevated basal glutamate follows a similar hippocampal subregion distribution pattern that becomes more pronounced with age. However, this is temporally delayed to 6-8 months, which may indicate that amyloid accumulation first sensitizes neurons for increased release and then progresses to consistently elevated circulating levels of glutamate. The CA1 was the only subregion where evoked glutamate release declined with age despite increasing plaque deposition. A reduction of CA1 dendritic architecture was linked to enhanced cellular excitability at an age when plaque deposition was present 26. However, as Aβ42 accumulation progresses, a threshold may be passed where neurons become hypoactive or synapse loss is too pronounced 30, diminishing glutamate release. Further studies are required to understand how hippocampal soluble Aβ42 levels contribute to changes in glutamatergic neurotransmission.
The pattern of amyloid deposition presented here is similar to the Braak neuropathological staging of hippocampal amyloid progression in AD 5,6. This staging also coincides with CA1 neuronal loss occurring before that in other hippocampal subregions 46,47. It is known that CA1 neurons are more vulnerable to global cerebral ischemia 48 and degenerate faster in epileptic patients 49. Selective CA1 neuronal degeneration results from glutamate-mediated excitotoxic mechanisms involving excessive calcium influx through NMDAR activation, mitochondrial dysfunction, and reactive oxygen species. These events culminate in necrotic cell loss that releases more glutamate into the extracellular space, thus propagating damage to surrounding neurons. This process is consistent with our discordant CA1 glutamate observations of increasing basal but decreasing evoked glutamate release with age in AβPP/PS1 mice.
This research builds upon a growing body of literature indicating temporally altered hippocampal glutamatergic signaling during the progression of AD pathology. Our previous studies indicate that hippocampal glutamate levels are still elevated at 12 months of age in male AβPP/PS1 mice 22,23. Others have shown this is not a sex-specific characteristic, since female AβPP/PS1 mice have elevated CA1 dialysate glutamate levels at 7 months of age that also decline by 17 months of age 41. Noninvasive techniques such as glutamate chemical exchange saturation transfer (GluCEST) also indicate a decrease in hippocampal glutamate in 18-20 month-old AβPP/PS1 mice 50. Interestingly, these observations are not specific to the progression of amyloid pathology. Electrochemical studies show that the P301L tau mouse model of AD develops elevated hippocampal glutamate at 5-8 months of age 51,52. A separate tau mouse model, P301S, also has elevated hippocampal glutamate at 3 months of age, which declines by 18-20 months of age 53,54, as measured by GluCEST techniques. While the amyloid and tau pathologies are likely acting through different mechanisms to elicit glutamate release, these studies show a concomitant change in vesicular glutamate transporter 1 that corresponds to the elevated glutamate levels regardless of AD pathology. When considered with the present research, these studies support temporal changes in hippocampal glutamate during AD progression, with elevated levels early in pathology that decline in later disease stages. Understanding these types of regional differences may help to refine staging of severity and progression of AD and tailor appropriate treatment options.
In early AD stages, before overt atrophy, overactivation of the NMDAR is hypothesized to impede detection of physiological signals, leading to the cognitive impairment observed in AD 39,55. Accordingly, meta-analysis shows that memantine, a partial NMDAR antagonist, ameliorates cognitive and functional performance in mild-to-moderate AD patients when administered as monotherapy or in combination with anticholinesterase inhibitors 56. This treatment only delays cognitive decline and does not have disease-modifying benefits 57. Since memantine modulates glutamate signaling rather than attenuating glutamatergic tone, the persistently elevated glutamate levels during AD progression may induce excitotoxic effects that account for the neuronal, cognitive, and functional loss. As such, drugs that attenuate glutamate release or enhance clearance may provide long-term therapeutic benefits if initiated before signs of cognitive impairment.
Conclusion
These data support a growing body of literature indicating hyperactive hippocampal glutamate signaling contributes to AD pathogenesis. The temporal hippocampal glutamate changes observed in this study may serve as a biomarker allowing for time point specific therapeutic interventions that can be tailored for maximal efficacy. Simultaneously monitoring changes in hippocampal glutamate with plaque and tangle pathology may further refine stages of AD progression.
Morris water maze training and probe challenge. The MWM was used to assess spatial learning and memory recall. Mice were trained to utilize visual cues placed around the room to repeatedly swim to a static, hidden escape platform (submerged 1 cm below the opaque water surface) regardless of starting quadrant 23,58.
The MWM paradigm consisted of 5 consecutive training days with three 90 s trials per day and a minimum inter-trial interval of 20 min. The starting quadrant was randomized for each trial. After two days without testing, the escape platform was removed and all mice entered the pool of water from the same starting position for a single 60 s probe challenge to test long-term memory recall. The ANY-maze video tracking system (Stoelting Co., Wood Dale, IL; RRID:SCR_014289) was used to record mouse navigation during the training and probe challenge. The three trials for each training day were averaged for each mouse.
Enzyme-based microelectrode arrays. Enzyme-based MEAs with platinum (Pt) recording surfaces (Fig. 6A) were fabricated, assembled, coated (Fig. 6B), and calibrated for in vivo mouse glutamate measurements as previously described [59][60][61]. One microliter of glutamate oxidase stock solution (1 U/µl) was added to 9 µl of a 1.0% BSA and 0.125% glutaraldehyde w/v solution and applied dropwise to a Pt recording surface. This preparation aids enzyme adhesion to the Pt recording surface for enzymatic degradation of glutamate to α-ketoglutarate and H2O2, the electroactive reporter molecule. The other Pt recording site (self-referencing or sentinel site) was coated with the BSA/glutaraldehyde solution alone, which is unable to enzymatically generate H2O2 from l-glutamate. A potential of +0.7 V vs a Ag/AgCl reference electrode was applied to the Pt recording surfaces, resulting in oxidation of H2O2. While +0.7 V is capable of oxidizing potential interferents, such as ascorbic acid (AA) and dopamine (DA), lower potentials are unable to adequately oxidize H2O2 and subsequently detect glutamate 62.
Microelectrode array/micropipette assembly. Glass micropipettes (1.0 mm outer diameter, 0.58 mm internal diameter; World Precision Instruments, Inc., Sarasota, FL) were pulled using a vertical micropipette puller (Sutter Instrument Co., Novato, CA). The tip was "bumped" to create an internal diameter of 12-15 µm. The micropipette tip was positioned between the pair of recording sites and mounted 100 µm above the MEA surface (Fig. 6D). One end of a stripped silver wire was soldered to a gold-plated connector (Newark element14, Chicago, IL). The other stripped end (cathode) was placed into a 1 M HCl bath saturated with NaCl that also contained a stainless steel counter wire (anode). Passing +9 V DC to the cathode versus the anode for 15 min deposits AgCl onto the stripped wire, creating the Ag/AgCl reference electrode.
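The self-referencing subtraction described above (enzyme-coated site minus sentinel site) removes currents common to both recording surfaces and leaves the H2O2 signal generated from glutamate. A brief Python sketch of the idea, with entirely hypothetical traces and calibration slope (the actual recordings used the FAST16mkIII system):

import numpy as np

FS = 4.0  # amperometric sampling rate used in the study, Hz
t = np.arange(0, 120, 1 / FS)

rng = np.random.default_rng(0)
drift = 0.01 * t                       # slow drift common to both sites
glu_site = 50 + drift + 8 * np.exp(-(t - 60) ** 2 / 20) + rng.normal(0, 0.2, t.size)
sentinel = 50 + drift + rng.normal(0, 0.2, t.size)

# Self-referencing: the difference isolates the glutamate-derived H2O2 current.
diff_pA = glu_site - sentinel

# Convert to concentration with an in vitro calibration slope (hypothetical value).
SLOPE_PA_PER_UM = 7.5
glutamate_uM = diff_pA / SLOPE_PA_PER_UM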
In vivo anesthetized recordings. One week after MWM, mice were anesthetized using 1.5-2.0% isoflurane (Abbott Lab, North Chicago, IL) in a calibrated vaporizer (Vaporizer Sales & Service, Inc., Rockmart, GA) 63. The mouse was placed in a stereotaxic frame fitted with an anesthesia mask (Fig. 6C; David Kopf Instruments) 64. Recordings were conducted using a two-electrode system whereby a Ag/AgCl reference wire was positioned beneath the skull, rostral to the craniotomy, and a working electrode was positioned in one of the hippocampal subregions (Fig. 6D). Constant voltage amperometry (4 Hz) was performed with a potential of +0.7 V vs the Ag/AgCl reference electrode applied by the FAST16mkIII. MEAs were allowed to reach a stable baseline over 60 min before a 10 s basal glutamate determination and pressure ejection studies commenced. Once five reproducible signals were evoked, the MEA was repositioned into a new hippocampal subfield; the order of subfields was randomized for each mouse. The FAST software saved amperometric data, time, and pressure ejection events. Calibration data, in conjunction with a MATLAB (MathWorks, Natick, MA; RRID:SCR_001622) graphic user interface program, were used to calculate basal, stimulus-evoked, and clearance of extracellular glutamate. The evoked glutamate signals in each hippocampal subfield were averaged into a representative signal for comparison.
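How basal levels, evoked amplitude and clearance might be extracted from a calibrated 4 Hz trace is sketched below. This is our illustration of the kind of parameters computed, not the authors' MATLAB code; the T80 clearance measure and all names are assumptions:

import numpy as np

FS = 4.0  # Hz

def glutamate_metrics(conc_uM, t_eject_s, baseline_s=10.0):
    """Return basal level, evoked amplitude and T80 clearance time (s)."""
    basal = conc_uM[: int(baseline_s * FS)].mean()      # 10 s basal determination
    i0 = int(t_eject_s * FS)                            # pressure-ejection sample
    i_peak = i0 + int(np.argmax(conc_uM[i0:]))
    amplitude = conc_uM[i_peak] - basal                 # stimulus-evoked release
    # T80: time for the signal to decay 80% of the way back toward baseline.
    target = conc_uM[i_peak] - 0.8 * amplitude
    below = np.nonzero(conc_uM[i_peak:] <= target)[0]
    t80 = below[0] / FS if below.size else float("nan")
    return basal, amplitude, t80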
Amyloid plaque staining and semi-quantification. Sections were prepared, stained and quantified as previously described 22,23. After electrochemistry, brains were removed and post-fixed in 4% paraformaldehyde for 48 h and then transferred into 30% sucrose in 0.1 M phosphate buffer for at least 24 h prior to sectioning. Twenty µm coronal sections through the hippocampus were obtained using a cryostat (Model HM525 NX, Thermo Fisher Scientific). Mounted sections were treated with 10% H2O2 in 20% methanol for 10 min, transferred to a 70% ethanol solution for 5 min, and then washed with PBS for 2 min. Sections were incubated for 10 min in Amylo-Glo RTD (1:100; Biosensis, Temecula, CA), submerged in physiological saline for 5 min, and rinsed three times in separate PBS solutions for 2 min. Sections were coverslipped using Fluoromount-G (SouthernBiotech; Birmingham, AL). Staining intensity was controlled for by imaging all sections the next day. Images were captured with an Olympus IX71 microscope equipped with an Olympus DP73 video camera system and a Dell Optiplex 7020 computer. National Institutes of Health ImageJ software (v. 1.48; RRID:SCR_003070) was used to measure relative staining density on a 0-256 gray scale. Staining density was obtained by subtracting background staining from mean staining intensities on every sixth section through the hippocampus. Individual templates for the DG, CA3, and CA1 were created and used on all brains similarly. Measurements were performed blinded, and approximately four sections were averaged to obtain one value per subject. Amyloid plaques were identified by a dense spherical core of intense staining, often surrounded by a less compact spherical halo.
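The background-subtracted density measurement mirrors a simple array computation; the Python sketch below stands in for the described ImageJ procedure, with the DG/CA3/CA1 templates represented as hypothetical boolean masks:

import numpy as np

def staining_density(img, roi_mask, bg_mask):
    # Mean intensity in the subregion template minus background, 0-256 gray scale.
    return float(img[roi_mask].mean() - img[bg_mask].mean())

def subject_density(sections, roi_masks, bg_masks):
    # Average roughly four sections to obtain one value per subject.
    vals = [staining_density(s, r, b) for s, r, b in zip(sections, roi_masks, bg_masks)]
    return float(np.mean(vals))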
Data analysis. Prism (GraphPad Prism 8 Software, Inc., La Jolla, CA; RRID:SCR_002798) software was used for statistical analyses. For glutamate measurements and amyloid plaque staining, hippocampal subregions were examined independently. For statistical analysis, genotypes were compared within age groups and all tests are listed in the figure legends. Outliers were identified with a single Grubbs' test (alpha = 0.05) per group. Data are represented as mean ± SEM and statistical significance was defined as p < 0.05. | 2020-09-03T09:11:33.081Z | 2020-09-02T00:00:00.000 | {
"year": 2020,
"sha1": "84091a9e96aa6cc98adf70a9638b22c1bb9596b5",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-71587-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a0c9d6ddddd3516469200be7d38e3b6d0740250",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
119130546 | pes2o/s2orc | v3-fos-license | Density and tails of unimodal convolution semigroups
We give sharp bounds for the isotropic unimodal probability convolution semigroups when their Lévy-Khintchine exponent has Matuszewska indices strictly between 0 and 2.
Introduction
Estimating Markovian semigroups is important from the point of view of theory and applications, because they describe evolution phenomena and underwrite various forms of calculus. Diffusion semigroups traditionally receive most attention [24], but considerable progress has also been made in studies of transition densities of rather general jump-type Markov processes. Such studies are usually based on assumptions concerning the profile of the jump or Lévy kernel (measure) at the diagonal (origin) and at infinity [11,9]. As a rule, the assumptions can be viewed as approximate or weak scaling conditions for the Lévy density, to which some structure conditions may be added, see [11, (1.9)-(1.14) and Theorem 1.2]. Typical results consist of sharp two-sided estimates of the heat kernel for small and/or large times.
Transition semigroups of Lévy processes allow for a deeper insight and a more direct approach from several directions thanks to their convolutional structure and the available Fourier techniques. For instance, upper bounds for transition densities of isotropic Lévy densities with relatively fast decay at infinity are obtained in [20,21] by using Fourier inversion, complex integration, saddle point approximation or the Davies' method. In this work we study the one-dimensional distributions p_t(dx) of rather general isotropic unimodal Lévy processes X = (X_t, t ≥ 0) in R^d [26]. We focus on pure-jump isotropic unimodal Lévy processes. Thus, X is a càdlàg stochastic process with distribution P, such that X(0) = 0 almost surely, the increments of X are independent with rotation invariant and radially nonincreasing density function p_t(x) on R^d \ {0}, and the following Lévy-Khintchine formula holds for ξ ∈ R^d,

(1) E e^{i⟨ξ, X_t⟩} = e^{−tψ(ξ)}, where ψ(ξ) = ∫_{R^d} (1 − cos⟨ξ, x⟩) ν(dx).

Here and below ν is an isotropic unimodal Lévy measure and E is the integration with respect to P. Further notions and definitions are given in Sections 2 and 3 below. Put differently, we study the vaguely continuous spherically isotropic unimodal convolution semigroups (p_t, t ≥ 0) of probability measures on R^d with purely nonlocal generators. (In this work we never use probabilistic techniques beyond the level of one-dimensional distributions of X.) Our main results provide estimates for the tails of p_t and its density function p_t(x), expressed in terms of the Lévy-Khintchine exponent ψ. We also use ψ to estimate the density function ν(x) of the Lévy measure ν. Since ψ is radially almost increasing, it is comparable with its radial nondecreasing majorant ψ*, and as a rule we employ ψ* in statements and proofs. The extensive use of ψ (ψ*) rather than ν is a characteristic feature of our development and may be considered natural from the point of view of pseudo-differential operators and spectral theory [16]. As usual, the asymptotics of ψ at infinity translates into the asymptotics of ν and p_t at the origin. Our estimates may be summarized as follows (see Theorem 21, Corollary 23 and (24) for detailed statements):

(2) p_t(x) ≈ min{ [ψ⁻(1/t)]^d , t ψ*(|x|⁻¹)/|x|^d }.

Here ≈ means that both sides are comparable, i.e. their ratio is bounded between two positive constants; ψ is assumed to satisfy the so-called weak upper and lower scalings of order strictly between 0 and 2, and ψ⁻ is the generalized inverse of ψ*. The bound (2) holds locally in time and space, or even globally if the scalings are global. We note that the corresponding estimates of ν, to wit,

(3) ν(x) ≈ ψ*(|x|⁻¹)/|x|^d,

are simply obtained as a consequence of (2) for small time, and not as an element of its proof (see Corollary 23). It is common for ν to share the asymptotics with p_t because ν = lim_{t→0+} p_t/t, a vague limit on R^d \ {0}. It is also a manifestation of the general rule mentioned above that ψ*(|x|⁻¹) in our estimates reflects the properties of p_t(x) and ν(x). The denominator |x|^d in the estimates comes from the homogeneity of the volume measure in R^d (see the proof of Corollary 23), and [ψ⁻(1/t)]^d approximates p_t(0), as follows from a change of variables in Fourier inversion (cf. (26), Lemma 16 and Lemma 17). All these reasons make the above bounds most natural. Therefore below we shall refer to (two-sided and one-sided) estimates similar to (2) and (3) as common bounds. We note that the common upper bounds fail for some unimodal Lévy processes, e.g. for the Brownian motion subordinated by an independent gamma subordinator.
As we see from [6, Section 5.3.4], such processes require a specialized approach, and their transition density and Lévy-Khintchine exponent do not easily explain each other. Bochner's procedure of subordination is strongly rooted in semigroup and operator theory, harmonic analysis and probability [32,27,26]. In the present setting it yields a wide array of asymptotics of ψ, p_t and ν, which explains intense current developments. In particular, common bounds were recently obtained for a class of subordinate Brownian motions, mainly for the complete subordinate Brownian motions defined by a delicate structure condition [26]. Highly sophisticated current techniques and results in this direction are presented in [18], see also [6,26]. Our approach is, however, more general and synthetic; we demonstrate that the sources of the asymptotics of (unimodal) p_t(x) are merely its radial monotonicity and the scalings of ψ, rather than further structure properties of ψ.
We illustrate our results with several classes of relevant examples. These include situations where former methods cannot be easily applied and the present method works well. In this connection we note that ψ is an integral quantity and may exhibit less variability than ν; in particular the scaling properties of ψ are more easily manageable. Many of our examples are in fact subordinate Brownian motions, and then ψ(ξ) = φ(|ξ|²), where φ is the Laplace exponent of the subordinator, i.e. a Bernstein function. There is by now an impressive pool of Bernstein functions studied in the literature, with distinct asymptotics at infinity. For instance the monograph [26] gives well over one hundred cases and classes of Bernstein functions in its closing list of examples. Many of these functions have the scaling properties used in our paper. This immediately yields sharp estimates of the Lévy measure and transition density of the subordinate Brownian motions corresponding to such subordinators. In comparison, former methods require one to first find estimates of the Lévy measure of the subordinator (this is where the completeness of the subordinator plays a role), then to estimate the Lévy density of the corresponding subordinate Brownian motion and then, finally, to estimate its semigroup [10,18,6].
Our results and arguments are purely real-analytic. We circumvent those additional steps and directly estimate the heat kernel by arguments not unrelated to integration by parts. In particular, for subordinate Brownian motions with scaling, we relax the usual completeness assumption on the subordinator. Noteworthy, while on one hand the completeness need not be used when scaling is present, on the other hand the complete subordinate Brownian motions exhibit all the types of asymptotics of ψ, p t and ν of general unimodal Lévy processes with scaling. This is in particular manifested in Corollary 27.
We remark that analogues of the on-diagonal term (2) are often obtained for more general Markov processes via Nash and Sobolev inequalities [3,8,2,29,21]. For our unimodal Lévy processes we instead use Fourier inversion and (weak) lower scaling, see Proposition 19. Also our approach to the off-diagonal term tψ * (|x| −1 )/|x| d is very different and much simpler than the arguments leading to the upper bounds in the otherwise more general Davies' method [8], [12,Section 3]. Our common upper bounds are straightforward consequences of a specific quadratic parametrization of the tail function, which is crucial in applying the techniques of Laplace transform. The common lower bounds are harder, and they are intrinsically related to upper and lower scalings via certain differential inequalities in the proof of Theorem 26.
The (local or global) comparability of the common lower and upper bounds is a remarkable feature of the class of semigroups captured by Theorem 26. We expect further applications of the estimates. For instance, under (weak) global scalings we obtain important metric-type [17] global comparisons p_2t(x) ≈ p_t(x) and p_t(2x) ≈ p_t(x), given in Corollary 24 below. These should matter in perturbation theory of Lévy generators and in nonlinear partial integro-differential equations. Since uniform estimates are important in some applications, in Corollary 24 and elsewhere in the paper the comparability constants are shown to depend in a rather explicit way on specific properties of the semigroups, chiefly on scaling. For instance, for the isotropic α-stable Lévy semigroup in R^d with 0 < α < 2 [6], we have ψ(ξ) = |ξ|^α, and we arrive at

(4) p_t(x) ≈ min{ t^{−d/α} , t/|x|^{d+α} },

with explicit constants given by (17), (26) and (29) below. Noteworthy, our (weak) scaling conditions imply majorization and minorization of ψ at infinity by power functions with exponents strictly between 0 and 2, but do not require its comparability with a power function, see examples in Section 4.1. Furthermore, the exponents in the assumed lower and upper scalings only affect the comparison constants in the common bounds, but not the rate of asymptotics, which is solely determined by ψ.
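As a quick sanity check of how the common bound specializes, the stable case can be verified directly (a routine computation spelled out here for the reader's convenience; it is not quoted from the paper):

\[
\psi(\xi)=|\xi|^{\alpha}\ \Longrightarrow\ \psi^{*}(u)=u^{\alpha},\qquad
\psi^{-}(u)=\inf\{s\ge 0:\ s^{\alpha}\ge u\}=u^{1/\alpha},
\]
so that
\[
\min\Big\{\big[\psi^{-}(1/t)\big]^{d},\ \frac{t\,\psi^{*}(|x|^{-1})}{|x|^{d}}\Big\}
=\min\Big\{t^{-d/\alpha},\ \frac{t}{|x|^{d+\alpha}}\Big\},
\]
and the two expressions agree precisely when $|x| = t^{1/\alpha}$, the space-time scaling of the stable semigroup.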
For convolution semigroups of probability measures more general than unimodal, the structure of the support and the regularity of the Lévy measure plays a crucial role. In particular the directions which are not charged by the Lévy measure see in general lighter asymptotics of p t (dx) [13,31]. In consequence, the estimates of severely anisotropic convolution semigroups require completely different assumptions, description and methods. Our experience indicates that ν surpasses ψ in such cases. Estimates and references to anisotropic ν with prescribed radial decay and rough spherical marginals may be found in [28] (see also [7,31] for more details in the case of homogeneous anisotropic ν).
The structure of the paper is as follows. In Section 2 we discuss first consequences of isotropy and radial monotonicity. In particular we compare ψ with the Pruitt-type function h₁(r) = ∫_{R^d} (1 ∧ |x₁|²/r²) ν(dx), and we estimate from above the tail function f_t(ρ) = P(|X_t| ≥ √ρ) by using the Laplace transform and ψ. This and radial monotonicity quickly lead to upper bounds for p_t(x) and ν(x). In Section 3 we discuss almost monotone and general weakly scaling functions. In Section 4 we specialize to scalings with lower and upper exponents strictly between 0 and 2, and we give examples of ψ with such scaling. To obtain lower bounds for p_t(x) and ν(x), in Lemma 13 we recall an observation due to M. Zähle, which is then used in Lemma 14 to reverse the comparison between ψ and the tail function f_t. The generalized inverse ψ⁻ plays a role through a change of variables in the Fourier inversion formula for p_t(0) in Lemma 16 and through the equivalence relation defining "small times" (stated as Lemma 17). In Theorem 21 we combine all the threads to estimate p_t, as summarized by (2). In Corollary 23 we obtain (3) as a simple consequence of Theorem 21. To close the circle of ideas, in Theorem 26 we show the equivalence of the weak scalings with the common form of bounds for p_t and ν. In Proposition 28 we state a connection between ν and ψ for a class of approximately isotropic Lévy densities.
Unimodality
We shall often use the gamma function and the (upper) incomplete gamma function:

Γ(s) = ∫₀^∞ t^{s−1} e^{−t} dt and Γ(s, u) = ∫_u^∞ t^{s−1} e^{−t} dt, s, u > 0.

Let R^d be the Euclidean space of (arbitrary) dimension d ∈ N. For x ∈ R^d and r > 0 we let B(x, r) = {y ∈ R^d : |x − y| < r} and B_r = B(0, r). We denote by ω_d = 2π^{d/2}/Γ(d/2) the surface measure of the unit sphere in R^d. All sets, functions and measures considered below are (assumed) Borel. A (Borel) measure on R^d is called isotropic unimodal, in short: unimodal, if on R^d \ {0} it is absolutely continuous with respect to the Lebesgue measure and has a finite radial nonincreasing density function. Such measures may have an atom at the origin: they are of the form aδ₀(dx) + f(x)dx, where a ≥ 0, δ₀ is the Dirac measure, and µ({x : |x| > ε}) < ∞ for all ε > 0. A Lévy process X = (X_t, t ≥ 0) is called isotropic unimodal (in short, unimodal) if all of its one-dimensional distributions (transition densities) p_t(dx) are such. Recall that a Lévy measure is any measure ν concentrated on R^d \ {0} with ∫_{R^d} (1 ∧ |x|²) ν(dx) < ∞. Unimodal pure-jump Lévy processes are characterized in [30] by unimodal Lévy measures. Unless explicitly stated otherwise, in what follows we assume that X is a pure-jump unimodal Lévy process in R^d with (unimodal) nonzero Lévy measure (density) ν.
Each measure p_t is the weak limit of

exp(t ν_ε) := e^{−t ν_ε(R^d)} Σ_{n=0}^∞ t^n ν_ε^{*n}/n!,

where ε → 0+, ν_ε(dx) := 1_{{|x|>ε}} ν(dx), and ν_ε^{*n} denotes the n-fold convolution of ν_ε. For r > 0 we define, after [23],

L(r) = ν(B_r^c) and h(r) = ∫_{R^d} (1 ∧ |x|²/r²) ν(dx).

Clearly, 0 ≤ L(r) < h(r) < ∞, L is nonincreasing and h is decreasing. The strict monotonicity and positivity of h follow since ν ≠ 0 is nonincreasing, hence positive near the origin. The first coordinate process X¹_t of X_t is unimodal in R. The corresponding quantities L₁(r) and h₁(r) are given by the (pushforward) Lévy measure ν₁ = ν ∘ π₁⁻¹, where π₁ is the projection π₁(x) = x₁ [25, Proposition 11.10]. With a typical abuse of notation we let ν₁(y) denote the (symmetric and nonincreasing on (0, ∞)) density function of ν₁. In fact, (7) is valid more generally: for all rotation invariant Lévy measures.
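For orientation, L and h can be computed explicitly in the model case of the isotropic α-stable Lévy measure ν(dx) = c|x|^{−d−α} dx, 0 < α < 2 (a standard calculation included purely as an illustration; the constant c is arbitrary here):

\[
L(r)=\nu(B_{r}^{c})=c\,\omega_{d}\int_{r}^{\infty}s^{-1-\alpha}\,ds=\frac{c\,\omega_{d}}{\alpha}\,r^{-\alpha},
\]
\[
h(r)=c\,\omega_{d}\Big(\frac{1}{r^{2}}\int_{0}^{r}s^{1-\alpha}\,ds+\int_{r}^{\infty}s^{-1-\alpha}\,ds\Big)
=\frac{2\,c\,\omega_{d}}{\alpha(2-\alpha)}\,r^{-\alpha}.
\]
In particular $0\le L(r)<h(r)<\infty$ and both functions scale like $r^{-\alpha}$, in agreement with $\psi(u)\approx u^{\alpha}$.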
Since ψ is a radial function, we shall often write ψ(u) = ψ(ξ), where ξ ∈ R^d and u = |ξ| ≥ 0. We obtain the same function for X¹_t. Clearly, ψ(0) = 0 and, as before for h, ψ(u) > 0 for u > 0. We now show how to use h₁ to estimate the Lévy-Khintchine exponent ψ of X.
We define the maximal characteristic function ψ*(u) := sup_{s ≤ u} ψ(s), where u ≥ 0. The following result is a version of [14, Proposition 1].
Proof. Since h₁ is nonincreasing, the assertion follows by Lemma 1 for u ≥ 0.
We write f(x) ≈ g(x) and say f and g are comparable if f, g ≥ 0 and there is a positive number C, called the comparability constant, such that C⁻¹ f(x) ≤ g(x) ≤ C f(x) for all x. We write C = C(a, . . . , z) to indicate that C may be so chosen to depend only on a, . . . , z. We say the comparison is absolute if the constant is absolute. Noteworthy, while ψ is comparable to a nondecreasing function, it need not be nondecreasing itself. For instance, if ψ(u) = u + 3π[1 − (sin u)/u], then ψ′(2π) = −1/2 < 0. The following conclusion may be interpreted as a relation of "scale" and "frequency".
Proof. The constant in the leftmost comparison depends only on the dimension, see (7). The other comparisons are absolute, by Lemma 1 and Proposition 2.
By Corollary 3 and the definitions of L₁, L and h, we obtain the following inequality. Our main goal is to describe the asymptotics of ν(x) and p_t(x) in terms of ψ*. We start with an analysis of the Laplace transform of the integral tails of p_t. For reasons which shall become clear in the proof of the next result, we choose the following parametrization of the tails:

f_t(ρ) = P(|X_t| ≥ √ρ), ρ ≥ 0.

We consider the Laplace transform of f_t,

L f_t(λ) = ∫₀^∞ e^{−λρ} f_t(ρ) dρ, λ > 0.

By this, (12) and a change of variables we obtain the desired estimate. (The estimate may usually be improved for specific ψ.) On the other hand, if |x| ≥ 1, then ψ(x√λ) ≥ ψ*(|x|√λ)/π² ≥ ψ*(√λ)/π² by (10). Thus, using (14) and the upper incomplete gamma function Γ(·, ·), we conclude.
The upper bounds for tails shall follow from this auxiliary lemma.
The following estimate results from (15) and Lemma 5 with n = m = 0.
Here is a general upper bound for the density of a unimodal Lévy process. (As we shall see in Theorems 21 and 26, a reverse inequality often holds, too.)
Proof. By the radial monotonicity of y → p_t(y), (13) and Corollary 6, the asserted bound follows. Tracking constants, e.g., for the isotropic α-stable Lévy process addressed in (4), we get an explicit constant. (In fact, for this constant we override (13) by ψ(su) = s^α ψ(u) in the proof of Lemma 4.)
Weak scaling and monotonicity
For the reader's convenience we shall give a short survey of almost monotone and weakly scaling functions. The results may be considered folklore [5,18], but the actual variants which we need may be difficult to find.
We easily check that φ* is nondecreasing, φ ≤ φ*, and the following result holds.
Lemma 8. φ is almost increasing with oscillation factor c if and only if cφ* ≤ φ.
We easily check that φ_* is nonincreasing, φ ≤ φ_*, and the following result holds.
Lemma 9. φ is almost decreasing with oscillation factor C if and only if φ_* ≤ Cφ.
We note that φ is almost increasing on I with factor c if and only if 1/φ is almost decreasing on I with factor 1/c. Here is another simple observation, which we give without proof.
The following clarification is an analogue of [5, Theorem 2.2.2].
As the thresholds θ in the lower and upper scaling conditions decrease, the scaling conditions tighten. Here is a loosening observation.
Proof. In view of Lemma 10 and Lemma 11, it is enough to study the corresponding monotone majorants and minorants. This gives the first implication, and the second follows similarly.
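For concreteness, before turning to examples we record the weak scaling conditions in the standard form used in this literature (a paraphrase of the usual convention, with underlined and overlined parameters for the lower and upper conditions, rather than a verbatim quotation from this text):

\[
\varphi\in\mathrm{WLSC}(\underline{\alpha},\underline{\theta},\underline{c})
\iff
\varphi(\lambda u)\ \ge\ \underline{c}\,\lambda^{\underline{\alpha}}\,\varphi(u)
\quad\text{for }\lambda\ge 1,\ u>\underline{\theta},
\]
\[
\varphi\in\mathrm{WUSC}(\overline{\alpha},\overline{\theta},\overline{C})
\iff
\varphi(\lambda u)\ \le\ \overline{C}\,\lambda^{\overline{\alpha}}\,\varphi(u)
\quad\text{for }\lambda\ge 1,\ u>\overline{\theta}.
\]
The scaling is called global when the threshold $\underline{\theta}$ (resp. $\overline{\theta}$) equals $0$.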
Examples
The Lévy-Khintchine (characteristic) exponents of unimodal convolution semigroups which we present in this section all have lower or upper scaling suggested by (20). This can be verified in each case by using Lemma 11. While discussing the exponents, we shall also make a connection to subordinators, special Bernstein functions and complete Bernstein functions, because they are intensively used in recent studies of subordinate Brownian motions, a wide and diverse family of unimodal Lévy processes, cf. [18]. The reader may find definitions and comprehensive information on these functions in [26]. When discussing subordinators we usually let ϕ(λ) denote their Laplace exponent, and then ψ(x) = ϕ(|x|²) is the Lévy-Khintchine exponent of the corresponding subordinate Brownian motion. We focus on scaling properties of ψ.
5. Let X be pure-jump unimodal with infinite Lévy measure and Lévy-Khintchine exponent ψ. Let a scaling condition with exponent α or ᾱ hold for ψ. For fixed r > 0, we let X^r be the (truncated) unimodal Lévy process obtained by multiplying the Lévy measure of X by the indicator function of the ball B_r, and let ψ_r be its Lévy-Khintchine exponent. Since 0 ≤ ψ − ψ_r is bounded, ψ_r is comparable with ψ at infinity, and so ψ_r has (local) scaling with the same exponent as ψ. For later discussion we observe that ψ_r is not an exponent of a subordinate Brownian motion because the support of its Lévy measure is bounded [26, Proposition 10.16].
Estimates
The following estimate is a version of [33, Theorem 7 (ii) (b)] with explicit constants.
Corollary 15. If ψ satisfies WUSC(ᾱ, θ̄, C) and a is from Lemma 14, then the stated bound holds. We recall that a reverse inequality is valid more universally, cf. (11). In view of (18) and (20), the lower weak scaling implies power-type asymptotic growth of ψ, and so we can use Fourier inversion to estimate p_t. Here is a preparation.
Since S(u, ρ) is decreasing in u, we also have S(u, ρ) ≤ S(1, ρ) for u ≥ 1; thus the proof is complete. Below we shall often consider the (unbounded) characteristic exponent ψ of a unimodal Lévy process with infinite Lévy measure and its (comparable) maximal function ψ*, and denote ψ⁻(u) = inf{s ≥ 0 : ψ*(s) ≥ u}. This short notation is motivated by the following equality: inf{s ≥ 0 : ψ(s) ≥ u} = inf{s ≥ 0 : ψ*(s) ≥ u}, 0 ≤ u < ∞.
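To make the definitions of ψ* and ψ⁻ concrete, here is a small grid-based numerical sketch (approximate; numpy assumed), again using the non-monotone example ψ(u) = u + 3π[1 − (sin u)/u]: a running maximum implements ψ*(s) = sup_{r ≤ s} ψ(r), and a search in the resulting nondecreasing array gives ψ⁻(u) = inf{s ≥ 0 : ψ*(s) ≥ u}.

# Grid approximation of psi* and of the generalized inverse psi^-.
import numpy as np

def psi(u):
    return u + 3 * np.pi * (1 - np.sinc(u / np.pi))  # np.sinc(x) = sin(pi*x)/(pi*x)

s = np.linspace(1e-9, 50.0, 200_000)
psi_star = np.maximum.accumulate(psi(s))   # nondecreasing running sup

def psi_inv(u):
    # smallest grid point s with psi*(s) >= u
    return s[np.searchsorted(psi_star, u)]

for u in (1.0, 5.0, 20.0):
    print(u, psi_inv(u))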
Under the assumptions of Proposition 19, by (22) and (23) we obtain a two-sided comparison of p_t(0) with [ψ⁻(1/t)]^d; in particular, we may interchange p_t(0) and [ψ⁻(1/t)]^d in formulas. Also, if α > 0 and ψ ∈ WLSC(α, 0, c), then for t > 0, x ∈ R^d, the corresponding bound holds. We note in passing that the same argument covers the Gaussian case ψ(ξ) = |ξ|² and more general exponents otherwise excluded from our general considerations. We also note that analogues of (25) are often obtained by using Nash inequalities [3,8,2]. For the α-stable process addressed in (4), by using Lemma 16 directly with K = 1 we get a constant which is not too far off from the exact estimate obtained by direct integration. Note that in this case ψ⁻(u) = u^{1/α}. The above change of variables tr^α = s nicely counterpoints the proof of Lemma 16 and the role of the quantity tψ*(1/|x|) appearing, e.g., in Lemma 17.
Thus, (25) holds for all t > 0, even if θ > 0, but the constant deteriorates for large t.
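As an illustration of interchanging p_t(0) and [ψ⁻(1/t)]^d: for the one-dimensional Cauchy process (ψ(u) = |u|, d = 1), Fourier inversion gives p_t(0) = (2π)⁻¹ ∫ e^{−t|ξ|} dξ = 1/(πt), while ψ⁻(1/t) = 1/t, so the two quantities are comparable with absolute constant 1/π. A numerical check (numpy assumed; plain Riemann sum):

# p_t(0) for the 1-d Cauchy process by numerical Fourier inversion,
# compared with [psi^-(1/t)]^d = 1/t (comparability constant 1/pi).
import numpy as np

t = 0.7
xi = np.linspace(-200.0, 200.0, 2_000_001)
dx = xi[1] - xi[0]
p_t0 = np.exp(-t * np.abs(xi)).sum() * dx / (2 * np.pi)
print(p_t0, 1 / (np.pi * t), 1 / t)   # ~0.4547, 0.4547, 1.4286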
The following main result of our paper gives common bounds for unimodal convolution semigroups with scaling. Notably, our second main result, Theorem 26 below, shows in addition that scaling is equivalent to common bounds.
Tracking constants for the isotropic α-stable Lévy process addressed in (4), we obtain explicit bounds. We now list a number of general consequences of Theorem 21. We first complement (16) by a similar lower bound resulting from Theorem 21.
By scaling, in particular by Lemma 18, we obtain the following important doubling property, cf. [17] in this connection.
Corollary 24. If ψ satisfies (global) WLSC(α, 0, c) and WUSC(ᾱ, 0, C), then the corresponding two-sided bounds hold. Thus, if θ = 0 in Theorem 21, then the global asymptotics of p_t(x) are fully and conveniently reflected by ψ. If θ > 0, then our bounds are only guaranteed to hold in bounded time and space (bounded time for the upper bound). For large times we merely offer the following simple exercise of monotonicity.
The next theorem shows that our definitions indeed capture the subject of our study.
Theorem 26. Let X t be an isotropic unimodal Lévy process in R d with transition density p, Lévy-Khintchine exponent ψ and Lévy measure density ν. The following are equivalent: (i) WLSC and WUSC [global WLSC and WUSC ] hold for ψ.
Corollary 27. If the characteristic exponent of a unimodal (isotropic) Lévy process X satisfies global WLSC and WUSC (with exponents 0 < α ≤ ᾱ < 2), then there is a complete subordinate Brownian motion with comparable characteristic exponent, Lévy measure and transition density.
To indicate an application of Corollary 27, we remark that the boundary Harnack principle proved in [19] for complete subordinate Brownian motions under global scaling conditions should now extend to general subordinate Brownian motions with global scaling.
We see from Corollary 27 that the asymptotics of the characteristic exponent, Lévy measure and transition density observed for complete subordinate Brownian motions are representative among all unimodal Lévy processes with lower and upper global scaling.
We close our discussion with a related result, which relates the asymptotics of the Lévy density (at zero) to those of the Lévy-Khintchine exponent (at infinity) under approximate unimodality and weak local scaling conditions. We hope the result will help extend the common bounds.
To clarify, the semigroup of the process X in Proposition 28 is not necessarily unimodal, hence its estimates by f call for other methods, e.g. those based on ν, mentioned in Section 1. | 2014-01-20T16:52:56.000Z | 2013-05-05T00:00:00.000 | {
"year": 2013,
"sha1": "68df5a4454ae3b1aeae070b426d72d9cf4dd7a11",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1016/j.jfa.2014.01.007",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "68df5a4454ae3b1aeae070b426d72d9cf4dd7a11",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257697617 | pes2o/s2orc | v3-fos-license | Hybridization of papain molecules and DNA-wrapped single-walled carbon nanotubes evaluated by atomic force microscopy in fluids
Although various conjugates of single-walled carbon nanotubes (SWNTs) and biomolecules, such as nanobiosensors and nanobiodevices, have been reported, the conjugation of papain and SWNTs has not been reported because of the formation of unexpected aggregates. In this study, atomic force microscopy (AFM) in liquid was used to investigate the interactions between papain and DNA-wrapped SWNTs (DNA–SWNTs) at two different pH values (pH 3.0 and 10.5). The direct AFM observation of the mixture of papain and DNA–SWNTs confirmed the aggregation of papain molecules with DNA–SWNTs in the buffer solutions. The numerous and non-uniform adsorption of papain molecules onto DNA–SWNTs was more pronounced at pH 3.0 than at pH 10.5. Furthermore, thick conjugates appeared when papain and DNA–SWNTs were simultaneously mixed. The near-infrared photoluminescence spectra of the SWNTs drastically changed when the papain molecules were injected into the DNA–SWNT suspension at pH 3.0. Thus, the regulation of electrostatic interactions is a key aspect in preparing optimal conjugates of papain and DNA–SWNTs. Furthermore, although previous papers reported AFM images of dried samples, this study demonstrates the potential of AFM in liquid in evaluating individual bioconjugates of SWNTs.
Hybridization of papain molecules and DNA-wrapped single-walled carbon nanotubes evaluated by atomic force microscopy in fluids Masaki Kitamura * & Kazuo Umemura
Atomic force microscopy (AFM) is a unique tool for observing biomolecules and bioconjugates in aqueous solutions [1][2][3][4] . As most biomolecules deform when they are dried, their native structures or bioconjugates are difficult to observe by AFM. Thus, AFM observation in fluids has been recognized as an attractive approach for analyzing native structures and molecular interactions. For example, the structures of DNA molecules in dried and wet forms were compared by AFM in air and liquids [5][6][7] . The diameters of DNA molecules dramatically decrease when DNA is dried on a mica surface [8][9][10] . Moreover, because proteins are usually utilized in liquid, investigations in liquid have advanced understanding of their native behavior and potential applications [11][12][13][14] . Radmacher et al. directly observed the enzyme reactions in liquids using AFM 12 . These approaches have been further developed using high-speed AFM systems [15][16][17][18][19] .
An effective approach for observing bioconjugates is using AFM in liquids. Thus, the conjugation of DNA and DNA-binding proteins has been intensively studied by AFM in fluids 2,20, thereby visualizing the binding sites of the proteins. Conjugates of biomolecules and nanomaterials, such as single-walled carbon nanotubes (SWNTs), which have been developed for biological and medical applications, are attractive targets for AFM in liquids. Although previous studies have mostly observed DNA-wrapped SWNTs (DNA-SWNTs) in the air, Hayashida et al. observed DNA-SWNT hybrids in aqueous solutions 21. The diameters of the same DNA-SWNT hybrids drastically changed according to the environment and forces between the hybrids and the AFM probe. Assuming a constant SWNT diameter, the fluctuations in the diameters of the hybrids can be ascribed to the plasticity of DNA molecules on SWNT surfaces.
In this study, we investigated the interactions of papain molecules with DNA-SWNTs using AFM in liquids. Papain is a cysteine protease with high thermostability [46][47][48]. Although several enzymes lose their activity at 60 °C, papain exhibits sufficient enzyme activity at this temperature. Thus, various biological and medical applications have been proposed considering its thermostability 49,50. Therefore, the conjugation of papain molecules and DNA-SWNT hybrids can be used as a biosensor for papain enzymes to sense proteins at high temperatures. In addition, papain molecules on SWNT surfaces can become an alternative substrate used to monitor enzyme activation reactions via the change in the PL of the SWNTs. However, even with the in-depth reports on the fabrication of various types of biomolecule-SWNT hybrids, the fabrication of conjugates of papain molecules and SWNTs is yet to be reported. In particular, attaching papain molecules to DNA-SWNT hybrids can form visibly recognizable aggregates of the papain/DNA-SWNT conjugates, which prevents well-dispersed papain-SWNT hybrids from being obtained. In addition, the formation of aggregates is an interesting phenomenon, leading to the speculation that the aggregation is caused by the high pI of papain (8.6) 51, given that both DNA-SWNTs and papain are water-soluble. Therefore, it is necessary to investigate the conjugation at two pH values across the pI of papain.
In our experiments, mixtures of papain molecules and DNA-SWNT hybrids were observed to evaluate the adsorption of papain molecules on DNA-SWNT hybrids. Although the fabrication of bioconjugates has been widely reported [1][2][3][4] , the microscopic evaluation of the adsorption of biomolecules onto SWNT surfaces using AFM in liquids is yet to be reported. Thus, our approach is valuable for understanding the adsorption mechanism of biomolecules onto SWNTs, and providing useful information for establishing optimal procedures for the fabrication of bioconjugates with SWNTs.
Results and discussion
As mentioned, when we simply mixed DNA-SWNT hybrids and papain molecules in a tube, the formation of aggregates was visually recognizable (Supplementary Fig. S1). We speculated that the aggregation was caused by the pH conditions, and thus pH values well below and above the pI of papain were tested so that changes could be clearly observed and the mechanism of the aggregation easily understood. Figure 1 shows the scheme used in our study of AFM in fluids. In Scheme 1, DNA-SWNT hybrids were first attached to a 3-aminopropyltriethoxysilane (AP)-mica surface set in an AFM liquid cell. The sample was observed once with a 2.0 mL buffer solution. Two types of buffer solutions, namely 10 mM citric acid (pH 3.0) and 10 mM boric acid (pH 10.5), were used to evaluate the effects of pH. Papain molecules were injected into the AFM liquid cell, and the sample was observed again in the buffer solutions. In Scheme 2, the DNA-SWNT suspension was dropped onto the AP-mica surface, and the papain solution was immediately added. In this scheme, the papain and DNA-SWNT hybrids are expected to have sufficient degrees of freedom. Figure 2 shows the results of the Scheme 1 experiments (Supplementary Fig. S2 shows magnified results). DNA-SWNT hybrids were clearly observed in 10 mM citric acid buffer solution at pH 3.0 (Fig. 2a), where 10 μL DNA-SWNT suspension was dropped onto an AP-mica surface and the AFM liquid cell was filled with 2.0 mL buffer solution; similar images were obtained using 10 mM boric acid buffer solution at pH 10.5 (Supplementary Fig. S3). To reobserve the samples, 50 μL papain solution was injected. The observation was carried out immediately after the perturbation caused by the injection had stabilized. Figure 2b shows a typical AFM image after the injection. The morphologies differed from those in Fig. 2a, suggesting the partial adsorption of papain molecules on the DNA-SWNT hybrids.
The cross-sections of the observed DNA-SWNT hybrids in the squares are indicated on the right side of each image. For DNA-SWNTs without papain, the heights were approximately less than 2 nm. Meanwhile, the height of the partial area onto which papain molecules might have adsorbed was almost 6 nm, whereas that of the other areas was less than 2 nm. Similar experiments were carried out at pH 10.5 with boric acid buffer solution. The diameter of the DNA-SWNTs partially increased; however, the number of DNA-SWNTs onto which papain molecules were adsorbed was lower than that at pH 3.0. Although the height of SWNTs was affected by differences in chirality, areas significantly higher than those without papain molecules were observed in the cross-sections. This implied that papain molecules were adsorbed onto DNA-SWNT hybrids at pH 10.5 as well. For the numerical analysis, 90 cross-sections from 30 randomly selected hybrids were summarized as histograms, since SWNTs are not homogeneously flat. Because the average height of 100 gold colloids was 3.57 ± 0.76 nm (the height specified by the manufacturer is 5.00 nm), all heights used in the height analysis were calibrated with this ratio. The average heights and standard deviations of DNA-SWNTs without and with papain molecules were 2.70 ± 1.51 and 5.40 ± 3.67 nm at pH 3.0, and 2.61 ± 1.44 and 4.10 ± 2.83 nm at pH 10.5, respectively. The median heights were 2.49, 4.16, 2.43, and 3.14 nm, respectively (Supplementary Fig. S4). t-tests on these results revealed that the height of the DNA-SWNTs increased after the injection of the papain solution under both pH conditions (pH 3.0: N = 90, t(89) = −6.44, p = 1.00 × 10⁻⁸ < 0.05; pH 10.5: N = 90, t(89) = −6.12, p = 1.00 × 10⁻⁸ < 0.05). Thus, papain molecules could be adsorbed by DNA-SWNTs at pH 10.5 and 3.0. However, from the histograms, the average height of the DNA-SWNTs at pH 3.0 was slightly higher than that at pH 10.5 (N = 90, t(89) = −2.71, p = 8.04 × 10⁻³ < 0.05). This implies that more papain molecules could attach to DNA-SWNTs at pH 3.0 than at pH 10.5, and hence the interactions were more favorable. Further, the isoelectric point (pI) of papain should be considered. The pI of the papain molecules was approximately 8.75, suggesting a negative charge at pH 10.5. For the control experiments, papain molecules and DNA-SWNT hybrids on AP-mica surfaces in fluids were observed by AFM (Supplementary Fig. S3). Globular and rod-like structures were observed for papain and DNA-SWNTs, respectively. The size of the observed papain molecules was approximately 3-4 nm. When the concentration of the papain solution was high enough to cover the entire surface of the AP-mica (1.0 mg/mL), the papain molecules were adsorbed on the AP-mica surfaces at pH 3.0 and 10.5. However, since the surfaces were positively charged, a greater amount of papain molecules was observed at pH 10.5 than at pH 3.0 at the low concentration (0.05 mg/mL) (Supplementary Fig. S3b). Some of the papain molecules were adsorbed on the DNA-SWNT surfaces in the Scheme 1 experiments. This suggests an attractive interaction between papain and DNA-SWNT hybrids, which competes with that between papain and AP-mica.
In addition, the reverse procedure of Scheme 1 was examined for comparison. In Supplementary Fig. S5, papain molecules were first deposited on an AP-mica surface, followed by the injection of the DNA-SWNT suspension into the AFM liquid cell. In this scheme, the AP-mica surface is occupied by papain molecules, as shown in Supplementary Fig. S5(a). As a result, several aggregates were observed at pH 3.0. As shown in Supplementary Fig. S5(b), although numerous samples were observed with several tips, the images were influenced by tip artifacts, which implies the tip might have lifted up debris from the mica surface. Since such tip artifacts were not confirmed in Scheme 1, the conjugates there were probably formed by several DNA-SWNT hybrids immobilized on the mica surface, and the debris might have been ejected from the conjugates. However, it was confirmed that papain molecules tend to adsorb onto the DNA-SWNTs at pH 3.0, whereas these aggregates were not observed at pH 10.5. Figure 3 shows the results of the Scheme 2 experiment (Supplementary Fig. S6 shows magnified results). Figure 3a shows the typical AFM image of a mixture of papain and DNA-SWNT hybrids in the citric acid buffer solution at pH 3.0. Onto the AP-mica surface, 10 μL DNA-SWNT suspension was dropped, immediately followed by the dropping of 10 μL papain solution (1 mg/mL). After 10 min of incubation at 25 °C, 2.0 mL citric acid buffer solution (pH 3.0) was added for the AFM observation. The red markers in Fig. 3a indicate that the DNA-SWNT hybrid was decorated with papain molecules. Six DNA-SWNTs heterogeneously covered with papain molecules were observed in the image. In addition, the observed DNA-SWNT hybrids appear blurry, indicating that the tip might have picked up some debris from the mica surface for the same reason. Supplementary Fig. S6(a) confirms the existence of the DNA-SWNT hybrids (red markers) together with the debris (blue markers). Supplementary Fig. S6(a) also shows that papain molecules attached to DNA-SWNT hybrids at their thicker parts. Figure 3b shows the typical AFM image obtained in similar experiments at pH 10.5 with the boric acid buffer solution. Some DNA-SWNT hybrids were covered with papain molecules. The red markers indicate one DNA-SWNT hybrid with papain molecules. Similarly, thin DNA-SWNT hybrids were observed at pH 10.5. From the cross-sections of the DNA-SWNTs (red markers), parts with larger heights were clearly observed on the conjugates. These parts at pH 3.0 had a larger height than those at pH 10.5. A flat surface of DNA-SWNTs would be obtained with the homogeneous attachment of papain molecules on the surface of the DNA-SWNTs. Figure 3c shows the expected interaction between the papain molecules and DNA-SWNTs in citric acid (Scheme 2, pH 3.0) and boric acid (pH 10.5) buffer solutions. Considering that the pI values of the papain enzyme and DNA are 8.6 and 2.0, respectively, the papain molecules were positively charged at pH 3.0 and negatively charged at pH 10.5, whereas the DNA was negatively charged at both pH 3.0 and 10.5. This suggests that the papain molecules and DNA-SWNTs have an attractive interaction at pH 3.0 and a repulsive interaction at pH 10.5, as shown in Fig. 3c. Figure 3d shows the height histograms of 90 parts (three randomly selected parts of each of the 30 DNA-SWNTs) in Scheme 2.
The average heights with standard deviations of the DNA-SWNTs at pH 3.0 and 10.5 were 7.30 ± 4.81 and 4.60 ± 2.83 nm, respectively (Supplementary Fig. S7). The median heights of the DNA-SWNTs at pH 3.0 and 10.5 were 6.27 and 4.26 nm, respectively (N = 90, t(89) = 5.02, p = 2.60 × 10⁻⁶ < 0.05). The numerical analysis supports the electrostatic interactions as major factors in determining the binding of papain molecules to the DNA-SWNT hybrids. For a more in-depth discussion, the histograms in Figs. 2b and 3a, which were obtained at pH 3.0, were compared. The DNA-SWNT surfaces tend to adsorb the papain molecules. The average height in Fig. 2b (5.40 ± 3.67 nm) was smaller than that in Fig. 3a (7.30 ± 4.81 nm), which can be ascribed to the different deposition procedures (N = 90, t(89) = 2.84, p = 5.66 × 10⁻³ < 0.05). In particular, Fig. 2 shows the results of first attaching the DNA-SWNT hybrids and then injecting the papain solution, whereas Fig. 3 shows the results of immediately mixing the DNA-SWNT suspension and papain solution on the AP-mica surface.
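The calibration and significance testing described above can be summarized in a few lines; the following sketch uses synthetic numbers in place of the real height data (the arrays are hypothetical placeholders; numpy and scipy assumed).

# Height calibration against 5.00 nm gold colloids measured at 3.57 nm,
# followed by a paired t-test (df = N - 1 = 89) on 90 cross-sections
# before and after papain injection. Synthetic stand-in data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
calibration = 5.00 / 3.57                 # nominal / measured colloid height

raw_before = rng.normal(1.9, 1.0, 90)     # hypothetical raw AFM heights (nm)
raw_after = raw_before + rng.normal(1.9, 1.6, 90)

before = calibration * raw_before         # calibrated heights
after = calibration * raw_after
t_stat, p_val = stats.ttest_rel(before, after)
print(f"t(89) = {t_stat:.2f}, p = {p_val:.2e}")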
To confirm the effects of pH on the interaction between the papain molecules and DNA-SWNTs, PL measurements were performed on a mixture of DNA-SWNT hybrids and papain molecules. As SWNTs exhibit NIR PL when irradiated with visible light, the adsorption of papain molecules onto the DNA-SWNT surfaces drastically changes the PL spectra, which can be detected by measuring the NIR PL. In addition, as aggregates are observed after mixing the papain solution with the DNA-SWNT solution, the samples were stirred before the measurements, and the PL spectra were measured immediately. Figure 4a and b show the PL spectra of the mixtures of the DNA-SWNT hybrid and papain solutions at pH 3.0 and 10.5, respectively. For the measurements, 24 μL DNA-SWNT solution was diluted with 1176 μL buffer solution in a cuvette (final SWNT concentration of 9.8 μg/mL), and the PL spectra were measured once. Then, 24 μL papain solution was injected into the cuvette, and the PL was measured after mixing. Assuming from our results that DNA-SWNTs and papain are rods (500 nm in length, 1 nm in diameter) and spheres (3 nm in diameter), respectively, the molar ratio of SWNTs to papain was estimated to be approximately 1:30. For the PL measurements, the excitation wavelength was 730 nm, and the measured PL range was 900-1200 nm.
Several PL peaks originate from the different chiralities of the SWNTs. Focusing on the peak at 1125 nm, which originates from the (9, 4) chirality of SWNTs, the PL intensity of the peak increased by 67.1 ± 8.45% after adding the papain solution at pH 3.0, whereas it slightly decreased, by 8.24 ± 1.07%, after the addition of the papain solution at pH 10.5. Although further research is necessary to understand the mechanism, the addition of papain clearly produced opposite changes in the PL spectra under the two pH conditions. This indicates that the PL spectra changed at pH 3.0 while barely changing at pH 10.5, when the change in volume caused by the addition of the papain solution is taken into account. Our results from the AFM experiments verified the greater adsorption of papain molecules at pH 3.0. Thus, the adsorption of papain molecules onto DNA-SWNT hybrids tended to increase the PL intensity. As the pH is controlled by the buffer solutions, fluctuations in the pH values before and after the injection of the papain molecules were considered negligible. Thus, the results further suggest the potential of the DNA-SWNT hybrids to detect enzymatic reactions of papain molecules.
Conclusion
In this study, a mixture of papain molecules and DNA-SWNT hybrids was observed using AFM in liquids. The papain molecules were not uniformly adsorbed on the DNA-SWNT surfaces, especially at pH 3.0. The electrostatic interactions between the papain molecules and DNA-SWNT hybrids are considered important parameters for establishing nanobiodevices of papain and DNA-SWNT hybrids. Furthermore, the PL spectra of the DNA-SWNTs effectively detected the adsorption of papain molecules at pH 3.0. This suggests a potential application for detecting enzymatic reactions using DNA-SWNT hybrids. AFM observations in the liquid state were performed with an instrument from Asylum Research (CA, USA), and silicon cantilevers PPP-NCSTR-W (NANOSENSORS, Nanoworld, Neuchatel, Switzerland) were used. All processes were conducted at pH 3.0 (citric acid buffer solution) and 10.5 (boric acid buffer solution), respectively. Three types of samples were prepared for the AFM analysis. The first sample was prepared as follows: 10 μL prepared DNA-SWNT solution was deposited onto muscovite mica pretreated with AP. Subsequently, the mica surface was washed using 1000 μL buffer solution after incubation for 10 min. The mica was attached to the bottom of a closed fluid cell (CFC, 939.010, Asylum Research, CA, USA) and soaked in 2000 μL buffer solution. AFM measurements were conducted in the fluid at room temperature using the CFC after incubation for 15 min. From the CFC, 1000 μL buffer solution was removed, and a mixture of 50 μL prepared papain solution and 950 μL buffer solution was injected into the container. AFM measurements were conducted in the fluid after incubation for 15 min at room temperature.
The second sample was prepared as follows: 10 μL prepared DNA-SWNT solution was deposited onto the mica, which was immediately followed by the deposition of 10 μL papain solution onto the mica. Mica was attached to the bottom of the CFC and soaked in 2000 μL buffer solution. AFM measurements were conducted in the fluid using the CFC after incubation for 15 min at room temperature.
Finally, we deposited 10 μL prepared papain solution onto muscovite mica pretreated with AP. Subsequently, the mica surface was washed using 1000 μL buffer solution after incubation for 10 min. Mica was attached to the bottom of a closed fluid cell (CFC, 939.010, Asylum Research, CA, USA) and soaked in 2000 μL buffer solution. AFM measurements were conducted in fluid at room temperature using the CFC after incubation for 15 min. From the CFC, 1000 μL buffer solution was removed, and a mixture of 10 μL prepared DNA-SWNT solution and 990 μL buffer solution was injected into the container. AFM measurements were conducted in the fluid after incubation for 15 min at room temperature.
All experiments were conducted in triplicate. For the structural analysis, 30 DNA-SWNT hybrids longer than 250 nm were randomly chosen. The heights of 90 points from the 30 hybrids were collected. Then t-tests were conducted to determine significant differences. For the calibration of our AFM, 5 nm gold colloids (EM.GC5, BBI Solutions, UK; mean diameter of 4.6-6.0 nm; coefficient of variation ≤ 15%) were diluted 100 times with water. Subsequently, 10 μL of the solution was dropped onto mica for the AFM observations in air. Then 100 gold colloids were randomly chosen, and the average height was determined and compared with the height given by the manufacturer to obtain the calibration ratio for the actual height of the DNA-SWNT hybrids. In this study, all the DNA-SWNT hybrids used for the height analysis were calibrated with this ratio. PL spectroscopy. PL spectroscopy measures the emission wavelengths in the NIR region. In this study, PL measurements were performed using the prepared DNA-SWNT and papain solutions. The excitation wavelength was 730 nm and the emission wavelength range was 900-1400 nm. All measurements were conducted at pH 3.0 and 10.5, respectively. For the NIR-PL spectroscopy, 1176 μL buffer solution and 24 μL prepared DNA-SWNT solution were mixed in a cuvette, and the spectra were recorded initially. Subsequently, 24 μL papain | 2023-03-24T13:46:07.321Z | 2023-03-24T00:00:00.000 | {
"year": 2023,
"sha1": "cdf617a2da255b0eb9dcd8ebc78d9db7e9a24fa3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "cdf617a2da255b0eb9dcd8ebc78d9db7e9a24fa3",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
49314368 | pes2o/s2orc | v3-fos-license | Average Cost of QuickXsort with Pivot Sampling
QuickXsort is a strategy to combine Quicksort with another sorting method X, so that the result has essentially the same comparison cost as X in isolation, but sorts in place even when X requires a linear-size buffer. We solve the recurrence for QuickXsort precisely up to the linear term, including the optimization to choose pivots from a sample of k elements. This allows us to immediately obtain overall average costs using only the average costs of sorting method X (as if run in isolation). We thereby extend and greatly simplify the analysis of QuickHeapsort and QuickMergesort with practically efficient pivot selection, and give the first tight upper bounds including the linear term for such methods.
Introduction
In QuickXsort [5], we use the recursive scheme of ordinary Quicksort, but instead of doing two recursive calls after partitioning, we first sort one of the segments by some other sorting method X. Only the second segment is recursively sorted by QuickXsort. The key insight is that X can use the second segment as a temporary buffer for elements. By that, QuickXsort sorts in place (using O(1) words of extra space) even when X itself does not.
Not every method makes a suitable 'X'; it must use the buffer in a swap-like fashion: After X has sorted its segment, the elements originally stored in our buffer must still be intact, i.e., they must still be stored in the buffer, albeit in a different order. Two possible examples that use extra space in such a way are Mergesort (see Section 6 for details) and a comparison-efficient Heapsort variant [1] with an output buffer. With QuickXsort we can make those methods sort in-place while retaining their comparison efficiency. (We lose stability, though.) While other comparison-efficient in-place sorting methods are known (e.g. [18,12,9]), the ones based on QuickXsort and elementary methods X are particularly easy to implement 1 since one can adapt existing implementations for X. In such an implementation, the tried and tested optimization to choose the pivot as the median of a small sample suggests itself to improve QuickXsort. In previous works [1,5,3,6], the influence of QuickXsort on the performance of X was either studied by ad-hoc techniques that do not easily apply with general pivot sampling or it was studied for the case of very good pivots: exact medians or medians of a sample of √ n elements. Both are typically detrimental to the average performance since they add significant overhead, whereas most of the benefit of sampling is realized already for samples of very small constant sizes like 3, 5 or 9. Indeed, in a very recent manuscript [6], Edelkamp and Weiß describe an optimized median-of-3 QuickMergesort implementation in C++ that outperformed the library Quicksort in std::sort.
The contribution of this paper is a general transfer theorem (Theorem 5.1) that expresses the costs of QuickXsort with median-of-k sampling (for any odd constant k) directly in terms of the costs of X (i.e., the costs that X needs to sort n elements in isolation). We thereby obtain the first analyses of QuickMergesort and QuickHeapsort with best possible constant-coefficient bounds on the linear term under realistic sampling schemes.
Since Mergesort only needs a buffer for one of the two runs, QuickMergesort should not simply give Mergesort the smaller of the two segments to sort, but rather the largest one for which the other segment still offers sufficient buffer space. (This will be the larger segment of the two if the smaller one contains at least a third of the elements; see Section 6 for details.) Our transfer theorem covers this refined version of QuickMergesort as well, which had not been analyzed before. 2 The rest of the paper is structured as follows: In Section 2, we summarize previous work on QuickXsort with a focus on contributions to its analysis. Section 3 collects mathematical facts and notations used later. In Section 4 we define QuickXsort and formulate a recurrence for its cost. Its solution is stated in Section 5. Section 6 presents QuickMergesort as our stereotypical instantiation of QuickXsort. The proof of the transfer theorem spreads over Sections 7 and 8. In Section 9, we apply our result to QuickHeapsort and QuickMergesort and discuss some algorithmic implications.
Previous Work
The idea to combine Quicksort and a secondary sorting method was suggested by Cantone and Cincotti [2,1]. They study Heapsort with an output buffer (external Heapsort), 3 and combine it with Quicksort to obtain QuickHeapsort. They analyze the average costs for external Heapsort in isolation and use a differencing trick to deal with the QuickXsort recurrence; however, this technique is hard to generalize to median-of-k pivots.
Diekert and Weiß [3] suggest optimizations for QuickHeapsort (some of which need extra space again), and they give better upper bounds for QuickHeapsort with random pivots and median-of-3. Their results are still not tight since they upper bound the total cost of all Heapsort calls together (using ad hoc arguments on the form of the costs for one Heapsort round), without taking the actual subproblem sizes into account that Heapsort is used on. In particular, their bound on the overall contribution of the Heapsort calls does not depend on the sampling strategy.
Edelkamp and Weiß [5] explicitly describe QuickXsort as a general design pattern and, among others, consider using Mergesort as 'X'. They use the median of √n elements in each round throughout to guarantee good splits with high probability. They show by induction that when X uses at most n lg n + cn + o(n) comparisons on average for some constant c, the number of comparisons in QuickXsort is also bounded by n lg n + cn + o(n). By combining QuickMergesort with Ford and Johnson's MergeInsertion [8] for subproblems of logarithmic size, Edelkamp and Weiß obtained an in-place sorting method that uses on average a close to minimal number of comparisons of n lg n − 1.3999n + o(n).
2 Edelkamp and Weiß do consider this version of QuickMergesort [5], but only analyze it for median-of-√n pivots. In this case, the behavior coincides with the simpler strategy to always sort the smaller segment by Mergesort since the segments are of almost equal size with high probability.
3 Not having to store the heap in a consecutive prefix of the array allows one to save comparisons over classic in-place Heapsort: after a delete-max operation, we can fill the gap at the root of the heap by promoting the larger child and recursively moving the gap down the heap. (We then fill the gap with a −∞ sentinel value.) That way, each delete-max needs exactly lg n comparisons.
In a recent follow-up manuscript [6], Edelkamp and Weiß investigated the practical performance of QuickXsort and found that a tuned median-of-3 QuickMergesort variant indeed outperformed the C++ library Quicksort. They also derive an upper bound for the average costs of their algorithm using an inductive proof; their bound is not tight.
Preliminaries
A comprehensive list of used notation is given in Appendix A; we mention the most important here. We use Iverson's bracket [stmt] to mean 1 if stmt is true and 0 otherwise. P[E] denotes the probability of event E, E[X] the expectation of random variable X. We write X D = Y to denote equality in distribution.
We heavily use the beta distribution: a random variable P has distribution Beta(α, β) if its density is f_P(z) = z^{α−1}(1 − z)^{β−1} / B(α, β) for z ∈ (0, 1), where B(α, β) = ∫₀¹ z^{α−1}(1 − z)^{β−1} dz is the beta function. Moreover, we use the beta-binomial distribution, which is a conditional binomial distribution with the success probability being a beta-distributed random variable: if X D= BetaBin(n, α, β), then P[X = i] = (n choose i) · B(α + i, β + n − i) / B(α, β). For a collection of its properties see [23], Section 2.4.7; one property that we use here is a local limit law showing that the normalized beta-binomial distribution converges to the beta distribution. It is reproduced as Lemma C.1 in the appendix.
For solving recurrences, we build upon Roura's master theorems [20]. The relevant continuous master theorem is restated in the appendix (Theorem B.1).
QuickXsort
Let X be a sorting method that requires buffer space for storing at most αn elements (for α ∈ [0, 1]) to sort n elements. The buffer may only be accessed by swaps so that once X has finished its work, the buffer contains the same elements as before, but in arbitrary order. Indeed, we will assume that X does not compare any buffer contents; then QuickXsort preserves randomness: if the original input is a random permutation, so will be the segments after partitioning and so will be the buffer after X has terminated. 4 We can then combine 5 X with Quicksort as follows: We first randomly choose a pivot and partition the input around that pivot. This results in two contiguous segments containing the J 1 elements that are smaller than the pivot and the J 2 elements that are larger than the pivot, respectively. We exclude the space for the pivot, so J 1 + J 2 = n − 1; note that since the rank of the pivot is random, so are the segment sizes J 1 and J 2 . We then sort one segment by X using the other segment as a buffer, and afterwards sort the buffer segment recursively by QuickXsort.
To guarantee a sufficiently large buffer for X when it sorts J r (r = 1 or 2), we must make sure that J 3−r ≥ αJ r . In case both segments could be sorted by X, we use the larger one. The motivation behind this is that we expect an advantage from reducing the subproblem size for the recursive call as much as possible.
We consider the practically relevant version of QuickXsort, where we use as pivot the median of a sample of k = 2t + 1 elements, where t ∈ N 0 is constant w.r.t. n. We think of t as a design parameter of the algorithm that we have to choose. Setting t = 0 corresponds to selecting pivots uniformly at random.
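The control flow just described can be summarized in a short runnable sketch (our illustration, not code from the paper). The function sort_by_X below is only a stand-in: a real X must sort its segment while touching the designated buffer segment only through swaps; here it simply sorts in place. The base case stands in for Insertionsort.

import random

def sort_by_X(A, lo, hi, buf_lo, buf_hi):
    # Stand-in for X; a real X would use A[buf_lo:buf_hi] as swap buffer.
    A[lo:hi] = sorted(A[lo:hi])

def quick_x_sort(A, lo=0, hi=None, t=1, alpha=0.5):
    if hi is None:
        hi = len(A)
    k = 2 * t + 1
    while hi - lo > k:
        # pivot = median of a random sample of k = 2t+1 elements
        sample = sorted(random.sample(range(lo, hi), k), key=lambda i: A[i])
        A[sample[t]], A[hi - 1] = A[hi - 1], A[sample[t]]
        pivot, p = A[hi - 1], lo
        for i in range(lo, hi - 1):           # Lomuto partitioning
            if A[i] < pivot:
                A[i], A[p] = A[p], A[i]
                p += 1
        A[p], A[hi - 1] = A[hi - 1], A[p]
        j1, j2 = p - lo, hi - (p + 1)
        # X may sort segment r only if the other segment holds at least
        # alpha * J_r elements; if both qualify, X gets the larger one.
        can_left, can_right = j2 >= alpha * j1, j1 >= alpha * j2
        if can_left and (not can_right or j1 >= j2):
            sort_by_X(A, lo, p, p + 1, hi)    # X sorts the left segment
            lo = p + 1                        # recurse on the right one
        else:
            sort_by_X(A, p + 1, hi, lo, p)    # X sorts the right segment
            hi = p                            # recurse on the left one
    A[lo:hi] = sorted(A[lo:hi])               # base case (Insertionsort here)

A = random.sample(range(1000), 1000)
quick_x_sort(A)
assert A == sorted(A)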
Recurrence for Expected Costs
Let c(n) be the expected number of comparisons in QuickXsort on arrays of size n and x(n) be (an upper bound for) the expected number of comparisons in X. We will assume that x(n) fulfills x(n) = an lg n + bn ± O(n^{1−ε}) for constants a, b and ε ∈ (0, 1]. For α < 1, we obtain two cases: When the split induced by the pivot is "uneven" -namely when min{J1, J2} < α max{J1, J2}, i.e., max{J1, J2} > (n − 1)/(1 + α) -the smaller segment is not large enough to be used as buffer. Then we can only assign the large segment as a buffer and run X on the smaller segment. If however the split is about "even", i.e., both segments are ≤ (n − 1)/(1 + α), we can sort the larger of the two segments by X. These cases also show up in the recurrence of costs: for n ≥ k,

c(n) = E[A1(J1) · c(J1)] + E[A2(J2) · c(J2)] + t(n),    (1)

where A1(J1) is the indicator of the event that the first segment is sorted recursively (and the second by X), A2(J2) is the indicator of the complementary event, and the toll function t(n) = E[A1(J1) · x(J2)] + E[A2(J2) · x(J1)] + n ± O(1) collects the cost of the call to X, of partitioning and of pivot sampling. The expectation here is taken over the choice for the random pivot, i.e., over the segment sizes J1 resp. J2. Note that we use both J1 and J2 to express the conditions in a convenient form, but actually either one is fully determined by the other via J1 + J2 = n − 1. Note how A1 and A2 change roles in recursive calls and toll functions, since we always sort one segment recursively and the other segment by X. The base cases b(n) are the costs to sort inputs that are too small to sample k elements. A practical choice is to switch to Insertionsort for these, which is also used for sorting the samples. Unlike for Quicksort itself, b(n) only influences the logarithmic term of costs (for constant k). For our asymptotic transfer theorem, we only assume b(n) ≥ 0; the actual values are immaterial.
Distribution of Subproblem Sizes. If pivots are chosen as the median of a random sample of size k = 2t + 1, the two subproblem sizes have the same distribution, J1 D= J2; for t = 0, i.e., a uniformly random pivot, this is a discrete uniform distribution. If we choose pivots as medians of a sample of k = 2t + 1 elements, the value for J1 consists of two summands: J1 = t + I1. The first summand, t, accounts for the part of the sample that is smaller than the pivot. Those t elements do not take part in the partitioning round (but they have to be included in the subproblem). I1 is the number of elements that turned out to be smaller than the pivot during partitioning. This latter number I1 is random, and its distribution is I1 D= BetaBin(n − k, t + 1, t + 1), a so-called beta-binomial distribution. The connection to the beta distribution is best seen by assuming n independent and uniformly in (0, 1) distributed reals as input. They are almost surely pairwise distinct and their relative ranking is equivalent to a random permutation of [n], so this assumption is w.l.o.g. for our analysis. Then, the value P of the pivot in the first partitioning step has a Beta(t + 1, t + 1) distribution by definition. Conditional on that value P = p, I1 D= Bin(n − k, p) has a binomial distribution; the resulting mixture is the so-called beta-binomial distribution.
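The mixture characterization can be checked empirically: simulate partitioning a random permutation around a median-of-k pivot and compare against t plus a Beta-mixed binomial (numpy assumed).

# Empirical check of J1 = t + I1 with I1 ~ BetaBin(n - k, t+1, t+1).
import numpy as np

rng = np.random.default_rng(1)
n, t = 1000, 2
k = 2 * t + 1
reps = 20_000

def j1_once():
    perm = rng.permutation(n)
    pivot = np.sort(perm[:k])[t]              # median of a k-element sample
    return t + np.count_nonzero(perm[k:] < pivot)

sim = np.array([j1_once() for _ in range(reps)])
mix = t + rng.binomial(n - k, rng.beta(t + 1, t + 1, reps))
print(sim.mean(), mix.mean())                 # both approx (n - 1)/2 = 499.5
print(sim.std(), mix.std())                   # matching spread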
The Transfer Theorem
We now state the main result of the paper: an asymptotic approximation for c(n).
Theorem 5.1 (Total Cost of QuickXsort):
The expected number of comparisons needed to sort a random permutation with QuickXsort using median-of-k pivots, k = 2t + 1, and a sorting method X that needs a buffer of αn elements for some constant α ∈ [0, 1] to sort n elements and requires on average x(n) = an lg n + bn ± O(n^{1−ε}) comparisons to do so as n → ∞ for some ε ∈ (0, 1] is

c(n) = an lg n + (b + q) · n ± O(n^{1−ε} + log n),

where the penalty q is a constant depending only on a, t and α, and x̄ is the expected relative subproblem size that is sorted by X. Here I_{x,y}(α, β) is the regularized incomplete beta function

I_{x,y}(α, β) = ∫ₓ^y z^{α−1}(1 − z)^{β−1} / B(α, β) dz,

in terms of which q and x̄ can be written in closed form. We prove Theorem 5.1 in Sections 7 and 8. To simplify the presentation, we will restrict ourselves to a stereotypical algorithm for X and its value α = 1/2; the given arguments, however, immediately extend to the general statement above.
QuickMergesort
A natural candidate for X is Mergesort: It is comparison-optimal up to the linear term (and quite close to optimal in the linear term), and needs a Θ(n)-element-size buffer for practical implementations of merging. 6 To be usable in QuickXsort, we use a swap-based merge procedure as given in Algorithm 1. Note that it suffices to move the smaller of the two runs to a buffer; we use a symmetric version of Algorithm 1 when the second run is shorter. Using classical top-down or bottom-up Mergesort as described in any algorithms textbook (e.g. [22]), we thus get along with α = 1 2 .
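A minimal runnable sketch of such a swap-based merge follows (our illustration; Algorithm 1's exact pseudocode may differ): the first run is swapped into the buffer and merged back, so the buffer always ends up holding its original elements, merely permuted.

def swap(A, i, j):
    A[i], A[j] = A[j], A[i]

def merge(A, l, m, r, b):
    # Merge sorted runs A[l:m] and A[m:r]; A[b:b+(m-l)] is the buffer.
    # Only swaps touch the buffer, so its contents survive as a multiset.
    n1 = m - l
    for i in range(n1):                      # move first run into the buffer
        swap(A, l + i, b + i)
    i, j, o = b, m, l
    while i < b + n1 and j < r:
        if A[i] <= A[j]:
            swap(A, o, i); i += 1
        else:
            swap(A, o, j); j += 1
        o += 1
    while i < b + n1:                        # flush the buffered run
        swap(A, o, i); i += 1; o += 1

def mergesort_with_buffer(A, l, r, b):
    # Sorts A[l:r] using A[b:b+(r-l)//2] as scratch space (alpha = 1/2).
    if r - l <= 1:
        return
    m = (l + r) // 2                          # first run is the shorter one
    mergesort_with_buffer(A, l, m, b)
    mergesort_with_buffer(A, m, r, b)
    merge(A, l, m, r, b)

A = [3, 1, 4, 1, 5, 9, 2, 6, -1, -1, -1, -1]  # last 4 slots act as buffer
mergesort_with_buffer(A, 0, 8, 8)
print(A)  # first 8 entries sorted; buffer still holds four -1's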
Average Case of Mergesort
The average number of comparisons for Mergesort has the same -optimal -leading term n lg n as in the worst and best case; and this is true for both the top-down and bottom-up variants. The coefficient of the linear term of the asymptotic expansion, though, is not a constant, but a bounded periodic function with period lg n, and the functions differ for best, worst, and average case and the variants of Mergesort [21,7,17,10,11].
In this paper, we will confine ourselves to an upper bound for the average case x(n) = an lg n + bn ± O(n 1−ε ) with constant b valid for all n, so we will set b to the supremum of the periodic function. We leave the interesting challenge open to trace the precise behavior of the fluctuations through the recurrence, where Mergesort is used on a logarithmic number of subproblems with random sizes.
We use the following upper bounds for top-down [11] and bottom-up [17] Mergesort: 7

x_td(n) = n lg n − 1.24n + 2,  (2)

and an analogous bound x_bu(n) for the bottom-up variant.
6 Merging can be done in place using more advanced tricks (see, e.g., [15]), but those tend not to be competitive in terms of running time with other sorting methods. By changing the global structure, a pure in-place Mergesort variant [13] can be achieved using part of the input as a buffer (as in QuickMergesort) at the expense of occasionally having to merge runs of very different lengths.
7 Edelkamp and Weiß [5] use x(n) = n lg n − 1.26n ± o(n); Knuth [14, 5.2.4-13] derived this formula for n a power of 2 (a general analysis is sketched, but no closed result for general n is given). Flajolet and Golin [7] and Hwang [11] continued the analysis in more detail; they find that the average number of comparisons is n lg n − (1.25 ± 0.01)n ± O(1), where the linear term oscillates in the given range.
Solving the Recurrence: Leading Term
We start with Equation (1). Since α = 1/2 for our Mergesort, we have α/(1 + α) = 1/3 and 1/(1 + α) = 2/3. (The following arguments are valid for general α, including the extreme case α = 1, but in an attempt to de-clutter the presentation, we stick to α = 1/2 here.) We rewrite A1(J1) and A2(J2) explicitly in terms of the relative subproblem size J1/(n − 1). Graphically, if we view J1/(n − 1) as a point in the unit interval, the following picture shows which subproblem is sorted recursively (the other subproblem is sorted by Mergesort).
Obviously, we have A 1 + A 2 = 1 for any choice of J 1 , which corresponds to having exactly one recursive call in QuickMergesort.
The Shape Function
We thus have a recurrence of the form required by Roura's continuous master theorem (CMT) (see Theorem B.1 in Appendix B) with the weights w_{n,j} from above (Figure 1 shows an example of how these weights look). It remains to determine P[J = j]. Recall that we choose the pivot as the median of k = 2t + 1 elements for a fixed constant t ∈ N_0, and the subproblem size J fulfills J = t + I with I D= BetaBin(n − k, t + 1, t + 1). So we have, for i ∈ [0, n − 1 − t], the beta-binomial probabilities by definition. (For details, see [23, Section 2.4.7].) Now the local limit law for beta binomials (Lemma C.1 in Appendix C) says that the normalized beta binomial I/n converges to a beta variable "in density", and the convergence is uniform. With the beta density f_P(z) = z^t (1 − z)^t / B(t + 1, t + 1), we thus find by Lemma C.1 that the weights are uniformly approximated by the corresponding density values. The shift by the small constant t from (j − t)/n to j/n only changes the function value by O(n^{−1}) since f_P is Lipschitz continuous on [0, 1]. (Details of that calculation are also given in [23], page 208.) The first step towards applying the CMT is to identify a shape function w(z) that approximates the relative subproblem size probabilities, w(z) ≈ n · w_{n,⌊zn⌋}, for large n. With the above observation, a natural choice is w(z) = 2 f_P(z) for z ∈ (1/3, 1/2) ∪ (2/3, 1) and w(z) = 0 otherwise. We show in Appendix D that this is indeed a suitable shape function, i.e., it fulfills Equation (11) from the CMT.
Computing the Toll Function
The next step in applying the CMT is a leading-term approximation of the toll function. We consider a general function x(n) = an lg n + bn ± O(n^{1−ε}) where the error term holds for any constant ε > 0 as n → ∞. We start with the simple observation that

J lg J = J (lg(J/n) + lg n) = (J/n) · n lg n + (J/n) lg(J/n) · n = (J/n) · n lg n ± O(n),

since z lg z is bounded for z ∈ [0, 1].
For the leading term of E[x(J)], we thus only have to compute the expectation of J/n, which is essentially a relative subproblem size. In t(n), we also have to deal with the conditionals A1(J) resp. A2(J), though. By approximating J/n with a beta-distributed variable, the conditionals translate to bounds of an integral. Details are given in Lemma E.1 (see Appendix E). This yields the leading-term approximation of t(n); here we use the regularized incomplete beta function for concise notation. (I_{x,y}(α, β) is the probability that a Beta(α, β) distributed random variable falls into (x, y) ⊂ [0, 1], and I_{0,x}(α, β) is its cumulative distribution function.)
Cancellations
Combining Equations (7) and (9), we find c(n) ∼ an lg n (n → ∞): the leading term of the number of comparisons in QuickXsort is the same as in X itself, regardless of how the pivot elements are chosen! This is not as surprising as it might first seem. We are typically sorting a constant fraction of the input by X and thus only do a logarithmic number of recursive calls on a geometrically decreasing number of elements, so the linear contribution of Quicksort (partitioning and recursion cost) is dominated by even the first call of X, which has linearithmic cost. This remains true even if we allow asymmetric sampling, e.g., by choosing the pivot as the smallest (or any other order statistic) of a random sample.
Edelkamp and Weiß [5] give the above result for the case of using the median of √ n elements, where we effectively have exact medians from the perspective of analysis. In this case, the informal reasoning given above is precise, and in fact, in this case the same form of cancellations also happen for the linear term [5, Thm. 1]. (See also the "exact ranks" result in Section 9.) We will show in the following that for practical schemes of pivot sampling, i.e., with fixed sample sizes, these cancellations happen only for the leadings-term approximation. The pivot sampling scheme does affect the linear term significantly; and to measure the benefit of sampling, the analysis thus has to continue to the next term of the asymptotic expansion of c(n).
Relative Subproblem Sizes. The integral ∫₀¹ z w(z) dz is precisely the expected relative subproblem size for the recursive call, whereas for t(n) we are interested in the subproblem that is sorted using X, whose relative size is given by ∫₀¹ (1 − z) w(z) dz = 1 − ∫₀¹ z w(z) dz. We can thus write ā = aH. The quantity ∫₀¹ z w(z) dz, the average relative size of the recursive call, is of independent interest. While it is intuitively clear that for t → ∞, i.e., the case of exact medians as pivots, we must have a relative subproblem size of exactly 1/2, this convergence is not apparent from the behavior for finite t: the mass of the integral ∫₀¹ z w(z) dz concentrates at z = 1/2, a point of discontinuity in w(z). It is also worthy of note that the expected subproblem size is initially larger than 1/2 (0.694 for t = 0), then decreases to ≈ 0.449124 around t = 20 and then starts to slowly increase again (see Figure 2).
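For α = 1/2 this expected relative recursive size, ∫₀¹ z w(z) dz with w(z) = 2 f_P(z) on (1/3, 1/2) ∪ (2/3, 1), is easy to evaluate numerically (scipy assumed); the output reproduces 0.694 for t = 0 and the dip to about 0.449 around t = 20 mentioned above.

from scipy import integrate
from scipy.stats import beta

def expected_recursive_fraction(t):
    # Int z * w(z) dz with w(z) = 2 * Beta(t+1, t+1)-density on the
    # support (1/3, 1/2) u (2/3, 1), cf. the shape function above.
    f = beta(t + 1, t + 1).pdf
    g = lambda z: 2 * z * f(z)
    return integrate.quad(g, 1/3, 1/2)[0] + integrate.quad(g, 2/3, 1)[0]

for t in (0, 1, 5, 10, 20, 40):
    print(t, round(expected_recursive_fraction(t), 6))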
Solving the Recurrence: The Linear Term
Since c(n) ∼ an lg n for any choice of t, the leading term alone does not allow us to make distinctions to judge the effect of sampling schemes. To compute the next term in the asymptotic expansion of c(n), we consider the values c′(n) = c(n) − an lg n. c′(n) has essentially the same recursive structure as c(n), only with a different toll function t′(n). Plugging this back into our equation for c′(n), we find that, apart from the smaller toll function t′(n), this recurrence has the very same shape as the original recurrence for c(n); in particular, we obtain the same shape function w(z) and the same H > 0, and we obtain c′(n) ∼ t′(n)/H by case 1 of the CMT.
Error Bound
Since our toll function is not given precisely, but only up to an error term O(n^{1−ε}) for a given fixed ε ∈ (0, 1], we also have to estimate the overall influence of this term. For that we consider the recurrence for c(n) again, but replace t(n) (entirely) by C · n^{1−ε}. If ε > 0, then ∫₀¹ z^{1−ε} w(z) dz < ∫₀¹ w(z) dz = 1, so we still find H > 0 and apply case 1 of the CMT. The overall contribution of the error term is then O(n^{1−ε}). For ε = 0, we have H = 0 and case 2 applies, giving an overall error term of O(log n).
This completes the proof of Theorem 5.1.
Discussion
Since all our choices for X are leading-term optimal, so will QuickXsort be. We can thus fix a = 1 in Theorem 5.1; only b (and the allowable α) still depend on X. We then basically find that going from X to QuickXsort adds a "penalty" q in the linear term that depends only on the sampling size (and α), but not on X. Table 1 shows that this penalty is ≈ n without sampling, but can be reduced drastically when choosing pivots from a sample of 3 or 5 elements. (Note that the overall costs for pivot sampling are O(log n) for constant t.) Table 1: QuickXsort penalty. QuickXsort with x(n) = n lg n + bn yields c(n) = n lg n + (q + b)n where q, the QuickXsort penalty, is given in the table.
As we increase the sample size, we converge to the situation studied by Edelkamp and Weiß using median-of-√n, where no linear-term penalty is left [5]. Given that q is less than 0.08 already for a sample of 21 elements, these large-sample versions are mostly of theoretical interest. It is noteworthy that the improvement from no sampling to median-of-3 yields a reduction of q by more than 50%, which is much more than its effect on Quicksort itself (where it reduces the leading term of costs by 15% from 2n ln n to (12/7) n ln n). We now apply our transfer theorem to the two most well-studied choices for X, Heapsort and Mergesort, and compare the results to analyses and measured comparison counts from previous work. The results confirm that solving the QuickXsort recurrence exactly yields much more accurate predictions for the overall number of comparisons than previous bounds that circumvented this.
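Table 1's entries can be recomputed from the derivation in Sections 7 and 8: reading off the linear term there gives (with a = 1) q = (1 + E[Z lg Z] + E[S lg S]) / x̄, where Z is the relative size of the recursive call (density w(z)), S = 1 − Z is the relative size of the segment sorted by X, and x̄ = E[S]. This closed form is our reading of the theorem, shown here as a sketch for α = 1/2 (scipy assumed); it is consistent with the values implied in Section 9, e.g. q ≈ 0.405 for median-of-3 and q ≈ 0.253 for median-of-5.

from math import log2
from scipy import integrate
from scipy.stats import beta

def penalty(t, a=1.0):
    # q = (1 + a*E[Z lg Z + S lg S]) / xbar for alpha = 1/2 (our reading
    # of Theorem 5.1; w(z) = 2 f_P(z) on (1/3, 1/2) u (2/3, 1)).
    f = beta(t + 1, t + 1).pdf
    pieces = [(1/3, 1/2), (2/3, 1 - 1e-12)]
    moment = lambda g: sum(integrate.quad(lambda z: 2 * f(z) * g(z), lo, hi)[0]
                           for lo, hi in pieces)
    xbar = moment(lambda z: 1 - z)
    ent = moment(lambda z: z * log2(z) + (1 - z) * log2(1 - z))
    return (1 + a * ent) / xbar

for t in (0, 1, 2, 4, 10):
    print(f"k = {2 * t + 1:2d}: q = {penalty(t):.4f}")
# k = 1: ~0.912, k = 3: ~0.405, k = 5: ~0.253, ...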
QuickHeapsort
The basic external Heapsort of Cantone and Cincotti [1] always traverses one path in the heap from root to bottom and does one comparison for each edge followed, i.e., lg n or lg n − 1 many per deleteMax. By counting how many leaves we have on each level, Diekert and Weiß found [3, Eq. 1] at most n lg n − 0.913929n ± O(log n) comparisons for the sort-down phase. (The constant of the linear term is 1 − 1/ln 2 − lg(2 ln 2), the supremum of the periodic function in the linear term.) Using the classical heap construction method adds on average 1.8813726n comparisons [4], so here x(n) = n lg n + 0.967444n ± O(n^ε) for any ε > 0.
Both [1] and [3] report averaged comparison counts from running-time experiments. We compare them in Table 2 against the estimates from our result and previous analyses. While the approximation is not very accurate for n = 100 (for all analyses), for larger n our estimate is correct up to the first three digits, whereas previous upper bounds have errors almost an order of magnitude bigger. Note that it is expected for our bound to still be on the conservative side, since we used the supremum of the periodic linear term for Heapsort.
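Applying the same recipe with α = 1 (external Heapsort needs a full-size buffer, so X always gets the smaller segment and the recursive call the larger one; the shape function becomes w(z) = 2 f_P(z) on (1/2, 1)) gives estimates of the kind compared in Table 2. The closed form for q is again our reading of Sections 7 and 8, so this is a sketch rather than the paper's exact computation (scipy assumed).

from math import log2
from scipy import integrate
from scipy.stats import beta

def penalty_alpha1(t):
    # q for alpha = 1: the recursive call gets the larger segment, so the
    # relative recursive size Z has density w(z) = 2 f_P(z) on (1/2, 1).
    f = beta(t + 1, t + 1).pdf
    quad = integrate.quad
    xbar = quad(lambda z: 2 * f(z) * (1 - z), 1/2, 1)[0]
    ent = quad(lambda z: 2 * f(z) * (z * log2(z) + (1 - z) * log2(1 - z)),
               1/2, 1 - 1e-12)[0]
    return (1 + ent) / xbar

for t in (0, 1):
    qq = penalty_alpha1(t)
    for n in (10**4, 10**6):
        est = n * log2(n) + (qq + 0.967444) * n   # x(n) from above
        print(f"k = {2 * t + 1}, n = {n}: ~{est:,.0f} comparisons")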
QuickMergesort
For QuickMergesort, Edelkamp and Weiß [5, Fig. 4] report measured average comparison counts for a median-of-3 version using top-down Mergesort: the linear term is shown to be between −0.8n and −0.9n. In a recent manuscript [6], they also analytically consider the simplified median-of-3 QuickMergesort which always sorts the smaller segment by Mergesort (i.e., α = 1). It uses n lg n − 0.7330n + o(n) comparisons on average (using b = −1.24). They use this as a (conservative) upper bound for the original QuickMergesort. Our transfer theorem shows that this bound is off by roughly 0.1n: median-of-3 QuickMergesort uses at most c(n) = n lg n − 0.8350n ± O(log n) comparisons on average. Going to median-of-5 reduces the linear term to −0.9874n, which is better than the worst case of top-down Mergesort for most n.
Skewed Pivots for Mergesort? For Mergesort with α = 1/2, the largest fraction of elements we can sort by Mergesort in one step is 2/3; this suggests that using a slightly skewed pivot might be beneficial, since it will increase the subproblem size for Mergesort and decrease the size of the recursive calls. Indeed, Edelkamp and Weiß allude to this variation: "With about 15% the time gap, however, is not overly big, and may be bridged with additional efforts like skewed pivots and refined partitioning." (The statement appears in the arXiv version of [5], arxiv.org/abs/1307.3033.) And the above-mentioned StackExchange post actually chooses pivots as the second tertile.
Our analysis above can be extended to skewed sampling schemes (omitted due to space constraints), but to illustrate this point it suffices to pay a short visit to "wishful-thinking land" and assume that we can get exact quantiles for free. We can show (e.g., with Roura's discrete master theorem [20]) that if we always pick the exact ρ-quantile of the input, for ρ ∈ (0, 1), the coefficient of the linear term of the overall costs has a strict minimum at ρ = 1/2: even for α = 1/2, the best choice is to use the median of a sample. (The result is the same for fixed-size samples.) For QuickMergesort, skewed pivots turn out to be a pessimization, despite the fact that we sort a larger part by Mergesort. A possible explanation is that skewed pivots significantly decrease the amount of information we obtain from the comparisons during partitioning, but do not make partitioning any cheaper.
Future Work
More promising than skewed pivot sampling is the use of several pivots. The resulting MultiwayQuickXsort would be able to sort all but one segment using X and recurse on only one subproblem. Here, determining the expected subproblem sizes becomes a challenge, in particular for α < 1; we leave this for future work.
We also confined ourselves to the expected number of comparisons here, but more details about the distribution of costs can be obtained. The variance satisfies a recurrence similar to the one studied in this paper, and a distributional recurrence for the costs can be given. The discontinuities in the subproblem sizes add a new facet to these analyses.
Finally, it is a typical phenomenon that constant-factor optimal sorting methods exhibit periodic linear terms. QuickXsort inherits these fluctuations but smooths them through the random subproblem sizes. Explicitly accounting for these effects is another interesting challenge for future work.
Acknowledgements. I would like to thank three anonymous referees for many helpful comments, references and suggestions that helped improve the presentation of this paper.
D. Smoothness of the Shape Function
In this appendix we show that w(z) as given in Equation (4) on page 8 fulfills Equation (11) on page 16, the approximation-rate criterion of the CMT. We consider the following ranges for ⌊zn⌋/(n−1) = j/(n−1) separately:
• ⌊zn⌋/(n−1) < 1/3 and 1/2 < ⌊zn⌋/(n−1) < 2/3. Here w_{n,⌊zn⌋} = 0 and so is w(z), so the actual value and the approximation are exactly the same.
• 1/3 < ⌊zn⌋/(n−1) < 1/2 and ⌊zn⌋/(n−1) > 2/3. Here w_{n,j} = 2P[J = j] and w(z) = 2f_P(z), i.e., twice the density f_P(z) = z^t (1 − z)^t / B(t + 1, t + 1) of the beta distribution Beta(t + 1, t + 1) (see the code sketch after this list). Since f_P is Lipschitz-continuous on the bounded interval [0, 1] (it is a polynomial), the uniform pointwise convergence from above is enough to bound the sum of |w_{n,j} − ∫_{j/n}^{(j+1)/n} w(z) dz| over all j in the range by O(n^{−1}).
• ⌊zn⌋/(n−1) ∈ {1/3, 1/2, 2/3}. At these boundary points, the difference between w_{n,⌊zn⌋} and w(z) does not vanish (in particular, 1/2 is a singular point for w_{n,⌊zn⌋}), but the absolute difference is bounded. Since this case only concerns 3 out of n summands, the overall contribution to the error is O(n^{−1}).
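The sketch announced in the second bullet (my own illustration; it assumes the median-of-3 case, i.e., t = 1 and hence P distributed as Beta(2, 2)) makes the piecewise definition of w(z) explicit:

```python
# Sketch: shape function w(z) for median-of-3 QuickMergesort with alpha = 1.
from scipy.stats import beta

def w(z: float, t: int = 1) -> float:
    """Twice the Beta(t+1, t+1) density on (1/3, 1/2) and (2/3, 1),
    and zero on the remaining ranges, as in the case distinction above."""
    if (1/3 < z < 1/2) or (2/3 < z < 1):
        return 2 * beta.pdf(z, t + 1, t + 1)
    return 0.0
```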
Together, we find that Equation (11) is fulfilled as claimed: the sum of |w_{n,j} − ∫_{j/n}^{(j+1)/n} w(z) dz| over all j is O(n^{−1}).
E. Approximation by (Incomplete) Beta Integrals
Lemma E.1: Let J = BetaBin(n − c_1, α, β) + c_2 (in distribution) be a random variable that differs by fixed constants c_1 and c_2 from a beta-binomial variable with parameters n ∈ N and α, β ∈ N≥1. Then the following holds:
(a) For fixed constants 0 ≤ x ≤ y ≤ 1, it holds that E[[xn ≤ J ≤ yn] · J lg J] = α/(α + β) · I_{x,y}(α + 1, β) · n lg n ± O(n).
The result also holds when either or both of the inequalities in [xn ≤ J ≤ yn] are strict.
Proof:
We start with part (a). By the local limit law for beta-binomials (Lemma C.1), it is plausible to expect a reasonably small error when we replace E[[xn ≤ J ≤ yn] · J lg J] by E[[x ≤ P ≤ y] · (Pn) lg(Pn)], where P = Beta(α, β) in distribution. We bound the error in the following.
We have E[[xn ≤ J ≤ yn] · J lg J] = E[[xn ≤ J ≤ yn] · (J/n)] · n lg n ± O(n) by Equation (5); it thus suffices to compute E[[xn ≤ J ≤ yn] · (J/n)]. We first replace J by I = BetaBin(n, α, β) (in distribution) and argue later that this results in a sufficiently small error. Expanding the expectation as a sum over the beta-binomial weights and approximating that sum by the corresponding beta integral yields E[[xn ≤ I ≤ yn] · (I/n)] = α/(α + β) · I_{x,y}(α + 1, β) ± O(n^{−1}); (13) recall that I_{x,y}(α, β) = ∫_x^y z^{α−1}(1 − z)^{β−1} / B(α, β) dz = P[x < P < y] denotes the regularized incomplete beta function. Changing from I back to J has no influence on the given approximation. To compensate for the difference in the number of trials (n − c_1 instead of n), we use the above formulas with n − c_1 instead of n; since we let n go to infinity anyway, this does not change the result. Moreover, replacing I by I + c_2 changes the value of the argument z = I/n of f by O(n^{−1}); since f is smooth, namely Lipschitz-continuous, this also changes f(z) by at most O(n^{−1}). The result is thus not affected by more than the given error term: we obtain the claim by multiplying with n lg n.
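A numeric sanity check of Equation (13) (my own sketch; the parameter choices Beta(2, 2) and [x, y] = [1/3, 2/3] are arbitrary examples, not values taken from the paper):

```python
# Sketch: E[[xn <= I <= yn] * I/n] vs. alpha/(alpha+beta) * I_{x,y}(alpha+1, beta).
from scipy.stats import betabinom, beta

n, a, b = 10_000, 2, 2
x, y = 1/3, 2/3

lhs = sum(k / n * betabinom.pmf(k, n, a, b)
          for k in range(int(x * n), int(y * n) + 1))

# regularized incomplete beta function I_{x,y}(a+1, b) via the Beta CDF
rhs = a / (a + b) * (beta.cdf(y, a + 1, b) - beta.cdf(x, a + 1, b))

print(lhs, rhs)  # agree up to an O(1/n) error, as Equation (13) predicts
```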
Versions with strict inequalities in [xn ≤ J ≤ yn] only affect the bounds of the sums above by one, which again gives a negligible error of O(n −1 ).
This concludes the proof of part (a).
For part (b), we follow a similar route. The function we integrate is no longer Lipschitz-continuous, but a weaker form of smoothness is sufficient to bound the difference between the integral and its Riemann sums. Indeed, the above-cited Lemma 2.12 (b) of [23] is formulated for the weaker notion of Hölder-continuity: a function f : I → R defined on a bounded interval I is called Hölder-continuous with exponent h ∈ (0, 1] when |f(x) − f(y)| ≤ C|x − y|^h for all x, y ∈ I and some constant C. This generalizes Lipschitz-continuity (which corresponds to h = 1).
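For reference, the standard estimate behind the O(n^{−h}) Riemann-sum error used below runs as follows (my own addition, not quoted from [23]); for f Hölder-continuous with exponent h and constant C on [0, 1]:

```latex
\left| \int_0^1 f(z)\,\mathrm{d}z - \frac{1}{n}\sum_{j=0}^{n-1} f\!\Big(\frac{j}{n}\Big) \right|
\;\le\; \sum_{j=0}^{n-1} \int_{j/n}^{(j+1)/n} \Big| f(z) - f\!\Big(\frac{j}{n}\Big) \Big|\,\mathrm{d}z
\;\le\; n \cdot \frac{C\,(1/n)^{h+1}}{h+1}
\;=\; \frac{C}{h+1}\, n^{-h}.
```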
As above, we replace J by I = BetaBin(n, α, β) (in distribution), which affects the overall result by O(n^{−1}). We compute the expectation in the same way as for part (a); by the Hölder-continuity of the integrand, approximating the sum by the corresponding beta integral now incurs an error of O(n^{−h}), for any h ∈ (0, 1). Recall that we can choose h as close to 1 as we wish; this will only affect the constant hidden by the O(n^{−h}). It remains to actually compute the integral; fortunately, this "logarithmic beta integral" has a well-known closed form (see, e.g., [23], Eq. (2.30)). | 2018-04-03T01:44:06.886Z | 2018-03-01T00:00:00.000 | {
"year": 2018,
"sha1": "32c021aead112cc7dfb2c91b2ff22e82f1cd5e66",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "43ce69c5604f0202df1417f9978761b8601262bd",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
53015634 | pes2o/s2orc | v3-fos-license | Effectiveness and safety of herbal medicines for induction of labour: a systematic review and meta-analysis
Objective The use of herbal medicines for induction of labour (IOL) is common globally and yet its effects are not well understood. We assessed the efficacy and safety of herbal medicines for IOL. Design Systematic review and meta-analysis of published literature. Data sources We searched in MEDLINE, AMED and CINAHL in April 2017, updated in June 2018. Eligibility criteria We considered experimental and non-experimental studies that compared relevant pregnancy outcomes between users and non-users of herbal medicines for IOL. Data extraction and synthesis Data were extracted by two reviewers using a standardised form. A random-effects model was used to synthesise effect sizes and heterogeneity was explored through the I² statistic. The risk of bias was assessed using the 'Johns Hopkins Nursing School Critical Appraisal Tool' and the 'Cochrane Risk of Bias Tool'. Results A total of 1421 papers were identified through the searches, but only 10 were retained after eligibility and risk of bias assessments. The users of herbal medicine for IOL were significantly more likely to give birth within 24 hours than non-users (Risk Ratio (RR) 4.48; 95% CI 1.75 to 11.44). No significant difference in the incidence of caesarean section (RR 1.19; 95% CI 0.76 to 1.86), assisted vaginal delivery (RR 0.73; 95% CI 0.47 to 1.14), haemorrhage (RR 0.84; 95% CI 0.44 to 1.60), meconium-stained liquor (RR 1.20; 95% CI 0.65 to 2.23) and admission to nursery (RR 1.08; 95% CI 0.49 to 2.38) was found between users and non-users of herbal medicines for IOL. Conclusions The findings suggest that herbal medicines for IOL are effective, but there is inconclusive evidence of safety due to lack of good quality data. Thus, the use of herbal medicines for IOL should be avoided until safety issues are clarified. More studies are recommended to establish the safety of herbal medicines.
Strengths and limitations of this study
► Due to safety and ethical reasons, herbal medicines for pregnant women are rarely evaluated through randomised controlled/clinical trials (RCTs). Nonetheless, most of the reviews of herbal medicines during pregnancy are restricted to RCTs. The present review included non-experimental studies to assess a wider evidence base.
► No restrictions were applied on the date of publication, location, study design and types of treatment (herbal medicine used).
► There is lack of data on key outcomes (eg, maternal death and sepsis) and from low-income countries.
► Some analyses did not have sufficient statistical power due to the inadequate number of studies and small sample sizes.

Introduction
Across the world, the use of unconventional or traditional medical therapies is very high. [1][2][3][4] These non-biomedical remedies are together referred to as complementary and alternative medicines (CAMs). WHO recognises the role of CAM of verified quality, safety and efficacy in ensuring universal access to healthcare. 5 As such, for the period between 2014 and 2023, the WHO traditional medicine strategy focuses on harnessing the potential contribution of CAM in healthcare and promoting its safe and effective use. 5 Although this requires rigorous evidence on safety and efficacy of CAM, research in this area remains limited. 5 Herbal medicine or medicinal plant is one of the well-known CAM therapies that involve the use of plants or plant extracts for therapeutic motives. 6 As in the general population, the use of herbal medicines is common among pregnant women globally. [7][8][9][10] The estimated prevalence varies between regions and countries but ranges from 10% to 80%. 11 12 One of the common indications for herbal medicine use during pregnancy is prolonged labour or merely the desire to induce or augment labour for different reasons. 13 14 This practice is well documented and transcends cultural and generational boundaries. 14 From a medical perspective, induction of labour (IOL) changes the physiological processes associated with childbirth in ways that may increase the risk of adverse pregnancy outcomes such as neonatal mortality, fetal distress, premature birth, haemorrhage, uterine rupture and caesarean section. [15][16][17] Because of this, WHO recommends that labour should only be induced in health facilities with the capacity for continual monitoring and emergency obstetric care. 18 The emphasis on facility-based IOL and close monitoring of pregnant women demonstrates the risks associated with the procedure. Nonetheless, with herbal medicine-induced labour, monitoring of women is often out of the question due to self-prescription. 2 19 So, the use of herbal medicines for IOL is likely to be riskier and it is plausibly an important factor influencing adverse pregnancy outcomes in settings where its use is common.
In vitro studies have confirmed that some of the herbal medicines used during pregnancy have oxytocic properties. 13 20 For instance, a study in Nigeria found that several plants that are used to facilitate childbirth in the country significantly induced muscle cell contractility. 13 However, safety is the main concern as many of the herbal medicines are believed to be poisonous and may contribute to maternal and neonatal mortality as well as morbidity. 21 22 To date, there is mixed evidence from population-based studies regarding the efficacy and safety of herbal medicines for IOL [23][24][25] and yet available data have not been systematically evaluated and synthesised to provide the rigorous evidence necessary to inform decisions. Lack of high quality and consistent data on efficacy and safety of herbal medicines makes recommendations and regulations challenging. 5 Consequently, we conducted a systematic review to explore the effectiveness and safety of herbal medicines for IOL. This review is important to inform the development of guidelines relating to the use of herbal medicines among pregnant women.
Methods
Design
This is a systematic review and meta-analysis of published literature on effectiveness and safety of herbal medicines for IOL. The reporting of the abstract (online supplementary file S1) and results (online supplementary file S2) is guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). 26
Data sources and searches
We searched in MEDLINE, AMED and CINAHL from 13 February to 22 April 2017 and repeated this on 22 June 2018 using key terms such as herbal medicine, labour and pregnancy outcomes, which were modified in accordance with each database (online supplementary file S3). More papers were identified through scanning the reference lists of studies found through the initial search as well as direct searches in the following journals: African Journals Online, Journal for Herbal Medicine, BMC Complementary and Alternative Medicine, Journal of Alternative and Complementary Medicine and Journal of Integrative Medicine.
Inclusion/exclusion criteria
The inclusion criteria were based on participant, intervention, control, outcomes and studies. We considered studies with pregnant or postpartum women as participants. The treatment or exposure was herbal medicines for induction or shortening of labour. For studies that did not explicitly indicate the reasons for use, the name of the medicine was used to determine if IOL was the possible motive. There was no restriction on dosage, but the route of administration was oral. The plants could be either processed or crude and used alone or alongside conventional medicines. An appropriate comparison group comprised either pregnant women who did not use the herbal medicine under consideration or used biomedical drugs exclusively. The maternal outcomes were haemorrhage, sepsis, caesarean section, uterine rupture, assisted vaginal delivery and maternal death; while the neonatal outcomes were stillbirth, premature birth, neonatal mortality, meconium-stained liquor (MSL)/fetal distress, birth defects and referral to neonatal intensive care unit (also known as nursery).
Due to ethical, safety and methodological issues, pregnant women are often excluded from randomised controlled/clinical trials (RCTs) and herbal medicines may not be evaluated through RCTs. [27][28][29] Thus, observational studies are a common source of literature for efficacy and safety of herbal medicines in pregnancy. Accordingly, we considered both experimental and non-experimental study designs. In particular, the following study designs were eligible for inclusion: RCTs, quasi-experimental, cohort, case-control and cross-sectional. We only considered studies published in English, or in other languages but with a detailed English abstract. No restrictions were applied on the date of publication and study setting.
Data extraction
A data extraction form (online supplementary file S4) was developed specifically for this review based on templates developed by the Joanna Briggs Institute and the Cochrane Pregnancy and Childbirth Group. 30 31 The form included specific details about the study design, participants, setting, intervention/exposure, control and outcomes. Owing to the focus of our study (ie, efficacy and safety), 'per protocol' treatment effects were preferred in RCTs. 32 As none of the observational studies reported adjusted effect estimates, crude data were extracted and used in this review. Two reviewers (CZ and CM) separately extracted the data and any differences were resolved by discussion.
Quality/risk of bias assessment
Two different tools were used to assess the risk of bias in experimental and non-experimental studies that met the inclusion criteria. CZ and CM independently performed the risk of bias assessment and any disagreements were resolved by discussion. For experimental studies, the Cochrane Risk of Bias tool for RCTs 33 was used and the following domains were assessed: random sequence generation, allocation concealment, blinding of participants, blinding of outcome assessment, incomplete outcome data, selective reporting and other biases (online supplementary file S5). Only abstracts were available in English for two studies 25 34 and hence their risk of bias is largely unclear. The overall risk of bias for the other RCTs is low.
The risk of bias for non-experimental studies was assessed using a standardised critical appraisal tool developed by the Johns Hopkins Nursing School. 35 This tool divides the strength of research evidence into five levels based on the study design. The RCTs occupy the top level (level I) followed by quasi-experimental studies (level II) and other non-experimental studies (level III). The last two levels are for opinion-related papers either based on research evidence (level IV) or individual expertise (level V). The quality of evidence is further graded as high (A), good (B) and low quality or major flaws (C) depending on the risk of bias and scientific basis for the conclusions. Based on this tool, a list of 10 questions (or domains) was developed to guide the assessment (online supplementary file S6). Since the review used crude data, the need to control extraneous variables and whether this was done (if required) were key factors in determining the study grade. For instance, grade C was given to studies in which the treatment and control groups were not comparable and confounders were not adjusted for. Two studies 23 36 received a grade of C and were eventually excluded from the review.
Data analysis
Meta-analyses were performed to compare the onset of labour (effectiveness) and the incidence of adverse pregnancy outcomes (safety) between the users and non-users of herbal medicines for IOL. As variations were expected between studies due to the differences in setting, design and types of herbal medicines, a random-effects model was used to synthesise the effect sizes of the studies. 37 Heterogeneity was explored through the I² statistic, and meta-analysis was conducted regardless of the outcome, as the random-effects model accommodates statistical heterogeneity. 38 Subject to the availability of a sufficient number of studies, subgroup analyses were conducted based on the type of treatment/exposure or study design to explain observed heterogeneity. Potential publication bias was assessed using Egger's test, since all analyses comprised fewer than 10 studies, too few for a funnel plot method. 39 40 Summary effects were measured using risk ratios (RRs) and all analyses were performed using Stata/SE V.13.1 software.
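To illustrate the pooling procedure just described, the following is a minimal sketch of the standard DerSimonian-Laird random-effects estimator together with the I² statistic (the analyses themselves were run in Stata; this Python version and its input values are purely illustrative, not the review's data):

```python
# Sketch: DerSimonian-Laird random-effects pooling of risk ratios, plus I^2.
import numpy as np

def pool_random_effects(rr, ci_low, ci_high):
    y = np.log(rr)                                   # per-study log-RR
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1 / se**2                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe)**2)                    # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (se**2 + tau2)                        # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1 / np.sum(w_re))
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp([mu, mu - 1.96 * se_mu, mu + 1.96 * se_mu]), i2

# hypothetical per-study RRs with 95% CIs (not from the included studies)
rr = np.array([2.1, 5.0, 3.8])
lo, hi = np.array([1.1, 2.0, 1.5]), np.array([4.0, 12.5, 9.6])
pooled, i_squared = pool_random_effects(rr, lo, hi)
print(pooled, i_squared)   # pooled RR [est, low, high] and I^2 in percent
```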
Patient and public involvement
As this was a review of existing literature, we did not involve any patient and the public in the design and conduct of the study. However, the development of the review question was informed by the experiences of pregnant women as observed in the literature.
Results
Study selection process
Searches in the three databases returned a total of 1421 papers (CINAHL=420, AMED=279 and MEDLINE=723).
After removal of duplicates (n=539), the titles and/ or abstracts of 882 publications were screened and 802 studies were dropped at this stage for various reasons (see figure 1). Full-text articles were retrieved for 80 studies for further eligibility assessment and 71 of them failed to meet the inclusion criteria. Additional potential relevant papers (n=3) were identified through direct searches in journals and reference lists. Twelve papers were appraised in the final stage and two were excluded due to poor methodological quality (see online supplementary file S6). Thus, 10 studies were included in this review.
An overview of the included studies
Online supplementary file S7 presents the characteristics of the studies, such as location, exposure, outcomes and ratings. In brief, of the 10 studies in the review, three were conducted in Iran, two in the USA and one each in South Africa, Israel, Thailand, Australia and Italy. In relation to the World Bank's classification of countries by income, half of the studies were conducted in high-income countries and the other half in upper-middle-income countries (UMICs). No study from low-income countries (LICs) or lower-middle-income countries (LMICs) was included.
Three types of exposures were reported by the studies. An Australian study was concerned with exposure to raspberry leaf. 41 This is one of the common herbal remedies used during pregnancy that is believed to prepare the uterus for childbirth and thereby effectively reduce the length of labour. 14 In this study, exposure was self-reported by the participants as they were given raspberry pills by the nurses to take at home. Eight studies examined exposure to castor oil. 25 34 42-47 The oil is derived from the castor plant's bean and is widely thought to have oxytocic properties. 44 45 In all the studies, pregnant women consumed 60 mL of castor oil, but in one study 43 the treatment was repeated in women who did not deliver within 1 week after the first dose. One study 48 assessed general exposure to herbal medicines, but there are indications in the report that they were for IOL.
Five of the included studies are RCTs and the remaining five are non-experimental, including cohort (3), case-control (1) and quasi-experimental (1) designs. The following pregnancy outcomes were reported by the included studies: onset of labour within 24 hours, caesarean section, haemorrhage, neonatal referral to nursery care, MSL, assisted vaginal delivery, stillbirth, neonatal death, maternal death and uterus rupture.
Outcome 1: onset of labour within 24 hours
Eight studies explored the onset of labour within 24 hours after the use of herbal medicine for IOL. Castor oil was the exposure or intervention in all the studies. As shown in figure 2, herbal medicine users were significantly more likely to give birth within 24 hours than non-users (RR 3.46; 95% CI 1.58 to 7.55). In the subgroup analysis by study design, similar results were observed among experimental studies, but there was no significant difference in onset of labour between users and non-users among the non-experimental studies (online supplementary file S8). Publication bias was not an issue (bias 3.23; 95% CI 0.48 to 5.97), but heterogeneity was significant (I²=90.2%, p≤0.001) and this was likely due to variations in study design and/or setting.
Outcome 2: incidence of caesarean section
The association between herbal medicine use and occurrence of caesarean section was examined by six studies. A meta-analysis (figure 3) found no significant difference in the rate of caesarean section between the users and non-users of herbal medicines (RR 1.19; 95% CI 0.76 to 1.86). Similar results were observed in subgroup analysis by type of treatment (online supplementary file S9) and study design (online supplementary file S10), except that Mabina et al 48 (the study of general herbal exposure) found a significant difference in the incidence of caesarean section between the study groups. Both heterogeneity (I²=45.6%; p=0.102) and publication bias were not significant (Bias=−0.39; 95% CI −4.47 to 3.70).
Outcome 3: incidence of assisted vaginal delivery
In this review, assisted vaginal delivery was defined as the use of medical interventions such as forceps and or episiotomy to aid delivery. This outcome was reported by five studies and a meta-analysis (figure 4) found no significant difference between the users and non-users of herbal medicines (RR 0.73; 95% CI 0.47 to 1.14). Heterogeneity was significant (I 2 =74.4%; p=0.004), but publication bias was not (Bias=−1.87; 95% CI −6.12 to 2.38). Subgroup analyses by type of treatment (online supplementary file S11) and study design (online supplementary file S12) did not substantially change the results.
Outcome 4: incidence of haemorrhage
The occurrence of haemorrhage among users and non-users of herbal medicines for IOL was assessed by four studies and a meta-analysis (figure 5) shows no significant difference between the two groups (RR 0.84; 95% CI 0.44 to 1.60). These results were consistent with those in subgroup analyses by type of treatment (online supplementary file S13) as well as study design (online supplementary file S14). Heterogeneity was almost non-existent (I 2 =0.0%; p=0.802) and publication bias was not significant (Bias 0.49; 95% CI −2.73 to 3.70).
Outcome 5: incidence of Msl
The occurrence of MSL, a strong indicator of fetal distress, 49 was reported by five studies. Overall, there is no significant difference in the rate of MSL between users and non-users of herbal medicines (RR 1.20; 95% CI 0.65 to 2.23) (figure 6). Comparable results were observed in subgroup analysis by type of treatment (online supplementary file S15). However, in subgroup analysis by study design, the experimental studies tended to favour treatment while the non-experimental inclined towards control, but both results were not statistically significant (online supplementary file S16). Publication bias was not significant (Bias=−2.38; 95% CI −6.76 to 2.00), but heterogeneity was high (I 2 =77.9%; p=0.001) probably due to variations across studies.
Outcome 6: neonates' admission to nursery
Whether a newborn child is referred to a neonatal intensive care unit (also known as nursery) or not is often used as an indicator of well-being. 41 This outcome was reported by three studies and none of them individually found a significant difference in admission to nursery between users and non-users of herbal medicines for IOL. A meta-analysis (figure 7) found no significant difference between the two groups (RR 1.08; 95% CI 0.49 to 2.38). Both publication bias (bias=−1.51; 95% CI −7.66 to 4.64) and heterogeneity (I²=0.0%; p=0.482) were not significant. Subgroup analysis was not performed due to the inadequate number of studies.
Other outcomes
The following outcomes were either reported by a single study or there was insufficient data and hence meta-analyses were not performed: maternal death, stillbirth and uterine rupture. A single study assessed maternal death and stillbirth outcomes among users (n=205) and non-users (n=407) of castor oil to induce labour. 43 No maternal death occurred in either group, but one case of stillbirth (0.3%) was reported in the control group. Uterine rupture was reported by two studies in relation to castor oil and only one case was reported among exposed women in one of the studies. 43 47 Overall, no study found a significant difference in any of the three outcomes between users and non-users of herbal medicines for IOL.
Figure 3 The use of herbal medicines for induction of labour and the incidence of caesarean section. RR, risk ratio.
Discussion
We have found that herbal medicines for IOL are effective and there is no concrete evidence of association with adverse outcomes. On efficacy, we have found that women who used the herbal medicines were significantly more likely to give birth within 24 hours than their counterparts who did not. This corroborates many in vitro studies around the world that have shown that some herbal medicines effectively induce uterine contractions. 13 20 50 For instance, studies in Malawi and Nigeria have established that some medicinal plants commonly prescribed by traditional healers to induce childbirth have oxytocic properties. 13 20 Previous reviews, however, found insufficient evidence for the effectiveness of herbal medicines for IOL. 51 52 This contradiction could be a result of the differences in inclusion criteria. Most of the related reviews excluded non-experimental studies, 51 52 which are a common source of efficacy data due to safety issues surrounding RCTs for herbal medicines or pregnant women. 27 28 53 Whereas this allowed us to assess a wider evidence base than the previous reviews, we are also mindful of the biases inherent in observational studies. Therefore, a definite conclusion about the efficacy of herbal remedies for IOL cannot be put forward based on the present review. [54][55][56] On safety, we did not find any statistically significant difference in the rate of haemorrhage, caesarean section, assisted vaginal delivery, referral to neonatal intensive care unit, MSL, maternal death, stillbirth and uterine rupture between participants in treated and control groups. The implication is that herbal medicines for IOL may not be harmful to women or neonates. This observation is consistent with the results that have been reported by other reviews on a related topic. 51 52 Notwithstanding, caution must be exercised in the interpretation of these data because in some outcomes (eg, caesarean section) the difference in the number of cases between treated and control groups was very high. This was also noted by Boltman-Binkowski 51 in her review. Despite lack of statistical significance, she argues that a higher number of adverse outcomes among women who ingested castor oil implies that the link between the two cannot be entirely dismissed. The finding may also be inconclusive owing to lack of data on key outcomes, such as maternal death, sepsis and neonatal death.
Figure 4 The use of herbal medicines for induction of labour and the incidence of assisted vaginal delivery. RR, risk ratio.
Figure 5 The use of herbal medicines for induction of labour and the incidence of haemorrhage. RR, risk ratio.
The results of this review should be considered in the context of the following limitations and biases. First, although the baseline characteristics of the observational studies were similar across study groups, not all potential confounders were measured. Likewise, of the five RCTs in this review, three were unclear on selection, performance and detection biases while two had unclear attrition, reporting and other biases. Thus, the risk of bias may have been introduced as a result of these poor methodologies. In addition, some analyses lacked adequate statistical power because of small sample sizes or the insufficient number of studies. These issues strongly suggest that the outcomes of this review be treated with considerable caution.
Second, in almost all the studies, herbal remedies were provided at the health facility and pregnant women were somewhat monitored by clinical staff. In this way, many potential adverse events may have been averted or lessened. Nevertheless, this does not entirely represent the reality of the context in which herbal medicines are taken, and thus, the results of these studies may be misleading. In sub-Saharan Africa, for instance, herbal medicines are often taken outside the health facility without the knowledge and support of healthcare providers. 12 48 57 In such situations, the risk of adverse events could be higher than reported by these studies.
Figure 6 The use of herbal medicines for induction of labour and the incidence of meconium-stained liquor. RR, risk ratio.
Figure 7 The use of herbal medicines for induction of labour and neonatal admission to nursery. RR, risk ratio.
Lastly, all studies in this review are from high-income countries and UMICs. No study from an LIC or LMIC was included. This probably suggests a lack of studies on this subject in limited-resource settings. Hence, the findings of this review cannot be extrapolated beyond high-income countries and UMICs. Since the issue of safety of herbal medicines in pregnancy relates to maternal as well as neonatal morbidity and mortality, 22 48 58-60 which are principally the problems of LICs, 61 62 high-quality studies that include a range of maternal morbidity and mortality outcomes in LICs are urgently needed. 22 63
Conclusions and implications
The evidence from this review suggests that herbal medicines for IOL are effective, but their safety among women and neonates requires further exploration. Therefore, we would not recommend the use of these medicines until all the safety concerns are adequately addressed. In the meantime, larger safety and efficacy studies with sufficient statistical power and of high methodological quality should be conducted to improve the evidence base. | 2018-11-09T20:33:54.201Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "4b451e0d38dc94b0e7ca6eecbcbdd99b1734b77a",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/10/e022499.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "4b451e0d38dc94b0e7ca6eecbcbdd99b1734b77a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
71563425 | pes2o/s2orc | v3-fos-license | Practice Trends in the Surgical Management of Renal Tumors in an Academic Medical Center in the Past Decade
Objectives. To evaluate the trends in the surgical treatment of renal tumors in an academic medical center. Methods. Between 2001 and 2010, 505 patients were treated surgically at the Federal University of Sao Paulo for renal tumors. The following variables were observed and analyzed according to their evolution through time: frequency and types of surgeries performed, operative time, hospital stay, and warm ischemia time for partial nephrectomy. Results. An increase in the frequency of laparoscopic radical nephrectomies, open partial nephrectomies, and laparoscopic partial nephrectomies was observed when comparing the periods from 2001 to 2005 (4.3%, 2.6%, and 12.6%, resp.) and from 2006 to 2010 (13.2%, 18.6%, and 20.2%, resp.; P < 0.001). The averages of operative time, hospital stay, and tumor size diminished (from 211.7 to 177.17 minutes, from 5.52 to 4.22 days, and from 6.72 to 5.29 cm, resp.) when comparing the periods from 2001 to 2005 and from 2006 to 2010 (P < 0.001, P = 0.016, P < 0.001, resp.). Conclusion. Over time, there has been a significant reduction in the hospital stay, operative time, and size of renal tumors in patients treated surgically. The frequencies of minimally invasive and nephron-sparing surgeries have increased over the last years.
Introduction
Surgery continues to be the main form of treatment for renal tumors. Renal cell carcinoma (RCC) is the main malignant renal tumor and is one of the most lethal urologic cancers. It includes a number of distinct subtypes derived from the various parts of the nephron, each one with a unique genetic basis and tumor biology [1].
The goal of the surgery is to remove the tumor with an adequate surgical margin. In 1969, Robson and colleagues established radical nephrectomy (RN) as the "gold standard" procedure to cure renal tumors [2]. However, studies have demonstrated that patients who undergo RN have a greater chance of deterioration of renal function [3]. At present, over 60% of renal tumors are incidentally diagnosed in ultrasound scans or in computerized tomographies conducted for other reasons [4]. These tumors are smaller and have a lesser chance of developing metastases [5]. More recently, based on knowledge about chronic renal disease [6] allied to improved technology, minimally invasive and nephron-sparing surgeries are being increasingly used for smaller renal tumors [7,8].
However, despite the significant number of scientific papers revealing the advantages of minimally invasive and nephron-sparing surgery, most urologists still perform open and radical nephrectomies, even for small renal tumors [9]. Moreover, inconsistencies have been pointed out in the treatment of small renal tumors by laparoscopists, as many would remove the entire kidney unnecessarily, while open surgeons are more likely to perform partial nephrectomy [10].
The purpose of this study is to evaluate the trends in renal tumor surgical treatment over the past decade in a teaching medical center where there are experts in both laparoscopic and open surgeries, in order to observe whether and when the benefits of minimally invasive and nephron-sparing surgery were incorporated.
Material and Methods
Between January 2001 and December 2010, all the patients who underwent surgery for renal tumors at the Federal University of São Paulo were entered in a prospective database. A cross-sectional study was performed with data obtained from the database and analyzed retrospectively. The preoperative evaluation included blood and imaging workup.
The types and frequencies of surgeries performed and the histological types of renal tumors diagnosed were observed according to their evolution over time. The data from 2001 to 2005 (number of surgeries, operative time, hospital stay, and warm ischemia time during partial nephrectomies) were compared to the data from 2006 to 2010.
The Mann-Whitney U test was used to evaluate the differences between the variables analyzed. The comparisons between proportions were done through the Chi-square test (χ²). The results were considered statistically significant when the p value was below 5% (p < 0.05). The statistical analysis was performed using PASW Statistics 17 software.
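As an illustration of the two tests named above (a hedged sketch with made-up numbers, not the study's data; the original analysis was run in the statistical package cited above):

```python
# Sketch: Mann-Whitney U and chi-square tests, as used in the analysis.
from scipy.stats import mannwhitneyu, chi2_contingency

# hypothetical hospital-stay durations (days) for two surgical approaches
open_stay = [6, 5, 7, 4, 6, 8]
lap_stay = [4, 3, 5, 4, 3, 4]
u_stat, p_mw = mannwhitneyu(open_stay, lap_stay, alternative="two-sided")

# hypothetical counts of surgery types per period for the chi-square test
table = [[150, 20],   # 2001-2005: ORN, LRN
         [70, 60]]    # 2006-2010: ORN, LRN
chi2, p_chi, dof, expected = chi2_contingency(table)

print(p_mw, p_chi)    # statistically significant when p < 0.05
```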
Results
Between January 2001 and December 2010, 505 patients were treated surgically for renal tumors. The majority were males (58.6%). Average age, operative time, hospital stay, and renal tumor size were 56 years (range 17 to 84), 190.3 min (range 55 to 630), 4.7 days (range 1 to 60), and 5.9 cm (range 0.5 to 28), respectively. The most frequent histological subtype was RCC (74.5%), followed by angiomyolipoma (6.9%). Laparoscopic partial nephrectomy (LPN) was the most performed surgery, totaling 32.1% of all surgical procedures. It was followed by open radical nephrectomy (ORN), open partial nephrectomy (OPN), and laparoscopic radical nephrectomy (LRN), each representing 27.9%, 20.8%, and 17% of surgeries, respectively. Other minimally invasive surgical procedures, such as radiofrequency and cryotherapy, were also utilized but in a minority of patients (1.6%) (Table 1).
Figure 1 shows the frequency of surgeries per year for the period from 2001 to 2010, and it reveals a tendency toward a reduction in the number of ORNs and an increase in the number of LRNs, OPNs, and LPNs as time passes. Such an observation was confirmed when we stratified the comparison by period (Table 2). There was a statistically significant reduction (P < 0.001) in the frequency of ORNs, and an increase in the frequency of LRNs, OPNs, and LPNs between 2001-2005 (4.3%, 2.6%, and 12.6%, resp.) and 2006-2010 (13.2%, 18.6%, and 20.2%, resp.), with statistical significance (P < 0.001).
Table 3 shows that, when considering the type of surgical procedure (open or laparoscopic), the numbers were similar (246 open nephrectomies and 248 laparoscopic nephrectomies); however, the average operative time, hospital stay, and tumor size were lower for laparoscopic procedures when compared to open surgeries, revealing a statistically significant difference (P < 0.001 for all three variables).
Discussion
The present study reveals that there has been a change in the way renal tumor surgery has been approached throughout the past ten years. ORN was the most performed surgery (72.4%) in the year 2001, whereas in 2010 this same type of surgery had already reached the lowest frequency among the conducted procedures (11.8%). In 1963, Robson, from the University of Toronto, established RN as the main surgery for the treatment of renal tumors due to its excellent oncologic results [2]. However, more recent studies have revealed that patients who underwent RN presented a greater risk of elevated creatinine levels and of developing proteinuria [11].
During the first years after laparoscopy was introduced into modern surgery, conducting a nephrectomy through this method seemed to be an impossible task. In 1990, Clayman, from the Washington University School of Medicine, made a significant contribution to the development of laparoscopic renal surgery, which made it possible to remove the entire kidney from the abdominal cavity through an incision, placing the organ inside an endobag and mincing it with the aid of a motorized morcellator [12]. Subsequent analyses were quick to demonstrate the advantages of laparoscopic procedures compared to open surgery. When compared to classical surgery, the minimally invasive procedures resulted in similar oncological results, reduced blood loss, a smaller quantity of analgesics in the postoperative period, a decrease in hospital stay, and no increase in the risk of postoperative complications [13,14]. The present series demonstrated a significant reduction in operative time, hospital stay, and tumor size when comparing laparoscopy to open surgeries between the periods of 2001 to 2005 and 2006 to 2010. These data reveal an important evolution in the feasibility of the laparoscopic procedure.
Nephron-sparing surgery for the treatment of renal tumors was initially described by Czerny in 1890, but the high death rate limited its application for a long time [15]. Recently, advances in imaging technology, experience obtained in renovascular surgeries for other conditions, improvement in methods to prevent renal damage by ischemia, the increase in the number of low-grade RCCs found incidentally, and the good results obtained in long-term patient survival have made nephron-sparing surgeries an excellent alternative for the treatment of small renal masses [16]. OPN allows for better renal function preservation and oncological results similar to RN in selected patients [17]. LPN has been developed more recently and has incorporated the fundamental principles of open surgery [18]. In the last years, nephron-sparing surgeries have gained worldwide acceptance as the method for treating small renal tumors with peripheral location [14]. In many academic centers, partial nephrectomies today represent 60%-70% of all surgeries conducted for RCC [19].
In our center, ORN has been substituted by minimally invasive and nephron-sparing procedures over time. In 2010, partial nephrectomies were already the most frequent surgeries (72.4%), with LPN the most performed procedure over the whole period (32.1%), showing an ascending curve in the last years (Figure 1). When the number of surgeries is compared by period (2001 to 2005 and 2006 to 2010), we observe a reduction in ORNs and an increase in minimally invasive and nephron-sparing surgeries.
The size of the tumor is the main prognostic factor for RCC. This observation results from several publications on this matter [20], which have encouraged frequent proposals for changes in the staging of the illness [21]. Studies have demonstrated a significant reduction in the size of tumors at diagnosis through time [22]. Our data also corroborate this fact, revealing a gradual reduction in the size of tumors, with a drop of 41.9% during the period of 2001 to 2010 (Figure 2). This fact can be justified by the advances in imaging methods, which are providing a gradual increase in the so-called "renal incidentalomas." It is well known that tumors found incidentally are smaller and possess a lower risk of malignancy [23]. The reduction in the size of tumors has played an important role in surgical procedures for the preservation of nephrons in the present treatment of renal tumors. In such a context, partial nephrectomies play an important role, with a bigger emphasis on LPN, which in dexterous hands has shown better results relative to OPN in more recent studies [24]. Other therapies, such as active surveillance and ablative techniques, have also surfaced in the face of this new reality in the diagnosis of renal tumors [25,26].
Conclusions
There has been a significant reduction in the hospital stay, operative time, and size of renal tumors in patients treated surgically in the past decade. The frequencies of minimally invasive and nephron-sparing surgeries have increased over the last 10 years.
Figure 2: Reduction of renal tumor size in the period from 2001 to 2010.
Table 1: Demographic data and tumor characteristics from January 2001 to December 2010.
Table 2: Number and proportion of nephrectomies from 2001 to 2005 and from 2006 to 2010.
Table 3: Number of open and laparoscopic nephrectomies and comparison of the mean operative time, hospital stay, and tumor size between 2001 and 2010. | 2019-03-08T14:07:01.253Z | 2013-04-07T00:00:00.000 | {
"year": 2013,
"sha1": "938d123a63734e1cd9b83095e17a52b2a34db1a7",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/archive/2013/945853.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "938d123a63734e1cd9b83095e17a52b2a34db1a7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235221011 | pes2o/s2orc | v3-fos-license | Periorbital injections of botulinum toxin a: a novel therapeutic option for convergence spasm in neuropsychiatric disorders
Purpose Convergence spasm (CS, spasm of near reflex) is characterized by transient attacks of convergence, miosis and accommodation, often associated with functional neurological disorders. To date, no simple and efficient treatment option is available for CS. This study investigates whether periorbital botulinum toxin injections as used in essential blepharospasm are also a treatment option in these patients. Methods All patients with convergence spasm having been treated with periorbital BoNTA injections in the department of neuro-ophthalmology were identified. Data were extracted from patient files concerning details and subjective effectiveness of botulinum toxin injections and relation to psychiatric or neurological disorders. Patients reporting with a history of closed-head trauma or organic neurologic pathologies possibly causing CS were excluded. A telephone assessment with a standardized questionnaire was performed to evaluate mental health issues as a trigger, as well as the long-term effect and satisfaction with periorbital injections. Results Of 16 patients treated with periorbital botulinum toxin injections for convergence spasm, 9 patients reported depression and/or anxiety disorders ongoing or in the past. A median number of 3 injections (range 1–13) was administered with a variable effect (relief of symptoms) between no effect and effect of up to more than 12 weeks. A longitudinal follow-up revealed ongoing symptoms in five patients. Conclusions Periorbital botulinum toxin injections are less invasive than injections in the medial rectus muscle and can be a bridging therapeutic option in patients with CS. Mental health exploration is important due to psychiatric comorbidity.
Introduction
Convergence spasm (also known as spasm of the near reflex) is characterized by a transient and intermittent appearance of bilateral convergence, miosis and accommodation, mostly symptomatic to patients as blurred or double vision [1][2][3]. Since its first description by von Graefe in 1856, a plethora of neurological and psychiatric diseases has been identified as underlying causes, among others including encephalitis, metabolic encephalopathy, tabes dorsalis, post-myelography, vertebrobasilar ischemia, brain stem pathology and multiple sclerosis [4][5][6].
The clinical signs of patients with convergence spasm are often misinterpreted as sixth nerve palsy, the main differential diagnosis, and patients are then subjected to further diagnostic tests or even invasive procedures such as lumbar puncture [4].
However, convergence spasm can often be a presenting symptom of functional neurological disorders. This in turn can be comorbid with stress, anxiety and personality disorders or such features may be absent [7,8].
Several treatment options for both types of convergence spasm have been proposed, including cycloplegic eye drops (with resulting blurred vision at near distance and the need for additional plus lenses), prisms or squint surgery, most of them with limited effect [14][15][16][17]. Some patients benefit from repeated intramuscular Botulinum toxin A (BoNTA) injections in one or both medial rectus muscles, as shown in data from 15 patients of our department published in 2002 [18]. Seven of 15 patients reported no long-term effect of intramuscular BoNTA.
However, there are several disadvantages of intramuscular injections in the medial rectus muscle, such as the requirement of special expertise in EMG recording of extraocular eye muscles. In addition, intramuscular injections are not recommended in patients under anticoagulant therapy due to the risk of peri- and retroorbital bleedings.
In our department, periorbital injections of BoNTA of the periorbital regions (M. orbicularis oculi, see Fig. 1) are regularly performed in patients with essential blepharospasm in a dedicated clinic.
Due to the disturbing side effects of cycloplegic eye drops in patients with convergence spasm and the common comorbidity of psychogenic disorders in these two entities, we initiated periorbital BoNTA therapy in a patient who could not discontinue his oral anticoagulation. After observing an unanticipated success in this patient, all patients with convergence spasm have since been treated with bilateral periorbital injections of BoNTA as first-line therapy. Injections in the medial rectus muscle were only performed in patients in whom the periorbital injections were not successful.
Periorbital injections are generally performed without EMG recording and also in patients under anticoagulation, similar to patients with essential blepharospasm. An initial dose of 12.5-15.0 IE ona- or incobotulinumtoxin (Botox®, Xeomin®) per eye was usually administered, including pretarsal injections; the dosage could be adapted according to patient needs (Fig. 1).
The aim of this study was to evaluate the potential of periorbital BoNTA injections in patients with convergence spasm as a novel treatment option.
Methods
All files of patients treated with periorbital BoNTA injections in the M. orbicularis oculi ("periorbital injection") for convergence spasm between 2008 and 2019 were analyzed in this study. All patients presented at the clinic of Neuro-ophthalmology and Orthoptics at the Department of Ophthalmology, University of Bonn, which is a tertiary care referral center for botulinum toxin injections with a dedicated weekly BoNTA clinic.
Inclusion criteria were a diagnosis of convergence spasm based on orthoptic and ophthalmological examination followed by at least one periorbital injection of BoNTA.
The patients' files were evaluated retrospectively for referral diagnosis, ophthalmological and systemic diseases and specific psychiatric or neurological disorders at presentation and during the course of visits.
Exclusion criteria were the history of a closed-head trauma or organic neurological diseases possibly causing convergence spam (e.g., several patients with intracranial hemorrhages).
Furthermore, details of BoNTA injections and treatment effects were assessed including subjective change of symptoms, duration of relief and complications. The treatment effect reported by the patients was categorized into three groups: long (12 weeks or longer), intermediate (7-11 weeks) or short-term duration of symptom relief (6 weeks or less).
After a positive vote from the institutional ethics committee (Appr. Nr. 139/19, University of Bonn, Bonn, Germany), a letter of notification regarding our study was sent to all patients in March 2020 with the possibility to decline participation.
All patients were then contacted by a standardized telephone interview for an assessment of their long-term effects of the BoNTA treatments and their current health condition.
Statistical analysis was performed using GraphPad Prism version 8.0 (GraphPad Software, San Diego, CA, USA).
Patient characteristics
A total of 16 patients were identified and included in this study (Table 1). The median age at first presentation was 40 years (IQR 26-44.5; range 11-52 years); 9 were female.
Two patients were referred due to the diagnosis of convergence spasm, all other patients (n = 14) were referred for either general consultation in our Neuro-ophthalmology clinic or with another diagnosis, e.g., sixth nerve palsy or esotropia. Previous therapeutic approaches without sufficient effect included prisms (n = 3), cycloplegic eye drops (n = 3) and squint surgery (n = 2). Nine patients had not received specific treatment before referral. Duration of symptoms until presentation was between 6 months and 4 years.
Eight of 16 patients presented at baseline with a psychiatric diagnosis in their history or were in current psychological and/or medical treatment with depression (n = 4) and anxiety disorder (n = 1) as the predominant comorbidity. One patient had a mental handicap since birth and two patients were in psychological treatment awaiting a confirmed diagnosis. Eight patients had no history of psychiatric disorders, but three of them indicated extreme mental stress situations when the eye problems started, two patients at work and one in private life (ID 9, 13, 14 in Table 1).
All patients had full orthoptic and ophthalmologic examination at every visit and all refractive errors were fully corrected, except in one patient, where full correction of a unilateral hyperopia was only accepted after two injections of BoNTA (ID 6 in Table 1). In all patients, convergence spasm could be documented initially including miosis and overaccomodation.
Three patients had a history of squint surgery for convergent strabismus in childhood, not related to the actual problems. Two further patients had esotropia, and one patient had an intermittent exotropia.
The most common additional ophthalmological symptom alongside the convergence spasm could be attributed to a dry eye syndrome (n = 10, 62.5%) and was in all cases confirmed in our examination.
BoNTA injections
All 16 patients received periorbital injections with a median number of 3 injections (range 1-13). Figure 1 shows the injection points and dosage of BoNTA for initial treatments. In patients with more than one injection, dosage was modified according to side effects and patients' needs: 13 patients were treated with 12.5 IE per side, in two patients the dose was reduced to 7.5 IE/10.0 IE, and in one patient the dosage was 17.5 IE.
Five patients were treated with only one injection, two of these did not want to continue the treatment, one patient reported long travel times as the reason (ID10), in the other patients, symptoms resolved after one treatment (Table 1).
Periorbital BoNTA injections induced a subjective relief of symptoms for a median of 5.5 weeks, with high interindividual variability (0-12 weeks) (Table 1).
Five patients had immediate and long-lasting resolution of symptoms (12 weeks or longer), three patients showed benefit over an intermediate period, five had benefit but treatment effects that lasted less than 6 weeks, and three had no benefit (maximum: 1 week). The first ten patients were additionally assessed longitudinally in the telephone interview (for details see Table 2). Two of the patients with no benefit did not want to receive a second injection. In those patients receiving more than one injection, injection intervals were 3 months, or more when patients preferred longer intervals. Three patients (ID 1, 3, 8) even had intervals up to 1 year. One patient (ID 9) initially had two injections before his symptoms resolved. This patient came back for another injection 2 years later and for a fourth injection 3 more years later. No patient asked for or had shorter injection intervals than 3 months.
As a BoNTA-associated complication, one patient reported a mild hemifacial weakness and one patient reported a mild and transient ptosis.
Five patients had completed a symptom diary with daily requests to state the effect of the BoNTA treatment using a 0-100% scale of symptom relief. Four patients exhibited an increasing effect in the first weeks with high interindividual variability regarding the subjective effect size of 15-100% and duration of the effect (7 weeks to > 12 weeks).
Of the three patients that did not benefit from the periorbital treatment, one patient (ID 2 in Tables 1 and 2) who reported no effect at all over a period of 12 weeks had suffered from a mental handicap since birth and was therefore possibly an atypical convergence spasm patient.
Another patient (ID 5 in Tables 1 and 2) received intramuscular injections in the medial rectus muscle after the initial periorbital treatment. The switch to MR injections occurred after six periorbital injections and was made due to insufficient relief of symptoms after the last periorbital BoNTA injection. The third patient (ID 14) preferred an occlusion contact lens although she did not note any side effects.
Clinical follow-up times after the first BoNTA injection ranged from 3 months to 5 years (mean 2.9 years, SD 1.8 years).
Long-term follow-up assessment (telephone interview)
Ten of 16 patients could be contacted for an evaluation of their past BoNTA treatment (Table 2). The time since the last treatment was 6.5 ± 3.2 years (mean ± SD, range 6-139 months). None of the patients declined to be contacted; however, six patients could not be contacted due to relocation or change of telephone numbers. Five of ten patients reported that convergence spasm was still provocable in specific situations. Of these, four patients named stress and strain as a trigger.
Although only five patients are without any symptoms in all situations, the contentment regarding periorbital BoNTA injections among all patients was high: with a median of 8 (range 5-10), patients were happy with the treatment effect. However, the satisfaction regarding the duration of relief was lower with a median of 4.5 (range 2-10).
Discussion
Convergence spasm (spasm of the near reflex) is a complex disorder with unclear etiology. There are no therapy guidelines yet, although different approaches have been proposed.
In convergence spasm related to functional neurological disease, an associated psychiatric comorbidity may be present but is not always found. Interventions and therapy targeting the underlying disorders can lead to relief and discontinuation of symptoms [8,12,19,20]. Monocular occlusion has been proposed for symptomatic treatment of double vision but leads to reduced stereopsis and impairs the appearance of patients [21,22]. Orthoptic exercises, prescription of near-vision glasses and prism glasses, as well as squint surgery, have been reported to have only limited effect [14,16]. Pharmacological approaches are not straightforward since the molecular target is unclear. Cycloplegic eye drops are sometimes used but lead to impaired near vision, with the need for additional plus glasses, and to problems with glare in the mostly young patients.
This is the first study to describe periorbital BoNTA injections as a treatment option for convergence spasm. Convergence spasm is often associated with comorbidities such as depression, anxiety, or post-traumatic stress disorder, or occurs during periods of stressful life events; in these patients, periorbital injections might be advantageous to bridge the time until additional interventions such as medication, psychological counseling, or stress-reducing strategies take effect [7,8].
Compared to medial rectus muscle injections, periorbital injections are less invasive and less painful, overcorrection resulting in double vision is unlikely, and the approach is more widely and easily available since no electromyographic surveillance is necessary. In addition, no discontinuation of anticoagulants is necessary.
There is little information published to date regarding the long-term natural history of convergence spasm. The 10 patients with spasm of the near reflex published by Rutstein et al. [23] mostly had an onset of symptoms within a year, although three patients had been complaining of problems for 2, 4, and 14 years, respectively. Mean follow-up was 9.6 months (up to 30 months). Our patients had been suffering from the disease for between 6 months and 4 years prior to their first injection and had clinical follow-ups between 3 months and 5 years. In the telephone interview, five of the ten patients still had symptoms on provocation after a mean of 8.6 ± 5.1 years (range 1.6-13.7 years) since the first visit at our department, showing that convergence spasm can be a long-lasting problem.
Another interesting finding is that five patients had esotropia, three of whom had undergone surgery in the past unrelated to the current problems. One patient had intermittent exotropia. This may indicate that underlying strabismus makes patients more prone to developing convergence spasm. In the study of Rutstein [23] describing 17 patients with accommodative spasm, ten of whom had convergence spasm, only one patient had strabismus.
The effect of periorbital BoNTA injections in our study is, to date, not easily explainable, but one hypothesis involves lowered eyelid pressure: Namiguchi et al. have shown significantly lower eyelid pressure in patients with blepharospasm after BoNTA injections [24]. Another investigation has shown significantly higher eyelid pressure values in patients with dry eye syndrome, which was the most common ophthalmological finding in our cohort [25]. The reduced pressure of the eyelids on the ocular surface could lead to a general relaxation of the periorbital area and, therefore, possibly also of the extraocular muscles. This may be similar to the previously reported facial feedback effect in patients with depression injected with botulinum toxin [26,27]. However, further analysis regarding the effect of BoNTA injections in general, and specifically for these diseases, is warranted.
Another explanation that has to be discussed is a sole or predominant placebo effect. A randomized, double-blind, placebo-controlled study using botulinum neurotoxin in patients with functional movement disorders of other muscle groups showed no evidence of improved outcome in the treatment group compared to placebo, but potential for improvement of symptoms due to placebo effects affecting both groups [28].
Especially given the associated comorbidities and possible underlying stress situations as triggers, a "novel" and invasive treatment may have contributed to a positive perception of the intervention in our study. Therefore, further studies comparing this approach to other interventions are needed. Conversely, if a high placebo effect is suspected, more invasive treatments such as intramuscular injections should be initiated cautiously in patients with underlying psychiatric diseases, and further studies on placebo therapies are warranted.
An additional risk of this treatment is recurrence, with the possibility of provoking a dependency on the injections and prolonging the underlying disease course, although we did not observe this in our patients. This underscores the importance of detailed consent and explanation of the injections' role as a supporting therapy, as done in our clinic. In contrast, we had several patients with a good effect who told us that they would actually not need further treatment. In these cases, we offered that they could come back whenever needed, and four patients returned for single injections after one to 3 years. Unlike in patients with blepharospasm, where injection intervals can be reduced to 6 weeks as needed, we would not recommend doing so in patients with convergence spasm.
From our clinical experience, no or only little effect of periorbital BoNTA injections can be expected in patients with a confirmed neurological disease, especially in patients after intracranial hemorrhage. These patients seem to profit from antiepileptic medication or from frosted glasses to prevent double vision in the event of convergence spasm attacks. However, the existence of organic convergence spasm is controversial, since the event of a trauma or accident can also serve as a trigger for functional symptoms, as shown for other neurological diseases [29].
Due to the unknown etiology, physicians should keep in mind that assessing neurological and psychiatric diseases, together with psychological counseling if needed, tackles the underlying trigger and is therefore crucial for the long-term well-being of the patient. In our study, the patients' history was taken in an ophthalmology department and was therefore possibly not specifically focused on underlying psychiatric or psychosomatic comorbidities; however, of the eight patients who were not already in neurological or psychiatric treatment, five had a neurological workup. A precise record of the patient's current life situation, and possibly family history and coping strategies, may nevertheless reveal further, and in some cases treatable, associations for which convergence spasm is only a secondary finding.
This study had limitations. First, the small sample size is a common problem in research on rare diseases. In addition, the retrospective nature of this study warrants further research with a double-masked, sham-controlled approach; for example, a control group receiving NaCl injections would be desirable to confirm that the effect can be ascribed to BoNTA. However, the obvious and distinct effect of BoNTA on the eyelid and other muscles rules out some study designs that would otherwise be desirable given the high interindividual variability and low sample sizes. Furthermore, studies to further analyze the coexistence of psychiatric disorders are necessary to give insights into the disease etiology.
Finally, it cannot be determined to what extent these patients' symptoms would have resolved without any treatment or with sole treatment of the underlying comorbidities (if applicable). Nevertheless, we could show for the first time that convergence spasm may be a long-lasting problem for patients.
Conclusions
Periorbital BoNTA injections provide a less invasive therapeutic option than injections into the medial rectus muscle for patients with convergence spasm, and an easy-to-perform treatment option is helpful for these patients, who often suffer from double vision and blurred vision for many years. Periorbital BoNTA injections can specifically be an option for patients with psychiatric or psychosomatic comorbidities or with stressful life events, who often already have considerable quality-of-life restrictions due to their disease. Since at least part of the effect might be attributed to placebo, further placebo-controlled studies are needed.
Availability of data and material KH and BW had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. KH and BW conducted and are responsible for the data analysis. Original data will be shared by the corresponding author on reasonable request.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2021-05-28T13:59:17.740Z | 2021-05-28T00:00:00.000 | {
"year": 2021,
"sha1": "0de292fa0bfb134c4caa7ef962e0aefc3a343d31",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-021-10613-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "0de292fa0bfb134c4caa7ef962e0aefc3a343d31",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13667341 | pes2o/s2orc | v3-fos-license | Microsatellite Instability Occurs Rarely in Patients with Cholangiocarcinoma: A Retrospective Study from a German Tertiary Care Hospital
Immune-modulating therapy is a promising therapy for patients with cholangiocarcinoma (CCA). Microsatellite instability (MSI) might be a favorable predictor for treatment response, but comprehensive data on the prevalence of MSI in CCA are missing. The aim of the current study was to determine the prevalence of MSI in a German tertiary care hospital. Formalin-fixed paraffin-embedded tissue samples, obtained in the study period from 2007 to 2015 from patients with CCA undergoing surgical resection with curative intention at Johann Wolfgang Goethe University hospital, were examined. All samples were investigated immunohistochemically for the presence of MSI (expression of MLH1, PMS2, MSH2, and MSH6) as well as by pentaplex polymerase chain reaction for five quasimonomorphic mononucleotide repeats (BAT-25, BAT-26, NR-21, NR-22, and NR-24). In total, 102 patients were included, presenting intrahepatic (n = 35, 34.3%), perihilar (n = 42, 41.2%), and distal CCA (n = 25, 24.5%). In the immunohistochemical analysis, no loss of expression of DNA repair enzymes was observed. In the PCR-based analysis, one out of 102 patients was found to be MSI-high and one out of 102 was found to be MSI-low. Thus, MSI seems to appear rarely in CCA in Germany. This should be considered when planning immune-modulating therapy trials for patients with CCA.
Introduction
Cholangiocarcinoma (CCA) is a gastrointestinal neoplasia which derives from the biliary epithelium or peribiliary glands within the biliary tree. It is subdivided into intrahepatic (iCCA) and extrahepatic perihilar (pCCA) and distal cholangiocarcinoma (dCCA). The five-year survival rates of patients with CCA remain below 20% despite surgery, and no targeted therapy regimen has demonstrated a therapeutic benefit compared to the standard therapy of gemcitabine and cisplatin, so far [1,2].
In recent years, a new promising approach in cancer therapy has evolved with the emergence of immune-modulating monoclonal antibodies. These agents focus on receptors or ligands of effector cells as targets for cancer immunotherapy by inhibiting immune check points such as the programmed cell death protein 1 (PD-1, CD279) and its ligand (PD-L1) or the protein receptor CTLA4 (CD152). Objective responses in patients treated with anti-PD1 antibodies were seen in patients with advanced unresectable melanoma and non-small-cell lung cancer or undergoing second-line therapy for metastatic renal-cell carcinoma [3][4][5][6]. Moreover, promising results have been shown for various other solid tumors and hematologic malignancies [7].
Notably, the treatment with an immune-modulating therapy, especially when agents are combined, may lead to severe side effects. Furthermore, only a certain percentage of patients seems to respond to the treatment, which highlights the importance to develop predictive biomarkers. Besides the immunohistochemical evaluation of PD-L1 expression, microsatellite instability (MSI) seems to be associated with an improved clinical response rate: MSI induces somatic hypermutation and consecutively neoepitopes, which might create an immune response [8].
According to recent data, the immune-modulating therapy might be a promising alternative to standard treatments for MSI-positive CCA [9]. Moreover, a durable response to immune checkpoint inhibition was shown in a patient with an advanced-stage, microsatellite-unstable CCA [10]. However, as shown in a recent review, data on MSI in CCA is very limited [11]. The study sizes were small, or only certain groups of CCA patients, such as those affected by liver fluke-induced iCCA, which occurs rarely in western countries, were investigated. Given the devastating prognosis currently faced by patients with advanced CCA, further characterization of the group of patients potentially profiting from immune-modulating therapy regimens is urgently warranted.
The aim of the current study was to determine the presence of MSI in all subclasses of CCA by immunohistochemistry and polymerase chain reaction (PCR) in a retrospective analysis at a German tertiary care hospital.
Clinicopathological Characteristics
In total, 102 patients (71 male, 70%) were included. The cohort consisted of intrahepatic (n = 35; 34.3%) and extrahepatic CCA (perihilar: n = 42, 41.2%; distal: n = 25, 24.5%). The majority of the included patients had a moderate tumor stage (Union internationale contre le cancer (UICC) I and II) at the time of resection. The mean age at diagnosis was 66 years (range 38-84 years, standard deviation (SD) 10.9). In total, 87 of 102 patients (85%) had died by study closure (median survival 16 months, range 0-130 months, SD 25.5). For two patients, no follow-up data were available. For the remaining thirteen patients who were alive, the median follow-up was 37 months (range 18-122 months, SD 31.6). The clinicopathological characteristics are provided in Table 1. Kaplan-Meier graphs are provided in Figure 1.
Microsatellite Instability Analysis via PCR
In addition to immunohistochemistry, MSI was evaluated using five quasimonomorphic mononucleotide repeats (BAT-25, BAT-26, NR-21, NR-22, and NR-24) in a pentaplex PCR, as described by Suraweera et al. 2002 [12]. In this analysis, one case (Pat ID 121) was found to be MSI-low, with alterations at locus NR-21, and one case (Pat ID 105) was diagnosed as MSI-high, since all analyzed MSI biomarkers were found to be unstable (Figure 3). However, no loss of expression of a DNA repair enzyme was found immunohistochemically in these two cases (Figure 4). Patient 121 (male, age at diagnosis 49 years) and patient 105 (female, age at diagnosis 75 years) both had pCCA.
Discussion
The prognosis of patients with advanced CCA is poor, and new treatment options are warranted. Recent data suggested that patients with microsatellite-unstable CCA might profit from immune-modulating therapy. In the current study, we provide data on prevalence of MSI in the so far largest cohort of western patients including all subtypes of CCA. We thereby observed a very low prevalence of MSI, with 1% of MSI-high and 1% of MSI-low tumors.
Because of the emerging role of MSI in personalized cancer therapy, MSI prevalence has been increasingly investigated in recent years. In a study based on exome data of The Cancer Genome Atlas (TCGA), MSI was analyzed in 18 cancer types, and MSI-positive tumors were found in 14 of the 18 entities [13]. The proportion of MSI positivity ranged from nearly 30% in uterine corpus endometrial carcinoma, to about 20% in colon adenocarcinoma and stomach adenocarcinoma, to below 5% in kidney renal clear cell carcinoma, rectal adenocarcinoma, prostate adenocarcinoma, ovarian serous cystadenocarcinoma, glioblastoma multiforme, lung adenocarcinoma, head and neck squamous cell carcinoma, hepatocellular carcinoma (HCC), lung squamous cell carcinoma, bladder urothelial carcinoma, and lower-grade brain glioma. No MSI was observed in breast invasive carcinoma, skin cutaneous melanoma, kidney renal papillary cell carcinoma, and thyroid carcinoma. Two of 338 HCC (0.6%) were MSI-high, which, combined with the data of the present study, indicates a rare occurrence of MSI in hepatobiliary tract cancer. This should be kept in mind when considering implementing MSI testing in diagnostic routines to select patients potentially profiting from immune-modulating therapy.
The low prevalence of MSI in CCA revealed in our study seems consistent with earlier data. For example, a study including 37 patients with liver fluke-induced iCCA as well as a study including 38 patients with extrahepatic CCA (eCCA) observed no patients (0%) with MSI-high tumors [14,15]. Notably, some studies found higher rates of MSI-high tumors as well. For instance, MSI-high iCCA were found in 14% and 18% of cases in two studies including 22 patients each [16,17]. Likewise, a study from 2002 including 28 patients with eCCA found two patients (7%) to be MSI-high [18]. Of note, all these studies included Asian patients. Since the etiology may vary across countries, a comparison of our study with former data is impeded. Because of the retrospective design of the current study, ethnicity and country of origin could not be assessed. Despite that, the majority of patients usually treated at University Hospital Frankfurt are Caucasian, and we hypothesize that the results of the current study indicate a very low prevalence of MSI in CCA in western countries.
Besides geographical differences, the methodological approaches of different studies should also be considered when comparing studies investigating MSI. In the present study, we used the commonly applied detection of MSI via immunohistochemistry (MLH1, PMS2, MSH2, and MSH6) and, in addition, a well-tested method including five quasimonomorphic mononucleotide repeats that performs equally to the Bethesda panel in terms of sensitivity and specificity in the detection of MSI in colorectal cancer [12]. Discordant results of immunohistochemistry and PCR-based techniques were observed in this study. Notably, it has been described that in some cases MSI can be detected via a PCR approach without concurrent loss of expression of any of the four mismatch repair proteins, as these have lost their function as a result of mutations [19][20][21]. Thus, discrepancies between the two methods have been described for other entities such as colorectal cancer, indicating that these techniques should be viewed as complementary. We therefore recommend using both methods to investigate the MSI status in CCA in future studies. In other studies on CCA, the number of MSI markers evaluated by PCR varied between five and more than 10 [15,16]. Moreover, Hause et al. included over 200,000 loci and used an exome-based classifier to identify MSI-high tumors [13]. By analyzing MSI on a whole-exome basis, they observed a variation of microsatellite-instable loci across cancer types, indicating that loci that are inherently stable in one cancer type might be frequently mutated in another. On the basis of these observations, a comprehensive whole-exome-based analysis of MSI in cholangiocarcinoma is of great interest.
In conclusion, the data of the current study indicate that MSI occurs rarely in western CCA patients. This should be kept in mind in the planning of future therapy trials including patients with MSI-high CCA.
Patients
Formalin-fixed, paraffin-embedded tissue samples from patients with CCA undergoing surgical resection in the period from 2007 to 2015 were obtained from the archive of the Senckenberg Institute of Pathology, University Hospital Frankfurt. The clinical data (date of birth, gender, tumor stage, tumor size, and follow-up data) were retrieved from electronic medical records. Only patients without prior treatment who underwent surgery with curative intention were included. Pathologic tumor-node-metastasis stages were assessed with the international system for staging biliary tract cancer adopted by the American Joint Committee on Cancer and the UICC (Union Internationale Contre le Cancer), 7th edition [22]. The clinical data and biomaterial were obtained from the tumor documentation and the biobank of the Universitäre Centrum für Tumorerkrankungen (UCT) Frankfurt (University Cancer Center, Frankfurt, Germany). Written informed consent was available from all patients. The study was approved by the institutional Review Boards of the UCT and the Ethics Committee at the University Hospital Frankfurt (Approval No. SGI-07-2016, date 23 February 2017).
Immunohistochemistry
Freshly cut, 4-µm-thick paraffin sections were stained for MLH1, MSH6, PMS2, and MSH2. The antibodies used are described in Table 2. The immunohistochemical stainings were evaluated independently by two pathologists (Ria Winkelmann, Sylvia Hartmann), and the assessment was performed without knowledge of the MSI status of each case. CCA were scored as negative if none of the tumor cells were stained. In case of disagreement, the slides were reviewed again and a consensus was reached. Lymphocytes, stromal cells, and non-neoplastic epithelium served as internal positive controls.
PCR and Microsatellite Instability (MSI) Analysis
MSI analysis was adapted from Suraweera et al. 2002 [12]. In brief, five quasimonomorphic mononucleotide markers (BAT-25, BAT-26, NR-21, NR-22, and NR-24) were amplified in a single multiplex PCR reaction. The primer sequences and the fluorescence labeling of one primer of each primer pair are given in Table 3. PCR was performed with the Taq PCR Mastermix from Qiagen (Hilden, Germany), according to the manufacturer's directions. All primers were used at a final concentration of 240 pmol/L, except for the BAT-25 primers, which were used at a concentration of 1 µmol/L. Ten to 50 ng of DNA was used as input material. The following PCR conditions were applied: denaturation at 95 °C for 5 min; 35 cycles of denaturation for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 30 s; and a final extension at 72 °C for 10 min. The PCR products were analyzed on a 3130xl Genetic Analyzer (ThermoFisher Scientific, Darmstadt, Germany) with the 3130 Series Data Collection Software v.3.0. A 50 cm capillary array and the Fragment Analysis 50_POP7 settings were used. For size estimation, the GeneScan 350 ROX dye Size Standard (ThermoFisher Scientific) was added to each sample. The final evaluation of the fragment lengths was done with the GeneMapper Software 5.0 (ThermoFisher Scientific). A sample was considered MSI-high if more than three markers showed shifted alleles, and MSI-low if one or two markers did.
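As a sketch of the classification rule just described, the marker-counting logic can be expressed in a few lines of Python. This is purely illustrative, not the software used in the study; note that the rule as stated leaves a shift count of exactly three unassigned, which the sketch makes explicit.

```python
# Pentaplex panel markers used in this study.
MARKERS = ("BAT-25", "BAT-26", "NR-21", "NR-22", "NR-24")

def classify_msi(shifted_alleles: set) -> str:
    """Classify a sample from the set of markers showing shifted alleles."""
    n = sum(marker in shifted_alleles for marker in MARKERS)
    if n > 3:
        return "MSI-high"
    if 1 <= n <= 2:
        return "MSI-low"
    if n == 0:
        return "MSS"          # microsatellite stable
    return "indeterminate"    # n == 3 is not covered by the stated rule

# Example: patient 121 (unstable at NR-21 only) -> "MSI-low";
# patient 105 (all five markers unstable) -> "MSI-high".
print(classify_msi({"NR-21"}))
print(classify_msi(set(MARKERS)))
```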
Statistics
Descriptive statistics, such as mean, range, and standard deviation, as well as Kaplan-Meier graphs, were computed using BiAS (version 11.01, BiAS for Windows; Epsilon-Verlag, Frankfurt, Germany). Overall survival was calculated as the time from surgery to the date of death (event) or the date of the last follow-up (censored).
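As a minimal sketch of this survival calculation, the same estimate can be reproduced with the open-source lifelines package (the study itself used BiAS); the per-patient values below are hypothetical placeholders.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical per-patient table: months from surgery to death or last
# follow-up, and an event flag (1 = died, 0 = censored at last follow-up).
df = pd.DataFrame({
    "months": [16, 37, 5, 60, 12],
    "died":   [1, 0, 1, 1, 0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["died"])

print(kmf.median_survival_time_)   # median overall survival in months
kmf.plot_survival_function()       # Kaplan-Meier curve, as in Figure 1
```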
Conflicts of Interest:
The authors declare no conflict of interest. | 2018-05-13T23:20:04.555Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "c17d094001ccb921dd2fe5e953cf10a26cd7c849",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/19/5/1421/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f951415b2080e63261efe855c9377f51253cd27",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254350747 | pes2o/s2orc | v3-fos-license | Smart Agriculture Robot Controlling using Bluetooth
— Agriculture is critical to the growth of an agricultural country, and agriculture-related issues have traditionally hampered such countries' progress. One answer to this challenge is smart agriculture, which involves upgrading conventional agricultural processes. When a moisture sensor's signal reaches the Arduino, the Arduino sends a command to the relay of that specific field line's valve to open it, and a command to the pump relay to switch the pump on, in order to irrigate that field. If the moisture sensors of two or three of the three plants are active, all three fields can be watered at the same time: all solenoid valve relays may be activated to open all valves, and the pump can run to water all three plants. Choosing a pump powerful enough to water all plants at the same time was an initial challenge. The system's software is configured so that the system will not operate unless two (or three) moisture sensors are engaged; if only one sensor on any line is engaged, the system will not operate, since that single sensor may be malfunctioning. If the water tank level is too low, the system will not operate at all, even when all plant sensors are active, in order to protect the water pump.
I. INTRODUCTION
Irrigation is a critical component of economic development in any growing country. Irrigation specialists have used manual irrigation methods for many years. The manual approach has several limitations and is unreliable for large-scale irrigation. Irrigation has a direct influence on final product cost and output. This technique seeks to replace the manual irrigation approach, which would otherwise have to be improved over time. The aim was reached after finishing the plan and assembling the components of the smart irrigation system. In addition, all of the requirements were met in order to complete this smart irrigation system and bring it into full production and finalization. Following that, the system was tested, and the end result was as expected. The system will not operate until two or three moisture sensors from the three lines of the three fields send a signal to the Arduino indicating that the soil is dry and the crop requires water. Farmers in the modern period use a manual irrigation technique in which they water the ground at regular intervals. This technique appears to use more water, resulting in water waste [1].
Nikesh Gondchawar and his colleagues used automation and IoT technology to make agriculture smarter. One of their project's standout features is a smart GPS-based remote-controlled robot that can perform duties such as weeding, spraying, moisture detection, bird and animal scaring, and vigilance [2]. In their article, Pooja Mohan Moger and their team experimented with these technologies, including weeding, turning the water pump on/off, frightening animals and birds, and detecting temperature, humidity, and wetness using suitable sensors [3]. Mahammad Shareef Mekala and his colleagues examined several common applications of Agriculture IoT Sensor Monitoring Network technologies that use cloud computing as the backbone; this survey is intended to improve understanding of the various technologies and to help construct long-term smart agriculture [4]. Praveen Kumar Reddy Maddikunta and his team investigated the types of sensors suited for smart farming, as well as the requirements and problems of operating UAVs in smart agriculture, and identified potential future applications for UAVs in smart farming [5]. The Internet-of-Things-based smart agriculture communication system of Atmaja and his team is a success, since all data from the sensors are successfully received by the Raspberry Pi and delivered to the database, which can be viewed via the built-in Android application and website [6]. Anil Maragur and his colleagues created a vehicle driven by a DC motor driver through Bluetooth input; such robots have the benefit of requiring less manpower and labor, making them efficient vehicles [7]. In this work, we introduce the novelty that the system will not operate until two or three moisture sensors from any of the three lines of the three fields send a signal to the Arduino indicating that the soil is dry and the crop needs to be watered. When the signal reaches the Arduino, it sends commands to the relay of the line's field valve to open the valve, and to the pump's relay to switch the pump on, in order to irrigate that field. If the moisture sensors on two or three of the three plants are active, all three fields can be irrigated. The major goals of our work are to cultivate agricultural land, ladder (level) the land, fertilize, water the land when the ground becomes dry, seed automatically, and control the robotic car via a Bluetooth device [8]-[15].
II. METHODOLOGY
The microcontroller is the brain of the system, as seen in the block diagram. It collects data and generates outputs in accordance with the program's instructions. We receive data from the Bluetooth module and the moisture sensors; if water is required for the land, the water pump is turned on; otherwise, the water pump is turned off. The Bluetooth module is also used to control the overall operation.
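A minimal sketch of this decision logic follows. It is written in Python for readability (the actual firmware runs on the Arduino in C/C++), and the two-of-three sensor vote and tank-level safeguard follow one reading of the rule described in this paper; the per-field valve convention is an assumption for illustration.

```python
# Illustrative simulation of the irrigation decision logic (not Arduino firmware).
# Assumed convention: a sensor reads True when its field's soil is dry.

def control_step(dry, tank_ok):
    """Return (valve states for the three fields, pump state)."""
    if not tank_ok:
        # Protect the pump: never run when the tank level is too low.
        return [False, False, False], False
    if sum(dry) < 2:
        # A single active sensor may be faulty, so require at least two.
        return [False, False, False], False
    # Open the valve of every dry field and switch the pump relay on.
    return list(dry), True

# Example: fields 1 and 3 dry, tank full -> valves 1 and 3 open, pump on.
print(control_step([True, False, True], tank_ok=True))
```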
This project included four categories of work:
1. Cultivating the land.
2. Laddering (leveling) the land.
3. Determining the soil's moisture content: when the ground is dry, the water pump is turned on automatically; when there is enough water in the ground, the sensor automatically deactivates and the water pump turns off.
4. Applying fertilizer to the land.
Having introduced our project and its prospects, this section describes the Arduino Mega, motor driver, motor, servo motor, water pump, soil moisture sensor, relay module, solar panel, buck module, boost module, and diode in more detail [16]-[20].
A. Soil Moisture Sensor
The soil's moisture level is detected by an autonomous moisture detection and watering system. To properly regulate the moisture content of a plot of soil, the automatic moisture-detecting and watering system can be used in conjunction with a typical automatic watering system. Because plants take water directly from the soil, the autonomous moisture-detecting and watering system is designed to manage the soil's moisture content by measuring the moisture within the soil [23]-[25].

B. Solar Panel

A photovoltaic module is used in the solar panel, which uses light energy (photons) from the Sun to create electricity via the photovoltaic effect. The majority of modules employ wafer-based crystalline silicon cells or thin-film cells. A module's structural (load-bearing) element might be either the top or bottom layer [26]. In addition, Fig. 2 shows the circuit diagram of this research work, and the hardware implementation of our work is given in Fig. 3. In Fig. 3, we can see a display, Bluetooth module, solar panel, and voltage meter. For commanding the farm robot, the Bluetooth module receives commands from an Android phone. A voltmeter may also be used to measure the solar voltage.
III. RESULTS AND DISCUSSION
In Fig. 4, we can see the boost module, servo motor, and relay module. The Bluetooth module receives the command to operate the servo motor from the Android phone. Fig. 5 shows a servo motor, a motor driver, and a soil moisture sensor. The soil moisture sensor detects the soil's moisture, the Bluetooth module receives commands from an Android phone to drive the servo motor, and the project runs accordingly. Table I summarizes the cost of this work.
This project is controlled via Bluetooth and powered by solar energy, with charge stored in a 12 V battery. Once the entire project is set up, it is connected over the Bluetooth module using a Bluetooth RC Car app.
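Bluetooth RC Car-style apps typically send single-character commands over the serial link; the exact character map below is an assumption for illustration, and the sketch is written in Python for readability even though the controller code would run on the Arduino.

```python
# Hypothetical single-character command map for a Bluetooth RC Car app.
COMMANDS = {
    "F": "forward",
    "B": "backward",
    "L": "left",
    "R": "right",
    "S": "stop",
}

def dispatch(byte: bytes) -> str:
    """Map one received Bluetooth byte to a drive action ('stop' if unknown)."""
    char = byte.decode("ascii", errors="ignore").upper()
    return COMMANDS.get(char, "stop")

# Example: the app sends b'F' -> the motor driver is commanded forward.
print(dispatch(b"F"))
```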
Our work has the advantages of being extremely easy to control, highly dependable, digitally operated, fast, and efficient. The mechanism operates in the presence of light, under two conditions: if the light level is above the required value, the moisture sensing performs its function; if the light level is below the required value, the moisture sensing does not. This technique is designed for small projects such as mushroom farms and indoor farms; thus, if farmers try to use it on a large farm, certain errors may arise. This method also does not incorporate local weather data, which determine when to irrigate a landscape.
IV. CONCLUSION
Irrigation is a critical component of economic development in underdeveloped nations such as Nepal. Irrigation specialists have used manual irrigation methods for many years. The manual approach has several limitations and is unreliable for large-scale irrigation. Irrigation has a direct influence on final product cost and output. This approach aims to replace manual watering, which would otherwise have to be improved over time. The aim was reached after finishing the plan and assembling the components of the smart irrigation system. In addition, all of the requirements were met in order to complete this smart irrigation system and bring it into full production and finalization. Following that, the system was tested, and the end result was as expected. Farmers in the modern period use a manual irrigation technique in which they water the ground at regular intervals. This technique appears to use more water, resulting in water waste.

MD. Miraj Hossain obtained his master's degree in International Studies from the Tokyo University of Foreign Studies, Japan. Before that, he obtained a bachelor's degree in International Relations from the University of Dhaka, Bangladesh. During his studies, he completed a professional course on SPSS Statistics, a statistical software suite, offered by the Dept. of Statistics & Informatics at the University of Dhaka, covering data management, advanced analytics, multivariate analysis, business intelligence, etc. Since then, he has focused on the multiverse domain of data science and is currently working as a data analyst at various Tokyo-based small and large companies. He is a specialist in descriptive and predictive analysis, and his interests encompass a vast array of domains including nuclear science, bioinformatics, health and food production, pharmaceuticals, domestic and international business, international relations and public policy, etc. He currently lives in Tokyo, Japan. For the last five (5) years, he has had a research collaboration with Assistant Professor Dr. Abbas. | 2022-12-07T19:08:34.812Z | 2022-11-28T00:00:00.000 | {
"year": 2022,
"sha1": "abe5bbbe8d42566e3574b704e651b374cb522bcc",
"oa_license": "CCBY",
"oa_url": "https://www.ej-eng.org/index.php/ejeng/article/download/2867/1310",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6754286ba3dd3987b36155814a13d6a19db9b03f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
11582730 | pes2o/s2orc | v3-fos-license | Charge-Symmetry Breaking, Rho-Omega Mixing and the Quark Propagator
The momentum-dependence of the $\rho^0$-$\omega$ mixing contribution to charge-symmetry breaking (CSB) in the nucleon-nucleon interaction is compared in a variety of models. We focus in particular on the role that the structure of the quark propagator plays in the predicted behaviour of the $\rho^0$-$\omega$ mixing amplitude. We present new results for a confining (entire) quark propagator and for typical propagators arising from explicit numerical solutions of quark Dyson-Schwinger equations. We compare these to hadronic and free quark calculations. The implications for our current understanding of CSB experiments is discussed.
A wide variety of charge symmetry breaking (CSB) phenomena in nuclear physics appear to be well explained in terms of one-boson exchange potentials when electromagnetic effects, the neutron-proton mass difference, and isoscalar-isovector meson mixing are included. An extensive review of CSB has recently appeared [1]. Ultimately we would like to understand CSB in terms of electromagnetic effects and the u-d quark mass difference in a microscopic quantum chromodynamics (QCD) based description of the strong interactions. Any potential which gives rise to CSB in the nucleon-nucleon (NN) system can be categorized as either a class III or class IV interaction in the conventions of Henley and Miller [2].
In the relatively successful explanation of a number of observed CSB phenomena, calculated contributions from ρ^0-ω mixing play a major role. These phenomena include: (i) differences between pp and nn scattering lengths and effective range parameters, (ii) the binding energy difference of ^3H and ^3He, (iii) the analyzing power difference in n-p elastic scattering, and (iv) the Nolen-Schiffer anomaly. In the one-boson exchange model of the nucleon-nucleon interaction, the ρ^0-ω potential is generated by ρ^0-ω mixing in the intermediate vector-meson propagators [3]. Another source of CSB arising from meson mixing is π^0-η mixing, but this is typically much less important than that due to the ρ^0 and ω. The long-standing problem of explaining the binding energy differences of mirror nuclei (the Nolen-Schiffer anomaly) [4] appears to have been largely resolved as a result of the ρ^0-ω mixing potential [5]. The theoretical understanding of the difficult neutron-proton analyzing power difference measurements carried out at TRIUMF [6] and IUCF [7] relied on ρ^0-ω mixing, and in particular this was the dominant contribution in the latter [8,9,10].
Although the situation appears to be very satisfactory, there is at least one potential problem with the standard evaluations, which has been pointed out recently by Goldman, Henderson and Thomas (GHT) [11]. The problem is associated with the assumed off-shell behavior of the ρ^0-ω mixing matrix element. The standard assumption is that the mixing amplitude is momentum independent, with its value extracted from the experimental data on e^+e^- → π^+π^- at the ω pole, q^2 ≃ m_ω^2, while the exchanged mesons in Eq. (5) have space-like four-momentum, q^2 < 0, and are therefore highly virtual. In their initial study a simple model was used in which the vector mesons are considered as quark-antiquark composites. Because of the up-down quark mass difference, the ρ^0-ω mixing amplitude is generated by an intermediate quark loop: a ρ^0 of four-momentum q^µ dissociates into a quark-antiquark pair and then recombines to form an ω, or vice versa. The quark and antiquark momenta are ℓ^µ + q^µ/2 and -ℓ^µ + q^µ/2 respectively, where ℓ is the loop momentum. There will also be a contribution from electromagnetic effects, but this will be much smaller than that from the quark mass difference. Simple vertex functions were introduced and free quark propagators were used. Once the mixing amplitude was calculated, GHT obtained the coordinate-space (static) potential by Fourier transforming the resulting momentum-space potential.
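The coordinate-space potential follows from the standard three-dimensional Fourier transform of a spherically symmetric momentum-space function, V(r) = (1/2π²r) ∫₀^∞ dq q sin(qr) V(q). The sketch below evaluates this numerically for a generic model V(q); the sample amplitude, units, and integration cutoff are purely illustrative assumptions and not the GHT result.

```python
import numpy as np
from scipy.integrate import quad

def V_r(r, V_q):
    """3D spherical Fourier transform:
    V(r) = 1/(2*pi^2*r) * integral_0^inf dq q sin(q r) V(q)."""
    integrand = lambda q: q * np.sin(q * r) * V_q(q)
    val, _ = quad(integrand, 0.0, 50.0, limit=500)  # cutoff assumed adequate
    return val / (2.0 * np.pi**2 * r)

# Purely illustrative model amplitude, NOT the GHT mixing amplitude.
V_q_model = lambda q: np.exp(-q**2)

for r in (0.5, 0.9, 1.5):  # fm
    print(r, V_r(r, V_q_model))
```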
The basic conclusions of the GHT calculation were: (i) the mixing amplitude is strongly momentum dependent, (ii) there is a node in the potential at about 0.9 fm, and (iii) because of the node, the potential changes sign and so its importance is greatly suppressed.
The potential implications of this result of GHT are profound in view of the central role meson mixing plays in our understanding of charge symmetry breaking. If these results can be confirmed in other more realistic calculations, it may be necessary to find new sources of CSB in order to explain existing data.
Since the initial investigation by GHT, other studies appear to support their basic conclusions. These include investigations of π^0-η mixing using a quark loop model [12], chiral perturbation theory [13], and a hadronic model [14]. In addition, two further studies of ρ^0-ω mixing, one using a hadronic model [15] and one using QCD sum rules [16], also support the basic conclusions of GHT.
In the present study we investigate the role of quark confinement and the nature of the quark propagator in a qq̄-based description of the ρ^0-ω mixing amplitude. This study is motivated by one of the difficulties associated with the GHT calculation, namely that, as their work did not include quark confinement, an unphysical qq̄-pair production threshold appeared in the timelike region at q^2 = 4m_q^2. This corresponds to the vector meson being able to dissociate into an on-shell quark-antiquark pair for q^2 ≥ 4m_q^2. We review some necessary formalism before discussing details of our quark propagators.
The interaction Lagrangian densities for the nucleon-ρ and nucleon-ω couplings take the standard form (although some authors vary in their definitions of the couplings by including additional factors of 1/2). No tensor coupling for the NNω vertex is included, since we choose typical coupling constants and form factors determined by the Bonn group [17]. The contribution from ρ^0-ω mixing to the NN potential is given by Eq. (5) (with Γ^µ ≡ iσ^{µν} q_ν / 2M_N), where ⟨ω|H|ρ^0⟩ is the meson mixing amplitude as defined in Ref. [3]. We have introduced the standard Bonn form factors at the nucleon-meson vertices for on-shell nucleons. With this normalization for the form factors, the appropriate Bonn couplings are g_ρ^2/4π = 0.41 and g_ω^2/4π = 10.6; also, Λ_ρ = 1400 MeV, Λ_ω = 1500 MeV, and C_ρ ≡ f_ρ/g_ρ = 6.1, as already stated. Our starting point for the present calculations is the expression for the ρω mixing self-energy in terms of a quark-antiquark loop insertion. The trace is over spinor indices and the factor of three comes from having performed the colour trace. Γ^µ_ρ(p′, p) and Γ^µ_ω(p′, p) are the quark-vector-meson vertex functions for the ρ^0 and ω respectively, S_i(q) is the quark propagator with flavour i (i = u, d), and ℓ_± ≡ ℓ ± (q/2). We take simple vertex forms in which F_qρ and F_qω are the quark-vector-meson form factors. Since we wish to study the q^2-dependence arising from the quark propagator in the loop, we follow Ref. [11] and consider these form factors to be functions of ℓ^2 only, i.e., F_qρ(ℓ_+, ℓ_-) = F_qρ(ℓ^2) and F_qω(ℓ_-, ℓ_+) = F_qω(ℓ^2). We do not wish to introduce any unconstrained q^2-dependent behaviour through the quark-meson vertex functions.
We can make use of Lorentz invariance and current conservation to decompose Π^{µν}; this defines the scalar function Π(q^2). Note that in the case of the photon self-energy we would require Π(0) = 0 in order that the photon not acquire a mass through self-energy insertions; however, there is no such restriction in the present case. Since we are only interested in coupling to conserved external currents (i.e., since q_µ J^µ = 0 for the nucleons), we need only retain the g^{µν} Π(q^2) part of Eq. (6) in our calculation of the NN interaction. A comparison of the definition [3] of the mixing amplitude ⟨ω|H|ρ^0⟩ with the usual Feynman rules shows that this is identical with the ρω mixing self-energy Π(q^2), i.e., ⟨ω|H|ρ^0⟩ ≡ Π(q^2).
The general form of the quark propagator in Minkowski space is S(q) = Z(q^2)[q̸ + M(q^2)]/[q^2 - M^2(q^2)], where A(q^2) and B(q^2) are scalar functions of q^2, Z(q^2) = 1/A(q^2), and M(q^2) = B(q^2)/A(q^2). One possible mechanism of quark confinement is that the quark propagator does not have a mass pole, i.e., that it has no pole and is an entire function in the complex q^2 plane [18]. One sees that there is no mass pole when the function Z(q^2) goes to zero for q^2 → M^2(q^2). One explicit quark model where this occurs can be found in Ref. [19], where this property arises from the solution of a model quark Dyson-Schwinger equation.
We can isolate the g^{µν} Π(q^2) part of Π^{µν}, and we then find in our qq̄-model that the mixing amplitude can be written as a loop integral, Eq. (8), where for the u-quark propagator S_u(p) = F_u(p^2)[p̸ + M_u(p^2)], and similarly for the d-quark. The quark-vector-meson couplings are estimated from a standard quark model analysis of the nucleon-vector-meson coupling. With quark masses of ≃ 400 MeV this gives g_qρ ≃ 3.9-4.3 and g_qω ≃ 4.8-5.2 as typical ranges for these couplings. Throughout this work we therefore use g_qρ g_qω = 20.0.
In the chiral limit (i.e., with massless quarks and neglecting electromagnetic and weak effects), the pion decay constant can be related to the quark propagator through the Goldberger-Treiman relation (see, e.g., [20] and references therein); the factors of three and four that appear follow from the colour and spinor traces, respectively.
(Note that this differs slightly from the expression in Ref. [20], which used a dressed axial-vector vertex instead of the appropriate bare one; the bare vertex avoids double-counting problems.) Similarly, in the chiral limit (see, e.g., [20] and references therein) we obtain an expression for the quark condensate in terms of the quark propagator, where the trace is over colour and spinor indices. We use these two results to constrain our model propagator, in which f_0 and µ are parameters and m is to be chosen as a typical infrared quark mass. This propagator is confining and analytic everywhere in the complex q^2 plane (i.e., entire). It captures the essential elements of the propagator of Ref. [19] and has the great advantage that, with the appropriate choice of gaussian quark-vector-meson form factors, we can use Eq. (8) to obtain an analytic expression for the mixing amplitude Π(q^2) in both the timelike and spacelike regimes. In practice it is easier to calculate the expressions for f_π and the condensate [i.e., Eqs. (9,10)] after first rotating to Euclidean space.
Evaluating these expressions for our analytic (confining) propagator gives closed forms in terms of f_0, µ, and m [Eqs. (12)]; hence we can now fix the parameters f_0 and µ. Motivated by the infrared values of the dynamically generated quark mass in Dyson-Schwinger equation studies [20], we choose here m = 450 MeV for the "constituent" quark mass.
Substituting the expression for f_0 into the expression for f_π^2 in Eqs. (12) lets us solve for µ, giving µ = 523.6 MeV, and hence f_0 = 4.432 × 10^{-6} MeV^{-2}. Thus all parameters are now constrained.
Since the u-d quark mass difference is commonly estimated to be ≃ 4 MeV, we use m_d - m_u = 4 MeV throughout this work. Hence the "constituent" masses are m_u = 450 MeV and m_d = 454 MeV, while f_0u = f_0 and µ are as determined above. The simplest and most reasonable assumption is that µ has the same value for the u and d quarks, and the corresponding d-quark normalization then follows from Eqs. (12).
As discussed above, in order to facilitate the integration in Eq. (8) we use F_qρ(ℓ^2) = F_qω(ℓ^2) = exp(ℓ^2/Λ^2) with Λ = 1 GeV, which is a typical, reasonable value for the form-factor cut-off. We see that these form factors fall off as gaussians in the spacelike region. The integral in Eq. (8) can then be performed easily, and one obtains a closed-form expression for Π(q^2) in which λ ≡ 2[(1/Λ^2) + (1/µ^2)]. The mixing amplitude has thus been calculated for all timelike and spacelike q^2 for the analytic (confining) case.
We can also numerically evaluate Π(q^2) for numerical solutions of quark Dyson-Schwinger equations.
In Fig. 1 we show the momentum-dependence of the ρ^0-ω mixing amplitude for the three quark calculations described here, i.e., the confining (analytic), Dyson-Schwinger equation (dse), and heavy-free (free) cases. Also shown for comparison are recent hadronic calculations [15], in which an essentially parameter-free calculation was made using an NN loop and the n-p mass difference. The two hadron-model curves differ in that one (hadron ff) has the usual Bonn form factor applied at the vertices of the nucleon loop in the spacelike region as well as at the meson-nucleon vertices. We see that all calculations appear capable of fitting the data with typical parameters and that the momentum dependence is remarkably similar in each case. We see strong momentum-dependence of the mixing amplitude and a change in sign in the vicinity of q^2 = 0. This is also what was seen in the recent QCD sum rules study [16], although there Π(q^2) was seen to change sign in the timelike region at q^2 ≃ 0.25-0.40 GeV^2 rather than in the spacelike region.
In Fig. 2 we show the momentum-space potential V(q) given in Eq. (5) for the various models. Also for comparison we show the result obtained when the usual assumption of momentum-independence of the mixing amplitude is made. In Fig. 3a) we show the corresponding coordinate-space potentials V(r), given by Eq. (14), for each of these cases. Fig. 3b) has an enhanced vertical scale in order to show the location of nodes in the potential V(r). We see that all calculations predict strong suppression of the ρ^0-ω mixing potential compared with that resulting from the usual momentum-independent assumption for the mixing amplitude. We see that the Dyson-Schwinger propagator and the hadronic models predict the opposite sign at small r and have nodes between 0.5 and 0.8 fm, whereas the heavy free quark and confining (analytic) quark cases have the same sign as the usually assumed potential, but are very strongly suppressed. It is interesting to note that the heavy free quark case presented here is identical to the case studied by GHT, with the exception of having heavier (i.e., 600 MeV) quarks. We see that this increase in mass has removed the node in V(r) for the free propagator case. Given the variety of approaches to the calculations, the results are remarkably similar.
The possible implications for our understanding of CSB are far-reaching.
For example, in the case of the ρ-ω mixing contribution to the class IV CSB in n-p elastic scattering [6]-[10], there is a competition between the large-r fall-off of the potential and the short-distance suppression of the distorted N-N wavefunction. The result is that the conventional contribution peaks around 0.9 fm. It is interesting that this is the region where the nodes occur. All models predict at least strong suppression in this region. It is clear that the previously assumed theoretical understanding of charge-symmetry breaking phenomena is now questionable. The models and treatments to date appear to be in overall agreement. The crucial question is whether these conclusions will survive future, more elaborate examinations. Fig. 1 The ρ-ω mixing amplitudes calculated in a variety of models (see text). Also shown for comparison are the experimentally extracted amplitude at the ω-mass-shell point and the usually assumed q^2-independent behaviour. Fig. 2 The ρ-ω momentum-space potentials, V(q), for the mixing amplitudes shown in Fig. 1. V(q) is defined in the text. | 2014-10-01T00:00:00.000Z | 1993-06-18T00:00:00.000 | {
"year": 1993,
"sha1": "e7b792785081876e11cdf3a3c1e880640a6b00e5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nucl-th/9306019",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "e7b792785081876e11cdf3a3c1e880640a6b00e5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
261610118 | pes2o/s2orc | v3-fos-license | Nativity differences in socioeconomic barriers and healthcare delays among cancer survivors in the All of Us cohort
Purpose We aimed to assess whether nativity differences in socioeconomic (SES) barriers and health literacy were associated with healthcare delays among US cancer survivors. Methods “All of Us” survey data were analyzed among adult participants ever diagnosed with cancer. A binary measure of healthcare delay (1+ delays versus no delays) was created. Health literacy was assessed using the Brief Health Literacy Screen. A composite measure of SES barriers (education, employment, housing, income, and insurance statuses) was created as 0, 1, 2, or 3+. Multivariable logistic regression model tested the associations of (1) SES barriers and health literacy with healthcare delays, and (2) whether nativity modified this relationship. Results Median participant age was 64 years (n = 10,020), with 8% foreign-born and 18% ethnic minorities. Compared to survivors with no SES barriers, those with 3+ had higher likelihood of experiencing healthcare delays (OR 2.18, 95% CI 1.84, 2.58). For every additional barrier, the odds of healthcare delays were greater among foreign-born (1.72, 1.43, 2.08) than US-born (1.27, 1.21, 1.34). For every 1-unit increase in health literacy among US-born, the odds of healthcare delay decreased by 9% (0.91, 0.89, 0.94). Conclusion We found that SES barriers to healthcare delays have a greater impact among foreign-born than US-born cancer survivors. Higher health literacy may mitigate healthcare delays among US cancer survivors. Healthcare providers, systems and policymakers should assess and address social determinants of health and promote health literacy as a way to minimize healthcare delays among both foreign- and US-born cancer survivors. Supplementary Information The online version contains supplementary material available at 10.1007/s10552-023-01782-z.
Background
According to the National Cancer Institute (NCI), an individual is considered a cancer survivor from diagnosis to the end of life [1]. In the United States (US), it is estimated that by 2026 the number of cancer survivors will surpass 20 million, which can be attributed to ongoing innovation in treatment and early disease detection [2,3]. While this increase in survival is encouraging, this population requires additional healthcare services to prevent or manage chronic health conditions and sequelae of cancer treatment and to monitor for cancer recurrence [3]. Moreover, given that at least 50% of cancer survivors will experience physical and mental health consequences due to their disease or treatment [4], it is important to mitigate healthcare delays in this population.
Overall, healthcare delays (spanning the domains of accessibility, financial burden, and social support) can significantly impact cancer survivors, most notably individuals who are ethnic minorities, of lower SES, or uninsured [5]. Cancer survivors who experience healthcare delays can suffer significant consequences, as their care is typically time sensitive [6]. Because timely cancer care is associated with a favorable prognosis, barriers or delays to treatment can result in a more advanced stage of cancer at the time of eventual care and, thus, poorer outcomes [7]. However, physical health is not the only aspect affected; mental health may also suffer at the hands of delayed care [8]. For instance, COVID-19 led to numerous appointment cancelations for cancer survivors, including self-cancelations caused by depression and anxiety symptoms surrounding the pandemic and safe access to care [9]. These missed appointments, in turn, exacerbated patients' fears of cancer recurrence as their follow-up care (e.g., laboratory testing, imaging, and appointments) was halted, impacting their overall well-being and physical and mental health [10,11].
One factor associated with increases in healthcare delays among cancer survivors is lower SES, which is associated with all-cause mortality risk and poorer mental (e.g., depression) and physical health outcomes [12,13]. Similarly, SES has been linked to a range of cancer outcomes, and higher SES (suggesting more financial resources and the ability to afford medical care) is associated with decreases in the length of healthcare delays [14]. Because of the long-term and specialized care that cancer survivors need, they are at risk of experiencing a higher financial burden, which, in turn, impacts receipt of survivorship care, increases the risk of mortality, and worsens quality of life [15].
A second factor associated with healthcare delays among cancer survivors is a lack of health literacy, which presents a barrier in properly understanding, communicating, and obtaining information required to navigate the complexity of the healthcare systems efficiently to obtain the care needed and to make educated health decisions [16,17].Among the general population, low health literacy has been positively associated with various care delays, including seeking treatment, forgoing care, and struggling with accessibility to needed care and providers [18].As cancer survivors have complex healthcare needs, mastering the skills required for their continual care is important.
Being foreign-born (an immigrant) brings an additional barrier to accessing healthcare and consequently promotes healthcare delays [19]. These barriers among the foreign-born may be partially attributed to a higher likelihood of lacking insurance, cultural perceptions of healthcare, and limited English proficiency [20]. Evidence suggests that cancer survivors who are immigrants have a lower quality of life and higher depression symptomology compared to the native-born [21]. Moreover, foreign-born individuals may experience unique barriers (e.g., language, discrimination, laws and regulations to qualify for services) that impact the quality of care they can receive. For instance, language barriers and insurance difficulties caused by laws and policies may make it much more difficult for an immigrant patient than a US-born patient to receive adequate needed care [22].
While it is well established that SES barriers and a lack of health literacy are associated with healthcare delays, there is limited and inconsistent knowledge of how these associations differ by nativity. To guide the selection of variables for our model, we used the theoretical framework from Wafula and Snipes for barriers to healthcare among Black immigrants in the US [23]. Additionally, we adapted their framework to assess the association between the barrier factors (i.e., SES barriers, health literacy) and general healthcare delays among cancer survivors and tested the moderating effects of nativity on the associations of SES barriers and health literacy with general healthcare delays (Fig. 1). Thus, this study aims to contribute to the current literature by examining whether nativity status modifies the relationship between a combination of SES and health literacy barriers and healthcare delays in a large national cohort.
Study population
In this study, cancer survivors were defined as those participants who indicated that they had ever been diagnosed with cancer. Inclusion criteria for our cohort included participants who were ever told by their healthcare provider that they had/have cancer. Skin cancer is one of the most prevalent cancers in the US, with most cases being relatively benign basal cell carcinomas that are often not tracked in cancer registries and that have an over 90% 5-year survival rate [24]. In addition, previous studies have excluded these cancer survivors, as their follow-up care is often relatively minor [25]. Thus, we excluded those with skin cancer and participants with missing data on the healthcare delay survey questions.
Demographics
Personal-level characteristics accounted for in this study were age (at survey completion), sex (male vs female), race (Asian, Black, Hispanic, White, multiracial/biracial, other [includes those who selected: none of this, another population, and prefer not to answer]), marital status (married [includes those who selected: married and living with a partner] vs single [includes those who selected: single, divorced, widowed, and separated]), nativity (US- vs foreign-born), annual income (using quintiles; the lowest quintile comprised those who reported income of < $35K vs quintiles 2-5, ≥ $35K), education (college or more vs ≤ high school or equivalent), insured (yes vs no), housing status (own vs rent/other arrangements), employed (yes vs no), current treatment (yes vs no), and cancer type (range of multiple cancer sites).
Health literacy
Health literacy was assessed using the three-item Brief Health Literacy Screen (BHLS) [26, 27], which measures individual needs for help with filling out forms ("How confident are you filling out medical forms by yourself?"), reading health-related documents ("How often do you have someone help you read health-related materials?"), and difficulty learning due to a lack of understanding of written medical documents ("How often do you have problems learning about your medical condition because of difficulty understanding written information?"). Response options were on a five-point Likert scale; options for reading and understanding health documents included "Always," "Often," "Sometimes," "Occasionally," and "Never," and options for the need for help filling out forms were "Extremely," "Quite a bit," "Somewhat," "A little bit," and "Not at all." The survey question measuring whether the participant required help with forms was reverse-coded. All items were then summed to create a composite score, with higher scores (max = 15) indicating fewer health literacy problems. Following a previous study by Willens and colleagues that assessed health literacy using the BHLS scale, we dichotomized this score, with those who scored ≤ 9 classified as having limited health literacy and those with scores > 9 as having adequate health literacy [28] (Supplemental Table 1).
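A minimal sketch of this scoring scheme, assuming the three items are coded 1-5 as described (the column names here are hypothetical stand-ins, not the actual All of Us variable names):

```python
import pandas as pd

# Hypothetical item columns, each coded 1-5. "confident_forms" is the
# confidence-with-forms item, which the text says is reverse-coded
# before summing.
def bhls_total(df: pd.DataFrame) -> pd.Series:
    reversed_forms = 6 - df["confident_forms"]  # reverse-code the forms item
    total = reversed_forms + df["help_reading"] + df["problems_learning"]
    return total  # range 3-15; higher = fewer health literacy problems

def adequate_literacy(total: pd.Series) -> pd.Series:
    # Dichotomize per Willens et al.: <= 9 limited, > 9 adequate.
    return (total > 9).astype(int)
```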
Socioeconomic barriers
Five SES factors (education, income, insurance, housing, and employment status) were dichotomized to create a composite measure following a previous study [29]. Individuals who reported an income ≥ $35K, an education level of college or more, being insured, owning a home, and being employed were coded as having no SES barriers (0). Those who reported any of the following (an income < $35K, an educational level of ≤ high school or equivalent, not being insured, not being employed, and/or a housing status of rent/other arrangement) received one point per barrier, giving an additive score ranging from 1 to 5 [29]. Due to sparse counts in the categories of four and five SES barriers, scores were truncated to range from 0 to 3 or more SES barriers (Supplemental Table 2).
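The composite could be computed along these lines (a sketch only; the 0/1 indicator columns are hypothetical names):

```python
import pandas as pd

# Hypothetical 0/1 indicator columns: 1 marks the presence of that barrier
# (income < $35K, <= high-school education, uninsured, not employed,
# renting/other housing arrangement).
BARRIERS = ["low_income", "low_education", "uninsured", "unemployed", "renter"]

def ses_barrier_score(df: pd.DataFrame) -> pd.Series:
    raw = df[BARRIERS].sum(axis=1)  # additive score, 0-5
    return raw.clip(upper=3)        # truncate sparse categories to "3 or more"
```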
Healthcare delays
Nine questions were used to assess healthcare delays. These questions were obtained from the National Health Interview Survey, asking participants whether they had experienced delays in any healthcare received due to various reasons in the past 12 months [25]. These reasons include transportation, living in a rural area where healthcare providers are too far, nervousness about seeing a healthcare provider, could not get time off work, could not get childcare, cannot leave an adult unattended due to being a caretaker, could not afford copays, deductible was too high, and could not afford it or had to pay out of pocket for some or all procedures. Response options were "no," "yes," or "don't know." A dichotomized measure was created: those who responded "yes" to one or more reasons were classified as having experienced healthcare delays, those who responded "no" to all reasons as having experienced no delays, and those who reported "don't know" were not counted in this measure.
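A sketch of this dichotomization rule (column names hypothetical; rows that cannot be classified are left missing rather than coded):

```python
import numpy as np
import pandas as pd

# delay_items: hypothetical columns for the nine NHIS delay reasons,
# each holding "yes", "no", or "don't know".
def any_delay(df: pd.DataFrame, delay_items: list[str]) -> pd.Series:
    answers = df[delay_items]
    delayed = (answers == "yes").any(axis=1)  # yes to one or more reasons
    all_no = (answers == "no").all(axis=1)    # no to all reasons
    out = pd.Series(np.nan, index=df.index)   # "don't know" rows stay missing
    out[delayed] = 1
    out[all_no] = 0
    return out
```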
Statistical methods
To characterize the study population, descriptive statistics were calculated for all demographic variables and variables of interest (nativity, SES barriers, and health literacy). The listwise deletion method was used to address missing data (15.9%). We conducted a post hoc sensitivity analysis to determine the direction of the potential bias introduced by our listwise deletion method for a complete case analysis, using multiple imputation by chained equations (MICE). This approach allowed us to impute missing values based on observed data and estimate relationships between variables. To satisfy the safe data sharing policy of "All of Us," groups with fewer than 20 participants are reported in tables as ≤ 20 or < x%, with another category in the same column/row also shown as ≥ (%), to ensure that another count value cannot be used to derive the exact count of the suppressed group. First, a multivariable logistic regression model tested the hypothesized independent relationships of SES barriers and health literacy with healthcare delays, adjusted for covariates (age, sex, ethnicity/race, marital status, treatment status, and cancer type). Second, to assess nativity differences in the association between SES barriers and healthcare delays, a product interaction term for nativity and SES barriers (SES barriers*nativity) was included in the model. Furthermore, we assessed for a p-trend in the adjusted and stratified models by introducing SES barriers and health literacy as continuous measures. All statistical analyses were performed using R Jupyter Notebooks embedded in the "All of Us" workbench, with a significance level of alpha = 0.05. Odds ratios (ORs) with 95% confidence intervals (CIs) and p values are reported.
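As an illustration only (the actual analyses were run in R; this is not the authors' code), the two models described above could be specified in Python with statsmodels roughly as follows, assuming a DataFrame `df` with these hypothetical column names:

```python
import numpy as np
import statsmodels.formula.api as smf

# Model 1: independent associations of SES barriers and health literacy
# with healthcare delays, adjusted for the listed covariates.
m1 = smf.logit(
    "delay ~ C(ses_barriers) + limited_literacy + nativity"
    " + age + C(sex) + C(race) + C(marital) + C(on_treatment) + C(cancer_type)",
    data=df,
).fit()

# Model 2: adds the product interaction term (SES barriers * nativity)
# to test whether nativity modifies the SES-barrier association.
m2 = smf.logit(
    "delay ~ C(ses_barriers) * nativity + limited_literacy"
    " + age + C(sex) + C(race) + C(marital) + C(on_treatment) + C(cancer_type)",
    data=df,
).fit()

print(np.exp(m1.params))      # odds ratios
print(np.exp(m1.conf_int()))  # 95% confidence intervals
```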
Results
The median age of the study population (n = 10,020) was approximately 64 years (interquartile range [IQR Q1, Q3] 55.5, 71.8). The majority of participants were female (66.1%), US-born (92%), and self-identified as White (82.3%). Higher proportions of females (vs males and other sex), foreign-born (vs US-born), and Black cancer survivors (vs all other race/ethnicity categories) had three or more SES barriers (Table 1), while higher proportions of females, foreign-born, and Hispanic cancer survivors had one or more healthcare delays (Table 2).
Results from the multivariable-adjusted model showed that neither nativity (OR 1.04, 95% CI [0.87, 1.25]) nor health literacy (OR 1.20, 95% CI [0.89, 1.59]) was statistically significantly associated with healthcare delays (see Table 3). However, when assessing for a p-trend for health literacy, for every one-unit increase in health literacy there was an 8% (OR 0.92, 95% CI 0.89, 0.95) decrease in the likelihood of experiencing healthcare delays. Furthermore, compared to those who did not have any SES barriers, those who reported two or three or more barriers were 65% (OR 1.65, 95% CI [1.43, 1.90]) and 118% (OR 2.18, 95% CI [1.84, 2.58]) more likely to experience delays in healthcare, respectively. At the same levels of SES barriers, foreign-born survivors were more likely than their US-born counterparts to experience healthcare delays (see Table 3).
Assessing for a p-trend in the model stratified by nativity, we found that for every additional SES barrier experienced, foreign-born individuals were more likely to experience healthcare delays than their US-born counterparts (OR 1.72, 95% CI [1.43, 2.08] vs OR 1.27, 95% CI [1.21, 1.34]). Finally, in the stratified model, low health literacy was associated with a 41% (OR 1.41, 95% CI [1.02, 1.97]) increase in the likelihood of healthcare delays, and each one-point increase in health literacy score was associated with a 9% (OR 0.91, 95% CI [0.88, 0.94]) decrease in the odds of healthcare delays among US-born cancer survivors. Among foreign-born cancer survivors, low health literacy was not statistically significantly associated with experiencing healthcare delays (OR 0.66, 95% CI [0.34, 1.25]), and each one-unit increase in health literacy score was associated with a 2% (OR 0.98, 95% CI [0.90, 1.07]) decrease in the odds of healthcare delays, although this association did not reach statistical significance (see Table 3).
Discussion
Using data from the All of Us research cohort, we aimed to investigate the associations of SES barriers and health literacy with healthcare delays. We also explored whether these associations differed by nativity. We found that among all cancer survivors, health literacy (binary) and nativity were not statistically significantly associated with healthcare delays. We also found that experiencing 2 or 3+ SES barriers was significantly associated with an increased likelihood of healthcare delays. Further, at equal levels of SES barriers, foreign-born individuals had significantly higher odds of healthcare delays than US-born individuals. Lastly, in our separate models by nativity status assessing for a trend, we found that health literacy was inversely associated with healthcare delays among US-born cancer survivors only.
Socioeconomic barriers
Our multivariable model suggested that cancer survivors who experienced more than one SES barrier had an increased likelihood of healthcare delays compared to cancer survivors who experienced no SES barriers. These findings are consistent with previous research showing that low-SES cancer survivors are more likely not to receive appropriate follow-up care [30] and that those who experience financial, housing, and employment barriers have a greater likelihood of delaying needed care [31-33]. Similarly, in another study, cancer survivors of low SES who reported lower income and education levels were less likely to have follow-up care discussions with their medical providers [30]. This lack of follow-up discussions can potentially contribute to prolonged delays in preventative care. Regarding education, in a previous study by Gonzalez and colleagues, cancer survivors who had a college degree or higher were more likely to have greater access to care but experienced more delays than those with at most a high school diploma [34]. Perhaps belonging to a higher educational level improves health literacy, enabling cancer survivors to make informed decisions when seeking the appropriate care needed. Among cancer survivors, those with low SES are more likely to delay medical care and preventive care (dental and vision care) and to not fill prescription medications due to cost-related concerns [32, 35, 36]. Furthermore, as cancer survivors are met with the unexpected financial costs of cancer treatment, they may worry about meeting their housing and household bill payments, food insecurity, and retirement [37], thus potentially forgoing or delaying the crucial care they require to enhance their survival. Similarly, housing insecurity is linked with negative health outcomes and poor access to and quality of healthcare [31, 38]. Lastly, modifiable factors such as education and SES are inversely associated with experiencing more unmet healthcare needs [39] and a lack of health insurance [39, 40]. Hence, uninsured cancer survivors have a higher risk of comorbidities, bearing a greater mortality risk than uninsured non-cancer survivors [41].
Nativity differences
Previous research among cancer survivors has shown nativity to be a factor in cost-related barriers among US-born Hispanic cancer survivors [42]. In our study, we found that foreign-born individuals who experienced the same level of SES barriers had a higher likelihood of experiencing healthcare delays than their native-born counterparts. Previous research exploring nativity differences among cancer survivors is limited and inconsistent. For example, although the results did not reach statistical significance, Diamant and colleagues reported the opposite in a sample of non-cancer survivors, showing that non-native-born individuals were less likely to report healthcare delays compared to native-born individuals [43]. In contrast, a study of female cancer survivors that assessed disparities in healthcare access and utilization found that non-US-born females were less likely to report having a routine place to go to meet their healthcare needs compared to US-born cancer survivors [44]. This is important, as not having a primary healthcare office for routine care can promote delays in accessing the extended care cancer survivors need. In addition, a higher prevalence of sociodemographic and SES barriers (e.g., income, education) was found among foreign-born individuals than among the native-born in two North American countries (US and Canada), and disparities in healthcare access were higher among the foreign- compared to native-born [20], supporting the role of SES barriers among foreign-born individuals.
Health literacy
Our study found that, after adjusting for sociodemographic confounders, nativity, and SES barriers, health literacy was not statistically significantly associated with healthcare delays in our entire study cohort. However, we found that, solely among US-born participants, limited health literacy was associated with an increased likelihood of healthcare delays compared to adequate health literacy. We also saw a statistically significant monotonic relationship between increased health literacy scores and decreased odds of healthcare delays among the US-born. While our findings only reached statistical significance among US-born cancer survivors, the direction of the association was the same among foreign-born cancer survivors. This indicates that, regardless of nativity, increasing health literacy could mitigate the impact of healthcare delays. While we controlled for nativity, sociodemographic factors, and SES barriers, our findings suggest that health literacy could also be associated with cultural and language differences that we were not able to control for in our study. For example, cancer survivors with low health literacy may have difficulties when trying to decode their symptoms or understand their diagnoses through communication with providers, which could result in healthcare delays and a later stage of disease at diagnosis [45]. In the case of cancer survivors, there is a great need for complex health services after they have completed their primary treatment and in the subsequent management of their health [46]. These deficiencies in communicating efficiently with their providers could increase the risk of healthcare delays.
Strengths and limitations
An important strength of this study is the use of data from the All of Us research program, as it has a high proportion of enrolled underrepresented minority populations, which increases our sample size and access to geographically and ethnically diverse populations. Similarly, we were able to analyze a relatively large sample of foreign-born cancer survivors with complete socioeconomic and sociodemographic data, which helps to expand current cancer survivorship, cancer health disparities, and health disparities research. Lastly, our results can be generalized to cancer survivors and individuals with similar characteristics and settings who experience similar SES barriers in the US.
This study is not without limitations. The cross-sectional nature of the design does not allow us to establish a temporal or causal relationship. There is potential for misclassification of some of the factors included in our main independent variables, as these data are from self-reported questionnaires. Our sample also comprised a higher proportion of individuals with a college degree or higher and a higher income. We were unable to assess acculturation; thus, future studies should account for it, as it can be a potential confounder in these associations. Using a complete case (CC) analysis method may have introduced bias into our results. We addressed this concern by conducting a sensitivity analysis using a multiple imputation (MI) method for our outcome variable. This analysis revealed a slight overestimation of the relationship between SES barriers and healthcare delays among foreign-born cancer survivors compared to their US-born counterparts with the same level of SES barriers. Despite the small differences observed in the effect estimates, the overall trend and interpretation of the results remained consistent across both the MI and CC analyses (Supplemental Table 3).
Although All of Us collects data nationwide, these results cannot be generalized to all cancer survivors in the United States. A clear example of this limitation is that roughly 90% of our sample reported some college education or a higher degree, whereas in 2021 approximately 63% of the general US population had completed some college or more [47]. Lastly, during our study period the COVID-19 pandemic may have worsened SES barriers and healthcare delays, which may have contributed to our findings and needs further exploration.
Implications and future direction
Our results can help guide policymakers to promote the development of policies that aim to eliminate SES barriers. For example, many of the variables that are part of our SES index are system-modifiable factors. Implementation of laws that make education equitable, create jobs and training, improve housing affordability, and provide universal healthcare are ways in which policies can aid in mitigating these SES barriers. At the healthcare system level, practitioners and systems should recognize that these SES barriers exist and promote solutions. For example, systems can offer transportation services to those who are experiencing SES barriers to lessen healthcare delays [48]. Similarly, providing adult- and childcare services can help avoid delays in seeking care [49].
As All of Us continues to enroll participants and participants complete all surveys, future studies should reassess this association to determine whether our findings hold. Moreover, in future analyses, it is important to consider adjusting for acculturation, as well as other types of stressors such as discrimination, as these experiences can contribute to healthcare delays. Additionally, future studies should aim to explore racial differences between US-born and foreign-born cancer survivors. It is crucial to recognize that races and ethnicities such as Black, Hispanic, and Asian are heterogeneous, varying across cultural and socioeconomic aspects. Thus, understanding the SES barriers associated with healthcare delays among US-born ethnic minorities from different racial and ethnic backgrounds (e.g., Mexican-US-born, Guatemalan-US-born, Chinese-US-born, Nigerian-US-born) compared to their foreign-born counterparts (e.g., Mexican-foreign-born, Guatemalan-foreign-born, Chinese-foreign-born, Nigerian-foreign-born) is of extreme importance.
Conclusion
Using data from the All of Us research program, we found that SES-related barriers are significantly associated with healthcare delays among the cancer survivors in our study, with a greater impact observed among those who were foreign-born. Similarly, we observed a possible protective effect of health literacy on healthcare delays among the US-born only. Our study highlights that, to mitigate the impact of delayed healthcare, both policymakers and healthcare providers must prioritize addressing the social determinants of health and promoting health literacy in these populations.
Data source
Cross-sectional data for this study were obtained from the "All of Us" research program, collected by online survey between May 2018 and April 2021. Briefly, this program is open to individuals who are 18 and over and living in the US. Participants signed a consent form following the Declaration of Helsinki for data collection. The participant data used are de-identified and available to approved researchers. The All of Us program was approved by the National Institutes of Health (NIH) Institutional Review Board (IRB).
Fig. 1 Theoretical framework of the impact nativity has on healthcare delays
Table 1 Descriptive characteristics of the sample and their association with SES barriers (n = 10,020)
Table 2 Descriptive characteristics of the sample and their association with healthcare delays (n = 10,020)
Single = includes those who reported: divorced, widowed, separated, and never married. p-values were obtained from chi-square and Mann-Whitney tests. Any healthcare delay includes delays due to transportation; living in a rural area where healthcare providers are too far; nervousness about seeing a healthcare provider; could not get time off work; could not get childcare; cannot leave an adult unattended due to being a caretaker; could not afford copays; deductible was too high; or could not afford it or had to pay out of pocket for some or all procedures. Per "All of Us" data use agreement policy, groups with < 20 participants are shown as ≤ 20 (%) with a corresponding > (%) category to prevent deriving counts < 20 from other values. Not all percentages sum to 100. Q1 = Quartile 1 (25%), Q3 = Quartile 3 (75%). SES = socioeconomic. Data values included in these categories: a Other and missing; b None of this, another population, and prefer not to answer; c Prefer not to answer and missing; d Missing, do not know, and prefer not to answer; NA = Missing; e Income reported in US dollars. Cancer type "other" includes esophageal, eye, pancreatic, and stomach cancers.
Table 3 Results from the multivariable regression analysis of risk factors for healthcare delay and by nativity status among cancer survivors from the All of Us Research Program
p-trends were obtained by assessing SES barriers and health literacy as continuous measures. Odds ratios (OR) adjusted for sex, race/ethnicity, age, marital status, active treatment, and cancer type. SES = socioeconomic; Ref = reference group; CI = confidence interval | 2023-09-09T06:17:43.402Z | 2023-09-07T00:00:00.000 | {
"year": 2023,
"sha1": "24c4e566739d1824c58104954d196edb0a83ca1a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10552-023-01782-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e4f1cbf205dd979518414f11c4fdfcb29a65c7dc",
"s2fieldsofstudy": [
"Sociology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9542890 | pes2o/s2orc | v3-fos-license | Hypoxia Downregulates MAPK/ERK but Not STAT3 Signaling in ROS-Dependent and HIF-1-Independent Manners in Mouse Embryonic Stem Cells
Hypoxia is involved in the regulation of stem cell fate, and hypoxia-inducible factor 1 (HIF-1) is the master regulator of the hypoxic response. Here, we focus on the effect of hypoxia on the intracellular signaling pathways responsible for mouse embryonic stem (ES) cell maintenance. We employed wild-type and HIF-1α-deficient ES cells to investigate the hypoxic response in the ERK, Akt, and STAT3 pathways. Cultivation in 1% O2 for 24 h resulted in the strong dephosphorylation of ERK and its upstream kinases and, to a lesser extent, of Akt in an HIF-1-independent manner, while STAT3 phosphorylation remained unaffected. Downregulation of ERK could not be mimicked either by pharmacologically induced hypoxia or by the overexpression of HIF-1α or HIF-2α. Dual-specificity phosphatases (DUSP) 1, 5, and 6 are hypoxia-sensitive MAPK-specific phosphatases involved in ERK downregulation, and protein phosphatase 2A (PP2A) regulates both ERK and Akt. However, combining multiple approaches, we revealed the limited significance of DUSPs and PP2A in the hypoxia-mediated attenuation of ERK signaling. Interestingly, we observed a decreased reactive oxygen species (ROS) level in hypoxia and a similar phosphorylation pattern for ERK when the cells were supplemented with glutathione. Therefore, we suggest a potential role for the ROS-dependent attenuation of ERK signaling in hypoxia, without the involvement of HIF-1.
Introduction
Embryonic development in its early stages takes place in a hypoxic microenvironment, and oxygen (O2) has a significant impact on cellular differentiation and cell fate decisions [1]. ES cells are derived from the preimplantation blastocyst, and hypoxia is thus suggested to modify cellular differentiation and regulate pluripotency. However, the outcome of stem cell cultivation at reduced O2 tension remains highly controversial [2-5].
The cellular response to hypoxia is primarily orchestrated by hypoxia-inducible factor (HIF). HIF is a heterodimeric protein belonging to a family of environmental sensors known as bHLH-PAS (basic-helix-loop-helix-Per-Arnt-Sim) transcription factors. HIF consists of a constitutively expressed HIF-β subunit and an O2-regulated HIF-α subunit [6]. Three α isoforms, termed HIF-1α, HIF-2α, and HIF-3α, are currently described [7]. When O2 is not a limiting factor, the HIF-α subunit is rapidly hydroxylated by the family of prolyl hydroxylases (PHD) and targeted for subsequent proteasomal degradation [8-10]. Hypoxia inactivates PHD, leading to the accumulation of the α subunit. After dimerization with the β subunit in the nucleus, HIF binds to a conserved DNA sequence known as the hypoxia-responsive element (HRE) to transactivate a myriad of hypoxia-responsive genes. The HIF-1 heterodimer, consisting of HIF-1α and HIF-β, is the most important for cell adaptation to hypoxia [11]. Notably, O2-dependent hydroxylation is implicated also in the modification of other (non-HIF) targets, and O2 is also a substrate for various other enzymes such as NADPH oxidases and monooxygenases [12]. Further, the sensing of and cellular response to low O2 levels are also associated with the modulation of reactive oxygen species (ROS) formation and alterations in mitochondrial metabolism, including ATP production [13]. A particular cellular response to reduced O2 levels is thus complex and might be mediated in both HIF-dependent and -independent fashions.
The self-renewal and differentiation of murine ES cells are regulated by several signaling pathways. Among others, the signal transducer and activator of transcription 3 (STAT3) and phosphatidylinositol-4,5-bisphosphate 3-kinase/Akt (PI3K/Akt) signaling maintain ES cells in an undifferentiated, pluripotent state [14, 15]. Conversely, mitogen-activated protein kinase/extracellular signal-regulated kinase (MAPK/ERK) signaling promotes the differentiation of ES cells [16]. Here, we focus on changes elicited in the activity of the mentioned signaling pathways in wild-type and HIF-1α-deficient ES cells upon 24 h cultivation in 1% O2. The activity of these signaling pathways is regulated through phosphorylation and by the opposing process of dephosphorylation mediated by phosphatases. DUSPs are MAPK-specific phosphatases that impede the activity of ERK and also of stress-activated kinases. DUSP1, DUSP5, and DUSP6 are suggested to be hypoxia-sensitive and responsible for ERK downregulation [17, 18]. Further, the serine/threonine phosphatase PP2A regulates both the MAPK/ERK and Akt pathways, and its modulation by hypoxia has been proposed in various models [19, 20].
Our study demonstrates the sensitivity of ES cells to chronic hypoxia in the context of the dephosphorylation of ERK and Akt, with the possible involvement of ROS-dependent mechanisms. In agreement with other studies [21-23], we suggest that cultivation in 1% O2 in our system is also associated with a decline in markers associated with the undifferentiated status of ES cells, despite the impairment of prodifferentiation ERK signaling.
Transient Transfection and Reporter Gene Assays. 0.5 μg each of the pT81/HRE-luc construct containing three tandem copies of the erythropoietin HRE in front of the herpes simplex thymidine kinase promoter and the luciferase gene, expression vectors for murine HIF-1α and HIF-2α under the cytomegalovirus promoter, and GFP [26] (all generously provided by Professor L. Poellinger, Karolinska Institutet, Sweden), the APRE-luciferase gene reporter plasmid (STAT3-responsive acute-phase response element, luciferase reporter; APRE-luc [27]; kindly provided by Professor A. Miyajima, Institute of Molecular and Cellular Biosciences, University of Tokyo, Japan), and a Renilla luciferase construct (Promega, USA) were used per well of a 24-well plate. The cell culture medium was changed 6 h after transfection. For experiments involving APRE-luc, the medium was exchanged for fresh complete medium or medium without LIF. The Dual-Luciferase Assay Kit (Promega, USA) was used for the evaluation of luciferase activity according to the manufacturer's instructions. Relative luciferase units were measured using a Chameleon™ V plate luminometer (Hidex, Finland) and normalized to Renilla luciferase expression.
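The firefly/Renilla normalization step amounts to a simple per-well ratio; a minimal sketch with made-up counts (the numbers are illustrative only):

```python
# Hypothetical raw luminometer counts from triplicate wells.
firefly = [12500, 9800, 14100]  # HRE-luc or APRE-luc reporter signal
renilla = [5200, 4700, 5600]    # co-transfected Renilla control signal

# Normalize each well's reporter signal to its transfection control.
relative_activity = [f / r for f, r in zip(firefly, renilla)]
print(relative_activity)
```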
Small Interfering RNA (siRNA) Transfection. Cells were transfected with commercially available siRNA against DUSP1 (sc-35938), DUSP5 (sc-60555), or DUSP6 (sc-39001) transcripts (each consisting of a pool of 3 target-specific 19-25 nt siRNAs designed to knock down gene expression), or a related nonsilencing control (all Santa Cruz Biotechnology, USA), using Lipofectamine RNAiMAX Reagents (Thermo Fisher Scientific Inc., USA) according to the manufacturer's instructions. Cells were harvested at the indicated times, and the expression of particular DUSP transcripts was assessed by qRT-PCR, or the expression of selected proteins and posttranslational modifications was analyzed by western blot.
Quantitative Real-Time RT-PCR (qRT-PCR) Analysis.
Total RNA was extracted using the UltraClean Tissue & Cells RNA Isolation Kit (MO BIO Laboratories, USA). Complementary DNA was synthesized according to the manufacturer's instructions for M-MLV reverse transcriptase (Sigma-Aldrich); 0.5 μg of total RNA was used for cDNA synthesis. qRT-PCR was performed in a Roche LightCycler 480 instrument using LightCycler SYBR Green Master Mix (Roche, Germany) according to the manufacturer's instructions. The primers, appropriate annealing temperatures, and PCR product lengths for the determined transcripts are listed in Table 1. The gene expression of each sample was expressed in terms of the threshold cycle normalized to the mean of β-actin and TATA-box binding protein (TBP) expression.
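The described normalization corresponds to a standard 2^(-ΔCt) calculation against the mean of the two reference genes; a sketch assuming roughly 100% amplification efficiency (an assumption, not a stated parameter of the protocol):

```python
import numpy as np

def relative_expression(ct_target, ct_actin, ct_tbp):
    """Relative expression from qRT-PCR threshold cycles (Ct), normalized
    to the mean of the two reference genes, assuming ~100% efficiency."""
    delta_ct = ct_target - np.mean([ct_actin, ct_tbp])
    return 2.0 ** (-delta_ct)

print(relative_expression(24.1, 18.3, 19.0))
```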
High-Performance Liquid Chromatography (HPLC) Analysis of ROS Production. The HPLC detection of superoxide (O2−) was based on the detection of a specific product, 2-hydroxyethidium (2-OH-E(+)), which is formed in the reaction of O2− with HE, as described previously [25, 29, 30]. Besides the specific 2-OH-E(+), a nonspecific product formed by hydride acceptors in reaction with HE, ethidium (E(+)), was also detected. Briefly, the cells were seeded onto 60 mm tissue culture dishes 24 h before treatment. The cells were exposed to hypoxia for 24 h or treated with 10 mM GSH for 60 minutes. Thirty minutes before the end of the experiment, HE (Sigma-Aldrich, USA) was added to the cells at a final concentration of 10 μM. The cells were washed twice with ice-cold PBS. To extract the HE products, ice-cold methanol was added to the cells for 15 minutes at 4°C in the dark with shaking. The supernatant was transferred to an Eppendorf tube and centrifuged. A 75 μl sample was injected into the HPLC system (Agilent series 1100) equipped with fluorescence and UV detectors (Agilent series 1260) to separate the 2-OH-E(+) and E(+) products. Fluorescence was detected at 510 nm (excitation) and 595 nm (emission); the mobile phase consisted of H2O/CH3CN. A Kromasil C18 (4.6 mm × 250 mm) column was used as the stationary phase.
2.7. Alkaline Phosphatase (ALP) Activity Determination. ALP activity determination was performed by a standard procedure as presented previously [31, 32]. Briefly, the cells were seeded 24 h before treatment and exposed to hypoxia or normoxia for 24 h. After the incubation, the cells were washed twice with PBS and lysed in ALP assay lysis buffer (50 mM Tris pH 7.4, 150 mM NaCl, 1 mM EDTA, and 0.5% NP40; all components Sigma-Aldrich, USA). Protein concentration was determined using the DC protein assay kit (Bio-Rad, USA) according to the manufacturer's instructions. 5 μg of protein was incubated in a 96-well plate (four parallel wells per group) at 37°C with the ALP substrate p-nitrophenyl phosphate (Sigma-Aldrich, USA) for 30 minutes. The reaction was stopped by adding 3 M NaOH, and the optical densities were measured at 405 nm (620 nm reference) using a microplate reader (Hidex, Finland).
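A sketch of the readout arithmetic implied above; both the reference subtraction and the per-microgram expression are assumptions for illustration, not normalizations stated in the protocol:

```python
import numpy as np

def alp_activity(od405, od620, protein_ug=5.0):
    """Mean background-corrected absorbance per microgram of protein
    over the four parallel wells (arbitrary units per 30 min)."""
    corrected = np.asarray(od405) - np.asarray(od620)  # 620 nm reference subtraction
    return corrected.mean() / protein_ug

print(alp_activity([0.82, 0.79, 0.85, 0.81], [0.05, 0.06, 0.05, 0.05]))
```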
Statistical Analysis. Data are expressed as mean + standard error of the mean (SEM). Statistical significance was assessed by T test or by one-way analysis of variance (ANOVA) with Bonferroni's multiple comparison posttest. Values of P < 0.05 were considered statistically significant (*P < 0.05).
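A sketch of this testing scheme (one-way ANOVA followed by Bonferroni-corrected pairwise t tests) using SciPy; this is an illustration of the general procedure, not the authors' analysis code:

```python
from itertools import combinations
from scipy import stats

def anova_bonferroni(groups, alpha=0.05):
    """One-way ANOVA followed by Bonferroni-corrected pairwise t tests.

    groups: dict mapping condition name -> list of replicate values.
    """
    f_stat, p_overall = stats.f_oneway(*groups.values())
    pairs = list(combinations(groups, 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction
    significant = {
        (a, b): stats.ttest_ind(groups[a], groups[b]).pvalue < corrected_alpha
        for a, b in pairs
    }
    return f_stat, p_overall, significant
```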
Hypoxic Response in Wild-Type and HIF-1α-Deficient ES Cells.
To determine the stabilization of HIF-1α and its role in the hypoxic response in our system, we cultivated wild-type and HIF-1α-deficient ES cells in normoxia and in the presence of 1% O2 for 24 h. In parallel, ES cells were also treated with hypoxia mimetics for 24 h (CoCl2, 0.3 mM; DFO, 0.05 mM; DMOG, 3 mM; JNJ, 0.2 mM); concentrations were selected according to a comprehensive literature search and a previous study [22]. As anticipated, exposure to hypoxia led to the stabilization of HIF-1α in wild-type but not in HIF-1α−/− ES cells. The level of HIF-2α in hypoxia-treated cells was equal in wild-type and HIF-1α−/− ES cells (Figure 1(a)). In a similar manner, treatment with hypoxia mimetics also resulted in HIF-1α stabilization, with a certain variability in the efficacy of the employed compounds and with CoCl2 having the most pronounced effect (Figure 1(b)).
Hypoxia but Not HIF Stabilization Decreases ERK Signaling. The cultivation of ES cells in complete medium (supplemented with serum and LIF) results in the activation of several pathways that are responsible for ES cell maintenance. In normoxia, the STAT3, Akt, and MAPK/ERK pathways are phosphorylated and, hence, active. Here, we aimed to assess how incubation in 1% O2 affects the phosphorylation of these signaling components. The phosphorylation of STAT3 remained unchanged by hypoxia. In contrast, the phosphorylation of Akt and its downstream target GSK3β at the inhibitory serine 9 was decreased in hypoxia-exposed ES cells (Figure 2(a)). Notably, we observed a strong decrease in the phosphorylation of ERK and its upstream kinase MEK. The dephosphorylation of RAF1 was also apparent at both the inhibitory S259 and activating S338 residues (Figure 2(b)). The attenuation of ERK signaling correlated with the downregulation of its downstream target EGR1 (Figure 2(b)). Dephosphorylation was not dependent on HIF-1α for any of the examined proteins.
To further analyze STAT3 signaling, we employed APRE-luc, a luciferase reporter system that responds to the transcriptional activity of STAT3. Wild-type and HIF-1α−/− ES cells were transfected with APRE-luc and, after a media change, were further cultivated in LIF-free medium or complete medium for 24 h in either normoxia or hypoxia. APRE-luc activity was significantly upregulated in an HIF-independent manner in the presence of LIF, which is a known inducer of STAT3 signaling. This effect was preserved also in hypoxia (Figure 2(c)).
Next, we assessed the effect of HIF stabilization mediated via exogenous HIF-1α or HIF-2α expression. ES cells were transiently transfected with vectors constitutively expressing mHIF-1α or mHIF-2α. Cells treated with PEI and transfected with a GFP expression vector served as controls. We observed stabilization and increased levels of HIF-1α and HIF-2α, as well as upregulation of HIF-mediated transcriptional activity; however, the phosphorylation of ERK kinase remained unaffected in both cases (Figures 2(d) and 2(e)).
The administration of the hypoxia mimetics CoCl2, DFO, DMOG, and JNJ for 24 h did not mimic the effect of hypoxia in the sense of ERK and Akt dephosphorylation. The phosphorylation of ERK kinase was upregulated following CoCl2 and DFO treatment and remained unaffected after the addition of DMOG and JNJ. Phosphorylation of Akt was upregulated by DFO and remained unchanged following treatment with the other mimetics (Figure 2(f)).
3.3. Hypoxia Upregulates DUSP1 but DUSP6 Has the Most Prominent Effect on ERK Dephosphorylation. Further, we investigated the role of DUSP phosphatases in hypoxia-driven ERK dephosphorylation. Firstly, the effect of hypoxia on DUSP1, 5, and 6 expression was evaluated by qRT-PCR. The exposure of ES cells to hypoxia for 24 h increased DUSP1 expression (Figure 3(a)). Similar results were obtained using wild-type or HIF-1α-deficient cells. In contrast to DUSP1, the transcript levels of DUSP5 and 6 remained unaffected in hypoxia, again independently of the presence of HIF-1α (Figures 3(b) and 3(c), resp.). Next, we addressed the question of which DUSP has the most prominent effect on ERK dephosphorylation. We employed RNA silencing of selected members of the DUSP family and screened for ERK phosphorylation. The transfection of ES cells with siRNA against DUSP1, 5, or 6 resulted in their decreased expression (Figures 3(d), 3(e), and 3(f), resp.). In the next step, we analyzed the effect of DUSP silencing on ERK phosphorylation status. Only the silencing of DUSP6 had a prominent effect on augmenting ERK phosphorylation. It also correlated with the downregulation of DUSP6 protein (Figures 3(g), 3(h), and 3(i)).
Hypoxia Downregulates ERK Phosphorylation Regardless of DUSP6 Silencing. The RNA silencing of DUSP6 elevated the phosphorylation of ERK both in normoxia and hypoxia; however, the increase in ERK phosphorylation following DUSP6 silencing in hypoxia was not statistically significant (Figures 4(a) and 4(b)). Neither hypoxia-induced ERK dephosphorylation nor the reduction in EGR1 level was abolished following DUSP6 silencing, as determined by western blot (Figure 4(a)). DUSP6 protein and ERK levels were also determined in hypoxic conditions. Cells were cultured in 1% O2 for 3, 6, 12, and 24 h. Western blot analysis revealed the hypoxia-mediated downregulation of DUSP6 protein in ES cells at all examined time points. Similarly, ERK phosphorylation was progressively downregulated in hypoxia (Figure 4(c)).
Okadaic Acid Partially Rescues Hypoxia-Mediated Dephosphorylation. In an effort to clarify the involvement of PP2A phosphatase in the hypoxia-mediated regulation of ERK and Akt activation, we first examined its mRNA level. The gene expression of PP2A was not significantly modified by hypoxia in either parental wild-type or HIF-1α−/− cells (Figure 5(a)). Second, we employed OA, a potent inhibitor of PP2A. Before exposure to hypoxia or standard cultivation, OA was added to the media at a 10 nM concentration. The inhibition of PP2A increased the phosphorylation of ERK independently of normoxic or hypoxic conditions and abolished its downregulation in hypoxia (Figures 5(b), 5(c), 5(d), and 5(e)). The phosphorylation of RAF1 at S259 remained intact upon treatment with OA (Figure 5(b)). The effect of OA on Akt phosphorylation was even more pronounced in hypoxia. Treatment with OA also increased the basal level of MEK phosphorylation, which was, however, reduced in hypoxia-cultivated samples. Similarly, OA induced the upregulation of RAF1 at S338 but did not prevent the reduction observed in hypoxic samples (Figures 5(b), 5(c), and 5(d)).
Hypoxia Reduces the ROS Level in ES Cells, Affecting ERK and Akt Phosphorylation. To determine the involvement of hypoxia-induced changes in the cellular redox environment, we employed the ROS-sensitive probe HE. Following hypoxic cultivation, we observed a significant decline in both the superoxide-specific (Figure 6(a)) and nonspecific HE oxidation products (Figure 6(b)), as assessed by HPLC analysis. The supplementation of cells with the intracellular antioxidant GSH significantly reduced the amount of HE oxidation products in a manner similar to hypoxia. Further, we sought to elucidate the effect of GSH or β-ME supplementation (10 mM, 60 minutes) on the phosphorylation of selected signaling components of the ERK and Akt pathways. Following the treatment with GSH or β-ME, we revealed robust dephosphorylation of the ERK pathway and its upstream kinases, as well as of Akt kinase. The level of STAT3 phosphorylation remained unaffected by this intervention (Figures 6(c) and 6(d)).
Hypoxia Attenuates Markers Associated with the Undifferentiated State of ES Cells. Given the suggested importance of hypoxia in stem cell cultivation, we aimed to investigate whether the attenuation of prodifferentiation ERK signaling in hypoxia affects selected markers of stem cell maintenance. The gene expression of critical regulators of the undifferentiated state and stem cell markers octamer-binding transcription factor 4 (OCT4), Nanog, zinc finger protein 42 homolog (ZFP42), and tissue-nonspecific alkaline phosphatase (TNAP) was significantly decreased in hypoxia (Figures 7(a), 7(b), 7(c), and 7(d)), contrary to the early differentiation marker fibroblast growth factor 5 (FGF5), which was upregulated (Figure 7(e)), as determined by qRT-PCR. Protein levels of Oct4 and Nanog were reduced in hypoxia-cultivated cells as determined by western blot (Figure 7(f)), as was alkaline phosphatase activity (Figure 7(g)).
Discussion
Properties of stem cells are maintained by numerous factors, including the O2 level. Mouse ES cells are routinely cultivated in media supplemented with serum and LIF, which support their undifferentiated state. LIF acts mainly through activating STAT3 signaling, while serum activates other pathways, including MAPK/ERK and Akt [14, 16, 33]. Here, we show that chronic hypoxia (1% O2 for 24 h) decreases the phosphorylation of ERK and Akt but not STAT3 and attenuates markers associated with the undifferentiated state. As the most striking effect of hypoxia was observed on ERK signaling, we further focused on this pathway in particular, with a minor interest also in Akt. Interestingly, chronic hypoxia downregulated not only ERK but also its upstream kinases, MEK and RAF. As mentioned before, the cellular response to hypoxia is a complex process. A decrease in O2 level inhibits PHD, which in turn leads to HIF stabilization and transactivation of its target genes, for example, PGK1 or VEGF [34]. GDF15 is known to be upregulated by various stressors including O2 deprivation, without evidence of direct HIF involvement [35, 36]. This process is universal for virtually every cell type, including ES cells.
To understand the mechanisms of the observed inhibition of ERK signaling by hypoxia, we first aimed to assess whether HIF activity alone causes the dephosphorylation of the ERK signaling pathway. We employed HIF-1α-deficient ES cells, the exogenous expression of HIF-1α and HIF-2α, and treatment with the most commonly used inhibitors of PHD (hypoxia mimetics): DMOG, JNJ, CoCl2, and DFO [22, 37, 38]. Neither HIF-1α depletion nor HIF upregulation by overexpression or hypoxia mimetics had an effect on the studied ERK dephosphorylation. Therefore, we suggest that hypoxia, and not HIF itself, is responsible for the observed attenuation of ERK signaling. Moreover, DFO and CoCl2 even induced ERK phosphorylation. This is in agreement with other authors reporting that iron chelation strongly activates MAPK [39-42]. Interestingly, the mechanism of ERK induction proposed by Huang and colleagues suggests that DFO inhibits DUSP1 and therefore supports ERK phosphorylation [43]. However, in our experiments, we observed the induction of DUSP1 following DFO and CoCl2 treatment (data not shown, manuscript in preparation). ROS-dependent activation of ERK signaling has been described multiple times [44, 45], and we earlier reported that in our system ERK is also activated in a ROS-sensitive manner [25]. Therefore, we hypothesized that the activation of ERK might be mediated by drug-induced ROS elevation [46, 47]. Further, we aimed to establish whether the downregulation of ERK signaling is associated with the activity of phosphatases. In agreement with the literature, the roles of selected MAPK-specific DUSPs and PP2A were analyzed here. The elevated expression of DUSP1 following hypoxic treatment was reported earlier [48], as was the upregulation of ERK-specific DUSP6 in the presence of 1% O2, which was mediated in an HIF-1-dependent manner [18]. We also tested DUSP5 as a representative nuclear inducible DUSP with specificity only towards ERK; the higher efficiency of DUSP1 in the dephosphorylation of p38 and c-Jun N-terminal kinase was described by other authors [49, 50].
In our experiments, hypoxia increased the expression of DUSP1 in an HIF-1-independent manner, while the mRNA expression of DUSP5 and DUSP6 remained unchanged upon 24 h hypoxic incubation. Others suppose that DUSP1 is induced by hypoxia as part of a program of gene expression controlled by HIF-1 [51, 52]. Although we do not rule out this possibility, on the basis of our results we propose that HIF-1 is dispensable for DUSP1 gene expression in chronic hypoxia. Interestingly, in contrast to our results, Bermudez and colleagues reported the upregulation of both DUSP5 and DUSP6 mRNA levels following 24 h cultivation in 1% O2 [18]. This divergence might be attributed to differences between the melanoma and adenocarcinoma cell lines employed in the mentioned study and the ES cells used in our experiments.
Next, we employed siRNA silencing to investigate the involvement of DUSPs in the regulation of ERK phosphorylation. The downregulation of DUSP1 and DUSP5 by specific siRNAs did not have a profound effect on ERK phosphorylation. In contrast, the siRNA silencing of DUSP6 resulted in upregulation of the phosphorylated form of ERK, suggesting its involvement in ERK regulation in mouse ES cells. Hypoxia, however, downregulated ERK phosphorylation regardless of DUSP6 silencing. Moreover, hypoxia did not increase DUSP6 expression at either the mRNA or protein level. It was reported that DUSP6 is subject to extensive posttranscriptional and posttranslational modifications and that the mRNA level might not correspond to the protein level due to several feedback loop mechanisms that are likely to promote the proteasomal degradation of DUSP6 via ERK phosphorylation [18, 53]. Thus, it is possible that the expression of DUSP6 takes place as part of the early hypoxic response and, after 24 h, returns to its basal level. However, we observed the downregulation of DUSP6 at the protein level even after 3 h cultivation in 1% O2. Taken together, we conclude that the selected DUSP1, 5, and 6 do not play a major role in the downregulation of ERK phosphorylation during chronic hypoxia in our system. Consistent with this is the fact that not only ERK but also its upstream kinases MEK and RAF, which are not recognized as substrates for DUSPs [54], showed a decline in phosphorylation status in hypoxia.
Further, we decided to elucidate the role of PP2A in chronic hypoxia. PP2A is one of the most abundantly expressed serine/threonine protein phosphatases; it regulates the MAPK/ERK pathway in both a positive and negative fashion [55] and is also involved in the dephosphorylation of Akt [56]. Although PP2A can be induced by various stressors including hypoxia in both in vivo and in vitro models [19, 20], in our experiments its mRNA level was not altered after hypoxic cultivation. We employed the PP2A inhibitor OA to further investigate the involvement of PP2A in the dephosphorylation of the ERK and Akt pathways observed in hypoxia. The hypoxia-induced impairment of ERK and Akt phosphorylation was reversed in the presence of OA. The dephosphorylation of MEK was partially rescued, but it was still downregulated by hypoxia despite the general increase in phosphorylation, similarly to the situation at the S338 RAF1 residue. These findings suggest several modes of action of PP2A in our experiments. PP2A might dominate as a negative regulator of ERK signaling through the direct dephosphorylation of ERK and MEK, as proposed by previous studies [57, 58]. However, the increased phosphorylation of S338 at RAF1 in OA-treated ES cells suggests intervention at the level of RAF1 or, even more likely, upstream, as S338 is not reported as a PP2A target. This is in accordance with the study by Sawa and colleagues [59]. Moreover, OA also promotes the phosphorylation of the epidermal growth factor receptor (EGFR) and thus activates EGF signaling, which is also connected with ERK and Akt upregulation [60]. However, we were not able to detect changes in EGFR phosphorylation in our experiments (data not shown). As the inhibitory phosphorylation of RAF1 at the S259 site, which is a PP2A target [61], remained unaffected by OA treatment, we also propose that this type of RAF1 regulation has negligible importance in our system. Interestingly, S259 phosphorylation is mediated by Akt [62]; however, the phosphorylation of Akt was upregulated following OA treatment in hypoxia without an effect on RAF1 phosphorylation. This suggests the limited significance of crosstalk between PI3K/Akt and MAPK/ERK at the level of Akt and RAF1 in ES cells. Taken together, our results indicate that the impairment of ERK signaling takes place at the level of RAF1 or above via an independent hypoxia-driven mechanism that might, to a certain extent, involve PP2A.
A plausible explanation for our observations might lie in the decline of intracellular ROS in cells cultivated in hypoxia. ROS are currently recognized as important modulators of various intracellular signaling pathways [63]. Many growth factor and cytokine receptors possess cysteine-rich motifs susceptible to oxidation, which may result in changes in the structure and function of the protein and lead to the activation or inhibition of several signaling pathways [64]. A proportionate level of ROS is also required for the formation of disulfide bonds in order to achieve the suitable intermolecular conformation of signaling proteins and is also vital in the processes of intramolecular dimerization and protein-protein interactions (e.g., with scaffold proteins), which are necessary for proper signal transduction [65].
Previously, we and other authors demonstrated that antioxidants and inhibitors of ROS-producing enzymatic sites abolish MAPK and Akt activation. Conversely, interventions that lead to increased ROS production or the direct exposure of cells to oxidants such as H2O2 also activated MAPK as well as Akt in a number of different studies, including ours [25, 63, 64].
To compare the effects of hypoxia-mediated decreases in ROS, we treated cells with GSH. GSH is an essential component of the intracellular ROS-buffering system and a first-line-of-defence antioxidant [66]. Cells supplemented with exogenous GSH manifested ERK and Akt phosphorylation patterns identical to those of cells cultivated in hypoxia. These results were further confirmed by treatment of cells with the reducing agent and thiol antioxidant β-ME. Therefore, we suggest that hypoxia-mediated ROS depletion is significantly involved in the downregulation of ERK and Akt signaling in conditions of chronic hypoxia.
Signaling through ERK kinase is typically regulated by mitogens and, as such, is associated with cell proliferation. The inhibition of mitogen-induced ERK signaling thus also attenuates cell division [67]. The described mechanisms may therefore serve as part of the processes that keep the proliferation rate of stem cells low in their niche, even in the presence of relatively high concentrations of growth factors. Multiple lines of evidence show that cultivation in the range between 1 and 5% O2 supports the maintenance of ES cells in a pluripotent state, prevents their differentiation, and even reprograms partially differentiated cells to a stem cell-like state [2, 5, 68]. In contrast to studies proposing that hypoxia has a beneficial effect on stem cell maintenance, others reported that cultivation at reduced O2 tension instead supports ES cell differentiation [3, 4, 69]. Although ERK signaling contributes preferably to differentiation in the context of ES cells, the hypoxia-mediated ERK attenuation observed in our experiments did not support the undifferentiated state, as shown by the reduced transcripts of markers associated with the stem cell signature. This is in agreement with our earlier study, in which neither PHD inhibition nor 1% or 5% hypoxia prevented the downregulation of markers associated with pluripotency induced by the depletion of LIF [22]. Here, we report similar results, even in the presence of complete medium, as shown by reduced mRNA levels of OCT4, NANOG, ZFP42, and TNAP, diminished Oct4 and Nanog protein, and a reduction in alkaline phosphatase activity. Thus, in our system, hypoxia is not a supportive factor with respect to the maintenance of markers of undifferentiated ES cells after 24 h cultivation in 1% O2. PI3K/Akt was shown to be critical for supporting the self-renewal of ES cells; therefore, the observed reduction in transcripts associated with undifferentiated status might be attributed to the hypoxia-mediated impairment of Akt and its downstream targets such as Nanog [15, 33]. However, it should be emphasized that the downregulation of markers associated with the undifferentiated state is highly dependent on hypoxia level, length of hypoxic incubation, cell type, and specific culture conditions, and might only be transient [21]. It is of particular interest that we did not observe the downregulation of STAT3 signaling in hypoxia, which is central to the maintenance of the undifferentiated state and pluripotency [14, 70]. This is in contrast with a study by Jeong and collaborators, who also reported the hypoxia-induced differentiation of ES cells. However, in that study, the differentiation of ES cells was connected to the HIF-1-mediated suppression of LIF receptor transcription, which in turn attenuated STAT3 activation [3]. As we showed at the phosphorylation level and by luciferase reporter assay, STAT3 signaling is not compromised by hypoxia in our system, regardless of the presence of HIF-1.
Figure 6: HPLC determination of the specific 2-OH-E(+) (a) and nonspecific E(+) (b) products of HE oxidation in cells cultivated in normoxia or hypoxia or supplemented with GSH. Data are presented as mean + SEM from at least three independent experiments. Statistical significance was determined by one-way analysis of variance (ANOVA) and post hoc Bonferroni's multiple comparison test (*P < 0.05). Effect of GSH and β-ME supplementation on phosphorylation of MAPK/ERK, Akt, and STAT3 (c, d). Total level of β-actin was used as a loading control.
Notably, in our previous study, we also reported the resistance of LIF-induced STAT3 phosphorylation to changes in intracellular redox status [25]. This notion is of particular interest, as persistent STAT3 phosphorylation is a hallmark of several cancers and drives the gene expression responsible for malignant cell proliferation and resistance to apoptosis, as well as increased invasion and migration [71]. Further, it should be emphasized that STAT3 signaling is crucial for the regulation of cancer stem cells in a similar way as for the regulation of ES cells [72]. The relative indifference of STAT3 phosphorylation to changes in the redox environment thus might serve as one of the driving forces of the cancer progression associated with poor prognosis.
Figure 7: Relative expression of OCT4 (a), NANOG (b), ZFP42 (c), TNAP (d), and FGF5 (e) in wild-type ES cells determined by qRT-PCR. Data are presented as mean + SEM from at least three independent experiments. Statistical significance was determined by t-test (*P < 0.05). Protein levels of Oct4 and Nanog in wild-type ES cells cultivated in normoxia or hypoxia as determined by western blot (f). Total β-actin was used as a loading control. Determination of alkaline phosphatase activity in wild-type ES cells cultivated in normoxia or hypoxia (g). Data are presented as mean + SEM from at least three independent experiments; statistical significance was determined by t-test (*P < 0.05).
Here, we report conclusively that chronic hypoxia attenuates the phosphorylation of ERK and Akt in ES cells independently of the presence of HIF-1α. On the basis of our results, we suggest that ROS play a considerable role in the phosphorylation of ERK and Akt in ES cells, as demonstrated by the similar effects of hypoxia-induced ROS depletion and of GSH supplementation on the ERK and Akt signaling cascades. However, our data do not exclude the involvement of other mechanisms.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper. The authors alone are responsible for the content and writing of the paper. | 2018-04-03T00:11:22.254Z | 2017-07-27T00:00:00.000 | {
"year": 2017,
"sha1": "28940fc68eed2e137c5ccfbdd0693cd11870883a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/omcl/2017/4386947.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f92ef7172623027933fcebf9524f42f09a06e4b5",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
117942880 | pes2o/s2orc | v3-fos-license | COSMOLOGICAL ENTROPY AND SEEKING OF GENESIS OF TIME
Influenced by the symmetry of entropy and time in nature, we attempt to relate entropy and time in a spacetime with a new dimension, and we show how Hubble's constant is related to the entropy of the universe. We discuss how the entropy of the universe behaves at different temperatures and at different ages of the universe. We show that the age of the universe is equivalent to Hubble's constant, and how entropy arises naturally from manipulations of gravity in Einstein's equation "00". From these results we conclude that the universe is isotropic and homogeneous with negative space curvature, i.e., K = -1, and not flat, K = 0 (which does not explain the acceleration and deceleration of the universe). From these results on gravity, entropy, temperature, and time, we discuss the genesis of time and propose that at absolute zero temperature the universe survives as a superconductor; that particular temperature is called the "Critical Absolute Temperature" (TAB). The genesis of time occurs at the first fluxon repulsion in the absolute-zero-temperature universe.
INTRODUCTION
"Science is nothing more than a refinement of everyday thinking." We see the universe the way it is because we exist. Of course there may be some irony in that statement in fact, scientifically is it true? Perhaps answer is not apt scientifically. But one thing is clear from the scientific literature which is prevailing till now is: 1) Universe has a beginning [9].
2) A hot big bang occurred after the beginning of the universe.
In order to explain how time begins in the universe, it is first necessary to understand the generally accepted history of the universe according to what is known as the hot big bang model. The singular beginning of the evolution of the universe is called the Big Bang. Before the big bang, the universe is thought to have had zero size, and just after the bang it would have been infinitely hot. At this stage the universe would have contained mostly extremely light particles that are affected only by the weak force and gravity, together with their antiparticles. As the universe continued to expand and the temperature to drop, the rate at which particles were being produced in collisions would have fallen below the rate at which they were being destroyed by annihilation. Neutrinos and anti-neutrinos would not have annihilated with each other, because these particles interact with themselves and with other particles only very weakly, so they should still be around today. Neutrinos are not massless; they have a small mass of their own. They could form part of the "dark matter", with sufficient gravitational attraction to restrict the expansion of the universe and cause it to collapse [10].
In these circumstances, what is the role of time? Did it begin with the universe? How does entropy play a role in explaining time, and how is gravity the most efficient generator of entropy in the universe? These topics are discussed throughout the paper.
Pioneering work in [10] defined the Einstein equation as an equation of state. By considering spacetime as a solid, [8] did brilliant work establishing a relation between the Einstein equation and entropy. Motivated by these papers, we establish a relation between matter, geometry, and entropy from Einstein's equation "00". Fruitfully, that relation yields an isotropic, homogeneous space-time of negative curvature. [2] "Destroy all scientific knowledge and just pass 'all things are made up of atoms' to the next generation."
CONVENTIONS OF PARTICLES AFTER BIG BANG
The Big Bang is a monstrous container of entropy; another such example is a black hole. The big bang played out the particles. Amid these microscopic particles, consider a particle P that is blown from the explosion with a velocity v → c. Annihilation of particles and antiparticles blown from the big bang explosion is possible only along a straight line, and these particles follow straight paths until some disturbance deflects them.
Let t1 be the time taken by the particle P to travel a distance vt after the big bang explosion. Initially it was at rest, at time t0. From Einstein's relativistic equation we then have, as v → c, Δt → ∞ (since t0 = 0, the initial time of the particle). The proper time of particle P tends to ∞ when the velocity of the particle tends to c, and similarly for the proper time of n such particles travelling a distance vt. The proper time of particles travelling at light velocity tends to infinity; that is, to an observer travelling at the speed of light, those particles appear stationary. This happens because of time dilation, as in (1): since Δt → ∞, time passes slowly for those particles, and the concept of time dilation applies to all particles travelling with velocity equal to c.
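For reference, the standard relations invoked here and in the next paragraph are, in the usual notation (a reconstruction, not necessarily the author's exact equations):

\Delta t = \frac{t_{1} - t_{0}}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
E = \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
r_{s} = \frac{2 G M}{c^{2}},

with M replaced by the relativistic mass E/c² where the text below indicates.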
The path of particles travelling at light velocity is the straightest possible world line, or null line; this is the generalization of the straight line. If m is the mass of particle P, the relativistic energy of that particle follows from the relativistic energy relation. Let the particle travel a distance equal to the radius of expansion of the universe; then, from the Schwarzschild radius with M replaced by the relativistic mass, we proceed. Let ds be the small distance travelled by particles blown from the big bang. If a particle travels a distance equal to the radius of the expanding universe, then the total distance s travelled by the particle from initial position r0 to r is given by equation (7), which depicts the total distance covered by the particle from the explosion. The relative expansion of the universe follows from the Robertson-Walker scale factor [6]. If the particle travels a distance equivalent to the comoving distance from the big bang in equation (8), with v → c and the distance taken at the present time, then equation (9) provides the total distance travelled by particle P, i.e., the radius of the expanded universe at a certain time t, through the scale factor a(t). With these conventions we proceed to the state of the particles in relation to entropy. [1] "The ultimate source of order, of low entropy, must be the big bang itself."
NITTY-GRITTY OF ENTROPY OF THE UNIVERSE
The whole universe is considered an isolated system, which cannot exchange heat or mass with its surroundings. The universe is constantly losing usable energy and never gaining it; we logically conclude that the universe is not eternal. The universe had a finite beginning: the moment at which it was at "infinite entropy" (its most disordered possible state). The first law of thermodynamics states that the total energy of the universe is constant, and the second law states that the entropy of the universe always increases [7]. With this background we try to understand the entropy of the universe and its relation to the age of the universe, i.e., to time. In the initial stages of the universe the entropy is very high; with time, entropy decreases and reaches zero. Brian Greene in [1] showed the symmetry between time and entropy, as in figure 1, meaning that entropy and time in the universe exhibit symmetry.
Figure 1: Symmetry between entropy and time of the universe
The Big Bang explosion raised the temperature of the dense universe so high that it produced a large amount of heat. The infinitesimal entropy of the system can be given as the ratio of the heat change to temperature, where dQ (= P dV) is the change in heat corresponding to temperature T, and the total entropy of the universe is the integral over such infinitesimal systems, where A indexes the different types of particles, say 1, 2, 3, ..., and S represents the entropy. It can also be written in terms of the temperature change dT.
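These definitions presumably reduce to the textbook form of entropy; the following is a reconstruction from the surrounding definitions (the heat-capacity form in the final step is an assumption on our part):

dS = \frac{dQ}{T} = \frac{P\,dV}{T}, \qquad
S_{\text{universe}} = \sum_{A} \int \frac{dQ_{A}}{T} = \sum_{A} \int \frac{C_{A}\,dT}{T}.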
"What is the origin of cosmological entropy?"
Entropy is defined as the measure of the unusable energy in a closed or isolated system. We have an expression [5] that determines the entropy of the universe at a particular time t of the universe, given by equation (13), where S is the entropy of the universe.
By introducing the Robertson-Walker scale factor together with Hubble's constant in (13), we obtain (14). From (1), we can see a drastic change in the entropy of the universe at different instants of time; this drastic change in entropy is due to the acceleration of the universe and the formation of massive objects in it. The relativistic nature of entropy is shown in the relation obtained from (14) and (2), giving (17), which can be inverted to obtain a relation that determines the time at a given entropy of the universe.
Equations (13), (14), (15), (16), and (19) can be used to calculate the entropy, the relative change in entropy, the acceleration of the universe, and the age of the universe. From equations (7) and (19), the radius of the expanded universe, i.e., the distance travelled by the particle (redshift) P with an acceleration equal to the rate of expansion of the universe, can be calculated. As the expansion of the universe continues, its temperature decreases; to calculate the temperature at a particular time [5], or at a particular entropy or acceleration of the universe, the following relations were derived.
The entropy, the age, and the Hubble constant of the universe, calculated from equations (13)-(19), are in good agreement with the values in the literature (table 1). The calculation successfully provides the age of the universe, which is equivalent to Hubble's constant as quoted in [6]. If the present age of the universe is 13.7 billion years, then its temperature is 0.7 °K, its entropy is 1.21×10^-4 J/°K, and the value of Hubble's constant is 4.33×10^17 s (13.7 × 10^9 yr × 3.156 × 10^7 s/yr ≈ 4.33 × 10^17 s), i.e., equivalent to 13.7 billion years. With this we have established a relation between the time, entropy, and temperature of the universe. The equations established above are used in the next sections to see how the entropy, time, and temperature of the universe affect gravity and space-time; this is possible only when the above equations are linked with the general theory of relativity. [1] "Gravity is the most efficient generator of entropy in the universe.
""Entropy is a property of matter and energy."
In the literature of astrophysics and cosmology, Einstein's equation of general relativity is a cornerstone. Einstein formulated an equation that relates space-time geometry to energy, G_{μν} = (8πG/c⁴) T_{μν} [6]. Division of one tensor by another is not supported in general, because it is impossible to know which component of the numerator is being acted upon by the denominator; the only exception is division by a scalar, in which case the tensor is simply rescaled, each component being divided by the scalar value. That is why we take the initial ("00") components of Einstein's equation. From equation (10) we introduce the entropy term into (25) and into Einstein's equation "00" (29); the result is (30). Here Einstein's equation provides an origin for cosmological entropy, as expressed in the above equations. Semi-classical Einstein equations were unable to provide the entropy accompanying the production of matter, but the above equation incorporates that notion. Equation (30) gives an inverse relation between gravity and entropy: if the gravity in a particular region of the universe is greater, then the entropy in that region will be less. Entropy is a parameter naturally present in the universe, and it triggers the expansion and contraction of the universe. In the following paragraphs we will see how entropy and matter define the space geometry. The above equation is interpreted as shown in figure 2.
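For reference, the standard field equation and its 00 (Friedmann) component for a homogeneous, isotropic Robertson-Walker metric, which the surrounding discussion appears to assume, are (a reconstruction rather than the paper's own derivation):

G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}, \qquad
\left(\frac{\dot{a}}{a}\right)^{2} + \frac{K c^{2}}{a^{2}} = \frac{8\pi G}{3}\,\rho,

where a(t) is the scale factor, ρ the energy density, and K ∈ {+1, 0, -1} the spatial curvature discussed below.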
Figure 2. Negative curvature in space-time (K = -1)
Since K = +1, -1, and 0 are the values of curvature described in [6], here we are left with only two possibilities for K. When K = 0, the entropy would be -∞; since physically there is no -∞, the entropy is zero. If K = -1, the entropy is positive. As a result, the above equation defines space as a hyperboloid, since it possesses negative curvature, i.e., K = -1. Thus entropy, geometry, and matter in the equation fortuitously define a space with negative curvature. These results with respect to the space-time, entropy, and temperature of the universe govern the big bang, the time of inflation, and the expansion and contraction of the universe (figure 4).
In a nutshell, it is possible to explain the life of the universe by using equation (34), which we call the entropy generator equation. This equation drives homogeneity and isotropy and contributes to the expansion of the universe (figure 2). In figure 2, the black dot represents a system of particles that has drifted along with the expansion of the universe; its current position shows its state in the universe. In the next section, starting from equation (34), which explains the life of the universe, an attempt is made to understand the genesis of time in the universe.
GENESIS OF TIME
"If I can"t picture time, I can"t imagine time." Numerical valuescalculated for entropy in table 1 shows that entropy of universe decreases with time due to decrease in temperature as from (10). From (30) & (31), as entropy of universe decreases effect of gravity increases. That effects total volume of universe and due to increase in gravity, universe get contracted to a point (30) i.e. K=0 (zero curvature). At K=0state,the gravity, density and pressure would be very high. Equation (24) andnumerical values in table 1, as entropy reaches to zero, then from Third law of thermodynamics the temperature of the universe approaches to absolute zero degree Kelvin. This much temperature is not experienced till now. All the matter in universe is concentrated at this highly dense point. Second Law of thermodynamics state that heat could never spontaneously move from a colder body to a hotter body. There is no trace for hotter bodies at this stage of universe. So, as a system approaches absolute zero, it should eventually draw energy from whatever systems are present near to it. In case of our universe it is not possible since there is no energy left at absolute zero state. If we think for a while it will draw from neighbor that is meaningless because in section 4 we assumed adiabatic expansion of universe and also we don"t know anything about its neighbor.
Since the absolute-zero state of the universe possesses no energy, the particles are at rest. Previously the proper time of a particle satisfied Δt → ∞, but now Δt is zero, i.e., Δt = t0. In these circumstances, the point at which the particles are concentrated with such density feels a high pressure. The condition is such that a small disturbance changes the total configuration of the system, i.e., a domino effect. This particular situation is called the big crunch, at which the whole universe is crunched to a point, K = 0. As explained above, whatever the matter, a minute disturbance creeps toward a big bang. In the big crunch the entropy of the system is zero. These are the conditions that characterize a singularity; hence the big crunch represents a singularity. A black hole is another example of a singularity.
Absolute zero Kelvin is a temperature at which any metal that exists effectively conducts with zero resistance. Such materials are known as superconductors. Absolute zero temperature is generally impossible in practice, but it occurs in the singularity we defined previously, i.e., in the case of the big crunch and black holes. Overall, what we wish to say is that the big crunch acts like a perfect superconductor. Superconductors conduct supercurrents at very low temperature.
Since the big crunch possesses absolute-zero temperature, it also conducts supercurrents; this conductivity is known as superconductivity. Superconductivity excludes interior magnetic fields, so the big crunch excludes interior magnetic fields. The big crunch then consists of particles like fluxons (like mesons) around which supercurrents exist. The temperature at which the big crunch becomes a superconductor is known as the "Critical Absolute Temperature" of the Big Crunch, denoted TAB, because it becomes so at absolute temperature. The big crunch is adumbrated as in figure 3.
Figure 3. Big Crunch as superconductor (exhibiting properties of superconductivity)
Suppose a minute disturbance is introduced by the high pressure and by repulsion between electric fluxons; the superconducting state would then become unstable. This creates an electric flux in the big crunch. These electric fluxons possess positive half spin, owing to which they repel each other. Along with the high pressure, this repulsion between electric fluxons increases the instability. Magnetic fluxons, possessing negative half spin in the excluded magnetic field, always attract electric fluxons, which adds further to the instability. In the big crunch, the first fluxon-particle repulsion establishes the space and time coordinates. This first repulsion between fluxons moves them toward other fluxons; thus time begins at this repulsion. Space and time both took birth with the first repulsion between fluxons. Hence the genesis of time occurs at the first repulsion between fluxons. This instability suddenly increases exponentially and raises the temperature of the highly dense state of the crunched universe. Though the instability rises exponentially, the temperature does not change suddenly from the "Critical Absolute Temperature" TAB, and the superconducting state continues, because the big crunch is a mixture of different particles. The time taken to overcome the "Critical Absolute Temperature" is known as the inflationary time, or time of inflation. This inflation increases the entropy of the system, i.e., of the highly dense state. Suddenly a big explosion takes place: the big bang. The time interval between the Big Crunch and the Big Bang is known as the time of inflation (figure 4).
It is clear from the above discussion that the time of the universe starts with the first fluxon-particle repulsion in the superconducting state of the Big Crunch. This sharply pinpoints the genesis of the time of the universe. The very origin of the universe takes place before the inflationary period, but the shape of the universe still remains at curvature K = 0. Only after the big bang does the curvature become K = -1, and the universe starts to expand as explained in the previous sections.
CONCLUSION
Triumphantly we discussed topics relating entropy, the temperature of the universe, time, gravity, matter, and Hubble's constant. We established equations relating the entropy of the universe, the time of the universe, and Hubble's constant, and calculated those parameters at different temperatures of the universe. The numerical values of the entropy, time, and Hubble's constant at different temperatures of the universe are in agreement with the values in the literature. Table 1 provides information about the age of the universe, which is in accordance with Hubble's constant. The symmetric nature of entropy and time gives an idea of how the entropy and time of the universe are related to each other. This relation is very helpful in calculating the entropy of the universe at a given temperature and age of the universe. Using the general theory of relativity, we explained the relativistic nature of entropy, which is new in the scientific literature. We established a relation between gravity and entropy in Einstein's field equation. From Einstein's equation "00" we obtained an equation that generates entropy, and we can call equation (34) the entropy generator in space-time. Equation (34) shows the isotropic and homogeneous space with negative curvature (hyperboloid) that is the most plausible curvature of the universe. This is possible only when Einstein's equation is represented in terms of entropy.
With the symmetric nature of entropy and time, and from the entropy generator equation in space-time, we tried to explain the genesis of time. Entropy, gravity, and the negative curvature of the universe are the main factors responsible for the accelerating and decelerating universe. At the Big Crunch, owing to zero entropy, the universe would have absolute zero temperature, at which the Big Crunch acts like a superconductor with a "Critical Absolute Temperature". In this state, the very high pressure and the superconductivity create a disturbance in the Big Crunch which creeps down through the inflationary phase to the Big Bang. The first repulsion between fluxons in the superconducting state of the Big Crunch, due to a minute disturbance, leads to the genesis of the time of the universe. | 2019-04-17T15:40:23.579Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "505abbf7f57db414ed8d0fdfc1da8e1c14b6e5db",
"oa_license": "CCBY",
"oa_url": "http://japlive.com/index.php/jap/article/download/2218/pdf_115",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7503faa9ca77ff48b79d93395e23d6d5d4af5e29",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
8596330 | pes2o/s2orc | v3-fos-license | A motile Chlamydomonas flagellar mutant that lacks outer dynein arms.
A new Chlamydomonas flagellar mutant, pf-28, which swims more slowly than wild-type cells, was selected. Thin-section electron microscopy revealed the complete absence of outer-row dynein arms in this mutant, whereas inner-row arms and other axonemal structures appeared normal. SDS PAGE analysis also indicated that polypeptides previously identified as outer-arm dynein components are completely absent in pf-28. The two ATPases retained by this mutant sediment at 17.7S and 12.7S on sucrose gradients that contain 0.6 M KCl. Overall swimming patterns of pf-28 differ little from wild-type except that forward swimming speed is reduced to 35% of the wild-type value, and cells show little or no backward movement during photophobic avoidance. Mutant cells will respond to phototactic stimuli, and their flagella will beat in either the forward or reverse mode. This is the first report of a mutant that lacks dynein arms that can swim.
Analysis of flagellar mutations in Chlamydomonas should greatly simplify the assignment of specific functions to flagellar structures. Mutants have previously been isolated that lack either a part or all of the radial spokes (12, 22), the central pair microtubule complex (1, 17, 22), inner dynein arms (9), and outer dynein arms (9). A correlation between alterations in flagellar motility and the lack of a particular structure has not been possible, however, because all of these mutants are virtually paralyzed. This suggests that all of the above-mentioned structures are required for motility, and a defect in any one renders the entire system inoperative. An alternative explanation may be that previous investigations specifically selected nonmotile mutants, and hence may have selected mutations with pleiotropic effects on motility as well as structure. The recent discovery of extragenic suppressors of mutations without radial spokes and central pairs (4) supports this conclusion, since these suppressors restore partial motility without restoring the missing structures. This led us to search for new mutations that alter normal motile behavior without causing complete paralysis, in hopes that specific changes in motility might be correlated with specific structural defects. This paper describes a new mutant that can swim despite the absence of outer-row dynein arms. Analysis of the motility of this mutant indicates that a broad range of beat patterns can be sustained by inner-row arms alone, and that only beat frequency, bend amplitude during reversal, and overall swimming speed appear to be affected directly by the lack of outer-row dynein arms. Mutagenesis and Selection: Wild-type Chlamydomonas strain 137c mt+ was mutagenized by irradiation from a 15-W ultraviolet lamp at a distance of 12 cm for 6 min. After mutagenesis, cells were spread on 0.8% agar plates containing M medium (medium I of Sager and Granick, reference 18), and colonies with a diameter smaller than that typical of wild-type colonies were transferred to liquid culture for further analysis. In addition to many mutants that totally lack flagella or have completely paralyzed flagella, a few were selected that displayed motility patterns different from those of wild-type cells. One such mutant is the subject of this paper.
Materials and Methods
Tetrad Analysis: Gametogenesis was induced in M medium without nitrogen (M-N), cells were mated, and zygotes were matured in the light on M-N plates (18, 20). Zygotes were transferred to 4% agar + M plates for germination and dissection of tetrad products. Tester strains of Chlamydomonas that contain linkage group marker mutations were obtained from Dr. E. Harris, Chlamydomonas Genetics Center, Duke University. Before linkage group analysis, new mutants were backcrossed repeatedly with wild-type cells to improve meiotic viability and to ensure that the observed motility phenotype resulted from a single mutation.
Dark-field Microscopy: To analyze the speed and pattern of forward swimming, cells were grown for 3 d in M medium to a density of ~10^7 cells/ml and were photographed under constant illumination from a 60-W tungsten source by use of the dark-field setting of a Zeiss phase condenser. Illumination was adjusted to provide the greatest intensity from one direction, which induced positive phototaxis of cells in the observed field. To record a relatively synchronous phototactic response over the entire field of observation, cells were dark-adapted for 30 s and then illuminated for 5 s before they were photographed. 1-s exposures were made on TRI-X film at a magnification of 27. Swimming speeds were estimated by measurement of the path length of swimming traces on prints at a final magnification of 120.
For analysis of flagellar waveforms, cells were trapped between the slide and coverslip in a thin layer of medium and were thus prevented from moving rapidly or leaving the focal plane. Flagellar waveforms were recorded as multiple exposures with stroboscopic illumination provided by a Strobex lamp and Strobex model 236 power supply (Chadwick-Helmuth Co., Inc., El Monte, CA). Photographs were made on TRI-X film with dark-field illumination provided by an Olympus dark-field condenser and a Zeiss 40x neofluar lens (0.75 numerical aperture).
Flagellar Reactivation:
The isolation and demembranation of flagella, and the preparation of slides and reactivation solutions, followed the methods of Witman et al. (22). Axonemes were reactivated by a 1:50 dilution into reactivation solutions containing 1 mM ATP, 30 mM HEPES, 5 mM MgSO4, 1 mM dithiothreitol, 25 mM KCl, and EDTA and CaCl2 to produce free Ca++ concentrations in the range of 10^-8 to 10^-3 M (2, 22).
Electron Microscopy: Samples of flagella and axonemes were fixed at 0°C in 1% glutaraldehyde for 1 h, washed, and post-fixed in 1% OsO4 for 1 h before dehydration and embedding in Epon 812. Fixatives were buffered with 30 mM cacodylate, 5 mM MgSO4, pH 7.4. Thin sections were stained with uranyl acetate and lead citrate, and examined on a Philips 201 electron microscope (Philips Electronic Instruments, Inc., Mahwah, NJ) operated at 80 kV.
Isolation and Fractionation of Flagella: Cells were grown to ~5 × 10^7 cells/ml in M medium supplemented with sodium acetate. The cells were concentrated on a Pellicon filter (Millipore Corp., Bedford, MA), washed, and deflagellated with 4 mM dibucaine in 10 mM HEPES, 5 mM MgSO4, 1 mM phenylmethylsulfonyl fluoride, and 2.5% sucrose. Flagella were demembranated in 0.25% Nonidet P-40 in HMDEKP (10 mM HEPES, 5 mM MgSO4, 0.1 mM dithiothreitol, 0.1 mM EDTA, 25 mM KCl, 1 mM phenylmethylsulfonyl fluoride, 5 μM pepstatin, pH 7.4), washed in HMDEKP, and extracted with 0.6 M KCl in HMDEKP for 20 min on ice. Extracted axonemes were spun at 35,000 g for 30 min, and the dynein-containing supernatant was saved. Before sucrose gradient analysis, this supernatant fraction (the KCl extract) was dialyzed against 0.1x HMDEKP for 4 h at 4°C and spun at 35,000 g for 30 min to remove a large portion of the nondynein protein, which is insoluble at low ionic strength (data not shown). The dialyzed KCl extract was layered onto a linear 0-20% sucrose gradient containing 0.6 M KCl in HMDEKP. Gradients were spun at 37,000 rpm for 12 h at 4°C in an SW 41 rotor (Beckman Instruments, Palo Alto, CA) and collected in 0.36-ml fractions from the bottom of the tube. Bovine liver catalase (11.3S; reference 13) and bovine thyroglobulin (~19.2S at pH 7.4; reference 6) were run on separate gradients as sedimentation standards.
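As an illustrative sketch of how apparent sedimentation coefficients can be read off such gradients, the code below linearly interpolates between the two standards. The peak-fraction numbers are hypothetical placeholders (the paper does not report them), and linearity in S is itself an approximation:

def apparent_s_value(peak_fraction, std1, std2):
    # Linearly interpolate an S value from two (peak fraction, S) standards.
    (f1, s1), (f2, s2) = std1, std2
    return s1 + (peak_fraction - f1) * (s2 - s1) / (f2 - f1)

catalase = (22.0, 11.3)        # hypothetical peak fraction, known 11.3S
thyroglobulin = (12.0, 19.2)   # hypothetical peak fraction, known ~19.2S
print(round(apparent_s_value(14.0, catalase, thyroglobulin), 1))  # 17.6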
ATPase and Protein Assays: ATPase activity was assayed in 40 mM HEPES, 5 mM MgSO4, 0.1 mM EDTA, 25 mM KCl, 1 mM ATP, pH 7.4, at 20°C. The inorganic phosphate released over the indicated time interval was measured by the method of Taussky and Shorr (19). When sucrose gradient fractions were assayed, the enzyme samples contained 0.6 M KCl. The inclusion of 10-μl samples (wild-type dyneins) or 50-μl samples (pf-28 dyneins) in the 0.5-ml assay volume increased final KCl concentrations to 37 or 85 mM, respectively. Protein concentrations were estimated by the Bradford dye-binding method (3) by use of ovalbumin as a standard.
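A minimal sketch of the specific-activity calculation implied by this assay (μmol inorganic phosphate released per mg protein per minute); the input values are placeholders, not data from the paper:

def specific_activity(umol_pi_released, minutes, mg_protein):
    # Specific ATPase activity in umol Pi per mg protein per minute.
    return umol_pi_released / (minutes * mg_protein)

# e.g. 0.35 umol Pi released over 10 min by 0.5 mg of protein:
print(round(specific_activity(0.35, 10.0, 0.5), 2))  # 0.07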
Mutant Selection
The mutant described in this paper, pf-28, was initially selected on the basis of small colony size on 0.8% agar + M medium, and was then observed to swim at a reduced speed in liquid culture. It was subcloned, and a representative clone was repeatedly backcrossed to the wild-type parent of opposite mating type. Colony morphology and swimming phenotype were found to co-segregate 2:2 with wild-type, which indicates that the alteration in motility is the result of a single Mendelian mutation. Tetrad analysis has shown that pf-28 is not in linkage groups I or IX, and is thus not allelic to pf-22 or pf-13, the other outer-arm dynein mutations (9). The genetic markers used and the resulting tetrad ratios (parental ditype/nonparental ditype/tetratype) were msr-1 (1:1:10), ery-3 (1:1:10), and pf-22 (2:3:1) for linkage group I, and sr-1 (3:2:6) for linkage group IX. The genetic locus of this mutation has not as yet been mapped.
Abbreviations used in this paper: HMDEKP, a solution containing HEPES, MgSO4, dithiothreitol, EDTA, KCl, phenylmethylsulfonyl fluoride, and pepstatin; HMW, high molecular weight.
Swimming Phenotype
The swimming pattern and velocity of pf-28 cells were compared with those of wild-type cells using dark-field light microscopy. As illustrated in Fig. 1, both cell types generate a gently waving trace which results from the helical swimming path typical of normal forward swimming (10). The pf-28 cells, however, move only one-third as far as the wild-type cells during a 1-s exposure. Measurement of swimming traces such as those in Fig. 1 gave an estimated speed of 215 ± 14 μm/s (mean ± SD; n = 30) for wild-type cells and 76 ± 5 μm/s (n = 30) for pf-28 cells. Except for this difference in swimming speed, no obvious differences in the swimming patterns of the two cell types were noted.
The beat frequency of live, swimming cells was estimated to determine whether the noted reduction in swimming speed was correlated with a lower beat frequency. Cells were observed under strobe illumination, and the flash rate was adjusted to match the beat frequency of individual cells. Wild-type cells of strain 137c swam with an average frequency of 40 Hz, whereas the beat frequency of pf-28 cells was only 22 Hz under the same conditions. Thus, the slower swimming speed of mutant cells is largely due to a 45% reduction in beat frequency.
Electron Microscopy
Demembranated axonemes from pf-28 and wild-type cells were prepared for thin-section electron microscopy as described above. Examination of a large number of transverse sections of pf-28 axonemes, as seen in Fig. 2, a and c, revealed the complete lack of outer-row dynein arms in this mutant, whereas outer-row arms are easily seen in wild-type axonemes (Fig. 2 d, arrowhead). Electron-dense material at an outer-arm location was observed on one doublet per axoneme in 28 of 88 cross-sections of pf-28 axonemes. In most cases, this material was identified as the interdoublet bridge, previously described by Hoops and Witman (8), which replaces outer-row arms along one doublet of wild-type flagella (arrows, Fig. 2 a). Analysis of longitudinal sections also supports the conclusion that no outer-row arms are present in pf-28, since outer-row arms are clearly visible along wild-type doublets (Fig. 2 f), but periodic structures that resemble outer-row arms are never observed along pf-28 doublets (Fig. 2 e). To test for the potential loss of a labile outer-arm structure during axoneme preparation, freshly isolated flagella were also fixed and prepared for thin-section electron microscopy. Although resolution was reduced by the dense flagellar matrix, inner arms were visible but no outer arms were seen (Fig. 2 b).
The morphology of inner-row dynein arms in pf-28 axonemes was indistinguishable from that of wild-type inner arms at the resolution available in thin sections. In cross-sections (Fig. 2, c and d), the inner-row arms appear as electron densities projecting from the inner edge of the A-tubule of each outer doublet into the lumen of the axoneme.
Unlike outer-row arms, which form cross-bridges in 5 mM MgSO4 similar to those previously described in mussel gill cilia (21), inner arms rarely appeared to interact directly with the B-tubule of an adjacent doublet under the fixation conditions used. The similarity in electron density of the inner and outer rows of dynein arms, as viewed in cross-sections of wild-type axonemes, suggests that they contain approximately equal mass per unit length of axoneme. In longitudinal sections, however, there are differences in the appearance of inner- and outer-row dynein arms. Whereas outer-arm material is distributed in discrete electron densities separated by electron-lucent spaces (Fig. 2 f), inner-arm material is more evenly distributed along the A-tubule (Fig. 2, e and f).
Gel Electrophoresis
To determine which proteins were missing from pf-28 flagella, samples were prepared for SDS PAGE. Since complete resolution of the multiple high molecular weight (HMW) dynein bands cannot be achieved with the Tris-glycine buffering system of Laemmli (11), we modified the separating gel by adding 3 M urea. This greatly improves the resolution of dynein at protein loads sufficient for Coomassie Blue staining, but alters the mobility of some bands. Molecular weights were therefore estimated by use of gels run without urea. Coomassie Blue-stained gels of flagella, axonemes, and a 0.6 M KCl extract of axonemes from wild-type and pf-28 cells (Fig. 3) show that three prominent HMW proteins are missing in the mutant without outer arms. These three proteins correspond to bands I, II, and V of Piperno and Luck (15) and have been previously identified as outer-arm components (9, 15). They could not be detected in pf-28 KCl extracts even after sensitive silver staining of the gel (Fig. 3, lane 6'). All of the other previously identified outer-arm dynein polypeptides, with molecular weights of 83,000, 70,000, and 15,000-20,000, are also absent (not shown).
Several HMW proteins are retained by pf-28 flagella (Fig. 3, lane 4), and are thus presumed to be proteins of the inner dynein arm. Detergent extraction removes the major membrane glycoprotein, improving the resolution of HMW dynein bands in axonemal samples (Fig. 3, lanes 3 and 5) and revealing four prominent HMW bands in pf-28 axonemes and KCl extracts: bands III, IV, VI, and VII in order of decreasing molecular weight. (The nomenclature used here probably corresponds directly to that of Piperno and Luck (15), although an exact comparison using the same electrophoretic techniques has not been performed.) The apparent stoichiometry of these bands, assuming equivalent Coomassie Blue binding, is 1:1:2:2 in flagella and axonemes but only 1:1:2:1 in KCl extracts. Only half of band VII is extracted, which suggests that band VII (and possibly band VI as well) may consist of two nonidentical proteins that co-migrate in our gel system.
ATPase Activity and Sucrose Gradient Analysis
The ATPase activity and protein content of these flagellar fractions were tested to determine the fraction of the total Mg-ATPase activity contributed by inner- and outer-row arms (Table I). The specific activity of pf-28 axonemes (presumed to consist solely of inner-arm dynein) was only 18% of that of wild-type axonemes (inner- and outer-arm dyneins). To normalize the data from wild-type and mutant axonemes, total axonemal ATPase activity was divided by the amount of protein left in the outer-doublet fraction following KCl extraction. This generates a relative measure of the ATPase activity per axoneme and shows that only 12% of the total (wild-type) activity is contributed by inner-row arms (pf-28 activity) under the assay conditions employed.
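The per-axoneme normalization described here amounts to a simple ratio; a sketch with hypothetical numbers follows (the paper reports only the resulting 12% figure):

def relative_activity(total_atpase, outer_doublet_protein_mg):
    # Total axonemal ATPase divided by protein in the KCl-extracted
    # outer-doublet fraction: a relative activity per axoneme.
    return total_atpase / outer_doublet_protein_mg

wild_type = relative_activity(1.00, 0.80)   # hypothetical units
pf28 = relative_activity(0.12, 0.80)
print(round(pf28 / wild_type, 2))  # 0.12 -> inner arms ~12% of total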
ATPases were further purified by sucrose density gradient centrifugation, and fractions were analyzed for protein concentration and ATPase activity. These gradients were run in the presence of 0.6 M KCl to minimize nonspecific protein-protein interactions. Thyroglobulin (19.2S) and catalase (11.3S) were run on identical gradients as sedimentation standards, but since the effect of high ionic strength on the sedimentation rates of these standards is not known, the apparent sedimentation coefficients can only be used as rough approximations of true sedimentation rates. KCl extracts of wild-type axonemes contain two major ATPases with apparent sedimentation rates under these conditions of 20S and 12S (Fig. 4 a), which correspond to the 18S and 12S outer-arm ATPases previously described (14, 15). Inner-arm ATPase activity cannot be detected in gradients of wild-type dyneins (see below), but is apparent in KCl extracts from pf-28 axonemes, which contain two major ATPases with apparent sedimentation coefficients of 17.7S and 12.7S (Fig. 4 b). The latter values will be used as provisional designations until more accurate measurements can be made. One should note that the ATPase activity scale in Fig. 4 b is expanded 10-fold from that in Fig. 4 a, although protein concentrations are reported on the same scale. The specific activities of peak gradient fractions for each ATPase are listed in Table I. Under standard conditions, the inner-arm dyneins have extremely low specific activities (0.07 μmol inorganic P/mg per min for 17.7S dynein and 0.04 μmol inorganic P/mg per min for 12.7S dynein), which accounts for the inability to detect these enzymes in gradients of wild-type dyneins. Values for the specific activities of wild-type 20S and 12S dyneins are similar to those reported by other investigators (14, 15).
Flagellar Waveform
Flagellar waveforms of wild-type and pf-28 flagella were compared to determine whether a lack of outer-row arms alters the normal beat pattern (diagrammed in Fig. 5 a) during either forward (asymmetrical) or reverse (symmetrical) beating. Multiple-flash images of living cells revealed very similar waveforms during normal forward swimming of wild-type (Fig. 5 b) and pf-28 (Fig. 5 c) cells. Beat patterns of both strains contain large, prominent principal bends and small or no reverse bends, and progress through the same effective and recovery stroke positions.
A brief period of flagellar reversal can be induced by bright illumination of Chlamydomonas after a period of dark adaptation (Fig. 5 d). During this photophobic response, wild-type cells swim backward for a short distance before resuming forward motility, whereas pf-28 cells appear to freeze for a similar time interval. At high magnification, it is apparent that pf-28 flagella do pass through a transient phase of symmetrical waveform (Fig. 5 e), but they remain in a "V" configuration during reversal and have a much smaller bend amplitude than do wild-type flagella. For more detailed and controlled comparisons between wild-type and pf-28 flagellar waveforms, isolated axonemes were reactivated in solutions containing varying concentrations of free Ca++ (2). Our observations of wild-type axonemes agree with previously published results (2, 22). In short, axonemes reactivated below 10^-5 M Ca++ display a highly asymmetric beat pattern composed of a large principal bend and little or no visible reverse bend (Fig. 6 a). At 10^-7 M Ca++, the average beat frequency of wild-type axonemes was 57 Hz, and principal bend angles ranged from 135° to 175°; reverse bends as large as 20° were occasionally recorded. These beat parameters, along with data from live cells, are summarized in Table II. As calcium concentrations were increased above 10^-6 M, beat frequency decreased until most axonemes were quiescent at 10^-5 M Ca++. At higher calcium concentrations, axonemes resumed beating in a symmetrical mode (Fig. 6 b) which resembled the waveform of wild-type flagella during phototactic reversal (Fig. 5 d).
Data obtained from pf-28 axonemes reactivated in vitro are in close agreement with in vivo observations. The beat frequency of pf-28 axonemes reactivated at 10^-7 M Ca++ was 26 Hz, or 45% of the wild-type frequency, whereas the waveform of pf-28 axonemes reactivated at low calcium concentration (Fig. 6 c) differed slightly from a typical wild-type pattern in that principal bend angles were usually smaller and reverse bends were larger than wild-type controls (Table II). Mutant axonemes became quiescent at 10^-5 M Ca++ and resumed symmetrical beating at higher calcium concentrations (Fig. 6 d).
The percentage of pf-28 axonemes that could be reactivated at 10^-4 M Ca++ was generally much lower than the percentage (usually 70-90%) reactivated at 10^-7 M Ca++, however, and reactivation often ceased after a few minutes of observation under these conditions. Some component of the motile machinery, involved only in reactivation at high calcium concentrations or at particular risk under those conditions, apparently has a greater lability in pf-28 axonemes. Those pf-28 axonemes that do reactivate at 10^-4 M Ca++ have much smaller bend amplitudes than do wild-type axonemes at 10^-4 M Ca++ (compare Fig. 6, b and d), in agreement with observations of live cells.
DISCUSSION
A new Chlamydomonas mutant has been isolated, pf-28, which is totally deficient in outer-row dynein arms (Fig. 2). The absence of outer-arm material in pf-28 was confirmed by SDS PAGE (Fig. 3), which revealed no outer-arm dynein polypeptides even after sensitive silver staining of the Coomassie Blue-stained gels. In contrast, the previously isolated mutants without outer arms, pf-13, pf-13a, pf-22, and pf-22a, all retain outer arms on at least 5% of their outer doublet microtubules (9). These other Chlamydomonas mutants without outer arms, and the only available mutant without inner arms (pf-23), have totally paralyzed flagella (9). A complete suppression of motility in flagella that lack only a single row of arms suggests that both rows must be functional to generate organized flagellar bends. On the contrary, sea urchin spermatozoan axonemes continue to beat at a reduced frequency after extraction of their outer-row arms (7), which indicates that inner-row arms have some capacity to function independently. The ability of pf-28 cells to swim at a reduced speed, but with a pattern otherwise indistinguishable from that of wild-type cells, demonstrates unequivocally that inner-row arms can act independently to sustain flagellar beating. Most of the cells in Fig. 1, a and b are progressing uniformly in one direction in response to a positive phototactic stimulus (see Materials and Methods). Thus, the phototactic reorientation of pf-28 cells under these conditions does not differ from that of wild-type cells. In addition, if either wild-type or pf-28 cells are dark-adapted for 1 to 2 min, bright illumination will induce a typical reversal response (5) in which the flagella temporarily beat in a symmetrical "flagellar" mode, then resume their normal, asymmetrical "ciliary" beating mode. These results indicate that calcium-mediated changes in flagellar waveform, such as the reversal response (2, 5) and phototactic turning (10), do not require any functions provided exclusively by outer-row dynein arms.
We have identified specific effects of the pf-28 mutation on flagellar beat parameters by examining both live cells and reactivated axonemes. During forward swimming, pf-28 flagella beat with an asymmetric waveform that closely resembles that of wild-type flagella, but with a frequency reduced to ~55% of the wild-type frequency. Isolated pf-28 axonemes, reactivated at 10^-7 M Ca++, also beat at only 40-50% of the wild-type frequency, and display waveforms that differ only slightly from the typical wild-type pattern. The primary effect of a lack of outer-row arms on normal forward motility is thus a reduction in beat frequency to half the normal value, a result identical to that observed previously by outer-arm extraction of sea urchin axonemes (7). Although pf-28 cells display flagellar reversal during photophobic episodes, their flagellar beat patterns differ from the typical wild-type response in two respects. Average bend angles of wild-type flagella are considerably larger than those of pf-28 flagella, and the bases of wild-type flagella are parallel during reversal, whereas those of pf-28 are held in a "V". This suggests that outer-row arms may play a direct role in flagellar reorientation during a reversal response. Alternatively, complete reorientation may require greater bend amplitudes and their resultant hydrodynamic forces.
The total absence of outer-arm components in pf-28 flagella greatly facilitates the analysis of additional dynein ATPases (presumably inner-arm components) whose presence in extracts from wild-type flagella is masked by the much higher specific activities of the outer-arm ATPases. When KCl extracts of pf-28 axonemes are applied to sucrose gradients, two peaks of ATPase activity are seen, with sedimentation rates intermediate to those of the two outer-arm ATPases (Fig. 4). We have not subjected these ATPases to detailed sedimentation rate analyses, and since the conditions used in these experiments do not approximate s20,w conditions, we cannot assign accurate S values as yet. By assuming that high ionic strength does not alter the S values of our standards, we obtain provisional values of 17.7S and 12.7S for the sedimentation rates of the ATPases remaining in this mutant. Previous comparisons of Chlamydomonas wild-type ATPases with those of a mutant without inner arms, pf-23, described inner-arm dyneins with sedimentation coefficients of 10-11S and 12.5S (9, 16). A 10-11S ATPase was also isolated from wild-type flagella by Pfister et al. (14). The relationship between these previously described ATPases and the dyneins of pf-28 has not been established, although preliminary electrophoretic analysis (not shown) indicates that the polypeptides associated with our 12.7S and 17.7S activities are missing in pf-23, and that both of these ATPase complexes are therefore components of the inner-arm row.
It is clear that the gross similarity in the swimming behavior of pf-28 and wild-type cells, and the general equivalence of their flagellar waveforms, show that a wide range of normal motility functions are retained by pf-28 flagella. Outer arms are not required for these functions, which suggests that both arm rows may contribute little more to flagellar motility than interdoublet sliding forces, which are then controlled by other axonemal structures to generate bends.
We wish to thank Dr. Ted Clark for helpful discussions and critical reading of the manuscript. This work was supported by National Institutes of Health fellowship GM08595 to Dr. Mitchell and National Institutes of Health grant GM14642 to Dr. Rosenbaum. | 2014-10-01T00:00:00.000Z | 1985-04-01T00:00:00.000 | {
"year": 1985,
"sha1": "52fb4e308801dee4166a4d341cefaaba6ccd016d",
"oa_license": "CCBYNCSA",
"oa_url": "http://jcb.rupress.org/content/100/4/1228.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "52fb4e308801dee4166a4d341cefaaba6ccd016d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
37171897 | pes2o/s2orc | v3-fos-license | Multidisciplinary retroperitoneal and pelvic soft-tissue sarcoma case conferences: the added value that radiologists can provide.
Clinical Vignette: A 50-year-old woman presents to the emergency department with increasing abdominal pain. Abdominal computed tomography imaging reveals an expanded inferior vena cava-filling defect that is suspicious for a retroperitoneal sarcoma, possibly a primary leiomyosarcoma of the inferior vena cava. The surgery team discusses the case with the radiologist, and all agree that there are multiple challenges with obtaining a tissue diagnosis and determining resectability. Thus, it is decided that this patient should be discussed at a multidisciplinary case conference. In the present article, we feature a case-based scenario focusing on the role of the radiologist in this type of multidisciplinary team.
INTRODUCTION
Multidisciplinary case conferences (MCCs) are a growing area of physician collaboration designed to allow for evidence-based and patient-centred management in specialized cases. The goal is to improve patient outcomes in specific malignancies, including hepatobiliary cancer; colorectal and gastrointestinal cancer; breast cancer; melanoma; and genitourinary, gynecologic, and hematologic malignancies 1. Given the need for specialized care and close interaction with various physicians to treat those complex tumours, hospitals across North America are adopting this method of practice.
Multidisciplinary case conferences gather specialists from multiple health care teams including surgery, medical and radiation oncology, nursing, social work, nuclear medicine, pathology, and radiology. Currently, evidence shows that MCCs lead to changes in patient diagnosis, improve patient management, and increase satisfaction for patients and physicians alike, although no prospective studies have demonstrated improvements in patient survival 1,2. The ultimate goal is to collaborate as trained specialists to maximize patient care and improve outcomes.
Cancer Care Ontario, an agency of the Ontario provincial government, is responsible for improving cancer services. In 2009, Cancer Care Ontario recommended that all patients diagnosed with sarcoma be reviewed at an MCC and that treatment be supervised by an experienced multidisciplinary sarcoma team in a specialized centre 3,4. Moreover, as part of Cancer Care Ontario's Provincial Sarcoma Services Plan, 3 centres in the province of Ontario have been designated sarcoma centres of excellence.
Clinical Vignette
A 50-year-old woman presents to the emergency department with increasing abdominal pain. Abdominal computed tomography (CT) imaging reveals an expanded inferior vena cava (IVC)-filling defect [Figure 1(A)] that is suspicious for a retroperitoneal sarcoma, possibly a primary leiomyosarcoma of the IVC. The surgery team discusses the case with the radiologist, and all agree that there are multiple challenges with obtaining a tissue diagnosis and determining resectability. Thus, it is decided that this patient should be discussed at a multidisciplinary case conference. Retroperitoneal soft-tissue sarcomas, as presented in the foregoing clinical vignette, are rare tumours that account for fewer than 1% of all adult cancers. They often require complex management, thus calling for specialist care.
Although the role of radiologists in this specialized multidisciplinary setting is not always well known to all physicians, it is clear to those involved in sarcoma MCCs that radiologists play an essential role in supporting the medical and surgical decision-making process. Because the subtype and anatomic location of musculoskeletal soft-tissue sarcomas, and also of retroperitoneal and pelvic sarcomas, affect both the surgical approach and the potential need for neoadjuvant therapy, the differential diagnosis made by the radiologist is an essential part of management. In the present article, we discuss the various roles of the radiologist, and the added value that the radiologist brings to specialized sarcoma MCCs and to patient care.
THE RADIOLOGIST'S ROLES
Diagnosis
An important role of radiology is to provide a succinct diagnosis or differential diagnosis. In the clinical vignette, location is key to the accurate diagnosis of a leiomyosarcoma. The initial imaging modality used is often CT, and identifying the tumour as "intravascular" and "within the IVC" is highly specific for the diagnosis, with a limited differential 5. In other cases, a retroperitoneal mass can be extremely large, with obliteration of fat planes, making the exact site of organ origin difficult to determine. In such situations, magnetic resonance imaging can be useful in delineating tissue planes and determining the organ of origin, allowing for a more accurate diagnosis. Magnetic resonance imaging can also be helpful in preoperative planning for tumours that abut bone or involve vascular structures or nerves 6.
Tissue Biopsy
In the clinical vignette, the patient's case was discussed at the sarcoma MCC, and a tissue diagnosis was required. A preoperative tissue diagnosis is often essential to determine treatment. Some lesions might be benign, requiring simple observation; others might require neoadjuvant therapy. Still others could turn out to be another malignancy, such as lymphoma, which requires very different management and has a different prognosis.
Image-guided biopsies should be performed only after consultation with the surgical oncologist for appropriate biopsy planning, because some lesions, based on radiologic findings, might not require biopsy at all; patients can proceed directly to surgery. For example, in some institutions, a retroperitoneal liposarcoma might not require presurgical tissue diagnosis. A high-grade heterogeneous dedifferentiated retroperitoneal liposarcoma can be confidently diagnosed on cross-sectional imaging and, after discussion at the MCC, can be resected en bloc when neoadjuvant therapy is not indicated 4, avoiding tissue biopsy and eliminating the risk of seeding the biopsy track. In cases of low-grade well-differentiated liposarcoma as seen on imaging, pretreatment tissue diagnosis is sometimes avoided because of the theoretical risk of targeting the benign or low-grade tumour component, resulting in a false-negative biopsy.
When the diagnosis is not pathognomonic on imaging, or when preoperative neoadjuvant therapy is planned, core-needle biopsy is recommended. In other centres, all masses deemed to possibly represent sarcomas are biopsied, and molecular testing (fluorescence in situ hybridization) is performed. In lipomatous tumours, fluorescence in situ hybridization assessing MDM2 and CDK4 amplification differentiates between benign lipomas and malignant liposarcomas 7 .
Careful biopsy planning by radiologists and surgical oncologists together will allow for high diagnostic yield. Radiologists carefully review preoperative imaging and selectively target the most aggressive-appearing portion of the lesion (for example, the soft-tissue component, avoiding areas of cystic necrosis or degeneration) that will give the most accurate representation of the tumour type and grade. Wu et al. 8 evaluated the yield for diagnosing sarcomas in the extremities and confirmed that multiple cores of the high-grade portion provide the highest diagnostic yield. Those findings can be extrapolated to abdominal and pelvic soft-tissue sarcomas. Furthermore, core biopsies are essential for diagnostic yield because pathologists require tumour architecture to establish a diagnosis. Fine-needle aspiration samples are not useful and are not recommended for that reason. The risk of needle-track seeding is minimal and should not be a reason to avoid a biopsy when one is necessary to yield a diagnosis 4,6 .
The surgeon's input into the biopsy plan is critical, and clear communication with the radiologist is key because treatment of the biopsy tract is required. At our institution, the radiologists and surgeons have developed a standardized sarcoma biopsy planner (Table I). The biopsy planner includes patient and imaging information: patient details, referring physician, reference imaging, and lesion size and location. It also includes the biopsy plan: patient positioning, the imaging modality to be used, and the planned needle approach.
In our clinical vignette, the patient with the IVC mass was discussed at the MCC and, because of disease extent, preoperative neoadjuvant therapy was indicated, and a tissue diagnosis was required. The surgeons and radiologists discussed possible approaches for biopsy. Two approaches were considered: a CT-guided retroperitoneal approach, or a transluminal approach performed by interventional radiology. Both options were feasible, but a CT-guided percutaneous right paravertebral approach, with the patient in the prone position, was chosen based on the subsequent posterior or retroperitoneal surgical approach and the planned radiotherapy for treatment of the biopsy tract. A coaxial biopsy system is the standard CT-guided biopsy technique for consistent selection of the peripheral exophytic component [Figure 1(B)], improving diagnostic yield and limiting injury to the IVC vessel wall. Radiology often uses a coaxial biopsy technique to significantly lower the risk of seeding 9,10 . The biopsy plan and approach were documented using the sarcoma planner, and at the MCC, the pathologist confirmed the diagnosis of an IVC leiomyosarcoma.
Preoperative Planning
Complete resection is the cornerstone of cancer management and offers the best chance of cure 4 . Radiologists play an essential role at MCCs in helping to determine the likelihood of resectability. Specifically, radiologists help to determine whether vital organs are invaded by tumour and cannot be resected. To ensure negative margins, en bloc resection with adjacent structures or organs (and thus multivisceral resection) is often required 9 . Imaging helps to define involved margins and structure boundaries, helping to decide which adjacent tissues might need to be resected together with the tumour. In tumours that are borderline resectable, or that are resectable but of high grade, neoadjuvant radiation or chemotherapy might be recommended before surgery. For leiomyosarcomas of the IVC, surgical resectability depends on the location: 50.8% arise from the middle IVC, 44.2% from the lower IVC, and 4.2% from the upper IVC (the latter being least amenable to surgical resection) 5 . Resection of the tumour en bloc with the IVC, with venous reconstruction, is often required. If the tumour is borderline resectable, the goal of preoperative radiation or chemotherapy (or both) is to shrink the tumour to allow for surgical resection 4 .
Assessing Response to Neoadjuvant Treatment
Radiologists are involved in patient management when assessing the imaging response to neoadjuvant therapy. Re-staging scans are performed with the goal of reassessing tumour size and margins with adjacent organs. Assessment for distant metastatic disease is also required before surgery. Surgeons rely on cross-sectional imaging to show response to treatment and to evaluate the extent of resection that will allow for complete en bloc resection and potential cure, especially in the case of a tumour previously deemed borderline resectable.
Evaluation of Postoperative Complications
Radiologists play an important role in the detection of acute and long-term complications. Acute postoperative complications vary depending on the site of the sarcoma and the resection that was performed. The organs most commonly resected en bloc with a retroperitoneal sarcoma are, in order of frequency, the kidney, colon, spleen, pancreas, small bowel, and diaphragm 11 . Thus, complications range from bleeding, pneumothorax, pancreatic leak, and gastrointestinal anastomotic leak, to abscess formation, all of which can be readily detected by imaging. Long-term complications can include adhesions, bowel strictures, small-bowel obstructions, and abdominal wall incisional hernias. Computed tomography is the modality most commonly used for the assessment of complications, although magnetic resonance imaging can be helpful as well, particularly for evaluation of the pelvic organs.
Posttreatment Surveillance
The risk of recurrence unfortunately does not plateau after complete resection of a soft-tissue sarcoma; recurrences can happen 15-20 years later. The Retroperitoneal Sarcoma Consensus statement therefore recommends indefinite follow-up for this patient population 4 . Regular follow-up surveillance includes CT imaging of the chest, abdomen, and pelvis every 3-6 months for 2-5 years, and then annually.
Evidence of local recurrence without distant metastatic disease can be an indication for re-excision 6 . Such cases are often re-presented at MCCs for discussion of management. In those cases, radiology plays an essential role in determining whether an imaging finding is more likely to represent local recurrence or simply postoperative fibrosis. Image-guided biopsy is useful, and often necessary, to confirm recurrence in those situations, and helps to guide treatment strategies, given that re-excision is often challenging. If resection is being considered, neoadjuvant radiation or chemotherapy (or both) might be considered, depending on the modalities previously used for the primary sarcoma.
If, at cross-sectional imaging, the recurrence is found to be unresectable because of invasion of vital structures or concomitant distant metastases, radiation therapy might still be an option, depending on the patient's symptoms and on whether radiation treatment was administered to the original tumour. For disseminated metastatic disease, supportive care might be considered 9 . Palliative surgery is often unsuccessful and is not recommended 4,6 .
For the patient in the clinical vignette, a primary surgical resection was successfully performed. No immediate postoperative complications occurred, and initial follow-up surveillance did not reveal recurrent disease. The patient then underwent postoperative adjuvant radiation therapy. Follow-up is currently being conducted by the surgical oncologist, and the patient is doing well, with regular active surveillance and no evidence of recurrent disease.
SUMMARY
As illustrated by the clinical vignette, radiologists are integral members of the MCC team and, from initial diagnosis and surgical planning to follow-up surveillance, provide added value in the management of patients diagnosed with abdominal and pelvic sarcomas. Clear and standardized communication between radiologists and the many other health care professionals involved is essential in the setting of an MCC and optimizes patient care. The creation of care maps is important to ensure that these rare sarcomas are treated in a facility that is equipped and organized to provide excellent multidisciplinary care and improve patient outcomes.
"year": 2017,
"sha1": "53c35367e823f006b2d4631f175072552051160f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3747/co.24.3478",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "277f53550dfca2f5fe40378dae107bac345c8371",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fat-forming solitary fibrous tumor of the orbit with typical imaging findings
Purpose We describe a case of fat-forming solitary fibrous tumor (SFT) of the orbit with typical findings on imaging, which may improve awareness of orbital fat-forming SFT. Observations An 88-year-old female presented with exophthalmos and pain in her right eye. Preoperative imaging showed an oval, well-defined mass with soft-tissue density, interspersed with a well-circumscribed lesion. The lesion showed low density on computed tomography (CT) scans, was hyperintense on T1- and T2-weighted magnetic resonance imaging (MRI) and hypointense on fat-suppressed MRI sequences. The tumor was removed en bloc and diagnosed as a low-grade malignant fat-forming SFT by pathological examination. There was no evidence of recurrence 9 months postoperatively. Conclusions The imaging feature of orbital fat-forming SFT is a well-defined solid tumor interspersed with adipose tissue. Such findings are vital for the preoperative diagnosis and the choice of treatment.
Introduction
The limited understanding of orbital fat-forming SFT is primarily due to the absence of comprehensive clinical and radiographic data. This study presents a case with typical imaging characteristics, aiming to enhance the accuracy of diagnosis and treatment for this particular disease.
Case report
An 88-year-old female patient had suffered from exophthalmos in her right eye for approximately 20 years (Fig. 1A), with pain intensifying over the last year. The patient had not undergone any specific treatment previously and had experienced vision loss five years prior. The eye movements of the right eye were restricted in all directions. An immobile and slightly tender hard mass was detected in the supraorbital area, resulting in subocular displacement of the globe. Computed tomography (CT) and magnetic resonance imaging (MRI) revealed a substantial isodense or isointense mass, interspersed with a well-circumscribed lesion (Fig. 1B and C). The lesion showed low density on CT scans (Fig. 1B), was hyperintense on T1- and T2-weighted MRI (Fig. 1C and D) and hypointense on fat-suppressed MRI sequences (Fig. 1E). Additionally, a non-enhancing necrotic region with an irregular and indistinct boundary was observed (Fig. 1F).
The patient underwent an anterior orbitotomy via a trans-eyebrow approach, which revealed an oval-shaped, encapsulated and well-circumscribed mass. The tumor was removed en bloc and measured 5 × 4 × 4 cm (Fig. 2A). No intraoperative frozen sections were performed in this case. Upon cross-sectioning, the specimen showed signs of tissue hemorrhage and necrosis (Fig. 2B). Hematoxylin-eosin (HE) staining identified oval and spindle tumor cells, distributed in hypercellular and hypocellular areas (Fig. 2C). In addition, scattered staghorn vessels characteristic of ordinary SFT were present within the tumor. The tumor also contained clusters of mature adipocytes (Fig. 2D) and necrotic tissue (Fig. 2E). Immunohistochemical analysis showed that the tumor was negative for MDM2 and CDK4, and positive for CD34, BCL2, CD99, and STAT6 (Fig. 2F). The Ki-67 positivity rate was 10%, and some tumor cells were actively proliferating, with nuclear atypia. Based on these pathological results, the patient was diagnosed with low-grade malignant fat-forming SFT. At the last follow-up, 9 months later, the patient's right eye showed no obvious exophthalmos, and CT showed no well-defined tumor-like lesion. Together, these findings indicated no recurrence.
Discussion
Similar to previous reports [2][3][4], the principal manifestations of orbital fat-forming SFT predominantly result from the mass effect. The patient presented with ptosis, pain, and visual impairment attributed to tumor compression. However, these symptoms were nonspecific and not sufficient to make a diagnosis.
To our knowledge, our study is the first to show the detailed preoperative imaging characteristics of orbital fat-forming SFT. Firstly, the lesion demonstrated characteristic imaging features of orbital SFT. Specifically, it presented as an oval, well-defined mass with soft-tissue density that exhibited significant enhancement on contrast-enhanced MRI, and it did not display evident bone destruction on CT scans [5]. Meanwhile, the tumor also exhibited imaging features of adipose tissue, which was hyperintense on T1- and T2-weighted images and hypointense on fat-suppressed MRI sequences. These imaging features played a crucial role in the preoperative diagnosis. In previously reported cases [2][3][4], adipose tissue was not always evident on imaging; therefore, the absence of adipose tissue imaging does not exclude a fat-forming SFT diagnosis. This discrepancy might relate to patient-specific disease progression and the size of the adipose tissue component [2][3][4]. Only in our case was there a large adipose tissue component, measuring 5 mm, while the other cases contained only scattered adipose cells.
Fig. 1. A) Clinical photograph of the patient. B) CT scan depicting a substantial mass of soft-tissue density, intermingled with a well-circumscribed hypodense lesion. C-E) Magnetic resonance imaging (MRI) indicating the presence of adipose tissue within the tumor. F) MRI finding demonstrating a necrotic lesion within the tumor.
Fig. 2. Gross appearance and histopathological examination. A) On gross examination, the lesion is well circumscribed with a glistening surface. B) The sectioned tumor specimen showed extensive necrosis and cystic degeneration. C) Hematoxylin-eosin (HE) staining showed fields of dense and sparse tumor cells arranged within a collagenous, vessel-rich stroma. D-E) At low magnification, mature adipose tissue and necrotic tissue are indicated by the asterisks. F) Immunohistochemical testing showed the tumor was positive for STAT6.
"year": 2024,
"sha1": "2db4a4aa4a43e177dcbcca813118af05863c2e37",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ajoc.2024.101992",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c8bf8d25a320ed73dbcda961c3cff1c9b4d299ec",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Proprioception And Self-Awareness For Psychophysical Integration
Due to massive overuse of technology, teenagers nowadays have fewer opportunities to become familiar with their body (Kurzweil, 2008; Combi, 2000) and to control internal and external proprioceptive stimuli (Rose, 2010; Cuomo, 2007; Ivanenko et al., 1999, 2000; Allum, 1998; Massion, 1992; Goodwin et al., 1972). This can cause difficulties in the processing and integration of information at a physical, cognitive and emotional level (Macdonald, 1992; Livingstone, 2008; Coklar & Sahin, 2011). International research confirms that today, more than ever, educators have the fundamental task of helping each child to regain the perception of his or her body, its spatial orientation and the perception of its single parts, in order to be able to integrate the physical and the psychological dimensions into a whole concept. To this end, a pilot study was conducted with a group of Italian students in the final year of secondary school in Florence. The pilot study spanned six physical education sessions and involved participants between 18 and 19 years of age. The purpose of the test was to study how a protocol of basic static and dynamic balance exercises, along with breathing exercises and proprioceptive awareness stimulated through meditative practices, could influence the moods and wellbeing of the students.
Introduction
Virtualization of everyday life entails a progressive and worrisome distancing from self-perception and from bodily contact, seen as a means of expression of identity as well as a privileged channel of communication and relationships. Excessive use of new technologies among the new generations causes a decrease in empathetic capacity at different levels: from sharing real experiences with true friends to the movement and perception of the body in space. As a result, young people often experience increased difficulty in feeling comfortable with themselves and with others. Italian Ministry of Education guidelines have highlighted since 1982 the importance of self-awareness through physical activities, as they provide each student with personal management tools (MIUR, 2009; D.P.R. no. 908, 1982).
Problem Statement
It is a well-recognised fact that teenagers nowadays have fewer opportunities to become familiar with their body, due to their massive overuse of technology (Kurzweil, 2008; Combi, 2000), and to control internal and external proprioceptive stimuli (Rose, 2010; Cuomo, 2007; Ivanenko et al., 1999, 2000; Allum, 1998; Massion, 1992; Goodwin et al., 1972). The detachment induced by virtual realities can cause difficulties in the processing and integration of information at a physical, cognitive and emotional level (Macdonald, 1992; Livingstone, 2008; Coklar & Sahin, 2011).
Research question
Can a protocol of static and dynamic balance exercises, combined with breathing exercises and proprioceptive awareness, have a positive influence on teenagers' mood states?
Purpose of the study
The purpose of this study is to reflect on the necessity of leading students back to their own corporeality, in order to help them regain possession both of the neuromotor signals that come from within the body and of control over the stimuli coming from external sources. The objective is to reinforce body awareness, a fundamental skill that is at risk in our contemporary and ever more virtual society.
Research methods
For our pilot study, two separate groups of young adults aged between 18 and 19 years were created. The experimental group was composed of 14 students in a final-year class of a secondary school. The control group was composed of 12 students of the same age and from the same school.
The experimental group was administered the following tests:
- pre- and post-balance tests (Fukuda-Unterberger test) (experimental group);
- pre- and post-tests on mood states (POMS) (experimental and control group);
- written feedback from the participants (as a qualitative assessment of the experimental group).
The control group only participated in the POMS.
The pilot study comprised six sessions, each lasting two hours, and it took place from January 12, 2016 to March 1, 2016.
Each session was divided into three phases: phase I - static and dynamic balance exercises, with eyes open and with eyes closed; phase II - breathing exercises (Van Lysebeth, 1978; Middendorf, 2005; Ferraro, 2008); phase III - making a connection with parts of the body and acquiring awareness of the positioning of the body in space (Ivanenko, Grasso, & Lacquaniti, 1999, 2000) through three steps: 1) consciousness of the different parts of the body, of spatial orientation, and of the unity of the body as the first channel of communication; 2) consciousness of self-identity through name and gender; 3) consciousness of those interior resources (Franca, 2014) conducive to psychophysical integration. During the training, many videos and images were shared in a WhatsApp chat group created specifically for this project (Appendix 1).
Findings
FUKUDA-UNTERBERGER TEST - The hypothesis was that the students would improve the perception of their body in space while working on proprioception. The test required them to keep a central position in the circle, on the 0° axis.
The students were asked to march on the spot, eyes closed, and to remain in the centre of the circle while counting 40 steps, trying not to move forward or backward from the centre of the circle drawn on the floor. The results indicate that the training facilitated an improvement in the awareness of linear movements away from the centre of the circle, but not of angular movements relative to 0° (Fig. 1).
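For illustration only, a minimal Python sketch of how the linear and angular deviations measured in this stepping test could be computed from recorded start and end positions; the coordinate convention, the sign of the heading, and the example values below are our own assumptions, not part of the study protocol.

```python
import math

def fukuda_deviation(start_xy, end_xy, start_heading_deg, end_heading_deg):
    """Compute linear and angular deviation after a stepping test.

    start_xy, end_xy: (x, y) positions in cm, with the circle centre at the start.
    headings: body orientation in degrees (0 deg = initial facing direction).
    Returns (linear displacement in cm, signed angular deviation in degrees).
    """
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    linear = math.hypot(dx, dy)  # distance from the circle centre
    # wrap the heading difference into (-180, 180]
    angular = (end_heading_deg - start_heading_deg + 180) % 360 - 180
    return linear, angular

# Example: a subject finishing 12 cm right and 13 cm forward, rotated 35 deg clockwise.
print(fukuda_deviation((0, 0), (12, 13), 0, -35))
```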
POMS test - Analysis of variance (ANOVA) with repeated measures was adopted to investigate the differences between the pre- and post-training scores for both groups (experimental and control). All the subscales for the experimental group were analysed with within-subject and between-subject tests, showing significant results related to the Tension-Anxiety factor only.
In this case, the mean scores differed significantly between the first and the second phase, F(1,27) = 28.21, p < .001. The data also showed a significant interaction between phase and group, F(1,27) = 6.70, p = .015: the two groups' scores changed by significantly different amounts between the pre- and post-test phases, and this difference was attributable to group membership.
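As an illustration only, a mixed-design ANOVA of this kind (one repeated factor, one between-subject factor) could be run along the following lines in Python; the pingouin package and the synthetic scores below are our assumptions, not the authors' data or software.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; pip install pingouin

rng = np.random.default_rng(0)
n_exp, n_ctl = 14, 12
rows = []
for i in range(n_exp + n_ctl):
    group = "experimental" if i < n_exp else "control"
    pre = rng.normal(55, 8)
    # simulate a larger Tension-Anxiety drop in the experimental group
    drop = rng.normal(10, 4) if group == "experimental" else rng.normal(1, 4)
    rows += [dict(subject=i, group=group, phase="pre",  tension=pre),
             dict(subject=i, group=group, phase="post", tension=pre - drop)]
df = pd.DataFrame(rows)

# 'phase' is the within-subject (repeated) factor, 'group' the between-subject
# factor; the phase x group interaction corresponds to the F(1,27) term above.
aov = pg.mixed_anova(data=df, dv="tension", within="phase",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc"]])
```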
The experimental project detailed above showed that the training had a significant impact in decreasing Tension-Anxiety in the experimental group (Fig. 2): the mean scores confirmed a significant decrease in the T factor between the pre- and post-experiment measurements. WRITTEN FEEDBACK - Virtually all of the students reported that the activities on offer allowed them to acquire many new techniques, previously unknown to them, for understanding their own bodies in a hands-on way (Appendix 2).
Conclusions
We observed that breathing techniques and proprioceptive awareness acquired through meditative practices, together with static and dynamic balance activities, enabled the students in the experimental group to lower their levels of anxiety and tension; these levels were not lowered in the control group, which participated in regular physical activities involving balance, without access to breathing and meditation exercises.
Limitations of the study
Due to organizational constraints at the school involved in the test, the study did not maintain regular weekly continuity, and the number of meetings was greatly reduced from the original plan. Equally, organizational issues prevented the administration of the Fukuda-Unterberger test to the control group.
Acknowledgements
We wish to thank the Fondazione Internazionale verso l'Etica for its scientific support, as well as the Head Teacher of the Pascoli School of Florence, Dr. Elisabetta Bonalumi, who made the realization of the pilot study possible, and Prof. Giuliano Giovannini, who coordinated school activities and the students in both the experimental and control groups, for all their cooperation and support. The feedback received is useful input for improving our experiment, as it encourages us to carry on with our project.
"year": 2016,
"sha1": "c5e4d68211b86e792f6486d70658284e29e8bed9",
"oa_license": "CCBYNC",
"oa_url": "https://flore.unifi.it/bitstream/2158/1091081/1/Giulia-Lucchesi-ProprioceptionSelfAwarenessFinal.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cba39f4994e1878aadee76a8a7edf0b6fdedada8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Nebulin interactions with actin and tropomyosin are altered by disease-causing mutations
Background Nemaline myopathy (NM) is a rare genetic muscle disorder, but one of the most common among the congenital myopathies. NM is caused by mutations in at least nine genes: Nebulin (NEB), α-actin (ACTA1), α-tropomyosin (TPM3), β-tropomyosin (TPM2), troponin T (TNNT1), cofilin-2 (CFL2), Kelch repeat and BTB (POZ) domain-containing 13 (KBTBD13), and Kelch-like family members 40 and 41 (KLHL40 and KLHL41). Nebulin is a giant (600 to 900 kDa) filamentous protein constituting part of the skeletal muscle thin filament. Around 90% of the primary structure of nebulin is composed of approximately 35-residue α-helical domains, which form super repeats that bind actin with high affinity. Each super repeat has been proposed to harbor one tropomyosin-binding site. Methods We produced four wild-type (WT) nebulin super repeats (S9, S14, S18, and S22), 283 to 347 amino acids long, and five corresponding repeats with a patient mutation included: three missense mutations (p.Glu2431Lys, p.Ser6366Ile, and p.Thr7382Pro) and two in-frame deletions (p.Arg2478_Asp2512del and p.Val3924_Asn3929del). We performed F-actin and tropomyosin-binding experiments for the nebulin super repeats, using co-sedimentation and GST (glutathione-S-transferase) pull-down assays. We also used the GST pull-down assay to test the affinity of WT nebulin super repeats for WT α- and β–tropomyosin, and for β-tropomyosin with six patient mutations: p.Lys7del, p.Glu41Lys, p.Lys49del, p.Glu117Lys, p.Glu139del and p.Gln147Pro. Results WT nebulin was shown to interact with actin and tropomyosin. Both the nebulin super repeats containing the p.Glu2431Lys mutation and nebulin super repeats lacking exon 55 (p.Arg2478_Asp2512del) showed weak affinity for F-actin compared with WT fragments. Super repeats containing the p.Ser6366Ile mutation showed strong affinity for actin. When tested for tropomyosin affinity, super repeats containing the p.Glu2431Lys mutation showed stronger binding than WT proteins to tropomyosin, and the super repeat containing the p.Thr7382Pro mutation showed weaker binding than WT proteins to tropomyosin. Super repeats containing the deletion p.Val3924_Asn3929del showed similar affinity for actin and tropomyosin as that seen with WT super repeats. Of the tropomyosin mutations, only p.Glu41Lys showed weaker affinity for nebulin (super repeat 18). Conclusions We demonstrate for the first time the existence of direct tropomyosin-nebulin interactions in vitro, and show that nebulin interactions with actin and tropomyosin are altered by disease-causing mutations in nebulin and tropomyosin.
A rapidly growing number of mutations in the human NEB gene have been identified as a common cause of NM. These NEB mutations include frameshifts, premature stop codons, splice-site mutations, large in-frame deletions, and missense mutations [6,[17][18][19]. The mutations cause both mild and severe forms of NM, although the typical congenital form appears to be the most common, which usually results only in slowly progressive disease [1,2]. Homozygous missense mutations in NEB have been found to cause distal nebulin myopathy [14], and NEB compound heterozygous mutations may result in core-rod myopathy [16].
Nebulin is a giant (600 to 900 kDa) thin-filament, actin-binding protein, and its gene comprises a total of 183 exons, of which at least 17 are alternatively spliced, producing hundreds of different NEB isoforms [20]. A major stretch of nebulin consists of repetitive modules, 30 to 35 amino acids (aa) long, called simple repeats [21]. Most of these simple repeats are arranged into seven-module super repeats. Each simple repeat has a predicted α-helical secondary structure and an SDXXYK motif that serves as an actin-binding site [22,23]. A second motif, WLKGIGW, is present once in each super repeat and is thought to serve as a tropomyosin-binding site [21]. The longest isoforms of nebulin bind as many as 239 actin monomers and are thought to act as molecular rulers, defining thin-filament lengths, especially specifying minimum lengths of the filaments to optimize thin-/thick-filament overlap and force production [24][25][26]. Apart from thin-filament regulation, the structural roles of nebulin extend to maintaining intermyofibrillar connectivity through interaction with desmin [27] and setting physiological Z-disk widths [24,28]. Knockout Neb ΔExon55 mice show impaired regulation of contraction, which appears as marked changes in cross-bridge cycling kinetics and a reduction in the calcium sensitivity of force generation [29].
Biochemical studies have shown that isolated nebulin super repeats bind actin with high affinity [30]. Furthermore, a nebulette repeat 167 aa long, consisting of five nebulin-like repeats approximately 35 aa long, was shown to interact with actin, tropomyosin, and the troponin complex [31]. This fragment shows the highest homology to nebulin C-terminal simple repeats outside the super repeat region of nebulin. During muscle contraction, tropomyosin moves between different binding sites on the actin filament, allowing actin-myosin interactions [32]. It also appears that nebulin has several binding sites on actin, suggesting that nebulin acts in concert with tropomyosin during muscle contraction [32].
The successful isolation of the nebulin protein for the first time [33], and the generation of knockout mouse models [24,29,34] have helped elucidate the function of this giant molecule. Because of the enormous size of nebulin, functional studies of full-length nebulin are difficult. Hence, we opted for studying protein domains (super repeats) containing mutations known to cause NM ( Figure 1). Mutations in all the selected super repeats have been reported to cause NM or distal myopathy (Table 1).
We produced four wild-type (WT) nebulin super repeats (283 to 347 aa long) and five corresponding mutants: three missense mutations (p.Glu2431Lys, p.Ser6366Ile, and p.Thr7382Pro) and two in-frame deletions (p.Arg2478_ Asp2512del and p.Val3924_Asn3929del) (Table 1, Figure 1). The p.Arg2478_Asp2512del (2.5 kb deletion including exon 55) is a founder mutation in the Ashkenazi Jewish population. The missense mutations p.Ser6366Ile and p.Thr7382Pro are founder mutations in the Finnish population [14,18]. We performed F-actin and tropomyosinbinding experiments for the nebulin super repeats, using co-sedimentation and GST-pull-down assays in order to elucidate the pathogenetic mechanisms by which the mutations exert their effects.
RNA isolation and RT-PCR
Total RNA was isolated from human vastus lateralis (VL) muscles, using the RNeasy Fibrous Tissue Mini Kit (Qiagen, Venlo, The Netherlands). cDNA was synthesized from 2 μg of total RNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Foster City, CA, USA); 2 μl of template were used per 20 μl PCR reaction. All PCR reagents were from Thermo Scientific (Waltham, MA, USA). The amplifications were performed using Phusion High-Fidelity DNA polymerase, and PCR products were cloned into pCRBluntII-TOPO (Invitrogen, Carlsbad, CA, USA). For all nebulin exon amplifications, after initial heating at 98°C for 1 minute, 30 cycles of denaturation at 98°C for 10 seconds, annealing at 59°C for 30 seconds, and extension at 72°C for 15 seconds were performed, followed by a final extension of 10 minutes at 72°C. The primers used for nebulin exon amplifications and in vitro mutagenesis are summarized in Table 2.
Ethics approval
The project as a whole was approved by the ethics committee of Children's Hospital, University of Helsinki, Helsinki, Finland. The VL muscle was obtained from an amputated leg at Tampere University Hospital, and written informed consent for tissue sampling was given by the patient. Ethical approval for this sampling was given by the ethics committee of Tampere University Hospital, Tampere, Finland.

Figure 1. (A) The nebulin protein structure and the location of the mutations. The upper part of the figure shows a schematic presentation of the nebulin protein structure and its known protein interaction partners. The lower part of the figure shows a detailed view of the super repeats included in the study (S9, S14, S18, and S22), and the location of the mutations (arrows) and tropomyosin-binding sites (X) in the super repeats. (B) Purified GST (glutathione-S-transferase)-nebulin and tropomyosin. GST-nebulin domains were produced in the Escherichia coli strain BL21 (upper panel), and the α-tropomyosins (Tm3) and β-tropomyosins (Tm2) in insect cells (lower panel). The proteins were purified, run on an SDS-PAGE gel, and stained with Coomassie Blue. The β-tropomyosin mutations p.K49del and p.E139del cause altered protein conformation and thus slower migration in the SDS-PAGE gel [35]. Nomenclature of the mutations in relation to other figures: p.Glu2431Lys = ex54m, p.Arg2478_Asp2512del = ex55del, p.Val3924_Asn3929del = ex78m, p.Ser6366Ile = ex122m, p.Thr7382Pro = ex151m. Abbreviations: Tmod, tropomodulin; M1-M8, M163-M176, M177-M185, simple repeats; S1R1-S1R7, S22R1-S22R7, super repeats of seven simple repeats; Ser, serine-rich domain; SH3, Src homology domain.
In vitro mutagenesis and sequencing
QuickChange site-directed mutagenesis kit (Stratagene, La Jolla, CA, USA) was used to introduce site-specific point mutations or deletions into NEB cDNA fragments. PCR reactions were performed using a supercoiled, double-stranded DNA as template, with two synthetic oligonucleotide primers containing the chosen mutation.
Cycling conditions were as recommended in the manual, and the primer details are given in Table 2. The purified products were sequenced using BigDye sequencing chemistry (version 3.1) and an ABI 3730 DNA Analyzer (Applied Biosystems, Foster City, CA, USA). The sequences were analyzed using the Sequencher 4.1 software (Gene Codes Corporation, Ann Arbor, MI, USA).
Construction of vectors for the expression of nebulin super repeats
Plasmid vectors for the expression of human nebulin super repeats 14 and 18, for the analysis of exons 78 and 122, were constructed by cloning digested and purified PCR products into pGEX-4T expression vectors.
Protein production in Escherichia coli
GST-nebulin fusion proteins were expressed from the pGEX-4T vectors in the E. coli strain BL21 (DE3) (Invitrogen). For expression, a single colony was selected and cultured in 5 mL LB supplemented with 100 μg/mL ampicillin. After growing the E. coli to an absorbance of 0.5 to 0.8 at 600 nm, the cells were induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside (IPTG) for 3 hours at 250 rpm and 27.8 °C. Harvesting of cells and batch-binding protein purification were performed as described in the manufacturer's manual (Protino® Glutathione Agarose 4B; Macherey-Nagel, Düren, Germany).
Actin binding
Actin-binding assays were performed using the Actin Binding Protein Biochem Kit (Cytoskeleton, Denver, CO, USA). Nebulin super repeats (10 μg) were allowed to bind to F-actin (40 μg) for 30 minutes at room temperature. The samples were run in a Beckman Coulter Optima MAX ultracentrifuge at 60,000 rpm for 1.5 hours. The pellet and supernatant fractions were separated and analyzed by 12% SDS-PAGE electrophoresis, and the proteins were stained with Coomassie Blue. The pellet and supernatant bands were quantified, using the ImageJ program (National Institutes of Health, Bethesda, MD, USA), from three experiments performed in duplicate.
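As a rough open-source analogue of the ImageJ band quantification described above, background-subtracted intensity could be integrated over rectangular regions of a scanned gel; the file name, ROI coordinates, and background strategy below are illustrative assumptions, not the pipeline used in this study.

```python
import numpy as np
from skimage import io, color

gel = io.imread("coomassie_gel.png")        # placeholder file name
if gel.ndim == 3:
    gel = color.rgb2gray(gel[..., :3])      # drop alpha channel if present
gel = 1.0 - gel / gel.max()                 # invert: Coomassie bands are dark on light

def band_intensity(img, row0, row1, col0, col1):
    """Integrated, background-subtracted intensity inside a rectangular ROI."""
    roi = img[row0:row1, col0:col1]
    # local background: median of a strip just above the band
    bg = np.median(img[max(row0 - 10, 0):row0, col0:col1])
    return float(np.clip(roi - bg, 0, None).sum())

# Placeholder ROI coordinates for pellet and supernatant lanes of one sample.
pellet = band_intensity(gel, 120, 150, 40, 90)
supernatant = band_intensity(gel, 120, 150, 100, 150)
print("fraction in pellet:", pellet / (pellet + supernatant))
```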
GST-pull-down assay
GST-containing WT and mutant nebulin super repeats (8 μg) bound to beads were mixed with baculovirus-produced and purified WT α-tropomyosin, WT β-tropomyosin, and mutant β-tropomyosin (40 μg) in 400 μl PBS in a shaker at 4°C for 45 minutes. Samples were centrifuged at 2000 rpm (500 x g) for 5 minutes. The supernatant was removed, and 400 μl fresh 1xPBS was added. Samples were washed five times with centrifugation at 1500 to 2000 rpm (300 to 500 x g) for 5 minutes. The pellets were dissolved in 20 μl Laemmli sample buffer, and analyzed by 12% SDS-PAGE electrophoresis. The proteins were stained with Coomassie Blue. Bands of pelleted protein were quantified using the ImageJ Program from three experiments performed in duplicate. We also tested WT nebulin super repeat binding affinity to WT α-tropomyosin and β-tropomyosin at different concentrations using GST pull-down. Nebulin super repeats (8 μg) were mixed with WT α-tropomyosin and WT β-tropomyosin (7.5, 15, 30, and 60 μg).
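For readers who wish to summarize such a concentration series, a one-site saturation binding model could be fitted to the quantified band intensities; the sketch below uses SciPy with made-up placeholder values, not the measured data, and the fitted "Kd" is only an apparent value in units of input tropomyosin.

```python
import numpy as np
from scipy.optimize import curve_fit

tm_ug = np.array([7.5, 15.0, 30.0, 60.0])     # input tropomyosin per reaction
bound = np.array([0.35, 0.55, 0.78, 0.90])    # relative band intensity (placeholder)

def one_site(L, bmax, kd):
    # simple hyperbolic saturation: B = Bmax * L / (Kd + L)
    return bmax * L / (kd + L)

(bmax, kd), _ = curve_fit(one_site, tm_ug, bound, p0=(1.0, 15.0))
print(f"Bmax = {bmax:.2f} (relative), apparent Kd = {kd:.1f} ug input tropomyosin")
```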
Statistical analysis
The statistical significance of the results was calculated using the Mann-Whitney test when comparing two groups, and the Kruskal-Wallis test when comparing three or more groups.
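The nonparametric comparisons described here map directly onto standard SciPy routines; the sketch below uses placeholder intensity values standing in for the quantified bands (three experiments in duplicate, giving six values per group).

```python
from scipy.stats import mannwhitneyu, kruskal

wt     = [0.62, 0.58, 0.65, 0.60, 0.63, 0.59]   # placeholder relative intensities
mutant = [0.81, 0.85, 0.79, 0.84, 0.80, 0.83]

# Two groups (e.g., a WT super repeat vs one mutant): Mann-Whitney U test
print(mannwhitneyu(wt, mutant, alternative="two-sided"))

# Three groups (e.g., S9 WT vs p.Glu2431Lys vs the exon 55 deletion): Kruskal-Wallis
mut2 = [0.40, 0.38, 0.45, 0.41, 0.37, 0.43]
print(kruskal(wt, mutant, mut2))
```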
Results
We produced four WT 283 to 347 aa long nebulin super repeats: S9 (347 aa), S14 (347 aa), S18 (283 aa), and S22 (284 aa), and five corresponding mutants ( Figure 1). The individual simple repeats (seven in each super repeat) were of slightly different lengths in different super repeats, hence the size difference. We compared the binding affinities of WT protein domains with the corresponding mutants.
We also performed tropomyosin-binding experiments for the super repeats using GST pull-down assays (Figure 3). Some of the produced nebulin domains showed degraded fragments, but these were larger than tropomyosin (Figure 1B). Super repeat 9, containing the p.Glu2431Lys mutation, showed stronger affinity for tropomyosin, but this was not confirmed to be statistically significant. Super repeat 9 containing the in-frame deletion of exon 55 (p.Arg2478_Asp2512del) and super repeat 14 containing the in-frame deletion p.Val3924_Asn3929del showed slightly, but not statistically significantly, stronger affinity for tropomyosin. Super repeat 18, containing the p.Ser6366Ile mutation, showed similar affinity for tropomyosin as WT fragments. Super repeat 22, containing the exon 151 missense mutation p.Thr7382Pro, showed significantly weaker affinity for tropomyosin than the WT protein fragment (P = 0.039). Tropomyosin affinities for each nebulin super repeat are shown as binding curves (Figure 4).
We also tested the affinity of WT nebulin super repeats for WT and six β-tropomyosin mutants (p.Lys7del, p.Glu41Lys, p.Lys49del, p.Glu117Lys, p.Glu139del, and p.Gln147Pro) using GST-pull-down assays. Nebulin super repeat 18 containing the WT exon 122 showed slightly weaker affinity for the β-tropomyosin p.Glu41Lys mutant, but this was not statistically significant using the Kruskal-Wallis test. The other mutant tropomyosins did not show significant differences in binding affinity for WT nebulin compared with WT tropomyosin (Figure 5).
Discussion
The WLKGIGW motif in nebulin has been proposed to serve as a tropomyosin-binding site [21,23]. Performing GST pull-down assays with four WT nebulin super repeats (9, 14, 18, and 22) and WT α- and β-tropomyosins, we showed that all four nebulin super repeats bound to tropomyosin with high affinity. This is the first direct evidence that there is a tropomyosin-binding motif in these super repeats of nebulin (Figure 4, Figure 5). Chitose and co-workers [33] used far-western blotting and whole nebulin from rabbit skeletal muscle to test tropomyosin binding. They were not able to confirm any interaction between nebulin and tropomyosin. This could be due to lower quantities of protein or differences in testing methods. Moreover, there could be an advantage in using smaller protein domains that can adopt the correct α-helical conformations.

Figure 2. Mutations affect the binding of nebulin to F-actin. Nebulin protein domains were incubated with F-actin and centrifuged. Pellet and supernatant fractions were separated, run on SDS-PAGE gels, and stained with Coomassie Blue. The relative abundance of nebulin protein in the pellet was quantified from three independent experiments. The mean values and standard deviations from three experiments are shown in the bar charts on the left, and gel pictures of representative experiments are shown on the right. Nebulin domains containing the mutation p.Ser6366Ile (ex122m) showed significantly strengthened actin affinity (P = 0.048). P values were calculated using the Kruskal-Wallis test when comparing three groups (S9) and the Mann-Whitney test when comparing two groups (S14, S18, S22). Asterisks indicate significant differences compared with the wild-type (WT) protein.

Figure 3. Nebulin mutations affect binding to tropomyosin. Purified GST-nebulin domains bound to beads were incubated with purified α-tropomyosin and β-tropomyosin; the beads were then washed, and the bound proteins were run on SDS-PAGE gels and stained with Coomassie Blue. The relative intensity of bound α-tropomyosin and β-tropomyosin was quantified from three independent experiments. The mean values and standard deviations from three experiments are shown in the bar charts on the left, and gel pictures of representative experiments are shown on the right. Nebulin domains containing the p.Thr7382Pro (ex151m) mutation showed significantly weaker affinity for tropomyosin than WT proteins (P = 0.039). P values were calculated using the Kruskal-Wallis test when comparing three groups (S9) and the Mann-Whitney test when comparing two groups (S14, S18, S22). Asterisks indicate significant differences compared with the WT protein.

Figure 4. Tropomyosin-nebulin binding curves. The affinity of WT and mutant nebulin domains for α-tropomyosin and β-tropomyosin is shown as binding curves. The relative quantities were calculated from three independent experiments.
Nebulin knockout mouse models and analyses of single muscle fibers from patients with NM caused by mutations in NEB have provided some insights into the pathogenesis of NM [24,34,[37][38][39][40][41], but in vitro functional studies of NEB mutations have not been performed previously, to our knowledge. Furthermore, the functional effects of NEB missense mutations have not been addressed to date.
Recent studies have shown that patients with NM caused by mutations in NEB may have markedly lower levels of nebulin protein in their muscles than healthy individuals, leading to lower calcium sensitivity of force generation [37,39,41]. A lower abundance of nebulin has been associated with the in-frame deletion of exon 55 (p.Arg2478_Asp2512del) included in the present study, as well as with frameshift and splice-site mutations in NEB [39,41]. It has been suggested that the vulnerability of mutant nebulin to proteolysis is due to a mismatch between nebulin and its actin-binding sites [38]. The results of our nebulin-actin binding studies support this suggestion, as the super repeat lacking 35 aa encoded by exon 55 (S9) showed weakened actin affinity, although the difference was not statistically significant. The 35 aa deletion does not span the tropomyosin-binding site in super repeat 9 ( Figure 1A), and super repeat S9 showed slightly, but not statistically significantly, stronger binding to tropomyosin (Figure 3). The effect may be more pronounced in vivo, when the tropomyosin-binding site periodicity of 235 to 240 aa is disrupted by the deletion, and the head-to-tail binding of tropomyosin dimers to the thin filament might thus be impaired [18].
Interestingly, nebulin super repeat S9 containing the p.Glu2431Lys mutation, which is close to a tropomyosin-binding site (Figure 1A), showed weakened actin affinity (Figure 2), but also stronger affinity for tropomyosin (Figure 3). These differences were not shown to be statistically significant. This mutation was identified in a patient with a mild form of NM, who is compound heterozygous for the p.Glu2431Lys mutation and a frameshift mutation in exon 55 (Lehtokari et al., manuscript submitted). Stronger tropomyosin affinity may impair the movement of nebulin in concert with tropomyosin on the actin filament during muscle contraction. Incomplete movement of tropomyosin on the actin filament, resulting in disrupted myosin cross-bridge cycling kinetics and subsequent muscle weakness, has been described in muscle fibers from a patient with NM who was compound heterozygous for two splice-site mutations in NEB. The mutations resulted in skipping of exons 3 and 22. In that patient, the abundance of nebulin in muscle was only slightly lower than normal, and the calcium sensitivity of force production was maintained [40].
The missense mutations p.Ser6366Ile in super repeat S18 and p.Thr7382Pro in S22 are founder mutations in the Finnish population, and have been found in compound heterozygous form, together with a truncating mutation (frameshift or nonsense), to cause NM, and in homozygous form to cause distal myopathy without nemaline bodies [14,18]. Interestingly, the p.Ser6366Ile mutation significantly strengthened the actin affinity of super repeat S18 (Figure 2). The strengthened actin affinity may have an impact on the actin-myosin interaction during muscle contraction, considering that one nebulin-binding site on actin is in close proximity to the strong myosin-binding site on actin during muscle contraction, and that this site is blocked by tropomyosin in relaxed muscle [32]. The p.Thr7382Pro mutation did not affect actin affinity, but significantly weakened the tropomyosin affinity of super repeat S22 (Figure 3), although the mutated amino acid is much closer to an actin-binding site than to the tropomyosin-binding site (Figure 1A). Super repeat S22 is the last super repeat before the C-terminal simple-repeat region of nebulin, and it also contains the last predicted tropomyosin-binding site in nebulin [42].
The in-frame deletion of six aa (p.Val3924_Asn3929del) in super repeat S14 had no effect on actin or tropomyosin binding. This deletion does not reside in a known binding site. A few small (1 to 5 aa) in-frame deletions and insertions in NEB are listed in the Exome Variant Server (EVS) [43], compiling exome sequencing data of healthy individuals, as well as individuals with hypertension, and heart and lung disease. To our knowledge, no patients with skeletal muscle disease are included in the EVS study cohorts. Of note, some individuals in the EVS are homozygous for the in-frame deletions, indicating that at least some small in-frame deletions in NEB are non-pathogenic. The p.Val3924_Asn3929del in-frame deletion in our study (not present in the EVS) is the only small in-frame deletion we detected in our large series of patients with NM. The patient is compound heterozygous for p.Val3924_Asn3929del and a large (approximately 30 kb) duplication in NEB (unpublished results). The pathogenicity of both mutations remains to be established.
We also tested the affinity of six β-tropomyosin mutants (p.Lys7del, p.Glu41Lys, p.Lys49del, p.Glu117Lys, p.Glu139del, and p.Gln147Pro) for WT nebulin super repeats using the GST pull-down assay. The tropomyosin p.Glu41Lys mutant showed a slightly weakened affinity for nebulin super repeat S18, which was not statistically significant, but not for the other super repeats. The p.Glu41Lys substitution is in the non-actinbinding β-zone of β-tropomyosin and has been shown to cause low Ca 2+ sensitivity by in vitro motility assays [35]. The other tropomyosin mutations, except p.Glu117Lys, are in or close to the tropomyosin-actin-binding site at the α-zones of β-tropomyosin [44,45]. These mutants did not show significant changes in binding affinity for WT nebulin super repeats compared with WT proteins ( Figure 5).
Conclusions
Our results demonstrate actin-nebulin and tropomyosin-nebulin interactions in vitro, and show that mutations in nebulin and tropomyosin can alter these interactions. Both actin- and tropomyosin-binding affinities were affected by nebulin mutations. This suggests that abnormal interaction between aberrant thin-filament proteins is a pathogenetic mechanism in NM and related disorders.
"year": 2014,
"sha1": "0dad78235ed624ec0260d04cb62c59ebc16deef4",
"oa_license": "CCBY",
"oa_url": "https://skeletalmusclejournal.biomedcentral.com/track/pdf/10.1186/2044-5040-4-15",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eddf706f6d8d9cbdd21fc9791350582827d0d9b0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Laminin-221-derived recombinant fragment facilitates isolation of cultured skeletal myoblasts
Introduction Laminin is a major component of the basement membrane, containing multiple domains that bind integrin, collagen, nidogen, dystroglycan, and heparan sulfate. Laminin-221, expressed in skeletal and cardiac muscles, has strong affinity for the cell-surface receptor, integrin α7X2β1. The E8 domain of laminin-221, which is essential for cell integrin binding, is commercially available as a purified recombinant protein fragment. In this study, recombinant E8 fragment was used to purify primary rodent myoblasts. We established a facile and inexpensive method for primary myoblast culture exploiting the high affinity binding of integrin α7X2β1 to laminin-221. Methods Total cell populations from dissociated muscle tissue were enzymatically digested and seeded onto laminin-221 E8 fragment-coated dishes. The culture medium containing non-adherent floating cells was removed after 2-hour culture at 37 °C. The adherent cells were subjected to immunofluorescence staining of desmin, differentiation experiments, and gene expression analysis. Results The cells obtained were 70.3 ± 5.49% (n = 5) desmin positive in mouse and 67.7 ± 1.65% (n = 3) in rat. Immunofluorescent staining and gene expression analyses of cultured cells showed phenotypic traits of myoblasts. Conclusion This study reports a novel facile method for primary culture of myoblasts obtained from mouse and rat skeletal muscle by exploiting the high affinity of integrin α7X2β1 to laminin-221.
Introduction
The basement membrane is a specialized form of the extracellular matrix (ECM) that surrounds muscle fibers. Cell adhesion to the basement membrane is involved in the proliferation and apoptosis of cells [1]. Major components of the basement membrane are type IV collagen, proteoglycans, and glycoproteins such as laminin. The laminin protein family is a diverse set of 11 proteins commonly having three polypeptide chains (α, β, and γ) connected by disulfide bonds [2]. Integrins and dystroglycans are the corresponding cell-surface receptors. Integrin-specific binding of laminin serves as a transmembrane linker that connects the cytoskeleton to the ECM, producing mechanical functions as well as prompting outside-in cell signaling. Thus, integrins and laminins control cell behavior, including cell migration, differentiation, and proliferation, through ECM binding [1]. Integrin α7β1 is predominantly expressed in skeletal and cardiac muscles, which originate from mesoderm. Two different sequences (X1 and X2) are located near the ligand-binding site of the integrin α7 subunit. These domains are derived from the same gene by mutually exclusive alternative mRNA splicing and are equally expressed in myoblasts and the myocardium [3]. Integrin α7X1β1 binds to all laminins except laminin-332, which is found in cutaneous epithelial basement membrane; however, it preferentially binds laminin-211/221 and laminin-511/521 in muscle basement membrane, while α7X2β1 preferentially binds laminin-111 and laminin-211/221 [4].
We hypothesized that, by exploiting the higher binding capacity of the integrin α7X2β1 expressed by myoblasts for laminins, myoblasts could be selectively isolated from skeletal muscle tissues, and their successful primary culture could be established on laminin substrates. Previous studies have reported myoblast cultures on laminin-111 [5] and Matrigel® [6]. However, laminins are large, complex glycoproteins (900 kDa), which are difficult to isolate and purify, and recombinant laminin production is expensive. Laminin-111 isolated from Matrigel® is commercially available as an extract from murine Engelbreth-Holm-Swarm tumor. From the viewpoints of practicality and safety, its use in clinical settings for myoblast regeneration might be limited. Therefore, in the present study, a commercial human laminin-derived recombinant protein fragment was utilized. The fragment was produced by Chinese hamster ovary (CHO) cells transfected with genes encoding the E8 domain of laminin-221, which is essential for integrin binding. Laminin E8 fragments are truncated proteins consisting of the C-terminal regions of the α, β, and γ chains. This truncated protein contains active integrin-binding sites, such as the laminin globular 1-3 domains of the α chain and the glutamate residue in the C-terminal tail of the γ chain.
Moreover, unlike full-length laminin, the E8 fragment does not bind heparin or heparan sulfate. Thus, it is the smallest unit with integrin-binding ability [7].
In today's rapidly aging society, age-related muscle atrophy and frailty (sarcopenia) are of increasing medical concern. Few clinical options are available for treating sarcopenia; these include nutrition and exercise interventions, but their effects are limited [8]. Therefore, improved understanding of the underlying molecular machinery of muscle pathology and muscle regeneration, and the development of better treatments, are needed. Muscle regenerative medicine for sarcopenia has not been reported in humans [9]; however, treatment of muscular dystrophies and muscle injuries has already been reported. In a study of immunosuppressed patients with Duchenne muscular dystrophy, when myoblasts were injected intramuscularly, dystrophin expression was observed in a small number of donor-derived myocytes [10]. In a study in which myoblasts were injected intramuscularly into the pharyngeal muscle of patients with oculopharyngeal muscular dystrophy, improvement in quality of life and volume-dependent recovery of swallowing function were observed [11]. To date, the therapeutic effect remains limited, and further studies are needed.
Currently, primary culture of skeletal myoblasts is performed using several methods for myoblast isolation. Previously reported methods include the explant cell culture method [12], in which myoblasts migrate from biopsied muscle fragments onto culture surfaces at early time points owing to the high intrinsic motility of myoblasts. Percoll density gradient-based cell separation [13] is used to eliminate other cell types from myoblasts. Fluorescence-activated cell sorting (FACS) and magnetic-activated cell sorting (MACS) have also been reported, with negative selection using antibodies against well-known cell-surface markers such as Sca-1, CD31, CD45, and CD11b to isolate myoblasts, and positive selection using an anti-α7 integrin antibody [14,15]. To eliminate fibroblasts, a major contaminant in muscle-derived cell populations, differences in cell adhesion capacity to collagen are also utilized [16]. Notably, MACS and FACS are most commonly used because the other methods are often time-consuming, with poor myoblast purity. However, antibodies for cell selection are sometimes species-specific, and their applications to some experimental animal models and human clinical settings are limited. Furthermore, antibodies and the equipment required for this cell sorting are often expensive.
To overcome these shortcomings of conventional methods for myoblast isolation, we utilized laminin-derived recombinant fragment-coated isolation and culture surfaces. By simply plating total cell populations obtained from minced and enzymatically digested skeletal muscle and washing non-adherent floating cells from laminin-coated dishes, cells other than myoblasts were eliminated. Further, isolated myoblasts were subjected to prolonged cell culture and differentiation on these same surfaces. To demonstrate species-independence of this method, both rat and mouse myoblasts were successfully isolated from their skeletal muscles with high cell purity.
Ethics
All experiments were approved by the Ethics Committee of Tokyo Women's Medical University, Tokyo, Japan, and animal care was based on guidelines from the Science Council of Japan. This study was carried out in compliance with the Animal Research: Reporting of in Vivo Experiments (ARRIVE) guidelines.
Animals
C57BL/6J mice and Sprague Dawley (SD) rats were purchased from Sankyo Lab Service Corporation, Inc., Japan. Mice were housed in the Institute of Laboratory Animals, Tokyo Women's Medical University. Rats were housed in the Institute of Advanced Biomedical Engineering and Science, Tokyo, Japan. Both rodent species were housed in separate cages, with no more than 5 mice/ cage and no more than 2 rats/cage, with 12-hour light/dark cycles.
Preparation of human laminin-221-derived recombinant fragment-coated surfaces
Human laminin-221-derived recombinant fragment (iMatrix-221, Nippi Inc., Tokyo, Japan) was provided in vials as a solution in Dulbecco's phosphate-buffered saline (PBS, FUJIFILM Wako Pure Chemical Corporation, Osaka, Japan). Each vial was diluted with 11 mL PBS to a final concentration of 5 µg/mL. The total volume was added into 100-mm Primaria cell culture dishes (Corning, New York, USA) for isolation and prolonged culture of myoblasts, and incubated at 37 °C for 1.5 h.
For initial primary cell adhesion assays, 2 µL iMatrix-221 solution was diluted with 420 µL PBS to a final concentration of 2.38 µg/mL; aliquots of this working solution were then added into wells of 24-well Primaria culture plates (Corning) at 0.5 µg/cm² and incubated at 37 °C for 1.5 h. After discarding the solution, 3 mL 1% BSA (Sigma-Aldrich, St. Louis, USA) in PBS was added into the wells and incubated at 37 °C for 45 min for surface blocking. These procedures were performed immediately before cell culture use; thus, the culture surfaces used were always wet and never desiccated.
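The dilution and surface-density arithmetic above can be checked with a short script. The sketch below is purely illustrative: the stock concentration (0.5 mg/mL) and the growth area of a 24-well (~1.9 cm²) are assumptions, not values given in the paper.

# sanity check of the coating arithmetic (illustrative; the 0.5 mg/mL stock
# concentration and ~1.9 cm^2 well growth area are assumptions, not paper values)
stock_ug_per_ul = 0.5                      # assumed iMatrix-221 stock, 0.5 mg/mL
stock_vol_ul = 2.0                         # stock volume taken
pbs_vol_ul = 420.0                         # diluent volume

conc_ug_per_ml = 1000.0 * stock_ug_per_ul * stock_vol_ul / (stock_vol_ul + pbs_vol_ul)
print(f"working solution: {conc_ug_per_ml:.2f} ug/mL")          # ~2.37 ug/mL

well_area_cm2 = 1.9                        # assumed growth area of one 24-well
target_ug_per_cm2 = 0.5
vol_per_well_ml = target_ug_per_cm2 * well_area_cm2 / conc_ug_per_ml
print(f"volume per well for 0.5 ug/cm^2: {1000 * vol_per_well_ml:.0f} uL")  # ~400 uL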
Cell culture
Cell culture for all cell isolates was performed using the following culture media: 1) Ham's F-10 Nutrient Mix (Life Technologies, Carlsbad, USA) supplemented with 2 ng/mL basic fibroblast growth factor (FUJIFILM Wako Pure Chemical Corporation), 10% fetal bovine serum (FBS, Life Technologies), and 1% penicillin-streptomycin (FUJIFILM Wako Pure Chemical Corporation) as myoblast growth medium (GM); 2) GM without FBS for myoblast isolation cultures, to eliminate any effects of fibronectin contained in FBS; and 3) high-glucose Dulbecco's Modified Eagle Medium (FUJIFILM Wako Pure Chemical Corporation) supplemented with 2% horse serum (Life Technologies) and 1% penicillin-streptomycin as differentiation medium (DM) for skeletal muscle differentiation cultures.
Automated computerized cell counting
In the initial cell adhesion and cell proliferation assays, cell numbers in wells were counted automatically. After each denoted culture period, wells were gently washed with PBS to remove non-adherent, floating cells; adherent cells were fixed with 4% paraformaldehyde (PFA) in PBS and washed with PBS. Then, the cell nuclei were stained with a DNA-binding fluorescent dye (Hoechst 33258, Life Technologies), and the cells were washed. Cell nuclei were counted automatically using Image Xpress Ultra and MetaXpress Image Acquisition software (Molecular Devices, San Jose, USA).
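Conceptually, such automated counting reduces to segmenting dye-positive pixels and counting connected components. A minimal sketch is given below; it uses only NumPy/SciPy, and the threshold and minimum-object-size values are illustrative assumptions, not parameters of the MetaXpress software.

import numpy as np
from scipy import ndimage

def count_nuclei(image, threshold, min_pixels=20):
    """Count connected bright regions (stained nuclei) above a threshold."""
    binary = image > threshold                      # segment dye-positive pixels
    labels, n = ndimage.label(binary)               # connected-component labelling
    # measure each component's size and drop small specks (noise, debris)
    sizes = np.asarray(ndimage.sum(binary, labels, index=range(1, n + 1)))
    return int(np.sum(sizes >= min_pixels))

# demo on a synthetic field with three blob-like "nuclei"
yy, xx = np.mgrid[0:256, 0:256]
img = np.zeros((256, 256))
for cy, cx in [(60, 60), (120, 180), (200, 90)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 50.0)
print(count_nuclei(img, threshold=0.5))             # -> 3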
Primary culture of skeletal muscle-derived cells by MACS
Experimental animals (six-week-old C57BL/6J mice or four-week-old SD rats) were anesthetized with isoflurane and sacrificed by exsanguination. Connective tissue, blood vessels, and fat were carefully removed from muscle collected from the lower limbs using forceps under a stereomicroscope. The collected muscle tissue samples were placed in Hank's balanced salt solution (HBSS(−), FUJIFILM Wako Pure Chemical Corporation) containing 1% penicillin-streptomycin and gently shaken to avoid contamination. Myoblast-containing cell suspensions were prepared using the MACS skeletal muscle dissociation kit for mouse and rat (Miltenyi Biotec, North Rhine, Germany). Control mouse myoblasts were isolated by MACS with a purity of 98.5 ± 0.208% (n = 3), using the satellite cell isolation kit (Miltenyi Biotec, North Rhine, Germany), according to the manufacturers' protocols.
Initial cell adhesion assay
Wells of a 24-well Primaria culture plate (Corning) were coated with iMatrix-221 (2 µL of iMatrix-221 dissolved in 420 µL PBS, 2.38 µg/mL). For preparing collagen-coated dishes, 180 mL of Milli-Q water was adjusted to pH 3 using 6 N HCl, and type I collagen (FUJIFILM Wako Pure Chemical Corporation) was dissolved in this Milli-Q water to a final concentration of 0.3 mg/mL. Then, 5 mL of this solution was placed in 100-mm polystyrene culture dishes and left in a clean bench overnight. The next day, the solution was removed and the dishes were dried in a clean bench overnight [17]. Since the iMatrix-221-coated surface did not support myoblast proliferation well enough to obtain sufficient cell numbers, MACS-isolated mouse myoblasts were suspended in GM and cultured for 5 days on type I collagen-coated dishes for cell expansion. Cells harvested from these dishes by trypsinization were then seeded into the iMatrix-221-coated 24-well plates at an initial density of 7300 cells per well and cultured in GM at 37 °C in a humidified CO₂ incubator. At the denoted time points, the culture medium containing floating, non-adhered cells was removed. Cells adherent to the wells were gently washed with PBS and subjected to automated cell counting using a Confocal High Content Screening System, Image Xpress Ultra (Molecular Devices), after cell nuclei were stained with a DNA-binding fluorescent dye (Hoechst 33258). Cell counting was performed only in the central portion (5.6 mm²) of the wells, which could be specified by the software. Three independent experiments from three different mice were performed.
Primary mouse myoblast isolation with iMatrix-221-coated culture surfaces
A total of 1 g of mouse muscle was collected by the aforementioned method. First, 50 mg of collagenase type II (Worthington Biochemical Corporation, Lakewood, USA) was dissolved in 100 mL HBSS(−), and 20 mL each was transferred into five 50 mL tubes. To stop the collagenase reaction, four 50 mL tubes containing 20 mL of HBSS(−) were prepared and kept ice-cold. The collected muscles were minced, placed in the collagenase solution, and incubated in a 37 °C thermostatic bath for 10 min without shaking. The supernatant was discarded, and 20 mL of the prepared collagenase solution was re-added and incubated for 10 min with shaking at 80–130 rpm. A 40-µm strainer was set on a 50 mL tube containing HBSS(−) stored on ice, the supernatant was filtered, and the tube was stored on ice again. The next 20 mL of collagenase solution was placed in the tube with the remaining minced muscle tissue and incubated at 37 °C for 10 min with shaking. This process was repeated three more times. The last filter may easily become clogged because the last suspension contains remaining muscle tissue; thus, the tissue was gently pressed through with a cell scraper. We obtained four 50 mL tubes containing the collected suspension, which were then centrifuged at 4 °C and 300×g for 10 min. The obtained cell pellet was resuspended in 10 mL GM without FBS, and all cells were seeded onto iMatrix-221-coated or non-coated 100-mm Primaria cell culture dishes. After culture at 37 °C for 2 h in a humidified CO₂ incubator, floating non-adhered cells were removed by medium change. Cells adhered on the iMatrix-221-coated and non-coated surfaces were subjected to prolonged culture on the same surfaces for about one week. Then, cells were subjected to immunofluorescence staining with anti-desmin antibody as well as a DNA-binding dye, and the desmin-positive cells were counted.
Cell proliferation assay of primary mouse MACS-isolated myoblasts on various materials-coated dishes
Wells of 24-well Primaria culture plates were coated with either iMatrix-221 (2 µL of iMatrix-221 dissolved in 420 µL PBS, 2.38 µg/mL), type I collagen (420 µL of the solution prepared for the initial cell adhesion assay, 0.3 mg/mL), or Matrigel® (Corning; 10 µL of Matrigel dissolved in 200 µL PBS, 450 µg/mL) [18]. After these aliquots were added, the plates were incubated for 1 h at 37 °C in a humidified 5% CO₂ incubator. Control wells were left uncoated. MACS-isolated mouse skeletal myoblasts were centrifuged, and pellets were resuspended in 3.6 mL of GM.
Aliquots (100 µL) of this cell suspension were seeded into the wells and cultured in GM at 37 °C in a humidified 5% CO₂ incubator. At the denoted time points, the cells in the wells were subjected to automatic cell counting using a Confocal High Content Screening System, Image Xpress Ultra (Molecular Devices), after cell nuclei were stained with a DNA-binding fluorescent dye (Hoechst 33258). Culture wells were 15.49 mm in diameter, and only the central portion (5.6 mm²) was counted since the well bottom was cup-shaped and cells easily gathered toward the center. This experiment was performed independently three times (from three separate mice), and each experiment was performed in triplicate. Each denoted symbol in Fig. 5 shows the average and SEM of cell numbers from the total of nine wells. The three maximum cell counts obtained on each culture surface, regardless of the time from primary culture, were averaged and subjected to Dunnett's test using GraphPad Prism software version 8 (GraphPad Software, San Diego, USA).
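A rough sketch of this analysis in code is shown below (assuming SciPy ≥ 1.11, which provides scipy.stats.dunnett); all cell counts in it are placeholders, not the study's data.

import numpy as np
from scipy.stats import dunnett   # requires SciPy >= 1.11

def top3_mean(counts):
    """Average the three largest counts, regardless of culture day."""
    return float(np.mean(sorted(counts)[-3:]))

counts = {
    "iMatrix-221":     [1200, 1350, 1500, 1100, 1600, 1400, 1300, 1250, 1550],
    "type I collagen": [2100, 2300, 2500, 2000, 2600, 2400, 2200, 2350, 2450],
    "Matrigel":        [1800, 1900, 2000, 1700, 2100, 1950, 1850, 2050, 1750],
}
control = [1600, 1700, 1800, 1500, 1900, 1650, 1750, 1850, 1550]  # uncoated wells

for name, c in counts.items():
    print(name, top3_mean(c))

# Dunnett's test: each coated surface compared against the uncoated control
res = dunnett(*counts.values(), control=np.asarray(control))
print(res.pvalue)   # one p-value per coated surface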
Immunofluorescence staining
MACS-isolated mouse primary cultured myoblasts were also cultured on type I collagen-coated dishes until they reached 80% confluence and were then harvested from these dishes by trypsinization. Harvested cells were seeded onto wells of 4-well slide chambers (Nunc™ Lab-Tek™ II Chamber Slide™, Life Technologies) at an initial cell density of 60,000 cells/well. The slide chambers were incubated at 37 °C overnight to allow cell adhesion, and for counting desmin-positive cells, PFA (FUJIFILM Wako Pure Chemical Corporation) fixation was performed. For the differentiation experiment, in another slide chamber with obtained cells, the culture medium was changed to differentiation medium the next day, and the cells were cultured for 7 days and PFA-fixed. Mouse thigh skeletal muscle was frozen in isopentane at the temperature of liquid nitrogen. Then, 8-µm-thick cryo-tissue sections were prepared using a cryostat, fixed with acetone, and subjected to immunofluorescence staining. Cultured cells were fixed with 4% PFA in PBS for 15 min. After washing with PBS, the cells and tissue sections were both incubated with 0.5% Triton-X (Sigma-Aldrich) in PBS for 10 min for permeabilization, washed with PBS, blocked using Blocking One Histo (NACALAI TESQUE, Tokyo, Japan) for 15 min, incubated with primary antibodies (rabbit anti-desmin or rabbit anti-fast myosin skeletal heavy chain antibodies, diluted 1:100, Abcam, Cambridge, UK), washed, and incubated with Alexa Fluor 488-conjugated anti-rabbit IgG antibody (diluted 1:200, Abcam). The immunostained cells and tissue sections were observed using a confocal laser scanning fluorescence microscope (FV1200, Olympus, Tokyo, Japan) and Cell Sens Standard software (FV1-ASW, Olympus, Tokyo, Japan). Cell nuclei in the desmin-positive cells were manually counted. Myoblast purity was obtained by dividing the number of desmin-positive cells by the total cell number in five randomly selected fields of view. Errors are expressed as ± SEM, with n = 5 in mouse and n = 3 in rat.
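The purity computation is simple enough to state explicitly. The sketch below shows the per-field fraction and mean ± SEM calculation; all counts are hypothetical.

import numpy as np

desmin_positive = np.array([98, 102, 97, 100, 99])    # per field (hypothetical)
total_cells     = np.array([100, 104, 98, 102, 101])

purity = 100.0 * desmin_positive / total_cells        # % desmin-positive per field
mean = purity.mean()
sem = purity.std(ddof=1) / np.sqrt(purity.size)       # standard error of the mean
print(f"purity = {mean:.1f} +/- {sem:.3f}% (n = {purity.size})")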
Gene expression analyses of primary myoblasts
Mouse primary myoblasts were isolated by either of two methods (i.e., MACS and the iMatrix-221 method). In the iMatrix-221 method, the total cell population from enzymatically digested mouse skeletal muscle was plated on iMatrix-221-coated dishes and allowed to adhere in GM for 2 h at 37 °C. Then, the non-adherent cells were removed. Adherent cells were subjected to prolonged culture on the same surfaces. Primary cells adherent on iMatrix-221-coated dishes and MACS-isolated myoblasts were cultured on iMatrix-221-coated dishes and type I collagen-coated dishes, respectively, until they reached 80% confluence (about one week). Cell cultures were then subjected to total RNA isolation and gene expression analyses by TaqMan PCR. Total RNA was purified with the RNeasy Plus Mini kit (QIAGEN, Venlo, Netherlands), according to the manufacturer's protocol. cDNA was then synthesized and RT-qPCR was performed using TaqMan Fast Advanced Master Mix and TaqMan probes for Pax 7, MyoD, GATA4, GAPDH, and myf5. These specific probes were provided by Life Technologies. mRNA expression was evaluated by comparing the expression level of each mRNA to that of GAPDH.
Integrin a7X2 gene expression analysis
Primary mouse skeletal muscle-derived cells adherent on iMatrix-221-coated dishes and non-adherent floating cells were separated after 2-h incubation at 37 °C. Non-adherent floating cells were transferred onto type I collagen-coated dishes. Both cell populations were cultured in GM until they reached 80% confluence. Expanded cells were harvested from these surfaces by trypsinization and subjected to gene expression analysis for the integrin α7X2 and GAPDH genes with the specific TaqMan probes. Results of RT-qPCR were evaluated by the ΔΔCt method.
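For readers unfamiliar with the ΔΔCt evaluation, a minimal sketch follows; the Ct values are hypothetical and chosen only to reproduce a fold change of about 10.5, as reported in the Results.

def fold_change(ct_gene_test, ct_ref_test, ct_gene_ctrl, ct_ref_ctrl):
    """2^(-ddCt): expression in the test sample relative to the control sample."""
    d_ct_test = ct_gene_test - ct_ref_test    # normalize target gene to GAPDH
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# adherent (iMatrix-221) vs non-adherent (collagen-expanded) cells
print(fold_change(24.0, 18.0, 27.4, 18.0))   # ~10.6-fold higher in adherent cells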
Primary rat myoblast isolation with iMatrix-221-coated dishes
Rat skeletal muscle-derived cell populations were prepared as described above. The total cells were then seeded onto 100-mm Primaria dishes coated with iMatrix-221 and cultured in GM without FBS at 37 °C for 2 h in a humidified CO₂ incubator. As a negative control, dishes without iMatrix-221 coating were also used. Floating non-adherent cells were then removed by medium change. The remaining cells adhered on the iMatrix-221-coated surfaces were subjected to prolonged culture on the same surfaces. After reaching 80% confluence, both cell cultures were harvested, re-seeded in slide chambers, and subjected to immunofluorescence staining with anti-desmin antibody and a DNA-binding dye for nuclear staining, using the same protocol as that used for mouse samples.
Initial cell adhesion of mouse MACS-isolated myoblasts on iMatrix-221-coated surfaces
MACS-isolated primary myoblasts were expanded on type I collagen-coated (27 µg/cm²) dishes. They were examined by immunofluorescence staining with anti-desmin antibody to confirm myoblast purity after expansion. Desmin-positive and total cell numbers were manually counted, and the ratio of desmin-positive cells to total cells was 98.5 ± 0.208% (n = 3) (Fig. 1a). These cells were seeded on an iMatrix-221-coated 24-well Primaria plate. Initial cell adhesion was evaluated by counting the adhered cells in a time-dependent manner (Fig. 1b). After culture until the denoted time points, floating, non-adherent cells were removed, and adherent cells were fixed and stained with a DNA-binding dye. Then, cell nuclei were automatically counted. Almost all MACS-isolated myoblasts exhibited a spherical morphology on iMatrix-221-coated surfaces under phase contrast microscopy, but even after gentle washing with PBS, a portion of the cells remained on the surfaces. Automated cell counting after nuclear staining revealed that even at early time points (30 min), a portion of the seeded cells stably adhered on the surfaces, and adhesion increased over time after cell seeding (Fig. 1b).
Primary culture of mouse myoblasts isolated on iMatrix-221-coated dishes and muscle differentiation
Fig. 2a shows that 70.3 ± 5.49% (n = 5) of cells isolated with iMatrix-221-coated dishes from murine primary muscle tissues and cultured directly on iMatrix-221-coated dishes were desmin-positive. In contrast, cells adhered on control dishes without iMatrix-221 coating were 7.24% desmin-positive (Fig. 2b), implying that these cells were mostly fibroblasts. The obtained cells showed a spindle shape under a phase contrast microscope (Fig. 2c). In addition, after culture of cells adhered on iMatrix-221-coated surfaces, cells were further cultured in GM in slide chambers for a day. Seven days after changing the medium to DM, spindle-shaped multinuclear cells showing spontaneous contraction were observed under a phase contrast microscope (Supplementary video file: mouse_myotube.mp4). Immunofluorescence staining with anti-fast myosin skeletal heavy chain antibody revealed that these cells were fast myosin skeletal heavy chain-positive (Fig. 2d), suggesting that myoblasts with the potential to differentiate into mature muscle cells were successfully isolated on iMatrix-221-coated surfaces even without MACS.
Supplementary video related to this article can be found at https://doi.org/10.1016/j.reth.2022.04.006
Expression of skeletal myogenesis-related genes by primary mouse myoblasts isolated using iMatrix-221-coated dishes
Skeletal myogenesis-related gene expression by primary mouse myoblasts isolated on iMatrix-221-coated surfaces was quantitatively evaluated by TaqMan PCR. Four skeletal myogenesis-related genes were examined, and the results for primary myoblasts isolated by MACS are also shown (Fig. 3). Pax 7, a transcription factor gene that plays a role in myogenesis and is commonly used as a marker for muscle satellite cells, showed a relative expression level in primary myoblasts isolated by the iMatrix-221 method similar to that of MACS-isolated primary myoblasts, indicating the presence of satellite cells in the cell population. Expression of GATA4, a transcription factor responsible for myocardial differentiation and function, and of MyoD, a transcription factor that represses satellite cell renewal and promotes terminal differentiation, was detected in primary myoblasts isolated by the iMatrix-221 method. Expression of all these transcription factors showed no significant differences by paired t-test, but GATA4 tended to be lower in myoblasts isolated by the iMatrix-221 method. Similar expression levels of myf5, which regulates skeletal muscle differentiation and myogenesis, were observed in both myoblast cultures.
Expression of integrin a7X2 gene by primary mouse myoblasts isolated using iMatrix-221-coated dishes
To confirm that cell binding to iMatrix-221-coated surfaces is mediated by a specific integrin, mRNA expression of the integrin α7X2 gene was quantitatively evaluated in adherent and non-adherent floating cells. Although no significant difference was found between the expression levels of these cells by paired t-test, expression was 10.5 times higher in cells adhered on iMatrix-221-coated surfaces than in non-adherent cells (Fig. 4a). Immunofluorescence staining of skeletal muscle tissue with anti-α7 integrin antibody revealed that the periphery of each myotube was positively stained, and some positively stained cells were also observed in the adjacent fascia (Fig. 4b and Supplementary Fig. 1), implying that fibroblasts in fascia also express this integrin at relatively low levels. In contrast, neither myocytes nor fibroblasts showed positive staining when an isotype control antibody was used as the primary antibody (Fig. 4c).
Cell proliferation assay of primary MACS-isolated mouse myoblasts on various coated culture surfaces
MACS-isolated primary mouse myoblasts were cultured on iMatrix-221-coated surfaces, and their proliferation was compared with that on various material-coated surfaces (Fig. 5). Cell numbers increased over time. The number of cells increased the most on the collagen-coated dishes and the least on the iMatrix-221-coated surface, but statistical analysis revealed no significant differences among the coated culture materials.
Cells obtained from rat skeletal muscle with iMatrix-221-coated dishes
Finally, to show that the present method of isolating skeletal myoblasts with the laminin-221 fragment is species-independent, cell populations obtained from rat skeletal muscle were plated on either iMatrix-221-coated dishes or non-coated Primaria dishes and cultured for 2 h in GM. Then, the non-adhered floating cells were removed by media changes and gentle washing with PBS. Adhered cells were subjected to prolonged culture on the same surfaces. Immunofluorescence staining with anti-desmin antibody revealed that 67.7 ± 1.65% (n = 3) of cells isolated using the iMatrix-221 method were desmin-positive skeletal myoblasts (Fig. 6a). In contrast, cells adhered on non-coated dishes were 0.613% desmin-positive, implying that these cells were mostly fibroblasts adhered to surfaces without iMatrix-221 coating via endogenous fibronectin contained in GM (Fig. 6b).
Discussion
The aim of the present study was to establish a facile method for primary culture of myoblasts. Previously, primary myoblast isolation has been performed using several methods, such as tissue explant, separation using Percoll density gradients, and antibody-based FACS and MACS [12–14,19]. A primary culture method using differences in cell adhesion capacity has also been reported [16]. That method utilized the rapid adhesion of unwanted fibroblasts to collagen-coated dishes to eliminate these cells; the supernatant containing non-adhered, floating myoblasts was collected 2 h after seeding and subjected to prolonged culture. In the present study, we assumed that myoblasts would adhere to laminin-221 fragment-coated dishes more rapidly than fibroblasts, and that albumin coating of bare plastic dish surfaces would hinder fibroblast adhesion onto these surfaces. Because the initial adhesion of MACS-isolated primary myoblasts on iMatrix-221-coated dishes occurs within 2 h, floating cells, which were assumed to be fibroblasts, were removed after a 2-hour incubation, and the adherent cells were subjected to prolonged culture on the same surfaces. The initial 2-hour incubation requires growth medium lacking the cell adhesion protein fibronectin, so that only laminin-dependent cell adhesion occurs. With this protocol, 70.3% desmin-positive cells were obtained, and immunofluorescence staining confirmed that these cells were both multinucleated and fast myosin skeletal heavy chain-positive after culture in DM (Fig. 2d). The desmin-positive cells and their differentiative potency suggested that the cells isolated by the present method were myogenic progenitors [20]. RT-qPCR comparisons between cells adhered on iMatrix-221-coated dishes and non-adhered cells revealed that the expression level of integrin α7X2 was 10.5-fold higher in adherent cells, suggesting that cell adhesion to the surfaces was integrin-mediated. Laminin-derived recombinant fragments have been utilized as culture substrates for various cell types, including human induced pluripotent stem cells and embryonic stem cells [7]. In addition, they have been utilized for differentiation of various cell types such as keratinocytes [21], neuronal cells [22], cardiac [7] and skeletal myocytes [23], and ocular cells [24]. However, this is the first report utilizing these recombinant fragments for cell isolation. Non-adherent cells exposed to iMatrix-221-coated surfaces were vimentin-positive (Fig. 2e), implying that these were fibroblasts. The observed expression of integrin α7 in a small number of cells in fascia (Fig. 4b) might explain why the purity of desmin-positive cells isolated on iMatrix-221-coated dishes in this study did not reach 100%.
RT-qPCR of the obtained cells showed expression of Pax 7, a transcription factor of satellite cells [25], as in the myoblasts sorted by the MACS method used as controls. In addition, MyoD and GATA4 [26,27], transcription factors expressed during skeletal muscle differentiation, were also found to be expressed in both iMatrix-221 cultured cells and MACS control cells. Among these transcription factors, GATA4 expression was especially low in iMatrix-221 cultured cells. GATA4 is known as a transcription factor that delays muscle differentiation [28]; therefore, low GATA4 expression may indicate that differentiation was accelerated. This is consistent with the experimental result that cell proliferation on iMatrix-221-coated culture dishes was slower than on other coatings. Myf5 is a transcription factor involved in muscle regeneration [29] and is reported to promote higher transplantation efficiency when highly expressed [30]. Cultured cells isolated by the present method produced equal or higher phenotypic marker expression compared to cells obtained by MACS. These findings lead to the conclusion that iMatrix-221 cultured cells can be used in further studies of myoblasts and myogenesis, and for myoblast transplantation in basic studies as well as possible clinical applications.
Fig. 4. RT-qPCR analysis of mRNA expression of integrin α7X2 in adherent primary cells on iMatrix-221-coated dishes versus non-adherent floating cells cultured on type I collagen-coated dishes. (a) Bar graph of the relative value of integrin α7X2 expression. Adherent cells seeded on the iMatrix-221-coated dishes and non-adherent cells seeded on the type I collagen-coated dishes were incubated until they reached 80% confluence and were evaluated by RT-qPCR (ΔΔCt method with GAPDH as the housekeeping gene). Adherent cells showed 10.5-fold higher integrin α7X2 expression compared to non-adherent cells (n = 3). Bars represent the mean, and the dots represent the relative values. (b) Immunohistochemical staining of mouse tibialis anterior muscle with anti-integrin α7 antibody. The periphery of the muscle fibers was positively stained, but some faintly stained cells in the fascia (arrow) are also evident. (c) Neither myocytes nor fibroblasts showed positive staining when an isotype control antibody was used as the primary antibody. Scale bars = 100 µm.
Notably, the new iMatrix-221 method could leverage the use of primary myoblasts in cell therapy, tissue engineering, and regenerative medicine by reducing the time and cost required for cell preparation in culture, because this method enables a continuous workflow through cell isolation, proliferation, and differentiation. Recombinant fragments of the laminin E8 domain (e.g., iMatrix-221) produced by transfected Chinese hamster ovary cells are less expensive, and product purity is reasonably high [31]. Therefore, for cell isolation, culture, and differentiation, these recombinant fragments are attractive for use as a culture substrate. Myoblasts cultured on iMatrix-221-coated surfaces can be considered as a regenerative source for muscle diseases in future clinical applications. For further clinical application, E8 fragments should be immobilized onto cell culture surfaces rather than adsorbed from media.
Although a commercial MACS kit for mice did not work with rat myoblasts (this failure might be due to the species specificity of the antibody agents used), the present iMatrix-221 culture method seemed to have no limitations in terms of its utility for the two animal species.
Fig. 5. Proliferation of MACS-isolated primary mouse skeletal myoblasts on culture surfaces coated with iMatrix-221, type I collagen, or Matrigel®. Primary mouse skeletal myoblasts isolated by MACS and expanded on type I collagen-coated dishes were seeded onto the three different wells at the same seeding density and counted on Days 3, 5, and 7 in GM. Uncoated wells served as controls. Three independent experiments from three mice were performed (average and SEM of nine culture wells per treatment shown). The bold line represents the mean, and the thin lines represent the SEM. The number of cells increased the most on the type I collagen-coated dishes and the least on the iMatrix-221-coated dishes; however, no significant differences in cell count were observed among these coated materials.
Conclusion
We established a new facile method for primary culture of myoblasts from mouse and rat skeletal muscles by exploiting the high affinity of integrin α7X2β1 for laminin-221. The proportion of desmin-positive cells obtained with this method did not reach 100%, possibly because of the expression of integrin α7X2 in fibroblasts derived from the fascia. The binding of integrins to laminins is known to promote cell differentiation and proliferation, but laminin-221 did not promote myoblast proliferation.
Author contributions
Y.K. designed the study, performed the experiments, analyzed the data, and drafted the manuscript. J.H. and R.T. advised on experimental method. M.Y., N.S and K.I. conceived of the study and supervised all experiments. All authors critically revised the report, commented on the manuscript, and approved the final submission content.
Declaration of competing interest
iMatrix-221 was provided by Nippi Inc., Tokyo, Japan. M.Y. received a consulting honorarium from Nippi Inc. | 2022-05-14T15:19:53.552Z | 2022-05-12T00:00:00.000 | {
"year": 2022,
"sha1": "aae8cbac77ab3e4e3d3d71e4a981038bbdd86438",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.reth.2022.04.006",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "abeae77a641213686fd1d9ac8796142a0a4453c6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
256441678 | pes2o/s2orc | v3-fos-license | Newly Diagnosed Monostotic Paget’s Disease of Bone during Living Kidney Donor Candidate Evaluation
The popularity of living-donor organ donation has increased recently as an alternative to deceased-organ donation due to the growing need for organs and a shortage of deceased-donor organs. This procedure requires an in-depth health assessment of candidates, who must be in excellent physical and mental health. We present a potential living-kidney donor withdrawn from donation due to a newly diagnosed Paget’s disease of bone (PDB). The patient underwent computed tomography (CT), magnetic resonance imaging (MRI), bone scintigraphy, and bone densitometry with trabecular bone score (TBS) assessment. The sole lumbar vertebra affected by PDB was investigated comprehensively, non-invasively, quantitatively, and qualitatively.
Introduction
Living-donor kidney transplantation (LDKT) is the treatment of choice for most patients with end-stage renal disease, offering optimal patient and graft survival and reduced time on the transplant waiting list. In such situations, donor welfare and care remain paramount. Medical and pre-operative evaluation and identification of high-risk donors are important. Assessment may reveal previously undiagnosed disease. Early detection of disease may benefit the donor but may also lead to withdrawal of living-donor candidates from the transplant process. Living-donor candidates must meet the health criteria included in local and international guidelines [1,2].
We present a case of a living-kidney donor candidate withdrawal due to monostotic Paget's disease of bone (PDB) with vertebral localization. We have previously published a detailed discussion of the reasons for withdrawal [3].
Case Presentation
A living-donor kidney donation coordinator considered a 54-year-old male a potential donor for his daughter. His preliminary medical tests revealed no abnormalities. He was admitted to the Department of Nephrology and Transplantation Medicine for extended medical assessment. The patient self-declared good health and had no chronic diseases or complaints other than occasional back pain.
The results of blood and urine tests mandatory for eligibility for the living kidney-donor program (for example, blood cell count, tumor markers, and urine tests) [1,2] were normal. Concentrations of almost all serum biochemical markers were in the normal range; however, alkaline phosphatase (ALP) activity exceeded the reference range (the maximal concentration was 199 U/L during six months of observation; normal range 40–129 U/L). The most important results in the context of calcium metabolism and the conversion of vitamin D into active forms are presented in Table 1; no other hormonal tests or markers of bone turnover were performed. Computed tomography angiography (CTA) and renal scintigraphy were performed for routine living-kidney donor qualification [4]. In renal scintigraphy, after administration of 193 MBq I.V. technetium-99m diethylenetriamine-pentaacetic acid (99mTc-DTPA), both kidneys showed normal function (glomerular filtration rate 87.74 mL/min). In CTA, a hemangioma of the fourth lumbar vertebra (L4) with a chronic fracture was initially suspected (Figure 1A,B). This finding was followed up with magnetic resonance imaging (MRI), resulting in a diagnosis of Paget's disease of bone (PDB) (Figure 1C–E) [5].
To evaluate whether other bones were affected (monostotic or polyostotic PDB), a bone scan was performed 2 h after injection of 740 MBq I.V. technetium-99m methylene diphosphonate (99mTc-MDP) (Figure 2). The bone scan revealed intensely increased radiopharmaceutical uptake throughout the fourth lumbar vertebra (L4), involving the body, posterior elements, and spinous process, referred to as the clover/heart/Mickey Mouse sign. In patients with high ALP activity and without an oncological history, this sign indicates a highly probable PDB diagnosis [6–8].
Further assessment of the pagetic vertebra involved dual-energy X-ray absorptiometry (DXA), the gold standard in bone density measurement (Horizon A, Hologic, USA), followed by calculation of TBS (TBS iNsight v. 3.0.2.0) [9]. Lumbar spine densitometry allows assessment of the bone density of each L1–L4 lumbar vertebra separately and together. The densitometric image of the lumbar spine is presented below (Figure 3) and should not be used to make any diagnosis. However, the fourth lumbar vertebra appears more calcified than the other vertebrae; the color scale is the "negative" of that usually used in radiology; darker color = more calcified structure.
Further assessment of the pagetic vertebra involved a dual-energy X-ray absorptiometry (DXA), a gold standard in bone density measurement (Horizon A, Hologic, USA), followed by calculation of TBS (TBS iNsight v. 3.0.2.0) [9]. The lumbar spine densitometry allows assessing the bone density of each L1-L4 lumbar vertebra separately and together. The densitometric image of the lumbar spine is presented below (Figure 3) and should not be used to make any diagnosis. However, the fourth lumbar vertebra seems more calcified than other vertebrae; the color scale is the "negative" of the negative usually used in radiology; darker color = more calcified structure. To evaluate whether other bones were affected (monostotic or polyostotic PDB), a bone scan was performed 2 h post 740 MBq I.V. 99m Technetium-methylene diphosphonate ( 99m Tc + MDP) injection ( Figure 2). The bone scan revealed an intensely increased radiopharmaceutical uptake throughout the fourth lumbar vertebra (L4), involving the body, posterior elements, and spinous process, referred to as the clover/heart/Mickey Mouse sign. In the case of patients with high ALP activity and without an oncological history, this symptom indicates a highly probable PDB diagnosis [6][7][8].
BMD was also measured in other locations: the femoral neck, non-dominant and total body ( Figure 4). The BMD values were not below low bone mass, mean oporosis could not be diagnosed in these locations (Table 2C). The non-dominant showed low bone density: in the ultradistal and one-third distal sections of the Those values were similar to total body BMD. BMD was within normal ranges on femoral neck. Low bone mineral density (BMD) was found in the lumbar spine (L1-L4; T-score −2.1). L4 had the highest BMD among the lumbar vertebrae examined, which corresponded to normal bone density. Lumbar vertebrae T-scores for L1-L3 indicated osteoporosis; each vertebra had a T score < −2.5 (Table 2A). L4 also presented the highest TBS compared to other lumbar vertebrae (Table 2B) [9,10]. BMD was also measured in other locations: the femoral neck, non-dominant forearm, and total body (Figure 4). The BMD values were not below low bone mass, meaning osteoporosis could not be diagnosed in these locations (Table 2C). The non-dominant forearm showed low bone density: in the ultradistal and one-third distal sections of the forearm. Those values were similar to total body BMD. BMD was within normal ranges only in the femoral neck. Considering all available patient information, we calculated the 10-year probability of fracture (%) using the Fracture Risk Assessment Tool, FRAX [11]. The major osteoporotic fracture (MOF) probability was 3.3% and for hip fracture was 0.1%. After adjustment for TBS, the probabilities were 6.3% and 0.2%, respectively.
According to the National Osteoporosis Foundation guidelines, patients with FRAX 10-year risk scores of ≥20% for MOF or ≥3% for hip fracture should be treated [12].
The above results, which led to PDB diagnosis and the analysis of the benefits and disadvantages of kidney donation as a living donor, resulted in the patient's withdrawal from the living kidney donor program [3]. The patient was referred to the Rheumatology Department for further evaluation and treatment, where he qualified for risedronate use. In follow-up examinations after one year, ALP activity decreased from 199 to 69 U/L (normal range 40-129 U/L), the intensity of radiotracer uptake on the bone scan decreased, and BMD of the lumbar spine increased insignificantly. However, the MRI image did not change significantly. The patient is still under periodic observation at the Rheumatology Department.
Discussion
PDB is a chronic, metabolic bone disease. Data on the prevalence of PDB were published in 2013, and the prevalence varied among the countries studied; the highest prevalence rate was reported in the UK (>5%), followed by western and southern Europe, while PDB is uncommon or rare in Scandinavia, on the Indian subcontinent, in Southeast Asia, and in Japan (<0.0003%) [13,14]. In Poland, PDB is rarely diagnosed, and the epidemiological data are scarce but suggestive of a declining incidence [15]. Our patient's case is extremely rare, as he was diagnosed with PDB due to participation in the living kidney donor program, which requires numerous tests. This finding led to considerations about the possible future of the patient, who would probably be exposed to painkillers (for bone pain) and bisphosphonates (for bone pain and reduction of bone turnover), which could damage his single remaining kidney [3]. Ultimately, the patient was withdrawn from the living-donor kidney program.
However, despite the relatively small number of people with PDB, the annual number of publications on PDB fluctuates rather than decreases. For example, recently, there have been publications from Asia [16][17][18] and more papers about the genetic aspects of PDB [16,19,20].
The etiology of PDB remains uncertain; genetic and environmental factors (paramyxoviruses) have been suggested. A family history is present in at least 15% of cases, with the risk of a relative of a PDB patient developing the disease being 7–10 times greater than in the general population [13]. Still, the genetic cause remains unknown in up to 50% of familial cases [19]. In the etiology of PDB, attention has recently been paid to the role of the RANKL/OPG/RANK pathway [16,20].
In PDB, abnormalities such as unusual bone growth present in several ways. PDB involves excess osteoclastic activity followed by a compensatory increase in osteoblastic activity, leading to disorganized bone formation. The primary disorder is increased bone turnover [21,22]. Therefore, the most frequent therapy involves bisphosphonates, which interfere with osteoclast function and decrease bone turnover [23,24].
Nowadays, diagnosis of PDB is usually a secondary finding on an abnormal X-ray, CT, MRI, or hybrid imaging (positron emission tomography and CT; PET/CT) [5,[25][26][27][28][29][30][31] and/or elevated alkaline phosphatase activity of an unknown etiology, as in our patient [21,24]. The characteristics listed above that lead to the diagnosis are related to earlier and wider access to medical examinations. Using various imaging methods facilitates making the correct diagnosis of PDB and avoids unnecessary biopsy [32]. Clinical features include bone pain, deformity, pathologic fracture, secondary osteoarthritis, and deafness. Significantly less common are spinal stenosis, nerve compression syndromes, hypercalcemia, hydrocephalus, paraplegia, cardiac failure, and osteosarcoma [21].
The most common PDB involvement sites are the pelvis, lumbar spine, and femur, reported in more than 75% of cases, with the polyostotic disease being more common than the monostotic [13], but the axial skeleton is usually involved. It is assumed that about 75% of patients are asymptomatic [22], although some studies indicate that pain in the affected site is the most common symptom [25][26][27], along with deformity and fracture [22,25]. PDB occurs more often in men than women and is uncommon in people under 50. However, PDB is the second most common bone disorder in elderly individuals and affects 7% of men and 6% of women over the age of 85 in the UK [13,22].
Bone scintigraphy is the most sensitive and economical way of detecting PDB, determining whether the disease is monostotic or polyostotic, or differentiating the etiology of low back pain in uncertain diagnoses. Bone scans are useful not only to evaluate the entire skeleton for PDB but also to screen for complications associated with PDB, such as fractures and malignant transformation, and to monitor the response to therapy. Increased radiotracer uptake is found in all PDB phases because osteoblastic activity is present from the early stage [37]. Therefore, a bone scan with 99mTc-labeled phosphonate derivatives is a very sensitive and rather specific imaging modality for PDB; a trained clinical nuclear medicine specialist should not have any problem differentiating it from other bone diseases. However, the Mickey Mouse sign is not unique to PDB alone; this sign and other signs typical of PDB detected on a bone scan can be mistaken for metastases, mimic osseous metastases, or coexist with metastases [32,38–40]. Sometimes the differential diagnosis can also include metabolic bone disease or even fibrous dysplasia [41]. The radiotracer uptake in PDB patients is intense and well-demarcated. In long bones, pagetic lesions appear at the articular margin, progressing along the shaft and producing a sharp V-shaped advancing edge like a flame (flame sign); it is clearly visible on X-rays and bone scans [41]. In contrast, metastatic disease may present with asymmetrical, irregular, heterogeneous, and spotty radiopharmaceutical uptake [42].
Although far more X-ray, CT, and MRI examinations are performed than bone scans or PET/CT, before long there will be more new, incidental PDB diagnoses made during radionuclide examinations than in classical radiology. This is related to the increasing use of nuclear medicine scans, which examine the whole body in a single test.
Bone densitometry allows assessment of the bone density of each L1–L4 lumbar vertebra separately and together. Interestingly, all bone densitometers have programs for evaluating only the lumbar spine (the BMD of the thoracic or cervical spine cannot be measured). There are a few articles about bone mineral density (BMD) or its derivatives in pagetic patients [43–45]. However, there are no papers in which both BMD and TBS were measured in a vertebra (or another localization measurable by bone densitometry) affected by PDB. Pagetic bone is known to have high or extremely high BMD because the bone is larger and the "density" measured by densitometry is only "planar" [43,45]. In our patient, BMD results in all examined locations (lumbar spine, femoral neck, forearm, and total body) did not meet the WHO criteria for the diagnosis of normal bone mass (T-score > −1.0) (see Table 2A). This may be due to diagnosis of the disease at an early stage, relatively weak osteoblastic activity, and the small fragment of affected bone (L4 only). In our study, the affected L4's BMD also did not meet the criterion of "elevated bone mass" proposed by J. Paccou et al. as a Z-score ≥ +4.0 [46]. The L4 Z-score in our patient was −0.4, which was also the highest Z-score of all examined vertebrae.
The use of densitometry in bone density assessment is almost universal, but the study of bone texture with TBS is not widely available. TBS is derived from the texture of the DXA image and has been shown to be related to bone microarchitecture and fracture. These data provide information independent of BMD and complement the data obtained from DXA and clinical examination. In our patient, the TBS of the affected vertebra was higher than that of the rest of the examined vertebrae (clinical interpretation "partially degraded" for L4 and "degraded" for the other vertebrae) (see Table 2B). A higher TBS indicates that the bone texture is not as degraded as, for example, in low bone mass, osteoporosis, and hypercortisolism [9], probably also because the changes are unrelated to bone calcium content; above all, the bone mineral content (BMC) of L4 is also the highest (Table 2A). Pande et al. showed that quality parameters derived from quantitative ultrasound (QUS) assessment, a bone density test complementary to DXA measurement, do not indicate bone thinning; the speed of sound (SOS, m/s) was lower than in normal, non-pagetic bone [44]. Thus, non-invasive methods showed higher BMD and higher bone quality in PDB-affected bones.
Our patient was referred to the Rheumatology Department to be treated for PDB of L4 and the accompanying "low bone mass" found in densitometric examinations of other localizations. However, there were no indications for treatment due to fracture risk (low MOF and hip fracture probabilities). The most common treatment of PDB is an intravenous infusion of bisphosphonates, most recently zoledronic acid [3,21]. In Poland, zoledronic acid is registered only for treating neoplastic hypercalcemia and preventing fractures/bone complications in oncology patients. For that reason, our patient was treated with risedronate: one tablet every two weeks. As expected, after one year of treatment, the serum activity of ALP decreased (69 U/L), the intensity of radiopharmaceutical uptake in L4 decreased, and BMD at the lumbar spine increased insignificantly.
Imaging has a crucial role in the diagnosis and follow-up of Paget's disease. The imaging modalities are complementary, e.g., CT, which provides details of bony architecture, and functional imaging (bone scan), which is useful to demonstrate the bone turnover activity of the whole skeleton. Therefore, hybrid imaging, such as PET/CT and single-photon emission computed tomography with CT (SPECT/CT), plays a vital role in diagnosis and monitoring. Furthermore, novel nuclear medicine technologies (including new cadmium-zinc-telluride detectors in SPECT cameras) facilitate a whole-body study in three dimensions, not only as a CT but also as a nuclear scan, in a very short time (thus far, whole-body scanners giving AP and PA planar information are usually used) [47].
Conclusions
To our knowledge, this is the first report of a vertebra affected by PDB that was confirmed using CT, MRI, and a bone scan and assessed using DXA (BMD and TBS results). In this case, BMD and TBS measurements in the lumbar vertebrae accurately show how the density and texture of healthy and pagetic vertebrae differ.
Increased BMD and TBS of L4 and high ALP activity, together with typical signs on CT, MRI, and bone scans, confirmed the PDB diagnosis, and the patient avoided bone biopsy. However, CT scans alone can reveal highly vascular lesions with lysis and sclerosis within the same structure. As a result, hemangioma with a chronic fracture can be diagnosed initially, as in this case. Therefore, atypical images or radiological/scintigraphic findings should always be supplemented with other imaging (morphological or radionuclide) and correlated with the clinical status and additional results, e.g., blood serum tests.
Data Availability Statement: All data and materials are included in this paper.
Conflicts of Interest:
The authors declare no conflict of interest. | 2023-02-01T16:07:06.954Z | 2023-01-29T00:00:00.000 | {
"year": 2023,
"sha1": "148b0390b1b662af4ca8cbbbef1e169bfd04dff4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/11/2/401/pdf?version=1675044305",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "135c1193f214a4e7e4e01d82427c5ec6c642305a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119251617 | pes2o/s2orc | v3-fos-license | General proof of (maximum) entropy principle in Lovelock gravity
We consider a static self-gravitating perfect fluid system in Lovelock gravity theory. For a spatial region on the hypersurface orthogonal to the static Killing vector, assuming Tolman's law of temperature, a fixed total particle number inside the spatial region, and variations (of the relevant fields) in which the induced metric and its first derivatives are fixed on the boundary of the region, then with the help of the gravitational equations of the theory we can prove a theorem stating that the total entropy of the fluid in this region takes an extremum value. A converse theorem can also be obtained by following the reverse process of our proof. We also propose a quasi-local definition of isolation for the system and explain the physical meaning of the boundary conditions in the proofs of the theorems.
I. INTRODUCTION
Black holes are fundamental objects in gravity theory which have been studied for a long time. Several breakthrough developments were achieved about forty years ago. At the beginning of the 1970s, it was found that the laws of the mechanics of black holes are very similar to the usual four laws of thermodynamics [1]. Soon after, by studying the quantum effects of a scalar field around a black hole, Hawking found that a black hole behaves like a blackbody with a temperature proportional to its surface gravity [2], and then Bekenstein's earlier proposal for the entropy of a black hole could be confirmed to be one quarter of the horizon area [3]. Due to this celebrated work, black hole mechanics was promoted to black hole thermodynamics. Since that time black hole thermodynamics has drawn a lot of attention and has been widely studied during the past decades, because people believe it might be a window through which one can catch sight of some important fundamental theory, such as the so-called quantum gravity theory.
Roughly speaking, there are two ways to approach the thermodynamics of black holes, depending on the definitions of the thermodynamic quantities for the associated spacetimes. Because of the equivalence principle in general relativity, some definitions of density for usual matter, such as energy density and entropy density, are not valid for the gravitational field. At most, we can define the energy of the gravitational field quasilocally. In the traditional construction of the thermodynamics of black holes, the thermodynamic quantities are identified with global quantities defined at the infinities of the spacetimes, such as the ADM mass, angular momentum, and charges of gauge fields. Fruitful results have been obtained along this way; for example, the thermodynamics of stationary black holes in general diffeomorphism invariant gravity theory has been established [4] (and references therein). However, in general, it is quite difficult to extract useful local information about the spacetimes from this kind of thermodynamics of black holes. In 1993, Brown and York took another route and developed a new method to define the thermodynamic quantities quasilocally by using a natural generalization of the Hamilton-Jacobi analysis of the action functional [5]. New notions on the definition of the horizons of black holes were also proposed soon after by some researchers [6,7]. Since then, black hole thermodynamics can be studied quasilocally, and the gravitational equations can be obtained from these quasilocal thermodynamic quantities and associated thermodynamic relations with some additional assumptions. This quasilocal approach allows us to turn the logic around and study the gravitational equations from the laws of thermodynamics. Actually, Jacobson has shown that the Einstein equations are an equation of state which can be derived from the Clausius relation by using a local Rindler horizon and the Unruh temperature [8,9]. See also [10] and related works on some quasilocal horizons in dynamical spacetimes with spherical symmetry. These remarkable works inspire us to believe that gravity and thermodynamics should have some deep and profound connection.
The thermodynamics of black holes depends heavily on quantum field theory in curved spacetimes. Quantum effects allow us to regard black holes as real thermodynamic systems. However, besides black holes, there are also many self-gravitating systems without horizons in general relativity and other possible gravity theories. Of course, the thermodynamics of these self-gravitating systems is very different from the thermodynamics of black holes. For example, the Hawking temperature does not exist in these systems. According to the work of Jacobson, we know that the gravitational equations can be deduced from the laws of thermodynamics. (This work is based on the thermodynamics of the local Rindler horizon, and some results from quantum field theory in curved spacetimes have been used, such as the Unruh effect.) So a question naturally arises: is it possible to obtain the gravitational equations from the thermodynamics of the usual matter fields living in curved spacetimes? Although there are no horizons and black holes in these cases, the gravitational equations and the laws of thermodynamics govern the same thing, i.e., the distribution of the matter fields (or the equilibrium state of the matter fields) in stationary spacetimes. So some equivalent description relating the gravitational equations and the thermodynamic laws might exist for these self-gravitating systems. For a spherical radiation system, Sorkin, Wald, and Zhang (SWZ) have shown that one can deduce the Tolman-Oppenheimer-Volkoff (TOV) equation from the Hamiltonian constraint when the total entropy of the radiation is at an extremum [11]. Gao generalized SWZ's work to an arbitrary perfect fluid in a static spherical spacetime and successfully obtained the TOV equation for this fluid [12]. Recently, a more general proof of the (maximum) entropy principle in the case of static spacetimes without spherical symmetry has been completed in [13,14].
However, all of the above discussions are limited to Einstein gravity. One can ask whether the entropy principle is still valid in other gravity theories. Lovelock gravity theory is a natural generalization of Einstein gravity to higher dimensions. The action of this theory includes higher order terms in the curvature, while the equations of motion still contain derivatives of the metric only up to second order [15]. Due to the development of supergravity and string theory, Lovelock gravity theory has become more and more important. For example, the higher order Lovelock terms appear in the higher order α′ expansion of string amplitudes [16–20]. Thus it is interesting to discuss the (maximum) entropy principle in this generalized gravity theory. The (maximum) entropy principle for a self-gravitating perfect fluid in n-dimensional Lovelock gravity with the symmetry of an (n − 2)-dimensional maximally symmetric space has been studied by the present authors in [21], where the generalized TOV equations were derived from both the gravitational field equations and the (maximum) entropy principle of the perfect fluid.
In this paper, we will present a proof of a theorem stating that the (maximum) entropy principle in Lovelock gravity theory holds generally, without assuming the symmetry of an (n − 2)-dimensional maximally symmetric space. The only symmetry we will assume is staticity of the self-gravitating system. Assuming that Tolman's law of temperature holds for the perfect fluid in curved spacetime, and imposing some boundary conditions on the variations of the relevant fields, we show that the entropy of the fluid inside an (n − 2)-dimensional spacelike surface [which is embedded in an (n − 1)-dimensional hypersurface orthogonal to the static Killing vector of the spacetime] takes an extremum value. Our discussion focuses on the system inside the (n − 2)-dimensional spacelike surface of the static spacetime, so it is not hard to extract some local information, i.e., a part of the gravitational equations, from the law of thermodynamics of the system. This suggests that the converse theorem can be read off from our proof. Related work on the thermodynamics of self-gravitating systems can be found in Ref. [22], in which global quantities, such as the ADM mass, have been used to discuss the thermodynamic stability of the system.
We also study the physical meaning of the boundary conditions. We find that the boundary conditions, which seem nontransparent in the proofs of the theorems, finally turn out to be an isolation condition, which is necessary for the applicability of the (maximum) entropy principle. Since the system we consider here is a quasilocal system, the isolation here is quasilocally defined. In this sense, our proof is self-contained. This paper is organized as follows: In Sec.II, for static spacetimes, we study the equations of motion of the so-called Einstein-Gauss-Bonnet gravity, which can be viewed as a special case of Lovelock gravity up to second order. In Sec.III, we study the thermodynamics of the perfect fluid in the Einstein-Gauss-Bonnet theory and prove a theorem which relates the gravitational equations and the (maximum) entropy principle of the fluid. In Sec.IV, we generalize our proof to the general Lovelock gravity theory. In Sec.V, we check our previous work, in which we studied the entropy principle in Lovelock gravity with an (n − 2)-dimensional maximally symmetric space, by using the present method. In Sec.VI, in contrast to the isolation condition of the usual thermodynamic system, we present the definition of an isolated system quasilocally. The last section is devoted to some conclusions and discussion.
II. THE EQUATIONS OF MOTION OF EINSTEIN-GAUSS-BONNET GRAVITY IN STATIC SPACETIMES
The Einstein-Gauss-Bonnet gravity is a typical example of the general Lovelock gravity. As a warm-up and an example, in this section we consider the case of Einstein-Gauss-Bonnet gravity in an n-dimensional spacetime (M, g ab ). The action of this system can be written in a standard form, where α is the so-called Gauss-Bonnet coupling constant, ǫ is the natural volume element associated with the metric g ab , and L GB is the Gauss-Bonnet term. Here, we have used calligraphic letters to denote the geometric quantities in n dimensions. For instance, R abcd , R ab , and R are the curvature tensor, Ricci tensor, and scalar curvature of the n-dimensional spacetime, respectively. The symbol I matter represents the action for the matter fields. The variation of the action with respect to the metric yields the gravitational equation (2.3), where G ab is the Einstein tensor of the spacetime (M, g ab ) and H ab is the tensor built from the Gauss-Bonnet term. Since we have set 8πG = 1, the right-hand side of Eq. (2.3) is simply the energy-momentum tensor T ab of the matter fields. The spacetime (M, g ab ) we are considering is assumed to be stationary. This means that there is a timelike Killing vector field K a , i.e., K a satisfies the Killing equation ∇ (a K b) = 0, where ∇ a is the covariant derivative compatible with the metric g ab (we use the notations and conventions of [23]). Following the notation of Geroch [24], in the region where λ = K a K a ≠ 0, we can define a metric h ab on the orbit space Σ of the Killing field, Eq. (2.5). If the Killing vector field is hypersurface orthogonal, i.e., the Frobenius condition (2.6) is satisfied, the orbit space Σ can be viewed as a hypersurface embedded in the spacetime. We always assume this condition holds in the following discussion; in other words, the spacetime is further assumed to be static. Considering the Frobenius condition (2.6), it is not hard to find the relation (2.7), where R abcd is the intrinsic curvature of the hypersurface (Σ, h ab ). Based on this relation, we have the corresponding decompositions, where D a is the covariant derivative operator compatible with the induced metric h ab in Eq. (2.5), and R, L GB , and G ab are the scalar curvature, Gauss-Bonnet term, and Einstein tensor of the hypersurface (Σ, h ab ), respectively. From these decompositions it is easy to find several further relations, including Eqs. (2.13) and (2.14) used below. These relations are important in our proof of the entropy theorem in the next section.
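For reference, a standard form of the Einstein-Gauss-Bonnet action and field equations consistent with the definitions above reads as follows (a sketch assuming the normalization 8πG = 1 stated in the text; it is not a verbatim copy of the paper's numbered equations):
\[ I = \frac{1}{2}\int_{M} \boldsymbol{\epsilon}\,\left(\mathcal{R} + \alpha\,\mathcal{L}_{GB}\right) + I_{\rm matter}\,, \qquad \mathcal{L}_{GB} = \mathcal{R}^{2} - 4\,\mathcal{R}_{ab}\mathcal{R}^{ab} + \mathcal{R}_{abcd}\mathcal{R}^{abcd}\,, \]
\[ \mathcal{G}_{ab} + \alpha\,\mathcal{H}_{ab} = T_{ab}\,, \qquad \mathcal{H}_{ab} = 2\left(\mathcal{R}\,\mathcal{R}_{ab} - 2\,\mathcal{R}_{ac}\mathcal{R}^{c}{}_{b} - 2\,\mathcal{R}^{cd}\mathcal{R}_{acbd} + \mathcal{R}_{a}{}^{cde}\mathcal{R}_{bcde}\right) - \frac{1}{2}\,g_{ab}\,\mathcal{L}_{GB}\,. \]
The tensor H ab above is the usual Lanczos tensor; it is divergence-free, which is why the Gauss-Bonnet term contributes only second-order equations of motion.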
III. SELF-GRAVITATING FLUID IN STATIC SPACETIME AND (MAXIMUM) ENTROPY PRINCIPLE
In this section, we analyze the self-gravitating fluid in Einstein-Gauss-Bonnet gravity. In the static spacetime (M, g ab ), we assume that Tolman's law holds, which says that the local temperature T of the fluid satisfies T √−λ = T 0 , (3.1) where T 0 is a constant which can be viewed as the local temperature of the fluid at reference points with λ = −1. This relation is essential in the construction of an equilibrium matter distribution in a curved spacetime, and it is widely taken to be satisfied in general stationary systems. Without loss of generality we shall take T 0 = 1. Now we can present a theorem which relates the (maximum) entropy principle of the fluid and the equations of motion of the static spacetime listed in the previous section. Theorem 1. Consider a self-gravitating perfect fluid in a static n-dimensional spacetime (M, g ab ) in Einstein-Gauss-Bonnet gravity, and let Σ be an (n − 1)-dimensional hypersurface orthogonal to the static Killing vector. Let C be a region inside Σ with boundary ∂C, and let h ab be the induced metric on Σ. Assume that the temperature of the fluid obeys Tolman's law and that the equations of motion of both gravity and the fluid are satisfied in C. Then the fluid is distributed such that its total entropy in C is an extremum for fixed total particle number in C and for all variations in which h ab and its first derivatives are fixed on ∂C.
Proof. The integral curves of the static Killing vector field K a can be viewed as the worldlines of static observers in the spacetime, and the velocity vector field of such observers is given by u a = K a /√−λ. Obviously, u a is just the unit normal of the hypersurface Σ. The acceleration vector associated with these observers has the form a b = u a ∇ a u b . For a general perfect fluid, as discussed in [21], the energy-momentum tensor takes the form T ab = ρ u a u b + p (g ab + u a u b ), where ρ and p are the energy density and pressure of the fluid, respectively. In other words, u a is also the velocity of the comoving observers of the fluid. The entropy density s is taken to be a function of the energy density ρ and the particle number density n (not to be confused with the dimension of the spacetime), i.e., s = s(ρ, n). The standard first law of thermodynamics in terms of these densities and the Gibbs-Duhem relation are listed as Eqs. (3.5) and (3.6), where µ is the chemical potential conjugate to the particle number density n. From the conservation law ∇ a T ab = 0 and the static condition, together with Eqs. (3.1), (3.5), and (3.6), one finds Eq. (3.10); in particular, it follows that µ/T is constant. The total entropy S inside the region C on Σ is defined as the integral of the entropy density, i.e., S = ∫ C ǭ s(ρ, n) , (3.11) where ǭ is the volume element of Σ associated with the induced metric h ab . Invoking the local first law of thermodynamics (3.5), the variation of the total entropy yields δS = ∫ C [ s δǭ + ǭ ( (∂s/∂ρ) δρ + (∂s/∂n) δn ) ]. Similarly, the total particle number N is the integral N = ∫ C ǭ n, and the fixed total particle number in C yields the constraint (3.14). With this constraint, the variation of the total entropy becomes Eq. (3.15), where we have used µ/T = constant and δǭ = (1/2) ǭ h ab δh ab . The variations we perform here are restricted to the spacelike hypersurface Σ.
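In standard form, the relations referred to as Eqs. (3.5) and (3.6) — the local first law and the Gibbs-Duhem relation in terms of densities — read as follows (the usual textbook form consistent with the definitions above, reconstructed here rather than copied from the paper):
\[ ds = \frac{1}{T}\,d\rho - \frac{\mu}{T}\,dn \,, \qquad Ts = \rho + p - \mu n \,. \]
Differentiating the second relation and using the first reproduces dp = s dT + n dµ, in agreement with Eq. (5.10) below.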
For a perfect fluid in an equilibrium state, the total entropy must take its maximal value according to the maximum entropy principle. So our purpose is to prove the extremum condition δS = 0 from the gravitational equations which were studied in Sec. II.
From Eqs. (2.13) and (2.14), one can easily get the Hamiltonian constraint of the theory, Eq. (3.16). We can then evaluate the second term on the right-hand side of Eq. (3.15) by using this constraint. In the resulting expression, B 1 and B 2 are total derivative terms which come from the variation of the Einstein-Hilbert term and of the Gauss-Bonnet term on the right-hand side of Eq. (3.16), respectively. By using integration by parts, and dropping the surface terms with h ab and its first derivatives fixed, one can rewrite the last two terms of Eq. (3.18) in a form in which all of the terms containing derivatives of the Riemann tensor are eliminated by using the Bianchi identity.
Remembering that T 0 has been set to unity, we have the corresponding relation between the entropy variation and the constraint. Thus the evolution equations take the decomposed form, and combining these equations then gives δS = 0, which completes the proof of Theorem 1.
IV. MAXIMUM ENTROPY PRINCIPLE IN LOVELOCK GRAVITY
In the previous sections, we have proved Theorem 1 in the context of Einstein-Gauss-Bonnet gravity. However, the theorem can be generalized to the full Lovelock gravity. The Lovelock action in an n-dimensional spacetime is given by a sum of dimensionally extended Euler densities, where ǫ is still the volume element of the static spacetime (M, g ab ) and I matter again represents the action of the matter fields. The coefficients α (i) are constants, and L (i) is defined in terms of the generalized Kronecker delta symbol. Considering the variation with respect to g ab , one gets the equations of motion, Eq. (4.3). For the static spacetime (M, g ab ), by using Eq. (2.7) and after a lengthy and tedious calculation, we find the corresponding decompositions, where L (i) is the ith Lovelock Lagrangian on the hypersurface (Σ, h ab ) and G (i) ab is the generalized "Einstein tensor" of the ith Lovelock Lagrangian on the hypersurface; the associated four-index tensor with index structure acbd has the symmetries of the Riemann tensor and is defined in Eq. (4.9). Based on the above discussion, we find that the left-hand side of Eq. (4.3) can be transformed into a decomposed form. This means that the Hamiltonian constraint in this theory can be written accordingly, and similarly for the evolution equations. With the above preparation, we can now discuss the entropy principle in this theory. The variation of the total entropy inside the region C is still described by Eq. (3.15). We will prove Theorem 1 in Lovelock gravity order by order. We define ρ i and p i order by order, cf. Eq. (4.14). It should be noted that ρ i and p i are not the real energy density and pressure of the fluid (the energy density and pressure of the fluid are denoted by ρ and p, respectively); rather, they are partial contributions to the fluid variables. Consequently, the variation of the total entropy inside C can be expressed as a sum of contributions δS i , where δS i is defined in Eq. (4.17). The second term in the integrand on the right-hand side of Eq. (4.17) can be expanded as Eq. (4.18), where B i is the total derivative term which comes from the variation of L i on the hypersurface. By the same token as in Einstein-Gauss-Bonnet gravity, we can rewrite the last term of Eq. (4.18), where we have used the identity D d (∂L i /∂R acbd ) = 0, which can be deduced from the Bianchi identity of the Riemann tensor. With this, we complete our proof of Theorem 1 in Lovelock gravity.
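For orientation, the standard expression for the ith Lovelock term in terms of the generalized Kronecker delta is the following (the usual form in the literature; the paper's own equation numbering is not reproduced here):
\[ \mathcal{L}_{(i)} = \frac{1}{2^{i}}\;\delta^{a_{1}b_{1}\cdots a_{i}b_{i}}_{c_{1}d_{1}\cdots c_{i}d_{i}}\;\mathcal{R}_{a_{1}b_{1}}{}^{c_{1}d_{1}}\cdots\,\mathcal{R}_{a_{i}b_{i}}{}^{c_{i}d_{i}}\,, \]
with L (0) = 1 giving a cosmological-constant term, L (1) = R the Einstein-Hilbert term, and L (2) = L GB the Gauss-Bonnet term.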
In the above procedure, we can also reverse the proof by assuming that the total entropy is already an extremum, and then the evolution equations can be deduced from the Hamiltonian constraint. This can be easily seen from Eq. (4.17). Thus we arrive at a converse theorem: assume that the Hamiltonian constraint is satisfied in C; then the evolution equations are implied by the extremum of the total fluid entropy for a fixed total particle number in C and for all variations in which h ab and its first derivatives are fixed on ∂C.
V. SPACETIME WITH AN (n − 2)-DIMENSIONAL MAXIMALLY SYMMETRIC SPACE
With the assumption of maximally symmetric (n − 2)-dimensional space, our previous work [21] has shown that the generalized Tolman-Oppenheimer-Volkoff (TOV) equation can be deduced from the (maximum) entropy principle together with the Hamiltonian constraint in Lovelock gravity theory.
According to the present discussion, the (maximum) entropy principle of the perfect fluid can be realized manifestly by using the equations of motion in such a spacetime. The n-dimensional spacetime metric is assumed to take the form of Eq. (5.1), where γ ij dz i dz j is the metric of an (n−2)-dimensional maximally symmetric space. The nontrivial components of the Riemann tensor of such a spacetime are given by expressions in which k = 0, ±1 corresponds to the sectional curvature of the maximally symmetric space and the prime denotes the derivative with respect to the radial coordinate r. The gravitational equations with a perfect fluid can be put in the form of Eq. (5.3), which comes from G t t = κ 2 n T t t , and Eq. (5.4), which is given by G r r = κ 2 n T r r . The total entropy and total particle number inside ∂C can be written as integrals over the radial coordinate, where ω k := ∫ d n−2 z √γ is the volume of the maximally symmetric space with sectional curvature k, and R is the radius of the spatial boundary ∂C. With the additional assumptions of Tolman's law (note that √−λ = e Φ ) and a fixed particle number N inside ∂C, we get Φ′ = −T′/T and the variation of the total entropy, Eq. (5.7). Substituting the gravitational equations, Eqs. (5.3) and (5.4), into this expression and integrating by parts, we finally get δS = 0 when the boundary condition on Ψ, δΨ(R) = 0, is imposed. Thus we realize the entropy principle in the case of a spacetime with an (n − 2)-dimensional maximally symmetric space. We used the boundary condition δΨ(R) = 0 in our derivation. For the metric of Eq. (5.1), the Misner-Sharp energy takes the form of Eq. (5.8) [6,25]. Clearly, the boundary condition here has the physical meaning of a fixed Misner-Sharp energy inside the spatial boundary. The volume of the spatial boundary, A(R) = R n−2 ∫ √γ d n−2 z, is held fixed, as is the total particle number N inside ∂C. So δm(R) = 0, δA(R) = 0, and δN = 0 define an isolated system quasilocally.
In addition, we can reverse the procedure to get the so-called generalized TOV equation by assuming that both the Hamiltonian constraint {here, the tt component of the gravitational equations, Eq. (5.3)} and the equilibrium state of the perfect fluid are already satisfied. Starting from Eq. (5.7) with δS = 0 and the exact expression for ρ, i.e., Eq. (5.3), we obtain the evolution equation. Note that the thermodynamic first law in terms of densities, Eq. (3.5), and the Gibbs-Duhem relation, Eq. (3.6), tell us that dp = s dT + n dµ , (5.10) which, together with Eq. (3.10), gives the corresponding relation. So the evolution equation can be written as a relation among the energy density, pressure, and metric components. This is the so-called generalized TOV equation in Lovelock gravity. More compactly, the generalized TOV equation can be written in terms of m(Ψ), where m(Ψ) is the Misner-Sharp energy of Eq. (5.8), here understood as a function of Ψ.
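For orientation, in the four-dimensional Einstein limit the generalized TOV equation reduces to the familiar Tolman-Oppenheimer-Volkoff form; the expressions below are the standard textbook ones (with G = c = 1), quoted as a point of reference rather than as one of the paper's numbered equations:
\[ \frac{dp}{dr} = -\,\frac{(\rho + p)\left[m(r) + 4\pi r^{3} p\right]}{r\left[r - 2m(r)\right]}\,, \qquad m(r) = 4\pi\int_{0}^{r}\rho(r')\,r'^{2}\,dr'\,, \]
so that m(r) plays the same role here that the Misner-Sharp energy m(Ψ) plays in the Lovelock case.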
As a little warm-up, we can see that, at least in the case of a spacetime with a maximally symmetric space, there does exist an equivalent description between the geometrical equations and the laws of thermodynamics without any artificial input, since the boundary condition can finally be realized as the necessary condition for a quasilocally isolated system.
The next section will focus on the static case and discuss the boundary conditions and their physical meaning.
VI. THE BOUNDARY CONDITIONS AND QUASILOCALLY ISOLATED SYSTEM
In the previous sections, we have proved that the total entropy inside the spacelike (n − 2)-dimensional surface ∂C takes an extremal value once Tolman's law and both the gravitational and fluid equations hold. To reach this conclusion, we assumed that the total particle number N inside the spatial region C is held fixed and that the variations δh ab and D a δh bc both vanish on ∂C. The meaning of the condition δN = 0 is straightforward: there is no effective matter exchange between the inside and outside of ∂C. But the physical meaning of the conditions δh ab = D a δh bc = 0 has so far remained unclear. For a thermodynamic system composed of usual matter, the total entropy takes its maximal value in an equilibrium state when the system is isolated. In this case, the total energy E, volume V, and particle number N of the system are all held fixed, i.e., we have δE = δV = δN = 0. However, when gravity is taken into account, that is, for a self-gravitating system, the phrase "isolated system" becomes ambiguous, since the gravitational interaction is a long-range force. In Einstein gravity, the entropy extremum of a relativistic self-bound fluid in stationary axisymmetric spacetimes has been studied by Katz and Manor [26], who presented the condition for global isolation in terms of globally defined quantities such as the total mass energy and total angular momentum. For a quasilocal system, the (maximum) entropy problem of self-gravitating matter has been studied by many authors [11,12], together with our previous paper [21], where all the discussions are based on the assumption of a spacetime with a maximally symmetric space. There, the quasilocal isolation is realized by requiring that the Misner-Sharp energy m(R) inside a fixed radius R does not change under the variation, and the condition of fixed particle number N(R) enters through a Lagrange multiplier. So we see that, at least in spherically symmetric spacetimes, the quasilocal realization of isolation requires fixed Misner-Sharp energy m(R), fixed total particle number N(R), and fixed spatial area ω k R n−2 .
In this section, we give the definition of a quasilocally isolated system for a more general static spacetime in Lovelock gravity, as implied by the boundary conditions mentioned above that were used to prove the extremum of the total entropy.
First of all, a quasilocal system should have a finite spatial volume or, more rigorously, a spatial boundary. We denote the product of the surface ∂C with segments of the integral curves of the timelike Killing vector field K a as a timelike boundary (n−1) B of the spacetime manifold M, with unit normal n a . The induced metric and extrinsic curvature tensor of this timelike boundary (n−1) B, denoted γ ab and Θ ab , have the form γ ab = g ab − n a n b , Θ ab = γ c a ∇ c n b . Once a timelike boundary is introduced, a well-defined variational principle requires a boundary term to cancel the total derivatives that produce surface integrals involving the normal derivative of δg ab on the boundary (n−1) B. For Lovelock gravity, the boundary term can be written as in [27,28], where ǫ is the volume element of (n−1) B associated with the induced metric γ ab and H (i) is defined accordingly. At this point, together with the boundary term, we have the total action for a quasilocal gravitational system in Lovelock theory.
Since the spacetime is static, the extrinsic curvature of the spacelike hypersurface vanishes. So we can decompose the intrinsic and extrinsic curvature tensors of (n−1) B along the time slices as follows, where R̄ abcd is the intrinsic curvature tensor of the (n − 2)-dimensional surface (∂C, σ ab ), with covariant derivative operator denoted by D̄ a , which can be viewed as a submanifold embedded in the spacelike hypersurface Σ with unit normal n a . Here σ ab = h ab − n a n b and k ab = −σ c a D c n b are the induced metric and extrinsic curvature tensor of ∂C, and a b is the acceleration vector of the static observer.
With the form of the stress-energy tensor defined in Eq. (6.5), one can obtain the energy density observed by the static observer, Eq. (6.9), with t (i,s) = δ a1···a2i−1a b1···b2i−1b u a u b R̄ b1b2 a1a2 · · · R̄ b2s−1b2s a2s−1a2s k b2s+1 a2s+1 · · · k b2i−1 a2i−1 , (6.10) which is the total energy density of the region C quasilocally defined on its spatial boundary ∂C. So the total energy of the region C, as a thermodynamic quantity, can be written as the integral (6.11), where ǫ is the volume element of ∂C. Now let us study the physical implications of the boundary conditions we have imposed on the variations of the induced metric h ab of the spacelike hypersurface Σ and its first derivatives. First, we note that the static observer's four-velocity u a is orthogonal to the hypersurface Σ. This means that the (1,1)-type tensor field h a b is a projection operator which equals the Kronecker delta of Σ when restricted to the hypersurface Σ. We can then conclude that δh ab with raised indices also vanishes on the spatial boundary ∂C, because the relation h ab h bc = h a c allows us to deduce this from the fact that the variation of h a b vanishes when the variation is restricted to Σ. Second, using the fact that n a is the unit normal of the surface ∂C embedded in the spacelike hypersurface Σ, we find the variational relation δn a = (1/2) n a n b n c δh bc . (6.13) The (1,1)-type tensor σ a b can also be viewed as a projection operator onto the surface ∂C and equals the Kronecker delta when restricted to it. Thus, the above relations tell us that the variation of the induced metric σ ab of ∂C, in all its index positions, vanishes when restricted to the surface ∂C.
Based on the above discussion, the variation of the quasilocal energy, Eq. (6.11), yields a sum of three types of terms, in which the variation acts in turn on the factor u a u b , on one of the intrinsic curvature factors R̄ b1b2 a1a2 , and on one of the extrinsic curvature factors k b a appearing in t (i,s) of Eq. (6.10), schematically δE ∼ Σ i,s [ δ(u a u b ) R̄ ⋯ k ⋯ + u a u b δR̄ ⋯ k ⋯ + u a u b R̄ ⋯ δk ⋯ ] , (6.14) where δ(u a u b ) = δh b a , (6.15) δR̄ ab cd = σ ae σ bf σ dg D̄ f ( σ gm D̄ e δσ cm + σ gm D̄ c δσ em − σ gm D̄ m δσ ec ) + 2 R̄ e b cd δσ ae + R̄ ab c g δσ dg . (6.16) Note that D̄ a is the covariant derivative operator compatible with σ ab and that ∂C has no boundary. All the variational terms in δE can finally be brought into a form containing only δh ab and D a δh bc . After integration over ∂C, the boundary conditions stated in the theorems finally yield δE = 0, which is nothing but the physical requirement that an isolated system exchanges no energy with its environment. Since the total energy of the region C is defined quasilocally, a natural choice for the volume of such a system is the surface area of the region C, that is, the volume of the (n − 2)-dimensional surface ∂C, A = ∫ ∂C ǫ . (6.19) According to our boundary conditions, it is easy to see that the variation of this volume vanishes, i.e., δA = 0.
Comparing our results with a thermodynamically isolated system of usual matter, we can now claim that the boundary conditions, together with the fixed total particle number N stated in the theorems, imply a quasilocally isolated system in Lovelock gravity with δE = 0 , δA = 0 , δN = 0 . (6.20) As we have seen, the two boundary conditions δh ab = 0 and D a δh bc = 0 on the spacelike (n − 2)-dimensional surface ∂C are necessary conditions for defining a quasilocally isolated system in Lovelock gravity theory.
If we relax one of the boundary conditions, then we can expect to obtain a variational relation among the total entropy and other thermodynamic quantities. Let us take Einstein gravity as a simple example. We relax the boundary condition D a δh bc = 0 on ∂C and examine the resulting entropy variation.
Without the assumption that the first derivatives of the variations of the induced metric vanish on ∂C, the variation of the total entropy is now nonzero even if the equations of motion of gravity are satisfied.
In the last step, we have used the fact that the tangential derivative σ be D e δh ab vanishes because δh ab = 0 on ∂C.
On the other hand, one can define the quasilocal energy inside ∂C according to Eqs. (6.9), (6.10), and (6.11); restricted to the Einstein gravity case it reads E = − ∫ ∂C ǫ σ a b D a n b . Thus, when ∂C is an isothermal boundary, the variation of the total entropy inside ∂C can be written as δS = (1/T) δE . (6.24) This is nothing but the thermodynamic first law for this system with fixed induced metric on ∂C.
VII. CONCLUSIONS AND DISCUSSION
In this paper, we have shown that the (maximum) entropy principle for a perfect fluid in curved spacetimes can be realized by using the gravitational equations of Lovelock gravity. This result has been formulated as Theorem 1. Compared to our previous paper, the symmetry of an (n − 2)-dimensional maximally symmetric space is not imposed; the only symmetry required here is the static condition.
For traditional thermodynamics in flat spacetime, the entropy of matter must take its maximal value in an equilibrium state if the system is isolated. When backreaction is included, that is, for a self-gravitating system, the requirement of isolation includes at least the following conditions. First, the system inside an (n−2)-dimensional spacelike surface ∂C should have a fixed total particle number. Second, the induced metric h ab on Σ and its first derivatives should be fixed on ∂C. Physically, the first condition implies that the system has no effective particle exchange with the outside region. The second condition admits two physical interpretations: the volume of the system, quasilocally defined as the surface area of the spatial region C, is kept fixed, and the total quasilocal energy of the system does not change under the variations of the matter fields. Thus these two conditions in Theorems 1 and 2 give the definition of a quasilocally isolated system when backreaction is taken into account.
We have shown that the total entropy of the perfect fluid in this isolated system must take an extremum value when both the gravitational and fluid equations are satisfied. However, we do not yet know whether the extremum is a maximum or a minimum. To confirm that the state is a true equilibrium state, one has to perform a second-order variation and analyze the stability conditions of the system. The maximum entropy principle in general relativity with stability analysis in spherically symmetric systems has | 2015-03-01T07:17:27.000Z | 2014-04-26T00:00:00.000 | {
"year": 2014,
"sha1": "5e12218a9d4ee41fb1c9d3bc976c4377930c1d0b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1404.6601",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5e12218a9d4ee41fb1c9d3bc976c4377930c1d0b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
94750170 | pes2o/s2orc | v3-fos-license | Mesoscale solubilization and critical phenomena in binary and quasi binary solutions of hydrotropes
Hydrotropes are substances consisting of amphiphilic molecules that are too small to self-assemble into equilibrium structures in aqueous solutions, but can form dynamic molecular clusters H-bonded with water molecules. Some hydrotropes, such as low-molecular-weight alcohols and amines, can solubilize hydrophobic compounds in aqueous solutions at a mesoscopic scale, around 100 nm, with the formation of long-lived mesoscale droplets. In this work, we report studies of the near-critical and phase behavior of binary (2,6-lutidine - H2O) and quasi-binary (2,6-lutidine - H2O - D2O, and tert-butanol - 2-butanol - H2O) solutions in the presence of a solubilized hydrophobic impurity, cyclohexane. In addition to visual observation of fluid phase equilibria, two experimental techniques were used: light scattering and small-angle neutron scattering. It was found that increasing the tert-butanol to 2-butanol ratio affects the liquid-liquid equilibria in the quasi-binary system at ambient pressure in the same way as increasing pressure modifies the phase behavior of binary 2-butanol - H2O solutions. The correlation length of critical fluctuations near the liquid-liquid separation and the size of mesoscale droplets of solubilized cyclohexane were obtained by dynamic light scattering and by small-angle neutron scattering. It is shown that the effect of the presence of small amounts of cyclohexane on the near-critical phase behavior is twofold: the transition temperature changes towards an enlarged two-phase domain, and long-lived mesoscopic inhomogeneities emerge in the macroscopically homogeneous domain. These inhomogeneities remain unchanged upon approach to the critical point of macroscopic phase separation and do not alter the universal nature of criticality. However, a larger amount of cyclohexane generates an additional liquid-liquid phase separation at lower temperatures.
Introduction
Hydrotropes are substances consisting of amphiphilic molecules whose non-polar part is much smaller than the long hydrophobic chains of traditional surfactants [1,2]. Typical examples of non-ionic hydrotropes include short-chain alcohols and amines. In an aqueous environment, hydrotropes, unlike surfactants, do not spontaneously self-assemble to form stable equilibrium micelles [3], although they are frequently used as "co-surfactants" to stabilize microemulsions [4].
In addition, many experiments on aqueous solutions of hydrotropes show the presence of long-lived mesoscale inhomogeneities of the order of a hundred nm in radius [12,13,[20][21][22][23][24][25][26][27][28][29][30][31][32]. It has been shown that such inhomogeneities occur in aqueous solutions of nonionic hydrotropes when the solution contains a third, more hydrophobic, component [12,13,[28][29][30][31][32]. Remarkably, these inhomogeneities (emerging in water-rich ternary systems) are only pronounced in the hydrotrope concentration range where molecular clustering and thermodynamic anomalies are observed in the original binary hydrotrope-water solutions. The hypothesized structure of these mesoscopic droplets is such that they have a hydrophobe-rich core, surrounded by an H-bonded shell of water and hydrotrope molecules [12,13,22,31,32]. These droplets can be extremely long-lived, being occasionally stable for over a year [32]. The phenomenon of the formation of mesoscopic inhomogeneities in aqueous solutions of nonionic hydrotropes induced by hydrophobic "impurities" is referred to as "mesoscale solubilization" [12,13,31,32]. Mesoscale solubilization may represent a ubiquitous feature of certain nonionic hydrotropes that exhibit molecular clustering in water and may have important practical applications in areas such as drug delivery, if the replacement of traditional surfactants is necessary.
Mesoscale solubilization is still poorly understood, and a theory for this phenomenon is yet to be developed. If such a system is brought into the vicinity of the liquid-liquid critical point, mesoscale solubilization could somehow interact with the mesoscopic concentration fluctuations [31]. The first studies of mesoscale solubilization near the liquid-liquid separation in ternary solutions of tert-butanol, propylene oxide and water were performed by Sedlák and Rak [30] and by Subramanian et al. [31]. These works agree on the interpretation of the observed mesoscale inhomogeneities as the phenomenon of solubilization of oily impurities originally present in tert-butanol and propylene oxide. However, the behavior of these inhomogeneities in the critical region remained unclear.
Subramanian et al. [31] speculated that the mesoscale droplets could exhibit anomalous curvature fluctuations in the critical region, thus forming a sponge phase. Herzig et al. [33] have demonstrated that a bicontinuous interface can be stabilized with the help of colloidal particles, which are wetted in both phases. It was thus hypothesized that the mesoscopic droplets, which might behave as colloidal particles, could stabilize the bicontinuous phase formed in hydrotrope-water-oil systems near the critical point. This is one possible scenario. An alternative possibility is the aggregation or disintegration of solubilized droplets caused by attractive Casimir forces induced by the concentration fluctuations [34].
There are two main obstacles preventing quantitative studies of mesoscale solubilization near phase separation and, in particular, in the critical region. Firstly, mesoscale solubilization could easily be confused with the so-called "ouzo effect" [35,36], the formation of micron-size metastable droplets (making the solution milky-white) by nucleation between the thermodynamic solubility limit (binodal) and the absolute stability limit of the homogeneous phase (spinodal). Moreover, mesoscale solubilization and the ouzo effect may overlap, thus making unambiguous interpretations of the experiments challenging. Secondly, studies in the critical region require simultaneous measurements of two mesoscopic lengths, the correlation length of the critical fluctuations and the size of the mesoscopic droplets. This task cannot be accomplished by light scattering experiments alone. When the correlation length reaches 20-30 nm, the critical opalescence dominates the light-scattering intensity and the size of the mesoscale droplets becomes poorly detectable. Refractive-index matching is not applicable for this particular case because of the large difference between the refractive indices of oil and water. The problem can be solved in small-angle neutron scattering (SANS) experiments by changing the H 2 O/D 2 O ratio, such that the signal from the SANS scattering caused by the critical fluctuations can be almost eliminated.
Mixtures of isotopes (H 2 O/D 2 O), as well as of isomers (TBA/2BA), form near-ideal solutions and can be regarded as effective compounds whose properties can be tuned by changing the component ratio. In particular, the two butanol isomers are very similar, except that 2BA exhibits partial immiscibility with water, while TBA is the highest-molecular-weight alcohol to be completely miscible with water under ambient conditions. TBA is a "perfect" hydrotrope: when placed at the water/oil interface, a TBA molecule is equally divided, with the hydroxyl group on the water side and the methyl groups on the oil side [44]. It was expected that on addition of TBA to 2BA the immiscibility gap in the TBA/2BA-H 2 O system would shrink and a liquid-liquid critical point could emerge. We have confirmed this expectation. Moreover, we have shown that an increase of the TBA/2BA ratio affects the liquid-liquid equilibria in the quasi-binary system at ambient pressure in the same way as an increase of pressure modifies the phase behavior of binary aqueous solutions of 2BA [45][46][47].
We were able to simultaneously measure the correlation length of the critical fluctuations near the liquid-liquid separation and the size of the mesoscopic inhomogeneities. It is shown that the effect of the presence of small amounts of a hydrophobe (controlled addition of cyclohexane) on the near-critical phase behavior is twofold: 1) an increase of the domain of liquid-liquid separation; 2) the formation of long-lived mesoscopic inhomogeneities that survive and remain basically unchanged near the critical point of macroscopic phase separation. The presence of these inhomogeneities does not alter the universal nature of the critical anomalies. However, larger amounts of cyclohexane generate an additional macroscopic liquid-liquid phase separation at lower temperatures. Based on the study of the effect of CHX addition on the critical temperature, we have concluded that the level of the hydrophobic impurities originally present in these systems does not exceed ~0.1%.
Supplemental Information
Determination of Phase Behavior
The liquid-liquid phase transition temperatures in 2,6-lutidine-water upon addition of cyclohexane were detected visually and from the location of a maximum in the light-scattering intensity. A second macroscopic phase separation at lower temperatures was detected upon addition of more than 0.16% (mass) CHX. The ternary phase diagram (H 2 O-TBA-2BA) was determined by the cloud-point method [42]. The third component (2BA) was added to a binary mixture in small steps. At each step, the ternary mixture was manually shaken and left to rest for about 3 to 5 minutes. The sample was then visually inspected to determine whether phase separation had occurred. If not, more of the third component was added and the above procedure was repeated. The ternary phase diagram of the TBA/2BA-water system was determined at room temperature, 22 °C ± 0.5 °C. In order to estimate the location of the critical point, light scattering experiments were carried out in the macroscopic one-phase region close to the binodal curve. If the correlation length of critical fluctuations exhibited a maximum, then the point of the binodal curve corresponding to this maximum was interpreted as the solution with the critical composition.
Light Scattering
Static and dynamic light scattering experiments were performed in College Park and Moscow with Photocor setups, as described in refs. [28,29,31,32]. In Moscow, the measurements were performed at scattering angles of 20, 30 and 90 degrees. Measurements in College Park were performed from 30 to 130 degrees in 10-degree increments. Light scattering measurements at NIST were made at 90 degrees with a Wyatt DynaPro NanoStar setup. Temperature was controlled with an accuracy of ± 0.02-0.05 °C, which prevented us from performing experiments in the immediate vicinity of the critical point.
For two exponentially decaying relaxation processes, the intensity autocorrelation function g 2 (t) (obtained in the homodyne mode) is given by [50,51] g 2 (t) = 1 + [ A 1 exp(−t/τ 1 ) + A 2 exp(−t/τ 2 ) ] 2 , (1) where A 1 and A 2 are the amplitudes of the two relaxation modes, t is the "lag" (or "delay") time of the photon correlations, and τ 1 , τ 2 are the characteristic relaxation times. For two diffusive relaxation processes, the relaxation times are related to the diffusion coefficients D 1 and D 2 as [50,51] 1/τ 1 = D 1 q 2 , 1/τ 2 = D 2 q 2 , (2) where q is the difference in the wave number between incident and scattered light, q = (4πn/λ) sin(θ/2), n is the refractive index of the solvent, λ is the wavelength of the incident light in vacuum (λ = 633 nm for a He-Ne laser), and θ is the scattering angle. The linear dependence between the decay rate 1/τ and q 2 is characteristic of diffusive relaxation [50,51]. For monodisperse, spherical Brownian particles the hydrodynamic radius R can be calculated from the Stokes-Einstein relation [50]: R = k B T / (6πηD 2 ) , (3) where D 2 is the diffusion coefficient of the particles, k B is Boltzmann's constant, T is the temperature, and η is the shear viscosity of the medium.
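As an illustration of how the Stokes-Einstein relation converts a measured diffusion coefficient into a hydrodynamic radius, a minimal Python sketch follows; the numerical input values are illustrative assumptions, not data from this work.

    import math

    def hydrodynamic_radius(D, T, eta):
        """Stokes-Einstein hydrodynamic radius (m) from the diffusion
        coefficient D (m^2/s), temperature T (K), and shear viscosity eta (Pa s)."""
        k_B = 1.380649e-23  # Boltzmann constant, J/K
        return k_B * T / (6.0 * math.pi * eta * D)

    # Illustrative input: D ~ 2.7e-12 m^2/s in a water-like solvent at 25 C
    # gives R ~ 9e-8 m, i.e. about 90 nm, the scale of the droplets discussed below.
    print(hydrodynamic_radius(2.7e-12, 298.15, 0.89e-3))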
The correlation length of critical fluctuations, ξ, was obtained from the mutual diffusion coefficient D 1 , which near the critical point was approximated by Eq. 4, where ξ 0 is the bare correlation length (of the order of the range of molecular interactions). Eq. (4) is a simplified version of the mode-coupling theory [52][53][54][55][56]. In this version, the weak critical singularity of the shear viscosity [55,56] and a q-dependence of the non-asymptotic term (∝ ξ 0 /ξ) [57] are neglected.
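A standard mode-coupling expression consistent with this description is sketched below; this is the usual approximation from the literature (the background amplitude b in the non-asymptotic term is an assumption of this sketch, not a quantity defined in the paper):
\[ D_{1} \simeq \frac{k_{B}T}{6\pi\eta\,\xi}\,K(q\xi)\left(1 + b\,\frac{\xi_{0}}{\xi}\right), \qquad K(x) = \frac{3}{4x^{2}}\left[1 + x^{2} + \left(x^{3} - x^{-1}\right)\arctan x\right], \]
where K(x) is the Kawasaki scaling function, with K(x) → 1 for qξ ≪ 1.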
Small-angle Neutron Scattering
DLS measurements cannot be used to study the nature of the droplets of solubilized hydrophobe in the immediate vicinity of the critical point because the scattering from the critical fluctuations overwhelms the signal from the droplets. To assess the region where the correlation length becomes comparable to the droplet radius, we used small-angle neutron scattering (SANS) with contrast-matched samples [58]. By using this technique, one can eliminate the contribution from the critical concentration fluctuations and focus on the scattering from other inhomogeneities near the solution critical point. The intensity of SANS depends on differences between the scattering-length densities of the constituent molecules, and the scattering-length density of 2,6-lutidine can be matched by a suitable H 2 O/D 2 O mixture. By measuring the SANS intensity I(Q) as a function of the wave number Q = (4π/λ) sin(θ/2), where θ is the scattering angle, one can obtain the average radius, R, of inhomogeneities if their scattering-length density differs from that of the solvent.
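A minimal sketch of the contrast-matching calculation is given below; the H 2 O and D 2 O scattering-length densities are standard literature values, while the solute value is left as an input parameter and the example number is a hypothetical placeholder.

    # Scattering-length densities in units of 1e10 cm^-2 (standard literature values).
    SLD_H2O = -0.56
    SLD_D2O = 6.37

    def d2o_fraction(sld_target):
        """Volume fraction of D2O that matches a target SLD in an H2O/D2O mixture."""
        return (sld_target - SLD_H2O) / (SLD_D2O - SLD_H2O)

    # Example: a hypothetical solute SLD of 0.3e10 cm^-2 is matched by ~12% D2O.
    print(d2o_fraction(0.3))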
The expression for fitting a spherical droplet model to the SANS intensity data is given by Eq. 5 [58]. In Eq. 5, ∆ρ is the difference in scattering-length densities of the solvent and the spherical droplets, φ is the volume fraction of droplets, and V is the droplet volume. The droplet radius R obtained from SANS experiments using Eq. 5 is expected to be close to the hydrodynamic radius obtained by DLS. The results from the contrast-matched SANS measurements are shown in Figure 8 for two temperatures for the sample with 0.2% CHX. Although the data suffer from a fair degree of statistical uncertainty, the mesoscopic droplets can be clearly distinguished. From a fit to a monodisperse spherical form-factor model we find the average radius of the droplets to be R = 77 nm.
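For reference, the standard monodisperse spherical form-factor model consistent with the quantities named above reads as follows (a standard form with a flat background I bkg added as an assumption of this sketch; the paper's Eq. 5 may differ in such details):
\[ I(Q) = \phi\,V\,(\Delta\rho)^{2}\left[\frac{3\left(\sin QR - QR\cos QR\right)}{(QR)^{3}}\right]^{2} + I_{\rm bkg}\,, \qquad V = \frac{4}{3}\pi R^{3}\,. \]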
Effects of addition of cyclohexane on mesoscale behavior of 2,6-lutidine in H 2 O/D 2 O contrast-matched quasi-binary mixture (DLS and SANS)
This value is quite close to the effective hydrodynamic radius (~80 nm) measured by DLS.
We did not observe any detectable change in the SANS intensity data for this sample as the critical point was approached. This fact also confirms that the 2,6-lutidine and the solvent (H 2 O/D 2 O mixture) have been effectively contrast-matched. We want to emphasize that the samples used in the SANS measurements and in the DLS measurements presented in Fig. 7 were identical. Hence, we conclude that the critical fluctuations do not have a significant impact on the mesoscopically solubilized cyclohexane, at least if the system is not extremely close to the critical point.
There is an important feature of the behavior of mesoscale droplets of solubilized hydrophobic impurities in solutions of hydrotropes which has been reported in the literature [28][29][30][31][32]. While the droplets are extremely long-lived, this may be a non-equilibrium phenomenon. In particular, the droplet size depends on the protocol used to prepare the sample (aging and cooling/heating rates [29]). On multiple occasions, we have observed changes in the droplet size from 80-90 nm to 120-130 nm after a day of waiting and re-measuring, as shown in Figure 9, while the contribution from the concentration fluctuations remained unchanged.
Effects of addition of cyclohexane on mesoscale and phase behavior of 2,6-lutidine in water (DLS)
Long-term stability and aging of the mesoscale droplets were investigated in more detail for these solutions. The average size of the mesoscale droplets also depends on the lutidine/water ratio. As shown in Figure 11, the size increases with decreasing concentration of lutidine. This effect is obviously linked to the increasing hydrophobicity of the system as the lutidine concentration decreases. Figure 12 demonstrates how this size also increases with an increase of the amount of added CHX, as well as upon cooling. In Figure 12 we also show the growth of the correlation length of critical concentration fluctuations (obtained from the diffusion coefficient given by Eq. 4) for three samples (without CHX, with 0.13% CHX, and with 0.16% CHX). When the correlation length becomes comparable with the size of the mesoscale droplets, the scattering from critical fluctuations overwhelms the scattering from the droplets and the droplet size becomes undetectable. In Figure 13 the correlation length of critical fluctuations is presented as a function of the proximity to the critical temperature on a log-log scale. Over the studied interval of reduced temperatures, the data for the solution without CHX and with 0.13% CHX can be well described by the scaling power law ξ = ξ 0 [(T − T c )/T c ] −ν , (7) with the universal critical exponent ν = 0.63 (fixed). This result is in agreement with previous studies of criticality in this system [38]. The data for the solution with 0.16% CHX systematically deviate from the data obtained for the other solutions; however, within the statistical error they are still consistent with the power law given by Eq. 7. We also notice that with an increase of the CHX concentration the bare correlation length ξ 0 increases.
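A minimal sketch of such a power-law fit is shown below; the synthetic data and the amplitude value are illustrative assumptions only, not the measurements reported here.

    import numpy as np
    from scipy.optimize import curve_fit

    def scaling_law(t, xi0, nu):
        # xi = xi0 * t^(-nu), with t = (T - Tc)/Tc the reduced temperature
        return xi0 * t**(-nu)

    # Synthetic data for illustration: hypothetical xi0 = 0.2 nm, nu = 0.63
    t = np.array([1e-4, 3e-4, 1e-3, 3e-3, 1e-2])
    xi = scaling_law(t, 0.2, 0.63) * (1 + 0.03 * np.random.default_rng(0).standard_normal(t.size))

    popt, pcov = curve_fit(scaling_law, t, xi, p0=(0.1, 0.6))
    print(popt)  # fitted (xi0, nu), expected near (0.2, 0.63)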
Critical phenomena in a quasi-binary system 2BA/TBA -water
In Figure 14 we show the coefficient of mutual diffusion in the near-critical quasi-binary solution of TBA/2BA in water, measured by DLS at different scattering angles, as a function of temperature.
The solid curve is a fit of a semi-empirical formula, in which T c (= 23.41 °C) is the critical temperature and ν = 0.63 is the critical exponent of the correlation length, which ensures that the diffusion coefficient vanishes at the critical point. T* is a projected ("virtual") temperature at which the diffusion coefficient could vanish again at a lower temperature, as suggested by the phase diagram presented in Figure 5. This formula is a further simplified version of Eq. 4, in which the Kawasaki function is neglected and the non-asymptotic term, as well as the temperature dependence of the viscosity, are absorbed into the second (empirical) term. The asymptotic scaling power law, given by Eq. 7, approximates the correlation length diverging at the critical point and is applied twice: it makes the diffusion coefficient vanish at the experimentally observed critical temperature and at the lower (projected) temperature T*. The correlation length, calculated from the experimental data on mutual diffusion with full implementation of Eq. 4, is presented in Figure 15. It is demonstrated that the correlation length, within the statistical error (relatively large when compared with the results presented in Fig. 13), is consistent with the scaling power law, Eq. 7.
Discussion
Addition of small amounts of cyclohexane (a hydrophobe) lowers the critical consolute temperature of the 2,6-lutidine-water system, as expected, and generates a second liquid-liquid separation at lower temperatures, as can also be easily understood. With an increase of hydrotrope concentration, the second liquid-liquid separation moves to higher temperatures and would eventually overlap with the original immiscibility region of the 2,6-lutidine-H 2 O system, which moves to lower temperatures upon addition of hydrotrope. This overlap generates three-phase coexistence. Interestingly, the addition of tert-butanol to solutions of 2-butanol in water mirrors this phenomenon: initially 2-butanol is partially immiscible with water, but TBA shrinks the miscibility gap, and at a certain concentration of TBA two liquid-liquid separation domains (at higher and lower temperatures) emerge. A similar effect can be achieved by applying pressure to the 2,6-lutidine-H 2 O system.
However, the major challenge is to fully understand the role and thermodynamic nature of the mesoscopic droplets that are formed upon addition of small amounts of a hydrophobe over a broad range of temperatures and hydrotrope concentrations. It seems that the macroscopically homogeneous phase (below the critical temperature but above the temperature of the second macroscopic liquid-liquid separation) behaves as a dilute colloid solution; it changes the critical temperature in the same manner as usual colloids do [59], but does not alter the universal nature of the critical behavior. When the solution is macroscopically phase-separated, the mesoscale droplets are observed in the aqueous phase for days or even weeks; however, over a longer period of time the phase clears. Occasionally, a soap-like residual mesophase at the water-oil interface is observed over the years [32,60]. We were not able to observe effects of the critical fluctuations on the mesoscopic droplets. More accurate experiments in the immediate vicinity of the critical point are needed.
The size of the mesoscopic droplets increases with increasing hydrophobicity of the environment, upon approach to the conditions of macroscopic phase separation. Under these conditions, the phenomenon of mesoscale solubilization starts to overlap with the ouzo effect - the formation of micron-size droplets by nucleation, followed by Ostwald ripening and macroscopic phase separation.
What is the connection between the ouzo effect (which is a purely kinetic, even if relatively long-lived, phenomenon) and the mesoscale solubilized droplets, which are so long-lived that they can practically be viewed as an equilibrium phenomenon defined by thermodynamics? We still do not know the full answer to this question.
Importantly, the addition of small amounts of CHX and the existence of mesoscale droplets of solubilized CHX do not change the universal scaling power law for the correlation length within the accuracy of our experiment. However, the location of the critical point changes. This explains the fact that, in spite of the different values of the critical temperature reported by different investigators for the binary mixture 2,6-lutidine-water (as well as for many other near-critical binary solutions), there is consensus on the character of the critical behavior, in agreement with scaling theory. However, at higher concentrations of the third component, when the experimental path is at constant composition, a renormalization of the critical exponents is predicted by theory [61] and observed experimentally [62]. In particular, the exponent of the correlation length is renormalized from the universal value ν = 0.63 to a higher (but also universal) value ν/(1 − α) ≈ 0.71, where α = 0.11 is the critical exponent of the heat capacity [61,62]. The characteristic reduced temperature, (T × − T c )/T c , of the crossover between these two asymptotic values can be estimated by order of magnitude as in refs. [62,63]; the deviations observed here are therefore unlikely to be attributed to the beginning of the renormalization of the power law.
Conclusions
In this work, we investigated the effects of the presence of small amounts of a hydrophobic compound (cyclohexane) on the mesoscale and near-critical phase behavior of aqueous solutions of hydrotropes. At larger hydrophobe concentrations, a transition to macroscopic phase separation occurs; this transition is manifested by emulsification (the "ouzo effect"), followed by Ostwald ripening and macroscopic phase separation. We were not able to observe effects of the critical fluctuations on the mesoscopic droplets. More accurate experiments in the immediate vicinity of the critical point are highly desirable. | 2019-04-04T13:05:47.834Z | 2015-03-24T00:00:00.000 | {
"year": 2015,
"sha1": "7ed95f46e63837370e74af62247b3b6f8f7e6a27",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1503.07071",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "25c491349c82c961c4cba833b511c71c9d533fe6",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
268180271 | pes2o/s2orc | v3-fos-license | Adopting of Smartphone Technologies Amongst Older Adults in Windhoek, Namibia
Recent technological advancements show that mobile phones are becoming an increasingly significant part of our daily lives. Older adults (OAs) [60+] constitute the key demographic for this study. This study aims to determine the features of smartphone technologies used by OAs in Windhoek, to analyse the possible factors that may influence their adoption, and to assess OAs' perceptions of smartphones. A quantitative research method was adopted. A structured questionnaire was used to collect data from 99 OAs in Windhoek through a convenience sampling method. Data were analysed using SPSS. The study revealed that OAs do not fully adopt smartphones. However, they believe that smartphones provide benefits such as offering entertainment and curbing loneliness. The findings suggest there is a relationship between smart technology (ST) features and perceptions towards the adoption of STs by OAs in Windhoek. This study can educate smartphone manufacturers and developers about elements that should be considered when designing communication devices and applications for OAs.
BACKGROUND OF THE STUDY
Smartphones are technological innovations that provide immense benefits and convenience to society today. Despite this, not every member of society adopts and uses the technology (Omotayo, 2000). Zijlstra et al. (2020) state that innovations are often adopted faster by young adults than by older adults. According to Kim et al. (2022), older adults experience the need to engage with smartphone technologies but are not able to access such technologies either to meet their essential daily needs or to overcome the restrictions of physical distancing. Ajaegbu et al. (2019) define smartphone technology (ST) as a hand-held computer capable of multitasking besides making calls. According to Ma et al. (2020), an older adult (OA) is defined by the United Nations as a person who is over 60 years of age. When new smartphone technologies (STs) are developed, OAs may not be able to keep up; instead, they may continue using older phones. Makkonen (2021) defines adoption as an active decision to take full advantage of new technologies. Society is becoming increasingly tech-savvy, but there is still a noticeable digital divide between younger and older adults. Using mobile devices is one method to close the digital divide, as highlighted by Carmien and Manzanares (2016). Even if the percentage of OAs with technological skills grows every day, a large proportion of this demographic is still technologically illiterate (Smith & Tran, 2017). Some may want to acquire skills, but several barriers prevent them from doing so. It is critical to increase smartphone usage and adoption among OAs because it may improve their quality of life, facilitate independent living, and bridge technological gaps between generations (Anderson & Perrin, 2017). Therefore, in this study, the researcher aims to assess the adoption of STs amongst older adults in Windhoek.
STATEMENT OF THE PROBLEM
There is the potential for both positive and adverse effects associated with smartphone usage. Besides being used for social interaction, smartphones can help people alleviate loneliness, acquire education, and reduce cognitive decline. However, Busch et al. (2021) tell us that digital technologies are becoming more important in modern society, where the next generation of older adults will experience more difficulties with their use. Although one of the National Planning Commission of Namibia's (NDP5) desired outcomes is the use of information and communication technologies (ICT) to improve public service delivery, it is not yet known how far Namibia has come in the adoption of smartphones, particularly by OAs. The use of smartphones by older adults is less well understood; hence, the study aims to assess the adoption of STs amongst OAs in Windhoek.
RESEARCH OBJECTIVES
The primary objective of this study is to assess the adoption of STs amongst OAs in Windhoek. This objective is divided into three specific objectives, which are: a) To analyse the perceptions of OAs in Windhoek towards the adoption of STs. b) To establish the features of STs used by OAs in Windhoek. c) To assess the possible factors that influence the adoption of STs amongst OAs in Windhoek.
RESEARCH QUESTIONS
The study sought to answer the following research questions: a) What are the perceptions of OAs in Windhoek towards the adoption of STs? b) Which features of STs are used by OAs in Windhoek? c) What are the possible factors that influence the adoption of STs amongst OAs in Windhoek?
HYPOTHESES
The study tested the following hypotheses: H0: There is no significant relationship between ST features and perceptions towards the adoption of STs by OAs in Windhoek. H1: There is a significant relationship between ST features and perceptions towards the adoption of STs by OAs in Windhoek.
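As an illustration of how such a hypothesis could be tested on the survey data, a minimal Python sketch is given below; the column names and data values are hypothetical, and the original analysis was carried out in SPSS rather than with this code.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical survey responses; the real study used SPSS on 99 respondents.
    df = pd.DataFrame({
        "perceived_ease_of_use": ["high", "high", "low", "low", "high", "low"],
        "adopted_smartphone":    ["yes",  "yes",  "no",  "no",  "yes",  "yes"],
    })

    # Chi-square test of association between a feature perception and adoption.
    table = pd.crosstab(df["perceived_ease_of_use"], df["adopted_smartphone"])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # reject H0 if p < 0.05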
Introduction
This chapter examines literature related to the research issue as well as other topics that have an impact on the research findings. A theoretical framework has been developed through a survey of the literature.
The History of Smartphones
A smartphone is a portable device that combines the capabilities of a phone and a computer in one item (Li & Luximon, 2018). International Business Machines Corporation (IBM) invented the world's first smartphone in 1992; it was produced by Mitsubishi Electric in 1994 (Heathman, 2017). The IBM Simon Personal Communicator combined the features of a mobile phone and a personal digital assistant (PDA). These features made the device unique and sophisticated enough to earn it the title of the world's first smartphone (ibid). The Nokia 9000 Communicator, introduced in 1996, was another notable effort (Sahay & Sharma, 2019). Improvements in mobile communication standards in the 21st century allowed portable electronic devices to access the internet wirelessly (Skyrme, 2007). In 2008, Google launched the Android operating system. Samsung and many other smartphone manufacturers now use Android.
Smartphone Technology (ST) and Trends in the Market
ST is a relatively young technology with a lot of opportunities for advancement (Aldhaban, 2016). Namibia has the distinct advantage of being one of the African countries with the highest smartphone penetration rates. The smartphone market, in general, is rapidly evolving, with new items being offered regularly. The market features rapidly changing technology and designs, short product life cycles, aggressive pricing, and rapid replication of technological breakthroughs (Turnbull et al., 2000). The current trajectory of ever-more-powerful mobile devices has turned the processing industry away from desktops and toward smartphones (Pencarelli, 2020). One-third of network communication in Africa is established using a smartphone (Oyelaran-Oyeyinka & Adeya, 2004). In Namibia, the telecommunications company MTC runs campaigns aimed at converting customers from basic phones to smartphones.
Older Adults (OAs)
A person above the age of 60 is defined as an older adult (Gao et al., 2017). In Africa, old age is characterised by a growing reliance on others as a result of increased security needs brought on by physical frailty and poor health. Older people in Africa and other nations may be seen as burdensome due to impaired health or dependence on others (Jennings & Wasunna, 2005). Lozano et al. (2014) state that OAs use their smartphones more rarely than other age demographics. Senior citizens have seen the greatest surge in smartphone adoption in recent years (Hong et al., 2016). Vaportzis et al. (2018) show that OAs are keen to learn how to use a smartphone and are enthusiastic about adopting new technology; however, there is some concern regarding ambiguity in instructions and support. Kuerbis et al. (2017) claim that older persons who practice perform more accurately and quickly on technical than digital tasks. Only a few studies have explicitly examined mobile phone usage among the elderly (Jamalvo & Constantinovits, 2019). The majority of studies found that the key reasons for elderly persons using mobile phones were emergencies, safety, and security. However, when it comes to utilising smartphones, elderly persons face several obstacles. Financial constraints, restricted vision, a lack of interest, and a lack of information are the key hurdles to smartphone use among the elderly (Vaportzis et al., 2017).
Smartphone Technology Features
A product feature is a solution that suits the level of satisfaction of consumers' needs and wants during product ownership, consumption, and utilization (Kotler et al., 2017). Studies suggest users may be initially attracted to features in technology that enhance their task efficiency or completion rate (Li & Luximon, 2018). Ease of use is an individual's view of how easy a technology is to use. The main priority for designers is aligning technology features with the particular traits of OAs (ibid). Complexity - the complexity of an innovation is the level of difficulty individuals perceive to be associated with its usage and comprehension (Straub, 2009). Today's socio-technical systems are different, and older individuals are failing to adapt to and use technology (Lewis & Naden, 2018). Technology is welcomed and used by older folks once it is viewed as simple to use or primarily designed for them (Kuerbis et al., 2017).
Size of the device - the facial attractiveness, size, and menu structure of smartphones are the most important determinants of mobile phone selection when it comes to OAs (Shabrin et al., 2017). Design - a smartphone's hardware includes the body, weight, and size of the phone. Applications - mobile applications are computer programs that run on mobile devices (Li et al., 2014). OAs can engage with their friends and family by downloading social media platforms like Facebook, Instagram, and Twitter. Text messaging (SMS) is the act of composing and sending electronic messages, primarily composed of alphabetic and numeric data, between and among different users of smartphones, desktop computers, laptops, or any other type of computer (Hochgraf et al., 2010). Some OAs say that text messaging helps them keep relatives out of the house and out of their affairs (Kuerbis et al., 2017). WhatsApp - WhatsApp is a cross-platform freeware instant messaging (IM) and voice-over-IP (VoIP) service from Meta, available worldwide, that has made mobile phone communication less expensive and easier than other platforms (Rosales & Fernández-Ardèvol, 2016). Facebook profiles allow users to share information, create and sustain relationships, and urge others to join a community (Rithika & Selvaraj, 2013). Namibia has a Facebook penetration rate of 10.7%. Nadhom and Loskot (2018) indicate that 231,340 people use the internet in Namibia. Google provides access to information on the internet about people or things (Mbala, 2019).
Internet connectivity: the 2019 Inclusive Internet Index indicates that only 39% of Namibia's mobile connectivity is serviced with 4G, that 3G is accessible to only 53% of the nation, and that 2G is accessible to 100% of the country; this creates some difficulties when it comes to acquiring smartphones in Namibia (Nashilongo, 2020). Mobile browser: smartphone app stores now offer a wide range of browser alternatives, and each browser has its own set of features (West & Mace, 2010). Battery life: one significant property of batteries is their life cycle. Most mobile phones include changeable batteries, yet mobile phone batteries are intentionally manufactured with an artificially restricted usable life so that they become dysfunctional after a specific period, to drive the market (Harris et al., 2020). Storage space refers to the total space on a phone's hard drive that may be used to store information (Meena et al., 2014); storage capacity is one of several significant criteria that influence the decision to choose one phone over another when purchasing a new phone. Camera: smartphones now come with cameras as standard equipment. Shoaei Shirehjini (2019) reports that a smartphone contains one or more built-in digital cameras that can take pictures and record videos instantly (see also Delbracio et al., 2021).
Factors Influencing the Adoption of Smartphone Technologies
Adoption is the decision to utilise a new invention to its maximum potential as the best course of action available (Arts et al., 2011). OAs adopt smartphone technologies in various ways (De Barros et al., 2014). Stakeholders have shown much interest in examining and understanding the major factors affecting user adoption and usage of ST. Economic, social, performance, cost, and demographic factors were all selected for review in this study. The perceived value of the trade-off between a technology's advantages and its acquisition, use, and costs is known as the economic factor (Choudrie et al., 2018); this factor can predict consumers' willingness to accept and use new technology. Cost is the price paid for a product or service, and people may be more willing to acquire and use smartphone technology if they believe it is cost-effective (Kuerbis et al., 2017). Social factors also play an important role in the adoption of smartphone technologies amongst OAs.
OAs trust those close to them when it comes to technological choices. An individual's belief that key people should be entrusted when making judgments about the use of new technology is characterised as social influence (Bakker & Kamann, 2007). Performance factors have also been found to be significant predictors of intention to use smartphone technology. Customers' perceptions of how easy a technology is to use are measured by effort expectancy, which plays a significant role in shaping people's intention to use mobile technology (Kwateng et al., 2018). Research has shown that performance expectancy has a substantial impact on customers' readiness to embrace and employ technology. Demographics are an important set of elements to consider when trying to understand and respond to the needs of customers, and the use of mobile apps varies by demographic group (Linnhoff & Smith, 2017). The confidence factor indicates that when older persons lack confidence in their ability to learn anything new, they may avoid using technology (Kuerbis et al., 2017); many OAs are wary of technology and cell phones in general (Vaportzis et al., 2018). Regardless of whether a person is a subscriber or not, mobile network coverage refers to the number of people within range of a mobile cellular signal (Paul et al., 2011). Telecom Namibia (TN) and Mobile Telecommunications (MTC) are Namibia's two major telecommunications providers, and they have almost comparable rates. Wi-Fi is available just about anywhere, especially in cities and businesses, making browsing the net simple.
Older Adults' Perceptions of Newer Technology
Perception is a psychological process created by the five senses (Pallasmaa, 2012). Understanding OAs' perceptions of technology is critical to introducing technology to this group and maximising its effectiveness (Vaportzis et al., 2017). A customer's decision to purchase a product is heavily affected by how he or she perceives it (Mustapha et al., 2021). TV advertisements are ideally suited for smartphones, since they can make the devices even more attractive and appealing, which may in turn create positive perceptions in viewers (Sethi, 2017).
Conceptual Framework
Antunes et al. (2021) define a conceptual framework as an interconnected set of concepts that provide a unified understanding of an issue or phenomenon. The Senior Technology Acceptance Model (STAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) provided the theoretical framework for this study.
METHODOLOGY

Introduction
This section focuses on the research design, philosophies, and methodology used. It further covers the population, sampling techniques, and the data collection procedure, and explains how the data were tested for reliability and validity.
Research Design
In this study, exploratory research was conducted to improve the understanding of how OAs use their smartphones and of their adoption of the technology. To this end, a cross-sectional quantitative study was carried out.
Research Philosophy
The positivism paradigm formed the basis of this study. The main purpose of the researcher was to reach objective truth and facts.
Research Approach
Descriptive research was conducted using a quantitative methodological approach to obtain information about the existing phenomena. Creswell and Creswell (2003) define quantitative research as an approach to testing objective theories. The study employed deductive reasoning to explain the links between variables in the quantitative data.
Population
There are 13,720 OAs in the Khomas region (Namibia Statistics Agency, 2017). The population for this study comprises OAs in Windhoek.
Sample Size and Method
The participants were selected using a non-probability convenience sampling method. Yamane's formula was used to calculate the sample size (Israel, 1992); it provides a simplified calculation that yields a sample size with known confidence and risk levels (Saunders et al., 2012). A structured questionnaire was developed to collect primary data from the participants, who were reached through door-to-door visits to old age homes identified via a Google search and telephone directories.
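As an illustrative aside (not part of the study's instrument), Yamane's formula, n = N / (1 + N e^2), applied to the 13,720 OAs reported above with the 10% error stated in the Figure 1 legend, reproduces the 99 questionnaires administered later in this paper:

def yamane_sample_size(population: int, error: float) -> int:
    """Yamane's simplified sample-size formula: n = N / (1 + N * e^2)."""
    return round(population / (1 + population * error ** 2))

print(yamane_sample_size(13_720, 0.10))  # -> 99, matching the questionnaires administered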
Data Analysis
The data were analysed using the Statistical Package for the Social Sciences (SPSS) software. Frequencies and percentages were used to analyse the demographic characteristics of respondents, while descriptive statistics such as the mean and standard deviation were used to address the study's objectives. Correlation analyses were used to test for an association between ST features and perceptions towards the adoption of STs by older adults in Windhoek. Further, graphic presentations such as tables, bar graphs, and pie charts were used to aid the understanding of the results.
Trustworthiness, Credibility, and Confirmability
The data collection instrument questions were prepared, and a pilot study was conducted to test the design of the survey questions and the inclusion of the measurement questions.
Validity
The questionnaire was constructed to address the specific and relevant aspects of the concepts under study, and representative questions from each section of the data collection instrument were evaluated against the desired outcome.
Reliability
Cronbach's alpha was calculated to ensure reliability, and only reliability coefficients of 0.7 and above were accepted.
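For concreteness, a minimal sketch of the Cronbach's alpha computation behind this threshold check is shown below; the score matrix is a hypothetical stand-in for the study's Likert-scale responses:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# A scale would be retained only if cronbach_alpha(scale) >= 0.7, as stated above.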
Ethical Considerations
The researcher remained honest and respectful to all the participants. Participants took part voluntarily in the study and could withdraw at any point. The identity of participants was kept confidential.
Introduction
This section presents the findings, interpretation, and discussion based on the data collected from responses to the questionnaires.
Response Rate
The study achieved a 97% response rate to the administered questionnaires, far above the 35% to 50% rate considered sufficient for corporate and academic studies. A total of 99 questionnaires were administered; however, only 96 of them were fully completed, as seen in Table 1, because some respondents were unable to answer every question.
Reliability
The internal consistency reliability and indicator reliability were evaluated through the application of Cronbach's alpha. The results are depicted in Table 2.
Gender Distribution
Figure 2 shows that 56% of the respondents were female, while 44% were male.
Age Categories
The study aimed to find out whether age influences the adoption of smartphone technologies. Figure 3 shows that, in terms of age, 23 respondents (23.23%) were between the ages of 60 and 70, while 76 (76.76%) were above 71.
Marital Status
In the study, the respondents' marital status was requested. The results depicted in Figure 4 show that 48% of respondents are married, 34% are single, and 18% are divorced.
Level of Education
Figure 5 shows the level of education of the respondents: 46% of respondents have no formal education, 26% have grade 12, 14% have attained diploma level, 11% have bachelor's degrees, and only 3% have attained master's degree level. This shows that the majority (54%) of the OA respondents in Windhoek have adequate education to understand the importance and operation of a smartphone.
Ownership of Smartphones
Figure 7 shows that the majority (92%) of respondents have smartphones; the remaining 8% do not own one. This indicates that the older generation is moving with technological changes. Table 3 shows that the respondents have owned smartphones for a minimum of one year and a maximum of four years, giving a mean ownership of 3.0909 years with a standard deviation of 1.10740.
Frequency of Smartphone Use
Figure 8 shows that 82% of the respondents use their smartphones daily, 11% do not use their smartphones at all, while 6% use their smartphones only once in a while.
Comfortability in the Use of Basic Functions on the Smartphone
Figure 9 shows that the majority (58%) of the respondents are comfortable using the features on the smartphone, 24% are not comfortable, while the remaining 17% are not sure.
Features of the Smartphone
The respondents were asked to select the features of the smartphone they normally use. Table 4 shows that the majority of respondents (96%) used their smartphones for basic phone functions, while 35% used the browser, 24% accessed e-mail, and 4% played games. Even though email is one of the less advanced features of smartphones, only 24% of OAs emailed their friends and family. At 48%, another popular phone function is photo sharing. Features like the calendar, internet, Google, music, e-mail, Facebook, games, clock, and GPS are not so popular amongst the older generation; only 35% use the internet and 35% use the calendar. Snapchat and Twitter turned out to be unknown to these older adults.
Challenges Faced Using the Smartphone
Figure 10 shows that lack of instruction guidance takes the pole position at 47%, followed by the screen size being too small at 38% and data being costly at 34%. The figure further depicts that respondents face the challenges of complex technology at 29%, devices being too fragile at 27%, and devices being slippery at 26%; 19% find the device difficult to operate, whereas 18% of respondents have no challenges. In addition, 41% indicated that the font size was too small but, because there is no instruction book, they do not know how to enlarge it.
Benefits Found in the Use of Smartphones
Figure 11 shows that 59% of respondents benefit from quick access to information by using a smartphone, 49% from easy-to-type text, 48% from the elimination of loneliness, 45% from the ease of navigation, 41% from access to media, 30% from the availability of social networks, 25% from multiple apps, 18% from bigger storage space, and 18% from easy communication.
Perceptions Toward Adoption of Smartphone Technologies
In Table 5, the most striking feature was the flexibility of the device: the majority of the respondents agreed with the statement that a smartphone is a flexible device. The statement "Smartphone features meet my expectations" collected the second-highest ratings from the respondents. The table further indicates that the statement "I find the device light and convenient" also received favourable ratings, followed by "I enjoy using a smartphone because of its applications." Respondents did not agree with some statements, such as "Smartphones can improve communications."
Source of Information Regarding Smartphone Use
Respondents were asked to identify the source of information that prompted their smartphone purchases; the results are depicted in Figure 12. The majority (87%) of purchases were made due to the social influence of friends and family, followed by 38% from colleagues, 32% from salespeople, 18% from own research, 16% from TV/media, and 6% from online networks. These results indicate that social influence has a significant impact on smartphone purchases.
Considerations Taken Into Account When Buying a Smartphone
Respondents were asked to choose from the list of elements indicated in the survey instrument. Figure 13 shows that the top four elements for consideration were the price, ease of use, size of screen/device, and brand: the price of the device was considered significant by 84% of respondents, 71% considered ease of use, 63% considered the size of the device/screen, and 36% the brand. There were some concerns regarding the camera, design, storage space, applications, multiple apps, operating speed, security/privacy, and the keyboard.
Factors That May Possibly Encourage Future Use of Smartphones
Factors that might encourage elderly Windhoek residents to use ST in the future were found to be network connectivity and a reduced cost (price) of smartphone devices. Figure 14 indicates that 70% of respondents cited connectivity as an important factor and 59% the reduced cost of smartphones. The remaining factors did not come out strongly: social influence was rated by 39% of respondents, perceptions by 26%, and functionality by 44%, while 13% of respondents cited online networks, 33% said they enjoy smartphones, and 32% felt they were safe. Only 5% of respondents indicated that there were no encouraging factors and that they would never adopt smartphone technology.
Reasons for Not Using a Smartphone
Figure 15 shows that the most common reason given for not using the device was that the question did not apply, since the respondents did not use the technology at all. Some indicated that using the technology takes too much effort, others mentioned that the device does not fit their lifestyle, some struggled with the design, whilst others felt they were too old to adopt the technology. Statements that were not selected are not shown in the figure.
Factors That Can Improve the Adoption of Smartphone Technologies
Figure 16 shows that, in order to improve the adoption of smartphone technology, 45% of respondents would like to see more age-friendly smartphone devices, 20% indicated that they would like smartphone literacy courses, 13% would like local languages to be introduced on their smartphones, and 25% thought that the cost of acquiring a smartphone should be reduced.
Hypothesis Testing
Pearson's correlation coefficients were calculated to analyse the connection between older persons' perceptions of smartphone adoption and smartphone features. Based on the results of the analysis, and as seen in Figure 17, the H1 hypothesis was confirmed by a coefficient of 0.748 at a significance level of less than 0.01. The study therefore concludes that ST features are related to the perceptions of Windhoek OAs with regard to the adoption of smartphone technology, and rejects the H0 hypothesis that there is no relationship between ST features and perceptions towards the adoption of STs by OAs in Windhoek.
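A hedged sketch of this test follows; the composite scores below are synthetic stand-ins, since the study's raw survey data are not reproduced here:

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)                     # synthetic illustration only
features = rng.normal(3.5, 0.8, size=96)           # per-respondent feature scores
perceptions = 0.7 * features + rng.normal(0.0, 0.5, size=96)

r, p = pearsonr(features, perceptions)
print(f"r = {r:.3f}, p = {p:.4f}")
# The study reports r = 0.748 with p < 0.01, so H0 (no relationship) is rejected.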
A relationship diagram was also used in this study to establish how the variables are linked to one another. The black circles shown in Figure 17 represent statements of perceptions, whereas the blue circles represent the variables (features of the smartphone). The larger the circle, the stronger its influence on the variables. Links between the circles represent the strength of the influence between nodes and variables: thicker links represent stronger connections and influence, while thinner links represent weaker ones. Respondents who indicated that they find the device light and convenient appear to enjoy using the device to access the internet.
Introduction
This section concludes the research study, following the presentation of the objectives, the literature review, the research methodology, and the findings, interpretations, and discussions.
Summary of Key Findings
Overall, the study concluded that OAs adopt smartphones because they have the necessary expertise, time, and money. They also believe smartphones are easy to use, provide benefits such as entertainment, and fit with their lifestyles.
Objective 1: Analyse OAs' Perceptions Towards the Adoption of STs in Windhoek
The study determined that the features of smartphones play an important part in catching the attention of older adults. The study reveals that smartphones were found to be flexible for OAs and that OAs perceive smartphones as light and convenient devices. OAs enjoy using their phones because of the applications on the device. OAs were found to be less interested in the durability of the device and did not agree with the statement that smartphones can improve communication.
Objective 2: Establish the Features of STs Used by Older Adults in Windhoek
The study reveals that the majority of participants were comfortable using their smartphones. However, it was established that OAs only utilise the primary features and shy away from fully utilising the technology's features. There were concerns regarding the inclusion of features not utilised by the elderly, and it was suggested that such features be replaced with more appealing ones, such as emergency buttons, larger screen sizes, hearing aid compatibility, voice commands, and face recognition. The study established that the use of an international language was not a barrier to usage. Findings further indicated that devices can be tailor-made and that applications can be paid for as the user needs them or acquires new skills.
Objective 3: Assess the Possible Factors That Influence the Adoption of STs Amongst OAs in Windhoek
The study revealed a rise in the ownership of smartphones among OAs. It is noted that although there are benefits to an individual's ownership of a smartphone, there are also disadvantages (Busch et al., 2021). Findings further revealed that older people enjoy benefits such as quick access to information on their device, easy-to-use texting, easy-to-navigate screens, and much more. However, the findings showed concerns regarding the price of the device. The study noted that the four factors mentioned by most respondents as influencing the adoption of smartphones amongst the older generation were the price, brand, appearance, and features. Price surfaced as the most critical criterion influencing older individuals' actual decision to acquire a smartphone, and the study further concludes that the cost of a smartphone has a considerable impact on elderly people's desire to use the technology. The brand appears to be the second influencing factor: the many Chinese brands introduced to the market raised concerns in the elderly market. Technology is often far too complicated, and it has been proposed that simpler technology might be ideal for the elderly; Kuerbis et al. (2017) indicated that older people welcome and utilise technology. Ease of use also stood out as one of the influencing factors, as the elderly appear to avoid using the more complex functions.
Conclusion
Our findings revealed that more OAs are adopting smartphones rather than ordinary cell phones. However, older people show no interest in fully embracing them and still engage with their phones at quite a basic level, mainly for conversing. Although the majority (92%) of respondents use smartphones, many are dissatisfied with them. It is important to consider the factors that contribute to not enjoying the use of smart mobile devices in order to minimize the likelihood of future abandonment of mobile technology. This study reveals that older people do not want to put in much effort to keep up with new technologies. When it comes to smartphone applications, usability is critical (de Oliveira et al., 2021), and studies have indicated that usability concerns about smartphone applications are broadly shared among the elderly. When technology is viewed as simple to use or specifically created for them, OAs welcome and accept it (Ameen & Willis, 2015); as a result, many elderly people prefer the most basic technological versions over the most advanced. The literature has identified three sources of influence for technology adoption among OAs: Tsertsidis et al. (2019) cite family, friends, and institutions. This study examined these sources of influence and revealed that, given societal interest, OAs may feel pressured to use a smartphone: 87% of respondents indicated that their source of influence was friends and family, meaning that social influence has a strong effect on smartphone use. The study also found that loneliness is a strong predictor of smartphone purchases among older adults. OAs seek social relationships to gain social status and create effective connections, regardless of their personality types (Jeong et al., 2016), and loneliness is more common among elderly persons than in other age demographics (Mahapatra, 2019).
The study further concluded that a lack of experience and guidance is the main obstacle to the adoption of STs among elderly people. OAs claimed to have purchased services (such as Netflix) that they never used because they could not figure out how to use them. The study also concludes that the main impediment to OAs using mobile technology is the same as for the general population: cost. The cost of a smartphone device significantly impacts older people's willingness to use the technology (Kuerbis et al., 2017). Those on government pensions may not be able to afford even basic devices, let alone the most advanced smartphone models.
Recommendations for the Industry
What Can Be Done to Improve the Adoption of Smartphones Among Elderly People?
The study recommends that stakeholders such as application developers and smartphone manufacturers use the findings to improve smartphone applications that help elderly individuals learn new things. To improve the quality of life, health, security, well-being, and independence of OAs, the technological goal must be clearly stated and made known to them. Lastly, smartphone manufacturers should consider lowering the cost of smartphone technologies to make them available to a greater spectrum of older adults from diverse socioeconomic classes; this would allow a larger proportion of the aging population to reap these potential benefits.
What Could Be Done to Minimize Elderly People's Lack of Knowledge Regarding Smartphone Technologies?
The study reveals some of the obstacles that have kept some of the elderly from using smartphones. These can be summed up as a lack of expertise, a lack of awareness of the benefits of smartphone use, the complexity of use, and fear of the difficulty that may be encountered while learning to use the device. The study recommends that user manuals be written in layman's terms, with easy step-by-step instructions, so that an elderly person can refer to them as often as needed. In the longer term, as more elders use smartphones, mobile media literacy programs should be considered; these would also help raise seniors' skill levels, allowing them to be employed in a more diverse variety of jobs. It is also recommended that local trainers and experts be made available to provide innovative technology training and education, assisting OAs in acquiring knowledge that will inspire them to use innovative technologies.
Recommendations for Further Research
For academia, little significant research on smartphone adoption in general could be found when this Windhoek-focused study was conducted. By referring to the results of this study, a new study could be expanded to the entirety of Namibia to provide a further dimension. Furthermore, the study's future trajectory should incorporate quantitative features as well as data collection methods suitable for a qualitative study, such as interviews, observations, or focus groups; this would result in a better understanding of smartphone adoption among older adults. In terms of advancing technology, more research could be done focusing on other devices such as tablets and iPads. Future studies could also look into smartphone use for specific goals, such as healthcare.
a) What are the perceptions of OAs in Windhoek towards the adoption of STs? b) What are the features of STs used by OAs in Windhoek? c) What are the possible factors that influence the adoption of STs amongst OAs in Windhoek?
Figure 1. Conceptual Framework. (Sample-size legend: n = sample size, N = the size of the population, e = the probability error of 10%.)
Figure 2. Gender Distribution
Figure 7. Smartphone Ownership
Figure 12. Source of Information Regarding the Use of Smartphones
Figure 13. Factors That May Influence the Adoption of STs Among OAs in Windhoek
Figure 15. Reasons for Not Opting for a Smartphone
Figure 17. Relationship Map
Table 4. Features of STs Used by OAs in Windhoek (Source: Field survey, 2022)
Table 5. Perceptions of Older Adults Toward Smartphone Adoption (Source: Field survey, 2022) | 2024-03-03T19:27:34.269Z | 2024-02-26T00:00:00.000 | {
"year": 2024,
"sha1": "3d11651657c3d7bf18e71d025e86f17a5cddda82",
"oa_license": "CCBY",
"oa_url": "https://www.igi-global.com/ViewTitle.aspx?TitleId=339567&isxn=9798369325308",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "12044ef05b0deb6bc1dca3573150485acb92289d",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
267136342 | pes2o/s2orc | v3-fos-license | Playing Smart with Numbers: Predicting Student Graduation Using the Magic of Naive Bayes
ABSTRACT
In the realm of education, higher education institutions are challenged to orchestrate quality education that births students who are not only competent but also creative and brimming with competitiveness. The quality of Indonesian higher education institutions is reflected in the accreditation bestowed by the National Accreditation Agency for Higher Education, commonly known as BAN-PT. Expanding the horizon, the graduation success rate stands as a pivotal gauge, shaping the evaluation that determines the quest for recognition [1], following the regulations etched in the Appendix of National Accreditation Agency for Higher Education Regulation No. 23 of 2022.
The evaluation of graduation rates conducted so far has mostly relied on graduation registration data, often overlooking students who might be facing academic or administrative challenges. On the other hand, the university's response to students who do not graduate on time can be carried out through methods of persuasion, guidance, and mentoring, encouraging students to promptly complete their studies.
The challenge faced by universities is the absence of an integrated system for predicting student graduation. As a consequence, the Academic and Student Administration Bureau cannot ensure that an entire cohort graduates on time, leaving students without a solution to the issue of delayed graduation.
Drawing inspiration from the aforementioned issues, there arises a need for a student graduation prediction application within the university environment [2]. This application encompasses comprehensive data about students grouped within a single cohort, spanning various study programs. With the advent of this application, we embark on a new chapter in which student quality and university accreditation are not only monitored but also continually enhanced on the journey towards excellence.
LITERATURE REVIEW

2.1. Literature Review
Regarding the matter of predicting student graduation, numerous studies have been conducted across various universities, employing a variety of methodologies. Previous research was undertaken by Armansyah and Rakhmat Kurniawan Ramli in a study titled "A Naive Bayes Approach to Predicting Timely Graduation of Students" [3]. They grappled with the challenge of declining graduation rates stemming from the disparity between incoming students and those who graduate, which harms study programs in multiple respects. They adopted an experimental approach and employed the Naive Bayes method. The outcome of this research was a prediction of student graduation rates with exceptional performance, reaching an accuracy of 100%.
Moving forward, let's delve into the study by Lydia Yohana Lumban Gaol, M. Safii, and Dedi Suhendro titled "Anticipating Successful Student Graduations in Stikom Tunas Bangsa's Information Systems Program Through the Implementation of the C4.5 Algorithm" [4]. The puzzle they tackle stems from the crucial role that graduation plays as a vital yardstick in evaluating the accreditation of higher education institutions: when an increasing number of students graduate within the designated time frame, the institution's accreditation assessment climbs higher. Their chosen methodology revolves around the deployment of the C4.5 classification algorithm, deftly sifting through both numeric and categorical attributes within the dataset. The culmination of their research opens up a treasure trove of predictive data on student graduations, underlining that the most influential factor determining student success is the GPA attribute [5].
Stepping into the next research expedition, we enter the realm of exploration crafted by Ray Mondow Sagala with the title "Unveiling Student Graduation Forecasts Through the K-means Algorithm in the World of Data Mining" [6]. The heart of the issue arises from the significant urgency surrounding student graduation, as the common thread interweaving various courses forms an inevitable connection. K-means, used as a tool, unlocks the door to interpreting research data as a series of numbers. The final hue of this research journey is a canvas of predictive data, showing that with k = 3 over a total of 118 processed data points, 13 students were found not to conclude their journey, 36 students landed on the path of satisfactory grades, and 69 students anchored within the realm of stellar scores.
In the ensuing research endeavor, the ladder of knowledge is scaled by Nursetia Wati with the title "Envisioning Student Graduations by Applying the K-Nearest Neighbor Approach Based on Particle Swarm Optimization" [7]. The foundation of this challenge is obstructed graduation rates, especially in the realm of the Faculty of Engineering. As this situation arises, like the rustling of leaves driven by the wind, every study program tirelessly delves into the journey to enhance graduation rates, aiming to reach the pinnacle of desired quality. Employing the method of classifying data based on the distance from new to existing data, curiosity answers the call, and the experimental method named K-Nearest Neighbor (KNN) is chosen as the complement. The result of this research, as it turns out, takes the form of notes recording that the conducted testing yielded the best value when the K-Nearest Neighbor algorithm was applied.
Theoretical Framework
Unveiling the curtain of knowledge in the realm of data, we come across the term "data mining" as a tool to excavate intellectual treasures within the database vault. In this process, mathematics, statistical techniques, machine learning, and even artificial intelligence play pivotal roles. They collaborate, comb through data, and drive the identification and extraction of valuable information and weighted knowledge from an array of expansive databases [8]. Maulana and Fajrin also share an intriguing perspective that data mining in the realm of research is fundamentally not a novel topic; it emerges as an added-value agent capable of enhancing the effectiveness of various previously employed techniques, thus addressing an array of challenges we commonly encounter [9].
Like a ready-to-eat dish, an application is a program menu waiting to be served, ready to execute the various commands of its users. In alignment with the given commands, it artfully crafts detailed outcomes, just as desired when preparing a meal. However, an application's role does not stop at being a digital cook; it also becomes a catalyst for solving puzzles by applying one of the many available data processing recipes. Aligned with specific hopes or goals, it transforms into a data-grinding machine that functions harmoniously according to its capabilities. Delving deeper, we encounter another perspective that defines an application as a pre-assembled machine component ready for operation by its enthusiasts [10].
Once upon a time, Al Khawarizmi, a scholar from Persia, breathed life into algorithms for the first time. Like a seed sown, algorithms were initially used to formulate solutions for arithmetic problems. Over time, however, algorithms underwent transformations, assuming the role of cracking various mathematical puzzles. Delving deeper, algorithms also weave an inseparable thread with mathematics, anchoring themselves at the heart of the world of knowledge [11]. Through another lens, T.S. Alasi mentioned that algorithms are a sequence of logical steps, speaking the language of order; they lead us on a journey through the forest of problems along a well-organized and systematic route [12]. Prediction is a mystical endeavor that leads us to peek through the door of the future, estimating the array of possibilities that might unfold there. It is like unearthing a magical chest of past data and sculpting forecasts guided by the stars of indicators. Various challenges require the enchantment of prediction, including peeling back layers in the tale of prices, unveiling the production veil, or dissecting the secrets of graduation rates, among many others [13].
Classification is like assembling puzzle pieces of data, carefully putting them together to predict the characteristics of new data. Just like a detective grouping evidence based on the clues left behind, classification uses existing data as a foundation to guess the nature of unfamiliar data. In the realm of classification, there are two main ingredients: test data, like gems whose light is being examined, and training data, the stepping stones that guide the learning process [14].
Imagine Jupyter Notebook as an enchanting laboratory holding three magical languages: Julia, the clever wizard; Python, the versatile magician; and R, the alchemist of numbers. In its ritual, Jupyter Notebook combines the powers of these three languages into a mesmerizing interactive spectacle. Like a sorcerer turning objects into gold, this web application transforms thoughts into beautiful computational documents, undisturbed, uncomplicated, and solely focused on the magic of the document itself [15].
Python is the magical language that traverses various platforms like astral beings exploring the universe. Interactive like conversing with genies, it swiftly and gracefully responds to every command call. Its magical prowess is undeniable, comprehending human language with elegance and charm. The enchanting codes in this language are transformed into secret codes known as byte code before the execution spell is cast. Like embarking on a journey to learn the art of magic, understanding classification and Python is the first step in mastering this mysterious world. Throughout the adventure, you will comprehend how to piece together clues from the past to unlock the gates of the future [16].
RESEARCH METHODOLOGY

3.1. Stages of Research
This research journey begins by crafting challenging questions and delving into the realm of hidden literature. Like a detective gathering clues from various angles, we gather data through observation and documentation methods, laying the foundation to unravel the existing mysteries [17].
Having completed the initial phase, we step into the next chapter: gathering the "trails" of data from the required students. Like assembling puzzle pieces, 395 records from the 2018 batch of students who have completed their study journey are collected across 16 attributes. The subsequent action involves crafting and cleansing this data, akin to arranging bricks before constructing a house. Of the 395 data points, 302 of them, complete with 14 relevant attributes, will serve as the main ingredients when the Naive Bayes algorithm comes into play. This is the spotlight moment, where the Naive Bayes algorithm takes center stage as the hero of this narrative. Utilizing the magic of the Python programming language, the algorithm is activated to meticulously unravel the data and provide potential hidden answers, not dissimilar to the power of a wizard concocting magical potions [18].
As the experiments unfold, the constructed model is tested as if facing a magical trial. And behind the curtain's veil, evaluation and validation take on the leading roles. Like a sorcerer assessing the success of an incantation, we carefully evaluate the testing results and ensure their authenticity.
Thus, from scientific steps to magical performances, this research is a journey to unveil mysteries, gather truths, and carve new pathways in the realm of knowledge.
Data Collection
As a strategy to gather valuable information resources, the researcher opted for the following steps:

1. Observation Method

Embark on a journey of direct observation, immersing yourself in every hidden detail of the field. The insights gained from this observation are not merely visual snapshots, but rather the core components that will paint the path forward for a dynamic system under the company's spirited umbrella [19]. The author also assumes the role of an observer within the observed university campus.
2. Interview Method
Engaging in direct interviews through a carefully crafted set of questions, we venture into the realm of inquiry, guided towards the Bureau of Academic Administration and Student Affairs. Like an explorer unearthing treasures from conversations, we discern the pieces of information needed.
3. Library Method
Delve into the realm of knowledge by excavating intellectual treasures through the pages of books, scholarly journals, and the digital footprints in the vast sea of the internet. Like a mind archaeologist, we unearth valuable artifacts that fortify the foundations and outcomes of this research.
Data Analysis
The student landscape that emerges in this research paints a portrait of the students from the 2018 cohort at Universitas. They are captured through intriguing variables such as gender, student status, marital journey, age, records of Semester Grade Point Averages (IPS) spanning from the first semester to the eighth, and the Cumulative Grade Point Average (IPK) that provides a glimpse into the extended journey of academic achievement. Amidst this center stage of attention, the target class dances with joy, depicting the graduation destination that awaits at the end of the journey, whether it lands on time or encounters a minor delay [20].
Pre-processing
Based on the collected research findings, some intriguing revelations come to light. A total of 395 records of students who have completed their academic journey were obtained. When categorized by study duration, three main groups emerge: those who graduated on time within 7 semesters (3.5 years), those who graduated within 8 semesters (4 years), and a group of students who surpassed these timeframes, concluding their studies in more than 8 semesters [22].
However, just as when sifting for gems in the sand, not all data and information gathered can be readily utilized. An initial, creative process is necessary to refine this data, akin to tending a garden to yield better results. And, not to be missed: under the spotlight's beam, there are 16 data attributes that have not yet undergone this process; behold, the list of treasures to be further unearthed [23]. The preprocessing techniques employed by the author encompass:

1. Polishing the Data Brilliance, by sweeping away all vacant and incomplete data. For instance, the records of inactive or departed students were erased due to their incomplete payload of course grades. Consequently, a mere 302 usable records remain from the initial 395, implying a data cleansing of 23.54%. This process acts as a cleansing beam in the data preprocessing phase, ensuring no gaps are left behind.

2. Squeezing the Data Spectrum, which aims to grasp the relevant traces within records along with the suitable count of attributes engaged in the mining process. This resembles not inviting attributes like SIN and name to the gathering, as they are deemed somewhat unrelated or less impactful. In the swift mining dance, only a handful of attributes are embraced: gender, age, student status, marital status, Semester Grade Index (IPS) from semesters 1 to 8, Cumulative Grade Index (CGPA), and graduation status.

3. Interweaving Data: shifting from categorical flair to numeric charm. The "gender" attribute, once entwined with the words "male" and "female," is transformed into 0 for males and 1 for females. The "student status" attribute, previously branching into "working" and "student," is swapped to 0 for those working and 1 for students. Next, the "marital status" attribute, previously linked to "married" and "unmarried," is set to 0 for those already married and 1 for those still waiting. Lastly, the "graduation" attribute, formerly narrating "on time" and "late," is reshaped to 0 for the late ones and 1 for the punctual achievers (a pandas sketch of this recoding follows below).
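The sketch below mirrors the recoding described above; the file name and column labels are hypothetical stand-ins for the study's spreadsheet:

import pandas as pd

df = pd.read_excel("graduation_2018.xlsx")   # hypothetical file name

df["gender"] = df["gender"].map({"male": 0, "female": 1})
df["student_status"] = df["student_status"].map({"working": 0, "student": 1})
df["marital_status"] = df["marital_status"].map({"married": 0, "unmarried": 1})
df["graduation"] = df["graduation"].map({"on time": 1, "late": 0})

df = df.dropna()   # discard incomplete records (395 -> 302 rows in this study)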
After the preprocessing waltz reaches its final bow, the next act unfolds: a dance into the mining process upon the cluster of 302 student records. They all play their parts across 14 attributes that have passed through scale harmonization and evaded the potential of missing values. Here are the intricate movements engraved on the following page: the author harnesses the power of the Jupyter software to embark on a journey of data experimentation concerning student graduation, employing the Naive Bayes methodology. Much like a digital alchemy expert, they skillfully blend information in pursuit of shimmering discoveries [24], [25].
1. Summoning the Required Library

In the course of this research involving the utilization of the Naive Bayes algorithm, various aspects have been examined and analyzed comprehensively. The evaluation and validation of these findings indicate that the Naive Bayes algorithm exhibits an impressive accuracy rate, reaching 85% over a total of 302 student data records. Specifically, the precision for late graduation reaches 0.42, while the precision for on-time graduation stands at 0.95. Similarly, the recall for late graduation is 0.65 and the recall for on-time graduation is 0.88. The F1-score for late graduation achieves 0.51, and the F1-score for on-time graduation is 0.91.
In its application, the Naive Bayes algorithm has demonstrated the ability to predict student graduation statuses accurately, whether they are on time or delayed.
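Continuing from the pandas sketch in the pre-processing subsection, a hedged outline of the training and scoring step with scikit-learn is given here; the split ratio and random seed are illustrative choices, not values reported by the study:

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, classification_report

X = df.drop(columns=["graduation"])   # 13 predictors: gender, age, statuses, IPS 1-8, CGPA
y = df["graduation"]                  # 0 = late, 1 = on time
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = GaussianNB().fit(X_train, y_train)
y_pred = model.predict(X_test)

print(accuracy_score(y_test, y_pred))          # the paper reports roughly 0.85
print(classification_report(y_test, y_pred))   # per-class precision, recall, F1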
RESULTS AND DISCUSSION

4.1. Data Collection

Like tracing the footsteps of an adventure, the process of data collection brings its narrative to life in the field and from the expanse of the observed university's website. Much like painting piece by piece of a puzzle, data are meticulously gathered and poured into Microsoft Excel files in the xlsx format. As if weaving the tales of the 2018 cohort that has traversed its academic journey, these data arrive as 395 entities woven with 16 unique attributes. As if presenting a beautiful painting, examples of the collected data are shown in the accompanying figures [21].
Figure 6. Command to Invoke a Library

2. Reading student graduation data from an Excel file

Figure 7. Command to Read Excel Data

3. The displayed results of the data are shown in Figures 8 and 9.

Figure 22. Printing the Accuracy Score, with the classification report unveiling the grand spectacle in which 88% of students successfully complete their studies on time. | 2024-01-24T17:56:36.909Z | 2023-11-23T00:00:00.000 | {
"year": 2023,
"sha1": "e240ff761f1146b2e5956f1eade446a982fe1ce5",
"oa_license": "CCBY",
"oa_url": "https://journal.pandawan.id/italic/article/download/405/387",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "03684f7c1dfe739b9f63dc84d89df39eb2f0684e",
"s2fieldsofstudy": [
"Education",
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": []
} |
256591525 | pes2o/s2orc | v3-fos-license | Deep learning-based urban morphology for city-scale environmental modeling
Abstract Herein, we introduce a novel methodology to generate urban morphometric parameters that takes advantage of deep neural networks and inverse modeling. We take the example of Chicago, USA, where the Urban Canopy Parameters (UCPs) available from the National Urban Database and Access Portal Tool (NUDAPT) are used as input to the Weather Research and Forecasting (WRF) model. Next, the WRF simulations are carried out with Local Climate Zones (LCZs) as part of the World Urban Data Analysis and Portal Tools (WUDAPT) approach. Lastly, a third novel simulation, Digital Synthetic City (DSC), was undertaken where urban morphometry was generated using deep neural networks and inverse modeling, following which UCPs are re-calculated for the LCZs. The three experiments (NUDAPT, WUDAPT, and DSC) were compared against Mesowest observation stations. The results suggest that the introduction of LCZs improves the overall model simulation of urban air temperature. The DSC simulations yielded equal to or better results than the WUDAPT simulation. Furthermore, the change in the UCPs led to a notable difference in the simulated temperature gradients and wind speed within the urban region and the local convergence/divergence zones. These results provide the first successful implementation of the digital urban visualization dataset within an NWP system. This development now can lead the way for a more scalable and widespread ability to perform more accurate urban meteorological modeling and forecasting, especially in developing cities. Additionally, city planners will be able to generate synthetic cities and study their actual impact on the environment.
Introduction
Urban centers are economic hubs of the world that contribute about 60% of the global gross domestic product while accommodating more than half of the world's population (1). As urbanization intensifies, cities experience extreme weather conditions such as compound flooding (2) and heatwaves (3). Additionally, urbanization has led to changes in weather conditions such as rainfall (4), the urban heat island (5), and air pollution (6). Different Urban Canopy Models (UCMs) have been developed to incorporate and study urban interaction with the environment. Examples include the single-layer (7) and multi-layer (8,9) models, the town energy balance (10), and the community land model urban parameterization (11).
UCMs utilize urban morphometric details and provide a more realistic urban representation, aiding the weather/climate model's performance in simulating urban environments (12). The single-layer UCM incorporates detailed physics for the radiation representation and turbulent transport, and assumes infinitely long streets for the urban geometry representation. Multi-layer UCMs are more sophisticated than the single-layer UCM in representing buildings and street layout: the energetics, internal wind, and thermal effects are represented to provide a more explicit linkage with the urban canopy boundary layer and coupling with the atmospheric surface and boundary layers (13,14). Urban models can be run in offline mode to conduct energy balance studies (15,16) or can be coupled with numerical weather prediction (NWP) or regional climate frameworks such as the Regional Atmospheric Modeling System (RAMS; (17,18)), the COSMO-CLM model (19), and the Weather Research and Forecasting model (WRF; (7)).
This study considers the WRF urban modeling framework. The WRF model is a mesoscale, non-hydrostatic, compressible model and one of the most widely used NWP models worldwide for operational and research purposes (20). The Noah land surface model used in the WRF model can be coupled with different urban parameterization schemes driven by approximately 30 input parameters representing the urban canopy (21). These Urban Canopy Parameters (UCPs) represent thermal and geometric properties (including the building internal temperature), which play a role in simulations of the urban boundary layer (22), precipitation (23), and the urban heat island (24). In an ideal scenario, the UCPs can be specified for each grid cell where such a dataset has been specifically generated and made available. If not, as in the default case, the UCPs are specified in the form of a look-up table in the standard WRF framework for three broad urban classes (low-intensity residential, high-intensity residential, and commercial), which notably underrepresents the complexity of urban morphology.
Due to the lack of detailed UCPs, the urban NWP modeling community has recently been shifting towards the use of Local Climate Zones (LCZs) and the World Urban Database and Access Portal Tools (WUDAPT) initiative, which divide the urban surface into 10 different classes (commonly termed LCZs) based on building height, building density, vegetation fraction, and material thermal properties (25,26). LCZs are being used in numerous studies focusing on temperature (27), rainfall (12), and other environmental variables (28).
However, developing UCPs for these various urban classes is challenging. This is because mapping urban areas at street level for urban morphology is expensive and requires coordination and approvals from different agencies. Even when such data are available, integrating them within urban models can be challenging. One example of a large community effort is that of the US EPA, wherein UCPs are available for 44 US cities, comprising the National Urban Database and Access Portal Tool (NUDAPT; (29)). There are similar datasets for select cities of China (30), including Guangzhou (31) and Beijing (32), and for European cities (as part of the project MapUCE, (33)). These datasets and the methods for deriving UCPs require intensive manual intervention and the development and processing of high-resolution datasets, typically building footprints and individual building heights, which are difficult to obtain.
With increasing urbanization in developing countries, there is a growing need to generate UCPs more widely. Towards that objective, we introduce an automatic method for generating UCPs using an urban visualization approach. The work builds on the foundational work of (34-36), which has been aligned with the WUDAPT (37,38) initiative. This approach utilizes deep learning combined with procedural modeling to infer various urban features despite only having limited information, and then automatically generates a 3D city model and its UCPs. The translation of such synthetic data into a weather modeling framework would potentially open an avenue for developing simulations for locales where such measurements are lacking, which is more the norm than the exception.
Accordingly, the study objectives described here are: (1) to introduce a novel automatic method to generate UCPs, (2) to demonstrate the development of a UCP dataset for Chicago, USA, and (3) to integrate these UCPs with the WRF model and evaluate the performance of the WRF model using these derived UCPs.
Digital synthetic city generation
We have developed a novel deep-learning and procedural-modeling based method for creating a city-scale 3D urban model, called in this paper the Digital Synthetic City (DSC), from which we can derive various urban morphology parameters. Our method uses satellite imagery and global-scale population and elevation data as input to an automatic process that produces a statistically similar, synthetic, city-scale 3D urban model as output. The result is the ability to almost instantly create a plausible synthetic large-scale 3D urban model (Fig. 1). The approach takes as input various geospatial products, summarized in Table 1, and consists of three main components: (1) building and parcel area estimation, (2) procedural model generation, and (3) an optional procedural model optimization (38). As shown in Fig. 2, we first utilize an image segmentation network (i.e., U-NET (42,43)) and then a novel upsampling and sharpening network based on an autoencoder framework (44). Further, we combine building segmentation with a building setback prediction network. An optional optimization step uses information about the number and height of a few percent of buildings in the target area to calibrate the generated city to the target location. The end result is the ability to segment and infer building footprints accurately despite the relatively low resolution of satellite imagery and the occlusions by nearby structures. The output of our method is a large spatial procedural city model consisting of 3D buildings distributed over the target area and registered in place with the road network, suitable for modeling urban areas worldwide for urban design, city planning, and simulation. The method is a leap-frog generational improvement over (34,37), developing a novel framework that considers globally available datasets and the upscaling method shown in Fig. 2. As a result, the similarity between the real and estimated morphological parameters is maintained. More information on the DSC method is provided in the online supplementary material text and Figures S1-S4.
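To make the segmentation stage concrete, the toy encoder-decoder below stands in for the first component of the pipeline; it is an illustrative assumption, not the authors' actual U-NET or autoencoder architecture:

import torch
import torch.nn as nn

class MiniSegNet(nn.Module):
    """Toy stand-in for the building-footprint segmentation network."""
    def __init__(self, in_channels=3, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # halve the spatial resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),  # restore resolution
            nn.Conv2d(32, n_classes, 1),           # per-pixel building/background logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

tile = torch.randn(1, 3, 256, 256)       # one RGB satellite tile (batch of 1)
mask = MiniSegNet()(tile).argmax(dim=1)  # predicted footprint mask, shape (1, 256, 256)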
Estimation of urban canopy parameters
The subset of UCPs under our current consideration, which are the typical main parameters of urban areas modeled by systems such as WRF, are:

1. Building height (Z_R): where A_i is the plan area of the ith building, h_i is the height of the ith building, and N is the total number of buildings within a given region.

2. Standard deviation of building height (σ_z).

3. Roof width (W_roof): calculated by assuming buildings are rectangles with the same area (A) and perimeter (P) as the building footprint.
4. Urban fraction (f_urb) = Σ_i A_i / A, where A_i is the area of the ith building footprint for a given LCZ and A is the total area of the given LCZ.

5. Building height percentage bins: for each LCZ, buildings are placed into bins based on their height with a granularity of 5 m (a computational sketch of parameters 1-5 follows below).

Table 2 shows the difference between the WUDAPT- and DSC-derived values of the UCPs. The height percentages per bin amongst the various LCZ classes are shown in Figures S6 and S7. The spatial plots of urban fraction, building height, and road width are shown in Figure S8.
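As a hedged illustration, the sketch below computes parameters 1-5 using the plan-area-weighted forms that are standard in the UCP literature; these weighting choices are assumptions rather than a verbatim restatement of the paper's definitions:

import numpy as np

def ucps_for_lcz(areas, heights, perimeters, zone_area):
    """UCPs for one LCZ from per-building plan areas A_i, heights h_i,
    and footprint perimeters P_i (area-weighted forms assumed)."""
    A, h, P = map(np.asarray, (areas, heights, perimeters))
    z_r = (A * h).sum() / A.sum()                     # mean building height Z_R
    sigma_z = np.sqrt((A * (h - z_r) ** 2).sum() / A.sum())
    # A rectangle with the footprint's area A_i and perimeter P_i has sides
    # (P_i +- sqrt(P_i^2 - 16 A_i)) / 4; take the shorter side as roof width.
    disc = np.clip(P ** 2 - 16 * A, 0.0, None)
    w_roof = ((P - np.sqrt(disc)) / 4).mean()
    f_urb = A.sum() / zone_area                       # urban fraction
    bins = np.arange(0.0, h.max() + 5.0, 5.0)         # 5 m height bins
    counts, _ = np.histogram(h, bins=bins)
    height_pct = 100.0 * counts / h.size              # height percentage per bin
    return z_r, sigma_z, w_roof, f_urb, bins, height_pct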
Modeling experiments and evaluation
One of the challenges that exists when creating such a high-resolution dataset is how to verify the output. It is important to highlight that the DSC is not an exact replication of the urban morphology. In fact, the strength of the DSC framework lies in its flexibility to create urban morphological parameters at variable grid resolution (spacing) in a "fast" manner (a matter of minutes). The DSC output needs to be evaluated for "fitness for purpose", not just for geometric reproducibility against Google Earth or similar datasets available in the public domain. Therefore, to assess the suitability of the DSC for urban modeling studies, we designed a modeling experiment for a real-world case focusing on the Chicago downtown region using the WRF model.
DSC-WRF urban modeling
The simulations were performed using the Weather Research and Forecasting (WRF) model, version 4.2.1 (45), with a typical model configuration. Fig. 3A shows the three nested domains centered over Chicago, USA, with spatial resolutions of 9, 3, and 1 km for the outermost, middle, and innermost domains, respectively. Further details are given in Table S1, and an overview of the methodology is shown in Figure S5. The urban heat island intensity is calculated by subtracting the rural from the urban 2 m air temperature.
The July 1-7, 2018 period represents the weather over Chicago, USA, after the hottest day (June 30) since 2012. On July 1, temperatures start to fall until July 3, as the cold front departs the region and a surface ridge shifts towards the east. From July 3-5, temperatures rise again due to the moist and warm air mass over the area. On July 5, the heat and humidity help support the initiation of isolated and scattered thunderstorms in the region. Finally, on July 6-7, the thunderstorms, the movement of the cold front, and the advection from the lake breeze towards the urban area allow temperatures to drop.
Results and discussions
The notable weather features for July 1-7, 2018, over Chicago, USA, show temperature variations from 290 to 305 K, which contain typical urban heat island feedbacks and land-lake breeze circulation. We therefore discuss the simulation results for three key variables impacted by urban structures: temperature, relative humidity, and wind (speed and direction).
Evaluation of WRF simulations
The performance of the WRF simulations is evaluated for day (sunrise to sunset; 05:00-20:00 LT), night (20:00-05:00 LT), and the whole period (All) consisting of day and night (see Table 3). During the daytime, DSC outperforms the WUDAPT and Control simulations for 2 m air temperature. However, 2 m specific humidity and 10 m wind speed are better simulated in the Control. The reduced 10 m wind speed in the case of DSC is due to the increased roughness length (taller buildings) relative to WUDAPT and Control. The correlation coefficient during the daytime is relatively better for the Control simulation. During the nighttime, 2 m air temperature, 2 m specific humidity, and 10 m wind speed are better simulated by DSC. Over the entire period, WUDAPT shows better performance for 2 m air temperature and 10 m wind speed, while 2 m specific humidity is better simulated by Control. Thus, the mean statistics suggest that WUDAPT and DSC perform better than Control, with a negligible difference between the WUDAPT and DSC simulations. The DSC values are closer to the WUDAPT simulations for the entire simulation period (for latent and sensible heat fluxes and friction velocity, see Fig. S10). This highlights that the UCP values generated from the DSC perform similarly to the literature-derived values, bolstering confidence in the automated methodology adopted in this study.
Here, it is important to highlight that the evaluation represents the effect of urban morphology provided as bulk values for LCZs. High-resolution gridded datasets such as (54,55) are expected to provide better urban morphology. There are other sources of uncertainty, such as the lack of representation of street trees and urban vegetation (56). Other UCPs (such as albedo, emissivity, and the thermal properties of the building materials) that affect the local climate zones are not considered, adding further uncertainty to the simulations. Nevertheless, the automated city-scale urban morphology generator framework adopted for urban weather modeling is novel and effective for regional studies.
Diurnal and urban heat island intensity
The time series of the variables (Fig. 4A,C,E) shows that all simulations follow the observations until July 3, 2018, when the 2 m air temperature and 10 m wind speed drop to less than 290 K and 1 m s−1, respectively. On July 5, 2018, the increase in 2 m specific humidity and the reduction in 2 m air temperature are not well captured by the simulations. The simulated diurnal profiles of the variables are shown in Fig. 4B,D,F. The Control simulation overestimates the afternoon 2 m air temperature by ≈1.5 K. The DSC and WUDAPT simulations show consistent behavior for 2 m air temperature and specific humidity and are closer to the observations. During the daytime, DSC shows a reduced 10 m wind speed while the WUDAPT and Control simulations are closer to the observations; the nighttime 10 m wind speed is better captured by DSC. The difference between rural and urban temperature is shown in the form of the urban heat island intensity (UHII; rural and urban stations shown in Fig. 3C) (see Fig. 5). The rural area warms from sunrise (05:00 LT) to the afternoon (14:00 LT), while the opposite occurs in the day-to-night transition, when the rural area cools faster than the urban areas. Thus, a higher (>2 K) urban heat island intensity is noted at nighttime.
The DSC simulations capture the afternoon urban cooling but underestimate the nighttime UHII by ≈1 K. The reduction in the UHII may be attributed to the reduced urban fraction in DSC compared to WUDAPT. Overall, changing the UCPs from WUDAPT to DSC has non-linear feedback, leading to changes in the simulated weather inside and outside the city. Changes in the temperature of each LCZ and spatial changes are shown in Figures S11 and S12, respectively. The kernel density estimates of 2 m specific humidity, 2 m air temperature, and 10 m wind speed are provided in Figure S9. Figure 6 shows the wind direction and speed for the Control simulations. The wind speed within the city is lower than in the surrounding areas at all times of the day due to the relatively high roughness length of the city. The WUDAPT simulation shows a negligible change in wind speed within the city compared with the Control run (Fig. 6B,F,J,N). The DSC simulation shows the lowest wind speed of all the simulations. This change in wind speed has a small but notable non-linear effect on the atmospheric circulation dynamics, such as the land-lake breeze circulation. The relatively smaller change in 2 m temperature than in 10 m wind speed can be attributed to the significant difference in building heights and roof widths, which affects the roughness lengths and thus modulates the model outputs (see Fig. S8; (8)). Similar results were observed by Wang et al. (57) and Loridan et al. (21) for offline simulations of the single-layer UCM. Additionally, the results from the Control simulations are in parity with (58).
Conclusions
UCPs are an important component of urban climate modeling. This study introduces a new methodology to calculate city-wide UCPs using an automatic, deep-learning-based synthetic data generation framework built from globally available products. The newly generated UCPs were used for environmental simulations over Chicago, USA, using the WRF model. A total of three simulations were conducted: a Control run (using the NUDAPT dataset), WUDAPT (incorporating LCZs), and DSC (using LCZs and the new UCPs). The results show that urban LCZs have a significant impact on the simulation of air temperature. Moreover, the automatically computed DSC parameter values yield simulation results as good as, and sometimes more accurate than, WUDAPT (which requires crowd-sourcing and benefits from hand-crafted dataset optimization). The changes in the UCPs also impacted the overall simulations by reducing the wind speed within the urban area (due to increased roughness length) and producing small changes in the temperature values (due to the urban fraction). Thus, the automation rendered by the DSC method opens the opportunity for a more scalable and widespread ability to perform accurate urban meteorological modeling and forecasting. As an overarching conclusion, the DSC renders a visualization of the urban canopy by producing urban structure/environment details that can be used to represent urban areas within UCMs. As future work, we see three avenues. Firstly, we would like to extend our DSC method to support all LCZ classes, potentially leading to increased accuracy. Secondly, we would like to improve the accuracy of parameter determination for mostly green areas within the city and in the peri-urban region. This may have a significant effect on temperature and humidity estimates. Lastly, we would like to tie our synthetic generation ability to urban planning policies so that what-if scenarios can be generated based on desired urban meteorological consequences. | 2023-02-05T16:08:57.463Z | 2023-02-03T00:00:00.000 | {
"year": 2023,
"sha1": "a8ad81aec27647b777bd9f4e377b15521fae0591",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/pnasnexus/advance-article-pdf/doi/10.1093/pnasnexus/pgad027/49087757/pgad027.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0d8ce9424aff153e6d77048d97fb6b0758eda1da",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233535945 | pes2o/s2orc | v3-fos-license | Towards an Operative Predictive Model for the Songshan Area during the Yangshao Period
The literature in the field of archaeological predictive models has grown in recent years, searching for new factors and for the most effective methods to introduce them. However, where predictive models are used for archaeological heritage management, they could benefit from speedier and consequently more practical methods that build on well-consolidated factors from the literature. In this paper, an operative archaeological predictive model is developed, validated, and discussed in order to test its effectiveness. It is applied to the Yangshao period (5000–3000 BC) in the Songshan area, where Chinese civilization emerged and developed, and uses 563 known settlement sites. The satisfactory results herein achieved clearly suggest that the proposed model can be reliably used to predict the geographical location of unknown settlements.
Introduction
Over the years, numerous archaeological predictive models have been developed to study the existing relationships between environmental parameters and known archaeological site locations [1,2]. This was done in order to assess the likelihood of finding further archaeological sites containing traces of past human activity [3][4][5][6], as well as for management and protection reasons. GIS is important for understanding and summarizing spatial relationships, and it offers the potential to exploit this knowledge to structure solution techniques and new location models [7]. GIS predictive models enable archaeologists to test a theory through the use of empirical data, and are generally used to detect archaeological sites by taking into account statistical samples or anthropological dynamics [8]. The Integrated Conservation of Cultural Landscape Areas [9] recommended the use of predictive modeling through statistical analysis to infer the occurrence of sites on the basis of observed patterns and assumptions about human behavior [10].
In fact, traditionally observed settlement patterns and assumptions related to the relationships between natural and social environmental parameters have been statistically investigated to obtain "settlement rules" that are important to improve the understanding of past human behavior and develop interpretations of the socio-economic structures of past societies [11].
These models are generally based on:
• Locations of known archaeological sites; and
• Surveys, in areas classified as having high or moderate probability of storing ancient remains.
Constructing an archaeological predictive model is a complex task, because the right method, appropriate parameters, and robust validation criteria must be chosen. All these elements allow us to construct a solid, but time-consuming, decision support system for scholars and actors in the fields of preventive archaeology and archaeological heritage management and protection. Having ascertained the usefulness of these models, it is sometimes necessary to support the work of authorities by setting up faster and consequently more operative methods that remain as reliable as possible.
In this paper, an operative method to model settlement location preference was tested at the regional level. The prediction model was established using 563 settlement sites and tested using the locations of an additional 55 known sites (not included in the modelling and analysis steps) located around the Songshan area (Figure 1).
Study Area and Data Acquisition
The region of interest (ROI) in the Songshan area, including Zhengzhou, Luoyang, Xuchang, and Pingdingshan, lies between 33°6′50″ N-35°3′30″ N and 111°8′20″ E-114°19′20″ E. The EW extension is about 294 km long and the NS extension about 214 km. The total area is 36,000 km², characterized by mountains in the west, lowlands in the east (with Songshan as the center), erosion and loess hills in the south, and a depression basin in the north [29]. Songshan belongs to the Funiu Mountain system and is one of the Five Mountains of China [30]. Over the millennia, these geomorphological characteristics and the presence of the Yellow River facilitated human frequentation and, consequently, a high concentration of sites of cultural interest. Therefore, this region is very important from an archaeological point of view, as it is considered the cradle of Chinese civilization. There are many famous sites, such as Zhijidong [31], Lingjing [32], and other Paleolithic sites, as well as Neolithic sites such as Peiligang [33], Jiahu [34], Tanghu [35], Dianjuntai [36], Shuanghuaishu [37], Wangchenggang [38], Guchengzhai [39], etc. For this reason, it is important to develop a predictive model in this area: (i) to improve the knowledge of the settlement dynamics related to environmental parameters; and (ii) to draw a sensibility map useful for understanding where new discoveries could be made and, consequently, for the definition of investigation priorities. In particular, the study and prediction of settlement locations in this area during the Yangshao period plays an important role in improving our knowledge of the origin and development of Chinese civilization.
Characteristics of Yangshao Period Sites and Choice of Parameters
The Yangshao culture took its name from the village of Yangshao (Mianchi County, Henan province) where, in 1921, the first important remains and traces of this culture were discovered. The Yangshao culture, in turn, originated from the Peiligang culture, about 7000-5000 years ago. The cultural evolution from the Peiligang to the Yangshao period parallels that of agriculture [40], which was primitive in the Peiligang period but strongly developed in the Yangshao period due to the considerably increased variety of grain crops. In addition to the traditional drought-resistant crop millet, rice was also planted in areas with sufficient water resources. The Yangshao culture was thus mainly an agricultural culture.
Settlement sites in this period varied significantly in dimensions and building characteristics. Larger settlements had a layout characterized by a surrounding trench. Outside of the settlements there were cemeteries and kilns. There were mainly two kinds of houses in the villages: round and square in the early period, and square in the later period. The walls of the houses were made of grass and mud, and their outer surfaces were wrapped with grass and then burned to improve water resistance [41].
Yangshao culture was an important Neolithic culture widely distributed across numerous sites in the middle and lower reaches of the Yellow River. According to incomplete statistics, there were nearly one thousand settlement sites in Central China [42].
Settlement distribution in the Yangshao period was mainly conditioned by topography, waterways, and climate [43]. According to the latest research [44], during the Yangshao period the monsoon weakened and precipitation was reduced, which made people more dependent on the river system. The settlement distribution characteristics of the Yangshao period around Songshan show that human beings had entered a farming society [45][46][47]. Millet agriculture was primarily found in the hills and mesas, while mixed agriculture was practiced in the plains [48]. The most important parameters were geology, slope, and water accessibility. Moreover, in the relevant research in this area, the visibility between sites was relatively poor, which cannot prove a close connection between sites [49]. Following from all these considerations, the parameters chosen were: elevation, slope, distance from rivers, landforms, soils, and climate types. The current river courses were properly rectified (Figure 2c) as suggested by the specialized literature, which enabled us to account for: (i) the changes to the Yi River in the late Pleistocene [50], and (ii) the continuous variation of the Yellow River [51] up to the formation of the river networks of the Yiluo and the lower Yellow River (in the Yiluo Basin).
Choice of Parameters and Data Acquisition
Landform, soil, and climate data were obtained from the Atlas of the Agricultural Resources of Henan Province [52], at 1:2,500,000 scale. Data on settlement distribution in the Yangshao period came from the third national survey of cultural relics, the Chinese Cultural Relics Atlas (Henan volume), Henan Province cultural relics and records, and the Yangshao culture site map of Henan province. These data, available as GPS longitude and latitude coordinates, were imported into a GIS environment. The data from other image formats were corrected against the topographic map after registration, and vectorized as points. In total, 563 settlements were collected and mapped.
Altitudes
In this area, the lowest elevation was 48 m and the highest was 2159 m. The study area was divided into eight elevation ranges, for which the number of sites, the percentage of sites in each class, and the density of settlements are reported in Table 1.
From Table 1 we can draw the following rules:
1. In areas below 500 m, the proportion of settlements reached 98.21%.
2. The highest density and number of settlements were concentrated in the elevation ranges 100-200 m and 200-300 m.
3. From the 100-200 m range upward, the number and density of settlements decreased with increasing elevation.
4. At the lowest elevations (48-100 m), the number and density of settlements were relatively small, indicating that the lowest elevations were not suitable for settlement selection.
5. Yangshao period settlement in the area around Songshan Mountain was mainly distributed at altitudes lower than 400 m (see also Figure 2a). It may be that the higher the altitude, the worse the climate, and consequently those regions were not suitable for human survival.
Slope
The minimum slope in this area was 0°; the maximum was 72.9°. The study area was divided into 12 slope-gradient segments, and the number and density of settlements in each segment were counted. The results are presented in Table 2 (see also Figure 2b). From Table 2, we can draw the following rules:
1. The preferred zone for prehistoric settlements was the 0-3° range, which contained 402 settlements, accounting for 71.4% of the total.
2. The settlement density shows that settlement in the Yangshao period was mainly concentrated in the 2-3° range, indicating that the people of this period had not completely moved from the mountains to the plains.
3. The number and proportion of settlements decreased with increasing slope, indicating that areas with gentle slopes were more suitable for settlement. Areas with greater slopes were less suitable because of the greater cost of settlement construction. Overall, as the slope increased, the density of settlements steadily decreased (see Table 2).
Distances from Rivers
The early lakes and swamps were mainly distributed along the rivers, and the modern river valleys are basically the same as the early ones, so the modern water system essentially reflects the characteristics of the hydrological environment in the early period. The relationship between settlements and distance from the river, in intervals of 500 m, is reported in Table 3. From the values shown in Table 3, we can deduce the following:
1. The areas within 500 m of the river had the largest number of settlements. With increasing distance from the river system, the number of settlements decreased significantly. This indicates that the population had to be close to the river to survive in the Yangshao period: at a low level of productivity, humans had to live near river sources in order to rely on natural runoff.
2. Most of the settlements (around 96%) were distributed within 3 km of the river system. Therefore, 3 km seems to be the limit distance within which to live in order to best exploit river resources.
Landforms
The number, proportion, and density of settlements in each geomorphic area were counted by overlaying the Yangshao period settlements onto the landform map. The results are presented in Table 4, from which we can see that the number and density of settlements were highest in the Sanmenxia-Luoyang loess hilly area. This shows that in the prehistoric period the Sanmenxia-Luoyang loess hilly region was the area most suitable for settlement location. In particular, the number of settlements in the Xiaoshan-Xiongershan-Funiushan mountain area was greater than in the Huaihe alluvial plain area and comparable to the Yellow River alluvial plain area, but the density was much lower. There were no settlements in the Tongbai-Dabie mountain hilly area, indicating that the mountain and hilly areas were not suitable for site location due to the complex morphology of the terrain, which was not conducive to human production and life.
Soils
The number, proportion, and density of settlements in each soil area were determined by overlaying settlements onto the soil-type map, as shown in Table 5. From Table 5, we can see that the number and density of settlements in the hilly brown soil and red clay area of northwestern Henan were far greater than in the other types. This shows that in the Yangshao period the ancients settled in soil areas suitable for the development of agriculture, in order to stabilize their livelihoods. There were no settlements in the yellow-brown soil area of Funan Mountain, the western Henan aeolian sand area, the saline-alkaline soil along Huanggangwa in the northeast of Henan province, or the Shajiang black soil area in the depression of central and eastern Henan province.
Climate
The number, proportion, and density of settlements in each climate area, obtained by overlaying settlements onto the climate-type map, are shown in Table 6. According to these statistics, the number and density of settlements were largest in the drought-prone and less rainy area of the hilly region of western Henan, with values much larger than those of the other climatic types. In the Yangshao period, the drought-prone, less rainy hilly region of western Henan was thus more suitable for human habitation.
Summary of the Influencing Factors of Settlement Location and Their Correlation Analysis
In the Yangshao period, site selection was mainly conditioned by the following six environmental parameters: (i) elevation, (ii) slope, (iii) distance from the river system, (iv) geomorphology, (v) soil, and (vi) climate. The settlement sites were concentrated in the following areas:
• Elevation around 100 to 200 m;
• Slope around 2-3°;
• (Horizontal) distance from the river around 0 to 500 m;
• The preferred geomorphic type was the Sanmenxia-Luoyang loess hilly landform area;
• The preferred soil was the hilly brown soil and red clay of northwestern Henan; and
• The preferred climate was the drought-prone and less rainy area of the hilly region of western Henan.
Only 26 out of the 563 sites satisfied all of these conditions simultaneously. Therefore, during the Yangshao period, the ancient people weighed the environmental parameters differently when choosing settlement locations, rather than requiring all optimal conditions at once.
A correlation analysis was used to eliminate redundant factors, as well as to capture the degree of closeness between the elements of the geographical environment which influenced the location of settlements in prehistoric times. The correlation analysis of geographical environment factors was carried out using SPSS software [53], and the correlation coefficient among the various factors was expressed by the Pearson index R. In general, when the absolute value of R was more than 0.7, the factors were highly correlated; when it was between 0.4 and 0.7, they were moderately correlated; when it was between 0.1 and 0.4, the correlation was low; and when it was less than 0.1, the factors were unrelated. Table 7 shows that elevation was positively related to the geomorphology, soil, and climate data. The slope data were positively related to the river system and soil. The landform data were positively related to the elevation, slope, and soil data. The soil data were positively related to the elevation, slope, landform, and climate data, and the climate data were positively correlated with the elevation, landform, and soil data. The absolute value of the correlation coefficient was always smaller than 0.4, so all pairs exhibited a low level of correlation, suggesting that the six elements of the geographical environment were relatively independent and, therefore, that none of them could be excluded from the analysis.
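The same analysis, performed by the authors in SPSS, can be reproduced with any statistics package. A minimal Python sketch, assuming the six factor values have been sampled at each settlement location:

    from scipy.stats import pearsonr

    def correlation_classes(factors):
        # factors: dict mapping a factor name (e.g. 'elevation') to a
        # sequence of values sampled at the settlement sites.
        names = list(factors)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                r, _ = pearsonr(factors[a], factors[b])
                if abs(r) > 0.7:
                    level = "high"
                elif abs(r) > 0.4:
                    level = "moderate"
                elif abs(r) > 0.1:
                    level = "low"
                else:
                    level = "unrelated"
                print(f"{a} vs {b}: R = {r:.2f} ({level})")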
Quantification of Influence Factors of Settlement Location
In order to eliminate the inconsistency of dimensions and the diverse units of the environmental parameters, it is necessary to quantify the value of the impact factors through data standardization. The quantitative basis was the relationship between the number or density of settlements and the geographical environment.
Density of settlement distribution was used as a quantification standard to account for the influence of elevation, slope, landform, soil, and climate on settlement. The number of settlements was used as a quantification standard to account for the influence of river system.
The formula used to quantify the score is expressed in Equation (1):

f_i = (v_i / v_max) × 100 (1)

where f_i is the quantified score. If the geographic element was the river system, v_i represents the number of prehistoric settlements distributed in the ith buffer zone of the river system, and v_max is the maximum number of prehistoric settlements over all buffer zones of the river system.
If the geographical element was elevation, slope, landform, soil, or climate, v_i was the prehistoric settlement density of the ith segment of that geographical element, and v_max was the maximum prehistoric settlement density over all subsections of the given geographical element. The maximum and minimum values of the score were 100 and 0, respectively, so that the value 100 represented the region with the highest preference for prehistoric settlement, whereas the value 0 represented the least preferred area. The quantitative results are shown in Table 8.
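A minimal sketch of this normalization, assuming the linear form of Equation (1) as reconstructed above:

    def quantify_scores(values):
        # values: settlement counts (river buffers) or densities (other
        # factors) per class; returns 0-100 scores per Equation (1).
        v_max = max(values)
        return [100.0 * v / v_max for v in values]

    # Hypothetical densities for five elevation classes:
    print(quantify_scores([3.1, 7.4, 5.2, 1.0, 0.0]))  # -> [41.9, 100.0, ...]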
Weights Determination of Influence Factors of Settlement Location
The weight set reflects the relative importance of each environmental parameter which affected settlement location. Weighting methods can be divided into three categories: subjective, objective, and combined. The subjective weighting approach is mainly based on the subjective judgment of experts to obtain the index weights, as in the Delphi method [54] and the analytic hierarchy process. Even if this approach is widely used, its objectivity is poor, since it depends on the knowledge, experience, and personal preferences of the experts, i.e., on the emphasis that experts subjectively place on each index, with no consideration of the characteristics of the data under investigation. The objective weighting approach has a strong theoretical basis and objectivity, and it uses diverse methods, such as entropy or variation coefficients, to calculate the index weights by exploiting the relationships among the original data. On the other hand, the objective weighting method is easily subject to the influence of the data sample, as well as to the specific method adopted to assess the weights, since diverse methods tend to yield different results.
In this paper, an objective weighting approach was used to calculate the weights of the diverse environmental and geographic factors which influenced settlement distribution in the Yangshao period. In order to make the result more accurate, we adopted two weighting approaches: (i) the variation coefficient, and (ii) the entropy method. Moreover, to mitigate the limitations of single weighting models, the final weight of each factor influencing settlement location was the average of the weights obtained from the variation coefficient and entropy methods.
Variation Coefficient
The variation coefficient is an objective weighting method to determine weights using evaluation indices. Compared with the subjective weighting method, this method is more scientific, objective, and reliable [55].
The steps to calculate the weights of the factors affecting settlement site selection are as follows.
Firstly, the coefficient of variation, C_V, of each influencing factor was calculated as follows (Equation (2)):

C_Vi = σ_i / x_i (2)

where C_Vi is the coefficient of variation of the ith influencing factor (also known as the standard deviation coefficient), σ_i is the standard deviation of the ith influencing factor, and x_i is the mean of the ith influencing factor.
Secondly, the weight of each factor was calculated as follows (Equation (3)):

w_i = C_Vi / Σ_{j=1..n} C_Vj (3)

where w_i represents the weight of the ith impact factor and n is the number of influencing factors.
Entropy Method
The entropy method determines the weights according to the amount of information contained: the smaller the entropy, the greater the information provided and, therefore, the greater the weight associated with the index and the greater the role that factor plays in the comprehensive evaluation [56].
The steps to determine the weights of the impact factors of settlement site selection by the entropy method are as follows. Firstly, the original data matrix was constructed (Equation (4)):

F = (f_ij)_{m×n} (4)
where m is the number of settlements in a certain period, n is the number of influencing factors, and f_ij is the evaluation value of the ith settlement under the jth influencing factor, as defined in Table 8 and Equation (1). Secondly, the specific gravity p_ij of the factor value of the ith settlement under the jth influencing factor was calculated (Equation (5)):

p_ij = f_ij / Σ_{i=1..m} f_ij (5)
Thirdly, the entropy e_j of the jth influencing factor was calculated (Equation (6)):

e_j = −k Σ_{i=1..m} p_ij ln(p_ij) (6)
where the coefficient k is defined as in Equation (7):

k = 1 / ln(m) (7)

Finally, the entropy weight ew_j of the jth influencing factor was calculated (Equation (8)):

ew_j = (1 − e_j) / Σ_{j=1..n} (1 − e_j) (8)

The larger the entropy weight, the more information the influencing factor carries, which means that the factor had a greater influence on settlement site selection.
Using the above two methods, we calculated the weights of the settlement site selection factors in the Yangshao period around Songshan, finding the results shown in Table 9. The ranking of the influencing factors was exactly the same under the variation coefficient and entropy methods. The weights of the influencing factors were, from greatest to smallest: river system, slope, elevation, soil, landform, and climate. It can be concluded that in the Yangshao period the order of importance of the settlement location rules was, from highest to lowest: river system, slope, elevation, soil, landform, and climate.
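Both weighting schemes follow directly from Equations (2)-(8). The sketch below (an illustration, not the authors' original code) assumes the scores f_ij are arranged as an m × n matrix as in Equation (4); whether Equation (2) uses the population or sample standard deviation is our assumption.

    import math

    def cv_weights(F):
        # Variation-coefficient weights, Equations (2)-(3).
        n = len(F[0])
        cv = []
        for j in range(n):
            col = [row[j] for row in F]
            mean = sum(col) / len(col)
            sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col))
            cv.append(sd / mean)
        s = sum(cv)
        return [c / s for c in cv]

    def entropy_weights(F):
        # Entropy weights, Equations (4)-(8); 0*ln(0) is taken as 0.
        m, n = len(F), len(F[0])
        k = 1.0 / math.log(m)
        scores = []
        for j in range(n):
            col = [row[j] for row in F]
            s = sum(col)
            p = [x / s for x in col]
            e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
            scores.append(1.0 - e)
        s = sum(scores)
        return [w / s for w in scores]

    def combined_weights(F):
        # Final weights: the average of the two schemes, as in the paper.
        return [(a + b) / 2 for a, b in zip(cv_weights(F), entropy_weights(F))]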
Settlement Location Prediction Model Construction
Before constructing the model, the unit of preference classification was determined. Considering the accuracy of the DEM data, a grid of 100 m × 100 m was selected as the unit of preference classification and as the cell size of the analyzed raster.
The spatial weighted superposition method was used to construct the preference-grade model of Yangshao period settlement site selection. The effect of each factor was superimposed as a separate layer, and finally the graded distribution map of settlement site preferences in the Yangshao period Songshan area was generated. The model (Formula (9)) was:

F = Σ_{i=1..n} W_i f_i (9)

where F is the comprehensive evaluation score of an evaluation unit, W_i is the weight of the ith factor, f_i is the score of the ith factor for the evaluation unit, as calculated in Table 8, and n is the total number of factors.
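On the 100 m × 100 m grid, Formula (9) is a per-cell weighted sum of the factor score layers. A minimal sketch, assuming each factor has already been rasterized to a 0-100 score array:

    import numpy as np

    def overlay(score_rasters, weights):
        # score_rasters: n arrays of per-cell scores f_i (0-100);
        # weights: n weights W_i summing to 1. Returns F = sum_i W_i * f_i.
        F = np.zeros_like(score_rasters[0], dtype=float)
        for w, f in zip(weights, score_rasters):
            F += w * f
        return F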
Results and Model Validation
Six geographic and environmental factors were weighted and superimposed to obtain the comprehensive index distribution map of the settlement preferences in the Yangshao period Songshan area, with values from 0 to 100. By drawing the frequency distribution histogram of the composite index [57], the index values at which the histogram changed visibly were used as the boundaries between grades of settlement location preference. The criteria were: 80-100, preferred high-grade areas; 52-79, preferred middle-grade areas; 0-51, preferred low-grade areas (Figure 3). The preferred high-grade area covered 2666 km², accounting for 7.5% of the total area, and was mainly distributed in the Yihe, Luohe, Yiluo, Jialu, Shuangjihe, Yinghe, Ruhe, and Shahe river basins, within 500 m of the rivers. The preferred middle-grade area covered 11,650 km², accounting for 32.8% of the total area, and was mainly distributed near the preferred high-grade areas, including Yiyang County, Yichuan County, Luoyang City, Yanshi City, Mengjin County, Xingyang City, Zhengzhou City, Xinmi City, Xinzheng City, Yuzhou City, Jiaxian County, and other areas. The preferred low-grade area covered 21,220 km², accounting for 59.7% of the total area, and was mainly distributed in the western and southern parts of the region, including Luoning County, Luanchuan County, Song County, Ruyang County, Lushan County, Yexian County, Wugang City, Dengfeng City, most of Gongyi City around Songshan Mountain in the central part, Zhongmou County, Changge City, Xuchang City, Yanling County, and other areas in the eastern part.
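Once the composite index F is available, the grading amounts to thresholding at the histogram breakpoints reported above (52 and 80); for example:

    import numpy as np

    def classify(F):
        # Assign each grid cell to a preference grade.
        grades = np.full(F.shape, "low", dtype=object)
        grades[F >= 52] = "middle"
        grades[F >= 80] = "high"
        return grades

Then, for instance, (grades == "high").mean() gives the fraction of the study area falling in the high grade (about 7.5% here).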
The preference model was validated using 55 newly discovered Yangshao settlement sites from the third national survey of cultural relics.
There were 23 sites in the high-grade area (a density of 86.3 per 10⁴ km²), 28 sites in the middle-grade area (24 per 10⁴ km²), and only 4 sites in the low-grade area (1.9 per 10⁴ km²). The results showed that the density of settlements in the preferred high-grade area was much higher than in the other two areas, indicating that the probability of finding Yangshao period settlement sites was highest in the high-grade area, followed by the preferred middle-grade area, while the preferred low-grade area was the most difficult area in which to find settlement sites.
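These densities are simply the site counts normalized by the area of each grade, expressed per 10⁴ km²; a quick check of the figures above:

    def density_per_grade(site_counts, areas_km2):
        # Sites per 10^4 km^2 for each preference grade.
        return {g: 1e4 * site_counts[g] / areas_km2[g] for g in site_counts}

    print(density_per_grade({"high": 23, "middle": 28, "low": 4},
                            {"high": 2666, "middle": 11650, "low": 21220}))
    # -> approximately {'high': 86.3, 'middle': 24.0, 'low': 1.9}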
Overall, we can conclude that:
1. Yangshao period settlement around Songshan Mountain involved different choices for different environments. The settlement sites were concentrated in areas where the elevation was within 100-200 m, the slope was between 2-3°, the horizontal distance from the river was within 500 m, the geomorphic type was the landform of the Sanmenxia-Luoyang loess hilly area, the soil type was the hilly cinnamon soil and red clay of northwest Henan, and the climate type was that of the arid and rainless hilly area of west Henan.
2. The priority of the geographic environmental impact factors in settlement selection in the Yangshao period Songshan mountain area was: river system, slope, elevation, soil, landform, and climate.
3. The settlement prediction results showed that the preferred high-grade area had the highest probability of prehistoric settlement, followed by the middle-grade area, while the low-grade area had the lowest probability of containing settlement sites. According to this grading, we can predict which areas contain undiscovered settlements, guide field archaeological investigation, determine its scope more accurately, and actively excavate archaeological sites.
Discussion and Conclusions
In this paper, a comprehensive and fast approach for modelling settlement location preferences at a regional level was proposed. The developed method exploits the knowledge related to 563 settlement sites, dated to the Yangshao period (5000-3000 BC) and located in the Songshan area, where Chinese civilization emerged and developed. Six geographic and environmental factors, namely elevation, slope, distance from river systems, geomorphology, soil, and climate, were weighted and superimposed to obtain a comprehensive index distribution map of settlement preference.
One of the most important steps in predictive modelling is the calculation of weights which reflect the relative importance of each parameter in the selection process for the identification of settlement locations.
In this paper, the objective weighting approach was used to calculate the weights of the various indices, namely the diverse environmental and geographic factors which influenced the distribution of Yangshao period settlement. In order to make the results more accurate, we adopted two objective weighting approaches: (i) the variation coefficient, and (ii) the entropy method.
The area of investigation was divided into: (i) high-, (ii) middle-, and (iii) low-grade preference zones, and the analysis was carried out exploring the relationship of the 563 settlements with respect to altitude, slope, river, landform, soil and climate. In the model, the weight of each factor was determined by using the average of the weights obtained using both the variation coefficient and entropy methods.
A settlement location prediction model was obtained using the comprehensive index method, and validation was successfully performed using 55 new settlement sites. The results show that the priority order of the factors which affected human settlement was: (I) distance from rivers, (II) slope, (III) altitude, (IV) soil, (V) landform, and (VI) climate. This finding clearly highlights that the natural environment played a very important role in the choice of settlement location and in its interaction with human activity. In particular, the prominence of distance from rivers and slope is linked to greater resource availability and easier workability of land for agricultural use during the Neolithic period, when agricultural techniques were in their early development phase.
As a whole, the outputs of our investigations highlighted that: (i) the location of settlements was not random, but had a specific spatial distribution reflecting the regional characteristics of social development; and (ii) the combination of the variation coefficient and entropy methods made the weighting results more realistic and reasonable and weakened the influence of abnormal indexes. The satisfactory results achieved here clearly suggest that the proposed model can be reliably used to predict the geographical location of unknown settlements.
Our analysis highlighted that predictive models can fruitfully constitute an important decision-making support system, providing useful information for defining survey priorities and facilitating new site discovery, thus saving time and money, especially in large areas. Moreover, predictive models can also contribute to the preservation of archaeological areas and features, which serve as witnesses to the human past, and provide useful information for reducing archaeological risks linked to both anthropic and natural risk factors.
To further improve the results from the proposed prediction model, in the future, the authors will explore the possibility of mining the spatial and temporal distribution of prehistoric settlement data, as well as the possibility of using those data as a predictive parameter selection factor. Earth observation technologies such as optical and radar satellite remote sensing and geophysics will also be used [58][59][60][61] to detect archaeological proxy indicators.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy reasons.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-05-04T22:06:18.539Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "8561824e25b9fccc4ac0e09968efa90eb411da66",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/10/4/217/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3815b3ecc18a7072e0c66ba0052d55122fa02cf8",
"s2fieldsofstudy": [
"History",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Geography"
]
} |
11575878 | pes2o/s2orc | v3-fos-license | No Strong Parallel Repetition with Entangled and Non-signaling Provers
We consider one-round games between a classical verifier and two provers. One of the main questions in this area is the \emph{parallel repetition question}: If the game is played $\ell$ times in parallel, does the maximum winning probability decay exponentially in $\ell$? In the classical setting, this question was answered in the affirmative by Raz. More recently the question arose whether the decay is of the form $(1-\Theta(\epsilon))^\ell$ where $1-\epsilon$ is the value of the game and $\ell$ is the number of repetitions. This question is known as the \emph{strong parallel repetition question} and was motivated by its connections to the unique games conjecture. It was resolved by Raz, who showed that strong parallel repetition does \emph{not} hold, even in the very special case of games known as XOR games. This opens the question whether strong parallel repetition holds in the case when the provers share entanglement. Evidence for this is provided by the behavior of XOR games, which have strong (in fact \emph{perfect}) parallel repetition, and by the recently proved strong parallel repetition of linear unique games. A similar question was open for games with so-called non-signaling provers. Here the best known parallel repetition theorem is due to Holenstein, and is of the form $(1-\Theta(\epsilon^2))^\ell$. We show that strong parallel repetition holds neither with entangled provers nor with non-signaling provers. In particular we obtain that Holenstein's bound is tight. Along the way we also provide a tight characterization of the asymptotic behavior of the entangled value under parallel repetition of unique games in terms of a semidefinite program.
Introduction
communicate, and leads to the definition of the non-signaling value of a game, which we denote by ω_ns. Notice that for any game, ω_ns ≥ ω* ≥ ω. For instance, it is not hard to see that ω_ns(CHSH) = 1, since we can arrange the distributions on the answers in such a way that the marginal distributions are always uniform, and at the same time only winning answers are returned.
An important special case of two-prover games is that of unique games. Here, the verifier's decision is restricted to be of the form b = σ(a) for some permutation σ on [k]. If, moreover, k = 2, then the game is called an XOR game. An example of such a game is the CHSH game. It is very common for the answer set [k] in a unique game to be identified with some group structure (e.g., Z_k) and for the verifier to check whether the difference a − b of the two answers is equal to some value. If this is the case, then we refer to the game as a linear game. In recent years, unique games have become one of the most heavily studied topics in theoretical computer science due to Khot's unique games conjecture [Kho02] and its strong implications for hardness of approximation (see, e.g., [KKMO07]).
Parallel repetition: One of the main questions in the area of two-prover games is the parallel repetition question. Here we consider the game G^ℓ obtained by playing the game G in parallel ℓ times. More precisely, in G^ℓ the verifier sends ℓ independently chosen question pairs to the provers, and expects as answers elements of [k]^ℓ. He accepts iff all ℓ answers are accepted in the original game. It is easy to see that ω(G^ℓ) ≥ ω(G)^ℓ since the provers can play their optimal strategy for G on each of the ℓ question pairs. Similarly, ω*(G^ℓ) ≥ ω*(G)^ℓ and ω_ns(G^ℓ) ≥ ω_ns(G)^ℓ. Although at first it might seem that equality should hold here, the surprising fact is that in most cases the inequality is strict. Even for a simple game like CHSH we have ω(CHSH²) = 5/8 (which is bigger than the 9/16 one might expect).
The parallel repetition question asks for upper bounds on the value of repeated games. This fundamental question has many important implications, most notably to tight hardness of approximability results (e.g., [Hås01]). The first dramatic progress in this area was made by Raz [Raz98], with more recent work by Holenstein [Hol07] and Rao [Rao08]. The following theorem summarizes the state of the art in this area. Theorem 1.1. Let G be a two-prover game with answer size k and value ω(G) = 1 − ε. Then for all ℓ ≥ 1, 1. [Hol07] ω(G ℓ ) ≤ (1 − ε 3 ) Ω(ℓ/ log k) ; 2. [Rao08] If G is a projection game (which is a more general class than unique games) then ω(G ℓ ) ≤ (1 − ε 2 ) Ω(ℓ) .
In an attempt to better understand the unique games conjecture, Feige, Kindler, and O'Donnell [FKO07] asked whether the bound on ω(G^ℓ) above can be improved to (1 − ε)^{Ω(ℓ)}, a result called strong parallel repetition. Given the improved bound by Rao, it is only natural to hope that the exponent could be lowered all the way down to 1. They observed that if such a result holds, even just for unique games, then we would get an equivalence of the unique games conjecture to other better-studied problems like MAX-CUT.
Somewhat surprisingly, Raz [Raz08] showed that strong parallel repetition does not hold in general. He gave an example of an XOR game (none other than the odd-cycle game mentioned above) whose value is 1 − 1/(2n), yet even after n² repetitions its value is still at least some positive constant. Raz's example was further clarified and generalized in [BHH+08] by showing a connection between ω(G^ℓ) and the value of a certain SDP relaxation of the game. We mention that strong parallel repetition is known to hold in the case of projection games that are free, i.e., where the distribution on the questions to the provers is a product distribution [BRR+09]. See also [AKK+08] for an "almost strong" parallel repetition statement for unique games played on expander graphs.
Parallel repetition is much less well understood in the case of entangled provers. In fact, no parallel repetition result is known for the entangled value of general games, and this is currently one of the main open questions in the area. However, parallel repetition results are known for several classes of games with entangled provers, as described in the following theorem, in which we also mention Holenstein's [Hol07] parallel repetition result for the non-signaling value.
Theorem 1.2. Let G be a two-prover game with answer size k, entangled value ω*(G) = 1 − ε*, and non-signaling value ω_ns(G) = 1 − ε_ns. Then for all ℓ ≥ 1,
1. If G is an XOR game, then ω*(G^ℓ) = ω*(G)^ℓ;
2. If G is a unique game, then ω*(G^ℓ) ≤ (1 − Ω(ε*²))^ℓ;
3. If G is a linear game, then ω*(G^ℓ) ≤ (1 − ε*)^{Ω(ℓ)};
4. [Hol07] ω_ns(G^ℓ) ≤ (1 − ε_ns²)^{Ω(ℓ)}.
Hence, we see that strong parallel repetition holds for the entangled value of linear games; in fact, in the case of XOR games, we have perfect parallel repetition.
All the above results involving the entangled value are derived by (i) showing that ω * is close (or in fact equal in the case of XOR games) to a certain SDP relaxation (which appears as SDP1 below), and (ii) showing that this SDP relaxation "tensorizes", i.e., that the value of the SDP corresponding to G ℓ is exactly the ℓth power of the value of the SDP corresponding to G.
The above naturally raises the question of whether the entangled value obeys strong parallel repetition, if not in the general case, then at least for unique games. The nearly tight characterization of the entangled value of unique games using semidefinite programs [KRT08] (see Lemma 2.6 below) is one reason to hope that such a strong parallel repetition result would hold. Raz's counterexample does not settle this question, since it is an XOR game, for which perfect parallel repetition holds in the entangled case. Similarly, in the case of non-signaling provers there has been no evidence that strong parallel repetition does not hold. In fact, because the non-signaling value is given exactly by a linear program (LP) (see, e.g., [Ton09]), one might conjecture that strong parallel repetition should hold, since "all" one has to do is understand the tensorization properties of the corresponding LP.
Our results: We answer the above question in the negative, by giving a counterexample to strong parallel repetition for games with entangled provers. More precisely, we give a game with entangled value 1 − Ω(1/n) such that after n² repetitions the entangled value of the repeated game is still a positive constant. Our example (after a minor modification) is a unique game with three possible answers, the smallest possible alphabet size for such a counterexample, because unique games with two answers are by definition XOR games, for which perfect parallel repetition holds. Hence we obtain an interesting 'phase transition' in the entangled value of unique games: whereas for alphabet size 2 we have perfect parallel repetition, already for alphabet size 3 we do not even have strong parallel repetition. Our result shows that the upper bound for unique games in Theorem 1.2.2 is essentially tight.
We also show that our game has a non-signaling value of 1 − Ω(1/n). This implies that strong parallel repetition fails also for the non-signaling value and that Holenstein's result (Theorem 1.2.4) is in fact tight.
As part of the proof we observe (see Theorem 4.1) using results from [KRT08] that the asymptotic behavior of the entangled value of repeated unique games is almost precisely captured by a certain SDP (SDP1 in Sec. 2). This is a pleasing state of affairs, since we now have a nearly tight SDP characterization both of the value of a unique game (SDP2 in Sec. 2) and of its asymptotic value (SDP1). Incidentally, SDP1 was also shown to characterize the asymptotic behavior of the classical value of repeated unique games, although the bounds there were considerably less tight, as they include some logarithmic factors (which are conjectured to be unnecessary) and also depend on the alphabet size (see Lemma 2.5).
Combining the above observation with our counterexample, we obtain a separation between SDP1 and SDP2. Namely, for the game described in our counterexample, SDP2 is 1 − Θ(1/n) (since it is very close to the value of the game) whereas SDP1 is 1 − Θ(1/n²) (since it describes the asymptotic behavior). Both SDPs have been used before in the literature (e.g., [KV05, AKK+08, KRT08, BHH+08]) and to the best of our knowledge no gap between them was known before. Perhaps more interestingly, our example also implies that SDP2 does not tensorize, since for the basic game SDP2 is 1 − Θ(1/n), yet after n² repetitions its value is still some positive constant (since it is a relaxation of the entangled value).
Our construction: Our counterexample is inspired by the odd-cycle game (yet it is neither a cycle nor is it odd). We call it the line game. Recall that the odd-cycle game was used by Raz [Raz08] as a counterexample to strong parallel repetition in the classical case. However, since it is an XOR game, it obeys perfect parallel repetition in the entangled case, and moreover, its non-signaling value is 1, so it cannot provide a counterexample in our setting.
Roughly speaking, in the line game the players are asked to color a path of length n with two colors in such a way that any two adjacent vertices have the same color, yet the leftmost vertex must be colored with color 1 and the rightmost vertex must be colored with color 2 (see Fig. 1a). More precisely, the verifier randomly chooses to send to the provers either two adjacent vertices or the same vertex twice. He expects the two answers to be the same, unless both vertices are the leftmost vertex, in which case both answers must be 1, or both vertices are the rightmost vertex, in which case both answers must be 2.
It is not hard to see that the classical value of this game is 1 − Θ(1/n), as is the case for the odd-cycle game. However, unlike the odd-cycle game, it turns out that the entangled value and even the non-signaling value of this game are also 1 − Θ(1/n). An intuitive way to see this is to argue about the marginals of Alice's and Bob's answers to each question. Forcing the ends of the line into a fixed answer forces the corresponding marginals to be close to distributions that always output 1 on the left and 2 on the right. The marginals for questions in between the ends must therefore move from the all-1 to the all-2 distribution, which can only be done at the expense of losing with probability Ω(1/n). For comparison, in the odd-cycle game we can manage with a strategy where all marginals are uniform, and hence its non-signaling value is actually exactly 1!

As we will show in Section 4, after repeating the line game n² times, its entangled value (and even its classical value) is still bounded from below by some positive constant. In particular, this implies that strong parallel repetition holds neither for the entangled value nor for the non-signaling value. This lower bound can be shown directly by explicitly demonstrating the provers' strategy. Instead, we will follow a slightly indirect route, using SDP1 to argue about the behavior of the game (or in fact of its unique game variant described below) under parallel repetition, as we feel this gives more insight into the behavior of parallel repetition of unique games.
As described above, the line game is not a unique game, due to the non-permutation constraints on both ends. In order to provide a counterexample for strong parallel repetition even for unique games, we present a simple modification of the game that leads to what we call the unique line game. Roughly speaking, this is done by increasing the answer size to 3, replacing the constraint on the leftmost vertex with a permutation that switches 2 and 3, and similarly replacing the constraint on the rightmost vertex with a permutation that switches 1 and 3. This has a similar effect to the non-permutation constraints in the original line game, and as a result, the classical, entangled, and non-signaling values of this game are more or less the same as in the line game, both for the basic game and its repetition.
Preliminaries
Games: We study one-round two-prover cooperative games of incomplete information, also known in the quantum information literature as nonlocal games. In such a game, a referee (also called the verifier) interacts with two provers, Alice and Bob, whose joint goal is to maximize the probability that the verifier outputs ACCEPT. In more detail, we represent a game G as a distribution over triples (s, t, π) where s and t are elements of some question set Q, and π : [k] × [k] → {0, 1} is a predicate over pairs of answers taken from some alphabet [k]. The game described by such a G is as follows.
• The verifier samples (s, t, π) according to G.
• He sends s to Alice and receives an answer a ∈ [k].
• He sends t to Bob and receives an answer b ∈ [k].
• He outputs ACCEPT if and only if π(a, b) = 1.
This definition of games is the one used by [BHH+08] and is slightly more general than the one commonly used in the literature, which requires that each pair (s, t) be associated with exactly one predicate π. Our definition allows the verifier to associate more than one predicate π (in fact, a distribution over predicates) with each question pair (s, t). Such games are sometimes known as games with probabilistic predicates. We use this definition mostly for convenience, since, as we shall see later, our counterexamples either do not use probabilistic predicates or can be modified to avoid them (but see Remark 3.6 for one instance in which probabilistic predicates are provably necessary). Moreover, the results in [CSUU07, KRT08, BHH+08] hold for games with probabilistic predicates, and this is in particular true for Lemmas 2.3, 2.4, and 2.6, which we need for our construction. Finally, Raz [Raz98] briefly discusses how to extend his parallel repetition theorem to games with probabilistic predicates, whereas the results in [Hol07, Rao08] most likely also extend to this case, although this remains to be verified; in any case, these results are not needed for our construction.
We define the (classical) value of a game, denoted by ω(G), to be the maximum probability with which the provers can win the game, assuming they behave classically, namely, they are simply functions from Q to [k]. We can also allow the provers to share randomness, but it is easy to see that this does not increase their winning probability. We define the entangled value of a game, ω * (G), to be the maximum winning probability assuming the provers are allowed to share entanglement. The precise definition of entangled strategies can be found in, e.g., [KRT08], but will not be needed in this paper. We essentially just have to know that the entangled value is bounded from above by the non-signaling value, which is defined as follows.
Definition 2.1. A non-signaling strategy for a game G is a set of probability distributions {p_{s,t}}_{s,t∈Q} over pairs of answers (a, b) ∈ [k] × [k], one for each question pair, such that for all s, t, t′ and all a, A_{s,t}(a) = A_{s,t′}(a), and for all s, s′, t and all b, B_{s,t}(b) = B_{s′,t}(b), where A_{s,t}(a) = Σ_b p_{s,t}(a, b) and B_{s,t}(b) = Σ_a p_{s,t}(a, b) are the marginals of p_{s,t} on the first and second answer, respectively. (We may therefore write simply A_s and B_t.) The non-signaling value of the game is
ω_ns(G) = max E_{(s,t,π)∼G} Σ_{a,b} p_{s,t}(a, b) π(a, b),
where the maximum is taken over all non-signaling strategies {p_{s,t}}.
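Since a non-signaling strategy is specified by finitely many probabilities subject to linear constraints, ω_ns(G) is the optimum of an explicit linear program, as noted in the introduction. The following is a minimal sketch of that LP, ours rather than the paper's; the encoding of a game as a list of weighted triples and all identifiers are our own choices, and it assumes scipy is available.

```python
import numpy as np
from scipy.optimize import linprog

def omega_ns(Q, k, triples):
    """Q: question set; k: answer alphabet size; triples: list of
    (s, t, pi, w) with pi a k-by-k 0/1 array and weights w summing to 1."""
    Q = list(Q)
    nQ = len(Q)
    qi = {q: i for i, q in enumerate(Q)}
    n_p = nQ * nQ * k * k          # the p_{s,t}(a,b) variables
    n_m = nQ * k                   # one block each for A_s(a) and B_t(b)
    nv = n_p + 2 * n_m

    P = lambda s, t, a, b: ((qi[s] * nQ + qi[t]) * k + a) * k + b
    A = lambda s, a: n_p + qi[s] * k + a
    B = lambda t, b: n_p + n_m + qi[t] * k + b

    rows, rhs = [], []
    def add(coeffs, r):
        row = np.zeros(nv)
        for i, c in coeffs:
            row[i] += c
        rows.append(row)
        rhs.append(r)

    for s in Q:
        for t in Q:
            for a in range(k):   # Alice's marginal must not depend on t
                add([(P(s, t, a, b), 1.0) for b in range(k)]
                    + [(A(s, a), -1.0)], 0.0)
            for b in range(k):   # Bob's marginal must not depend on s
                add([(P(s, t, a, b), 1.0) for a in range(k)]
                    + [(B(t, b), -1.0)], 0.0)
        add([(A(s, a), 1.0) for a in range(k)], 1.0)  # A_s is a distribution

    c = np.zeros(nv)             # linprog minimizes, so negate the objective
    for s, t, pi, w in triples:
        for a in range(k):
            for b in range(k):
                c[P(s, t, a, b)] -= w * pi[a, b]

    res = linprog(c, A_eq=np.array(rows), b_eq=np.array(rhs),
                  bounds=[(0, None)] * nv)
    assert res.success
    return -res.fun
```

Applied to the line game of Definition 3.1 below, this kind of LP recovers the value stated in Theorem 3.2.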
Definition 2.2. A game is called unique if the third component of the triples (s, t, π) is always a permutation constraint, namely, it is 1 iff σ(a) = b for some permutation σ. We will sometimes think of such games as distributions over triples (s, t, σ). Furthermore, a unique game is called linear if we can identify [k] with some Abelian group of size k and the third component of (s, t, σ) is always of the form σ(a) = a + r for some element r of the group.
We denote the ℓ-fold product of G with itself by G^ℓ. Clearly, ω(G^ℓ) ≥ ω(G)^ℓ, and similarly for ω* and ω_ns, since the provers can play each instance of the game independently, using an optimal strategy. Parallel repetition theorems attempt to provide upper bounds on the value of repeated games. It is often convenient to speak about the amortized value of a game, defined as ω̄(G) = lim_{ℓ→∞} ω(G^ℓ)^{1/ℓ} ≥ ω(G), and similarly for ω̄*(G) and ω̄_ns(G).
SDP Relaxations:
The main SDP relaxation we consider in this paper is SDP1, which is defined for any game G. The maximization is over real vectors {u^s_a}, {v^t_b}.

SDP 1
Maximize: E_{(s,t,π)∼G} Σ_{a,b} π(a, b) ⟨u^s_a, v^t_b⟩
Subject to: ⟨u^s_a, u^s_b⟩ = 0 and ⟨v^s_a, v^s_b⟩ = 0 for all s and all a ≠ b,
Σ_a ⟨u^s_a, u^s_a⟩ = 1 and Σ_a ⟨v^s_a, v^s_a⟩ = 1 for all s.
It follows from Theorem 5.5 and Remark 5.8 of [KRT08] that SDP1 has the tensorization property.

Lemma 2.3. For any game G and any ℓ ≥ 1, ω_SDP1(G^ℓ) = (ω_SDP1(G))^ℓ, where ω_SDP1 denotes the optimum value of SDP1 for a particular game.
The proof of this lemma is based on ideas from [FL92, MS07]. Ignoring some subtle issues, the essential reason that Lemma 2.3 holds is because SDP1 is bipartite, i.e., its goal function only involves inner products between u variables and v variables, and its constraints are all equality constraints and involve either only u variables or only v variables (see [KRT08] for details).
The value of SDP1 is an upper bound for the entangled value of the game, and in [KRT08] it is shown that its value is not too far from the entangled value of unique games.
Lemma 2.4 ([KRT08]). Let G be a unique game with ω_SDP1(G) = 1 − ε. Then ω*(G) ≥ 1 − O(√ε).
Moreover, a recent result by Barak et al. [BHH+08] shows that SDP1 essentially characterizes the amortized (classical) value of unique games, up to a factor that depends on the alphabet size and logarithmic corrections (Lemma 2.5).
SDP 2
Maximize: E_{(s,t,π)∼G} Σ_{a,b} π(a, b) ⟨u^s_a, v^t_b⟩
Subject to: the constraints of SDP1, together with an auxiliary vector z, non-negativity constraints on the inner products, and additional constraints involving z.

We now consider SDP2. Notice the extra variable z, the extra non-negativity constraints, and the extra z constraints. We clearly have that for any game G, ω_SDP2(G) ≤ ω_SDP1(G). Yet, as mentioned in [KRT08], SDP2 still provides an upper bound on the entangled value. Moreover, for unique games, this upper bound is almost tight.
Lemma 2.6 ([KRT08]). Let G be a unique game with ω_SDP2(G) = 1 − ε. Then ω*(G) ≥ 1 − O(ε).
It was not known whether SDP2 satisfies the tensorization property.
The line game and its non-signaling value
We now describe and analyze our first counterexample, the line game (see Figure 1a for an illustration).

Definition 3.1 (Line game). Consider a path with vertices {1, . . . , n} with edges connecting any two successive nodes, as well as a loop on each vertex (so the total number of edges is 2n − 1). The line game G_L of length n is a game with question set Q = [n] and answer size k = 2, in which the verifier chooses a triple (s, t, π) as follows. He first chooses an edge with endpoints s ≤ t uniformly among the 2n − 1 edges. The constraint π is set to be equality for all edges, except for the two loops at the ends, i.e., except s = t = 1 or s = t = n. In the former case, the constraint π forces a = b = 1, and in the latter case it forces a = b = 2.
Note that the line game G_L is not a unique game due to the non-unique constraints on both ends of the line.
Theorem 3.2. ω(G_L) = ω*(G_L) = ω_ns(G_L) = 1 − 1/(2n − 1).

Proof: First, notice that the success probability of the classical strategy in which Alice and Bob always answer 1 is 1 − 1/(2n − 1). Hence, 1 − 1/(2n − 1) ≤ ω(G_L) ≤ ω*(G_L) ≤ ω_ns(G_L), and it remains to bound ω_ns(G_L) from above. For this, we use the following simple claim.
Claim 3.3. Let p be a probability distribution on pairs (a, b) ∈ [k] × [k] with marginals A and B, and let σ be a permutation of [k]. Then Σ_a p(a, σ(a)) ≤ 1 − Δ(A, σ(B)), where Δ(A, σ(B)) = ½ Σ_a |A(a) − B(σ⁻¹(a))| is the total variation distance between A and σ(B). Moreover, for any marginal distributions A and B there exists a distribution p for which equality is achieved.
Proof: For simplicity assume that σ is the identity permutation; the general case follows by permuting the answers. Note that p(a, a) ≤ min(A(a), B(a)) for every a, and hence Σ_a p(a, a) ≤ Σ_a min(A(a), B(a)) = 1 − Δ(A, B). To construct a p such that equality holds, we can simply set p(a, a) = min(A(a), B(a)). It is easy to see that it is possible to complete this to a probability distribution with marginals A and B.
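The identity Σ_a min(A(a), B(a)) = 1 − Δ(A, B) invoked above is easy to confirm numerically; a quick check of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.dirichlet(np.ones(5))            # two random distributions on [5]
B = rng.dirichlet(np.ones(5))
tv = 0.5 * np.abs(A - B).sum()           # total variation distance
assert np.isclose(np.minimum(A, B).sum(), 1 - tv)
```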
We now bound the non-signaling value of G_L by arguing about the marginal distributions of the provers' strategy. Let {p_{s,t} | s, t ∈ [n]} be an arbitrary non-signaling strategy, let A_1, . . . , A_n be the marginal distributions on Alice's answers and B_1, . . . , B_n the marginal distributions on Bob's answers, as in Def. 2.1. Note that except for the question pairs (1, 1) and (n, n), all constraints are equality constraints. Hence, using Claim 3.3 and denoting the number of edges by m = 2n − 1, the winning probability of this strategy is at most

(1/m) [ Σ_{i=1}^{n−1} (1 − Δ(A_i, B_{i+1})) + Σ_{i=2}^{n−1} (1 − Δ(A_i, B_i)) + p_{1,1}(1, 1) + p_{n,n}(2, 2) ]
≤ (1/m) [ (m − 2) − Δ(A_1, B_n) + p_{1,1}(1, 1) + p_{n,n}(2, 2) ],

where the inequality uses the triangle inequality for the total variation distance Δ, in the form Δ(A_1, B_n) ≤ Σ_{i=1}^{n−1} Δ(A_i, B_{i+1}) + Σ_{i=2}^{n−1} Δ(A_i, B_i). We complete the proof by noting that Δ(A_1, B_n) ≥ A_1(1) + B_n(2) − 1, and recalling that by definition A_1(1) ≥ p_{1,1}(1, 1) and B_n(2) ≥ p_{n,n}(2, 2); together these bound the winning probability by (m − 1)/m = 1 − 1/(2n − 1).
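As a sanity check (ours, not part of the paper), the classical value of G_L can be brute-forced for small n by enumerating all deterministic strategies; as noted in Section 2, shared randomness does not help, so this enumeration is exhaustive. For n = 3 it returns 0.8 = 1 − 1/(2n − 1), matching Theorem 3.2.

```python
from itertools import product

def omega_line(n):
    """Classical value of the line game G_L by brute force (small n only)."""
    m = 2 * n - 1                    # n-1 path edges plus n loops
    best = 0.0
    for alice in product((1, 2), repeat=n):
        for bob in product((1, 2), repeat=n):
            wins = 0
            for i in range(1, n):                    # equality edges (i, i+1)
                wins += alice[i - 1] == bob[i]
            for i in range(2, n):                    # equality loops, i != 1, n
                wins += alice[i - 1] == bob[i - 1]
            wins += alice[0] == 1 and bob[0] == 1            # loop (1, 1)
            wins += alice[n - 1] == 2 and bob[n - 1] == 2    # loop (n, n)
            best = max(best, wins / m)
    return best

print(omega_line(3))   # 0.8 = 1 - 1/(2n - 1)
```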
In order to show that the line game violates strong parallel repetition we will modify it to a unique game G_uL by increasing the alphabet size to 3 and slightly changing the constraints. We will shortly see that G_L and G_uL have essentially the same non-signaling value and behave similarly under parallel repetition.
Definition 3.4 (Unique line game). Consider a path with vertices {1, . . . , n} with edges connecting any two successive nodes, as well as a loop on each vertex (so the total number of edges is 2n − 1). The unique line game G_uL of length n is a game with question set Q = [n] and answer size k = 3, in which the verifier chooses a triple (s, t, σ) as follows. He first chooses an edge with endpoints s ≤ t uniformly among the 2n − 1 edges. The permutation σ is set to be the identity for all edges, unless s = t = 1 or s = t = n. In the former case, σ is chosen to be the identity with probability half and the permutation that switches 2 and 3 with probability half; in the latter case, σ is chosen to be the identity with probability half and the permutation that switches 1 and 3 with probability half.
Theorem 3.5. ω_ns(G_uL) = 1 − Θ(1/n).

Proof: First, the strategy that assigns answer 1 to all questions achieves winning probability 1 − 1/(2(2n − 1)). The upper bound can be shown by repeating the proof of Thm. 3.2 with minor modifications. Instead, let us show how to obtain the upper bound as a corollary to Thm. 3.2. Let {p_{s,t} | s, t ∈ [n]} be an arbitrary non-signaling strategy for G_uL, and consider the strategy obtained by mapping answer 3 to 2. More precisely, define {p̃_{s,t} | s, t ∈ [n]} as the strategy with alphabet size 2 given by p̃_{s,t}(1, 1) = p_{s,t}(1, 1), p̃_{s,t}(1, 2) = p_{s,t}(1, 2) + p_{s,t}(1, 3), p̃_{s,t}(2, 1) = p_{s,t}(2, 1) + p_{s,t}(3, 1), and p̃_{s,t}(2, 2) = p_{s,t}(2, 2) + p_{s,t}(3, 2) + p_{s,t}(2, 3) + p_{s,t}(3, 3). Then it is easy to check that p̃ is also a non-signaling strategy and that, moreover, the value of p under G_uL is at most the average of 1 and the value of p̃ under G_L. Hence ω_ns(G_uL) ≤ (1 + ω_ns(G_L))/2, as desired.
Remark 3.6. Notice that the unique line game uses probabilistic predicates, i.e., there are questions (namely, the two end loops) to which more than one predicate is associated. It is not difficult to avoid these probabilistic predicates by replacing the end loops with small gadgets, while keeping the classical and entangled values of the game, as well as those of the repeated game, more or less the same, hence leading to a counterexample to strong parallel repetition using unique games with deterministic predicates (namely, add one extra vertex at each end, call them 1′ and n′, and add equality constraints from 1 to 1′ and from 1′ to 1′, as well as a constraint that switches 2 and 3 from 1′ to 1, and an analogous modification for n′). However, note that it is impossible to obtain a counterexample to strong parallel repetition for the non-signaling value that is both unique and uses deterministic predicates. The reason is that any unique game with deterministic predicates has non-signaling value 1: simply choose for each question pair (s, t) the distribution p_{s,t}(a, b) = 1/k if a = σ_{st}(b) and p_{s,t}(a, b) = 0 otherwise, where σ_{st} is the unique permutation associated with (s, t). This strategy is non-signaling, as all its marginal distributions are uniform. Hence any unique game that gives a counterexample to strong parallel repetition in the non-signaling case must use probabilistic predicates.
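The claim at the end of the remark is a two-line calculation; here is a small numerical confirmation of ours that the uniform-permutation strategy has uniform marginals (and wins with probability 1 on its own constraint):

```python
import numpy as np

k = 5
sigma = np.random.default_rng(1).permutation(k)   # an arbitrary sigma_st
p = np.zeros((k, k))
p[sigma, np.arange(k)] = 1.0 / k          # p(a, b) = 1/k iff a = sigma(b)
assert np.allclose(p.sum(axis=1), 1.0 / k)        # Alice's marginal is uniform
assert np.allclose(p.sum(axis=0), 1.0 / k)        # Bob's marginal is uniform
assert np.isclose(p[sigma, np.arange(k)].sum(), 1.0)  # constraint always wins
```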
Parallel repetition of the line game
We now proceed to show that strong parallel repetition holds neither for G_uL nor for G_L. We will show this by first proving a general connection for unique games between the value of SDP1 and the repeated entangled value of the game. We emphasize that the following construction can also be presented more explicitly without resorting to SDPs; we feel, however, that the connection to SDPs gives much more insight into the nature of parallel repetition, and might also make it easier to extend our result to other settings.
Compare this to the classical case (Lemma 2.5), where we have a dependence on the alphabet size (as well as an extra log factor). In the entangled case, SDP1 gives a tight estimate on the amortized entangled value up to a universal constant.
Hence, in order to analyze the repeated entangled value of G_uL it suffices to analyze its SDP1 value.
Lemma 4.2. For all n ≥ 2, ω_SDP1(G_uL) ≥ 1 − O(1/n²).

Proof: We construct a solution {u^s_a}, {v^t_b} ⊂ R² for SDP1(G_uL) in the following way, as illustrated in Fig. 2a.
Clearly ⟨u^s_a, u^s_b⟩ = 0 for a ≠ b and Σ_a ⟨u^s_a, u^s_a⟩ = 1, and similarly for the v vectors, so our solution for SDP1(G_uL) is feasible. Since the u vectors are equal to the v vectors, it is easy to compute its value, which proves the lemma for all n ≥ 2.
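As a numerical check (ours), here is one concrete family of vectors in R² consistent with every property of the solution used in this section (u^1_2 = 0, u^n_1 = 0, u^s_3 = 0 for all s, u = v, all vectors along a common orthonormal basis). Whether it coincides exactly with the solution of Fig. 2a is our guess, but it is feasible for SDP1(G_uL) and achieves value 1 − O(1/n²):

```python
import numpy as np

def perm_value(us, vt, sigma):          # sum_a <u_a, v_{sigma(a)}>
    return sum(us[a] @ vt[sigma[a]] for a in range(3))

def sdp1_lower_bound_uL(n):
    theta = np.pi * np.arange(n) / (2 * (n - 1))
    u = np.zeros((n, 3, 2))
    u[:, 0, 0] = np.cos(theta)          # answer 1 along e_1
    u[:, 1, 1] = np.sin(theta)          # answer 2 along e_2; answer 3 is zero
    ident, swap23, swap13 = [0, 1, 2], [0, 2, 1], [2, 1, 0]
    val = sum(perm_value(u[i], u[i + 1], ident) for i in range(n - 1))
    val += sum(perm_value(u[i], u[i], ident) for i in range(1, n - 1))
    val += 0.5 * (perm_value(u[0], u[0], ident)
                  + perm_value(u[0], u[0], swap23))      # end loop (1, 1)
    val += 0.5 * (perm_value(u[-1], u[-1], ident)
                  + perm_value(u[-1], u[-1], swap13))    # end loop (n, n)
    return val / (2 * n - 1)

for n in (10, 100, 1000):
    print(n, 1 - sdp1_lower_bound_uL(n))   # decays like Theta(1/n^2)
```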
Combining the above lemma with Lemma 2.4, we see that in fact ω_SDP1(G_uL) = 1 − Θ(1/n²). Moreover, Lemma 2.6 shows that ω_SDP2(G_uL) = 1 − Θ(1/n). Hence we obtain a quadratic gap between SDP1 and SDP2. Also note that the SDP1 solution above obeys the non-negativity constraint of SDP2: the inner products of any two vectors are non-negative. In fact, we can modify the solution to one with similar value that obeys the z-constraint of SDP2, at the expense of violating the non-negativity constraint, as shown in Fig. 2b. Hence our quadratic gap also holds between SDP2 and the two possible strengthenings of SDP1.
Combining Theorem 4.1 with the above lemma, we obtain that for all ℓ ≥ n², ω*(G^ℓ_uL) ≥ (1 − O(1/n²))^ℓ. In fact, the same lower bound also holds for the classical value. The reason for this is that the strategy constructed in Lemma 2.4 uses a shared maximally entangled state, and performs a measurement on it in an orthonormal basis derived from the SDP vectors. Since all the vectors in the SDP solution of Lemma 4.2 lie in the same orthonormal basis (and the same is true for the resulting SDP solution of G^ℓ_uL), the strategy constructed in Lemma 2.4 is in fact a classical strategy. A final technical remark is that even though we obtained the above strategy from a tensored SDP solution, the strategy itself is not a product strategy, due to a "correlated sampling" step performed as part of the proof of Lemma 2.4. We summarize this discussion in the following theorem.

Theorem 4.3. For ℓ ≥ n², ω_ns(G^ℓ_uL) ≥ ω*(G^ℓ_uL) ≥ ω(G^ℓ_uL) ≥ (1 − O(1/n²))^ℓ.

This shows that Holenstein's parallel repetition theorem for the non-signaling value (Theorem 1.2.4), as well as the parallel repetition theorem for the entangled value of unique games (Theorem 1.2.2), are both tight up to a constant. We complete this section by extending the above analysis to the line game, as shown in the following theorem.

Theorem 4.4. For ℓ ≥ n², ω_ns(G^ℓ_L) ≥ ω*(G^ℓ_L) ≥ ω(G^ℓ_L) ≥ (1 − O(1/n²))^ℓ.

This shows that alphabet size 2 is sufficient to obtain a counterexample to strong parallel repetition for both the entangled value and the non-signaling value, and in particular shows that Theorem 1.2.4 is tight also in this case. The counterexample is not a unique game, but this is actually necessary: XOR games obey perfect parallel repetition both in terms of the entangled value (Thm. 1.2.1) and in terms of the non-signaling value (even with probabilistic predicates, as is not difficult to see).
Proof:
We first observe that the classical strategy for G^ℓ_uL constructed above has the property that both provers always answer 1 on a coordinate containing the question 1, and similarly, they always answer 2 on a coordinate containing the question n. Moreover, the provers never use the answer 3. This follows from the fact that the vectors constructed in Lemma 4.2 satisfy u^1_2 = v^1_2 = 0, u^n_1 = v^n_1 = 0, and u^s_3 = v^s_3 = 0 for all s. As a result, when taking the tensor product of these vectors and applying Lemma 2.4 (as was done in Theorem 4.1), we obtain the aforementioned property of the classical strategy. Since the strategy does not use the answer 3, it is also a valid strategy for G^ℓ_L. Moreover, it is easy to check that the winning probability of the strategy in G^ℓ_L is equal to that in G^ℓ_uL; this is because the strategy always answers 1 on question 1 and 2 on question n, and because of the way the games are constructed.
"year": 2009,
"sha1": "1515586266b9778d840e144ba7ac2388e4a93452",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0911.0201.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "501f2631dd0d723420ef2b2bd4b5566a7f686500",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
Inventory of some families of Hemiptera, Coleoptera (Curculionidae) and Hymenoptera associated with horticultural production of the Alto Valle de Río Negro and Neuquén provinces (Argentina)
The knowledge of the entomological fauna in productive systems is important for agroecological management, since beneficial insects are a key resource for pest management in horticultural systems. Scientific information on the biodiversity present in a given area is essential, as well as on the ecological function and/or feeding habits of the insects. In Alto Valle de Río Negro and Neuquén, horticultural production systems can be described as highly dependent on chemical inputs for pest management and fertilization. The aim of this study is to carry out an inventory of the biodiversity of some families of Hemiptera, Coleoptera (Curculionidae) and Hymenoptera present in peri-urban and rural farms located in Neuquén and Río Negro, respectively. Insects were collected through pitfall traps and sweeping nets on tomato and pepper crops and the surrounding non-cultivated areas. Idiosystatus Berg (Auchenorrhyncha) was cited for the first time from Argentina. Species cited for the first time from Neuquén: Hemiptera: Auchenorrhyncha: Acanalonia chloris (Berg), Syncharina punctatissima (Signoret), Amplicephalus dubius Linnavuori, Exitianus obscurinervis (Stål), Agalliana ensigera Oman and Bergallia signata (Stål); Hemiptera: Heteroptera: Harmostes (Harmostes) prolixus Stål and Atrachelus (Atrachelus) cinereus (Fabricius); Coleoptera: Curculionidae: Hypurus bertrandi and Sitona discoideus Gyllenhal; and Hymenoptera: Xylocopa (Neoxylocopa) augusti Lepeletier and Pseudagapostemon (Neagapostemon) singularis Jörgensen. Species cited for the first time from Río Negro: Hemiptera: Auchenorrhyncha: Amplicephalus dubius Linnavuori, Amplicephalus marginellanus Linnavuori, Circulifer tenellus (Baker) and Xerophloea viridis (Fabricius); Hemiptera: Heteroptera: Tupiocoris cucurbitaceus (Spinola), Atrachelus (Atrachelus) cinereus (Fabricius), Dichelops furcatus (Fabricius) and Harmostes (Harmostes) prolixus Stål; Coleoptera: Curculionidae: Naupactus xanthographus (Germar); and Hymenoptera: Diadasia pereyrae (Holmberg) and Dialictus autranellus (Vachal).
INTRODUCTION
Insects make up a large part of the overall diversity in agricultural landscapes and encompass a broad range of functional groups (Kremen et al., 1993). They do not only represent agricultural pest species, insects also serve as biological control agents, provide pollination services, and form an important food resource for many vertebrates in agricultural landscapes (Diekötter et al., 2008).
An increasing number of studies show that the intensification of land uses and homogenization in agricultural landscapes, with the aim of increasing food supply, decreases biodiversity. At the local field scale, increased uses of crop monocultures, greater inputs of fertilizers and pesticides, and decreased within-field heterogeneity may affect species diversity and composition and the provision of ecosystem services to agricultural productivity (Tscharntke et al., 2005).
In the Alto Valle de Río Negro and Neuquén provinces, in the north of the Argentinian Patagonia, the main economic-productive development is linked to fruit production of pears and apples. The second most important activity, in terms of arable land area and impact on the rural economy, is horticultural production (Fernández Lozano, 2012). In this region, both the fruit and vegetable production models can be described as systems highly dependent on chemical inputs for pest management and fertilization (FAO, 2015a, 2015b). Besides, the progress of the real estate market and subsequent urbanization of former production areas is affecting the region's biodiversity due to habitat fragmentation. These productive activities take place within a dynamic and heterogeneous landscape. Horticulture farms are surrounded by fruit production orchards, abandoned orchards, patches and corridors of spontaneous vegetation and poplar "shelterbelts" (e.g., Buck et al., 1999). This landscape heterogeneity provides resources such as nectar and pollen from a diversity of flowering plants, a variety of prey or hosts, and overwintering and nesting habitat for pollinators and predatory insects, which may regulate the incidence of pests and promote the presence of beneficial insects in crops.
Agroecological management has been proposed as an alternative to conventional agricultural management due to its alleged ability to rehabilitate degraded ecosystem services (De Leijster et al., 2019).
The knowledge of the entomological fauna present in productive systems is important for the agroecological approach, since beneficial insects are a key resource for pest management in horticultural systems, allowing a decrease in the use of agrochemicals, and providing other services such as pollination. The maintenance and management of agrobiodiversity is one of the most promising strategies in the search for sustainable agroecosystems. There is a growing consensus that a greater agrobiodiversity in its different dimensions (spatial, temporal, and structural) provides essential ecological services in agroecosystems (Stupino et al., 2014). The growing demand for productive systems with less dependence on chemical inputs promotes the search for management strategies to strengthen ecological processes weakened by a decrease in diversity.
In order to design productive systems with an agroecological approach, it is essential to have scientific information on the biodiversity present in a given area and the ecological function and/or feeding habits of the insects. For example, the order Hymenoptera includes families with a broad heterogeneity of functions: predators (e.g., Vespidae), pollinators (e.g., Apidae), and parasitoids (e.g., Braconidae).
The aim of this study is to carry out an inventory of the biodiversity of some families of Hemiptera, Coleoptera (Curculionidae) and Hymenoptera present in rural and peri-urban horticultural farms, taking into account tomato and pepper crops and the surrounding noncultivated areas (as spontaneous vegetation, abandoned fruit orchards, and poplar shelterbelts).
Study area
Alto Valle de Río Negro and Neuquén is situated in the north of the Argentinian Patagonia, along parallel 39°S and meridians 68°to 66°W (Gili et al., 2004). It develops along the lower basin of Limay and Neuquén rivers and the upper basin of Río Negro river, as seen in Figure 1. Natural and semi-natural habitats, urban centers, peripheries and rural areas are alternated along almost 130 km. The Alto Valle is a long strip about 6 to 20 km wide. The arable land with the highest quality is located near the river terraces of Limay, Negro and Neuquén rivers.
The climate is temperate and semiarid, with an average annual temperature between 13.6°C and 14.5°C and thermal amplitude between 16.1°C and 17.7°C. The rainfall varies between 130 and 170 mm per year, depending on the locality, with a slightly progressive increase from west to east.
It is an area of strong winds, higher than 4 m/s on average, with predominant southwest-west direction. The typical vegetation is composed of shrubs of the genus Larrea ("jarillas") (L. divaricata Cav., L. cuneifolia Cav. and exceptionally L. nitida Cav.) and some Prosopis L. such as P. alpataco Phil. ("alpataco"), or Schinus such as S. johnstonii F.A. Barkley ("molle"). Permanent and ephemeral grasses grow under these shrubs, although in some areas this vegetation has changed due to the implementation of a gravity irrigation system. The usual summer water deficit is mainly supplied by a channel network derived from the Limay, Neuquén and Negro rivers.
In the Alto Valle region, horticultural activities are mostly performed by small and medium farmers. The most important crops are tomatoes, peppers, carrots, pumpkins, lettuce, and other vegetables. The farm activities show strong seasonality depending on the climate (e.g., summer water stress, frost, strong winds, hail) (FAO, 2015a, 2015b).
Localities studied. The coexistence of peri-urban and rural farms is frequent in the Alto Valle; for that reason we selected one peri-urban farm located on the eastern side of Plottier city (38°57'02.5" S; 68°12'29.5" W), Neuquén province, and a rural one located in Campo Grande (38°41'11.5" S; 68°11'25.6" W), Río Negro province. The first one is about 6 hectares in size and belongs to a larger pear orchard (25 hectares in size), abandoned 10 years ago. Currently, this orchard is surrounded by real estate projects with different levels of development (Fig. 1a). The rural farm is about 3 hectares, located in a fruit production area of the Alto Valle. About 20 years ago this area was a pome orchard surrounded by other fruit and vegetable farms. The plot for cultivation is adjacent to the abandoned pear orchard (Fig. 1b).
Collecting insects
For collecting insects in the horticultural systems (peri-urban and rural), we used different sampling techniques, carried out every 30 days from January to April 2017 (January 6th, February 3rd, March 3rd and March 31st), during the period of tomato and pepper harvest. The sampling methods were as follows.
Sweeping net: Sequential sampling by net blows was carried out to capture the insects inhabiting the aerial part of the vegetation, walking through the field and passing the net over the vegetation. The movement with the net was performed at an angle of approximately 90°. Twenty net strokes were made at each sampling site. The captured insects were placed in transparent jars with 70% ethanol.
Pitfall trap: These traps were used to capture epigeous walking insects. They consisted of 220 ml white plastic containers (diameter: 10 cm; depth: 12 cm), buried and placed at ground level, with 100 ml of a solution of 70% ethanol, 20% water and 10% liquid petroleum jelly. The material obtained by each sampling method was separated for subsequent determination by the authors.
Sampling design
The agricultural landscape was defined as a heterogeneous land area made up of a group of ecosystems, repeated across length and width in similar ways (Forman & Godron, 1986). The landscape represents a mosaic of farms, semi-natural habitats, human infrastructure and, occasionally, natural habitats (Marshall & Moonen, 2002). For this reason, the selected sites included not only tomatoes and pepper crops, but also feral plant communities located on the margins of these crops: abandoned pear orchards, spontaneous vegetation and poplar shelterbelts. Sampling stations were established within each sampling site. The number of stations was based on the site's surface. Each station consisted of the locations where each pitfall trap was placed.
Vegetation sampling
For vegetation sampling we applied quadrat method (Goodall, 1952). At each sampling station a 1 m x 1 m quadrat was randomly placed and all plants within the quadrat were recorded and identified at species level (Kennedy & Addison, 1987), when possible.
Characterization of sites
Peri-urban farm. Tomato and pepper crop. Accompanied by low coverage of herbaceous species (Table I).
Rural farm. Tomato and pepper crop. These crops are accompanied by a low coverage of herbaceous species (Table I).
Abandoned pear orchard. It contains pear, rosehip and wild vine plants within a plantation frame of 6 m x 4 m. Accompanied by an herbaceous stratum where grasses predominate (Table II).
Spontaneous vegetation. It shows the greatest complexity in vegetation structure, with herbaceous, shrub and tree layers (Table II).
Poplar shelterbelt. It is characterized by dominant arboreal species and a medium coverage of herbaceous layer (Table II).
Abandoned pear orchard. It is characterized by a herbaceous layer (Table II).
Spontaneous vegetation. It is characterized by predominant shrub and herbaceous layers (Table II).
Poplar shelterbelt. The arboreal layer of Populus L. ("poplar") is accompanied by a herbaceous-shrub layer of medium to low coverage (Table II).
ORDER HEMIPTERA (Table III)
Suborder Auchenorrhyncha
Superfamily Fulgoroidea
Family Delphacidae
Economic importance. It is a major pest of "maize" in Argentina, as a vector of MRCV (Mal de Río Cuarto virus) (Remes Lenicov & Paradell, 2012) (Table III).
Economic importance. It is probably another vector of MRCV in central Argentina (Remes Lenicov & Brentassi, 2017) (Tables III and IV).
Phytosanitary importance. Species positive for Xylella fastidiosa Wells et al.
Circulifer tenellus (Baker)
Geographic distribution. Almost cosmopolitan (Nearctic, Palearctic, Oriental and Neotropical regions). In America it is present in Canada, USA, Central American and Caribbean countries, Brazil, Peru, Suriname, Colombia, Venezuela and Argentina (Zanol, 2006). Río Negro is a new province record.
Phytosanitary importance. It is a vector of phytoplasmas in Mexico, especially on horticultural plants such as "radish root" and "pepper". It is also a vector of Beet curly top virus (BCTV) and of the agent that produces the Carrot purple leaf disease (Weintraub & Beanland, 2006; Lee et al., 2006).
Phytosanitary importance. This species transmits the bacterium Spiroplasma kunkelii (Entomoplasmatales: Spiroplasmataceae) under experimental conditions, suggesting that it may be a vector of the disease called "Corn Stunt Spiroplasma" in Argentina (Carloni et al., 2011).
Phytosanitary importance. It is a vector of "Virus Sugar Beet Yellow-Wilt" causing the disease "Yellow Wilt" of "sugar beet". This is also a potential vector of fitoplasma 16SrIII X-disease, that causes "Garlic decline" disease (Paradell et al., 2014). (Tables III and IV).
Superfamily Naboidea
Feeding habits: Predator. Species in the genus are known predators of other Heteroptera, particularly Blissidae, Geocoridae, and Rhyparochromidae (Lattin, 1989).

Table III. Order, Family, and species collected in tomato and pepper crops in peri-urban and rural farms. * = new records.

Economic importance. Although the members of Nabidae are generalist predatory species, and some species are frequently present in agroecosystems, the role of nabids in the regulation of pest populations of importance to urban agriculture remains largely unknown (Braman, 2000).
Superfamily Reduvioidea
Feeding habits. Predator. Economic importance. No economic damages registered.
Harmostes (Neoharmostes) procerus Berg
Geographic distribution. Argentina: known from all provinces. It has also been recorded for Brazil, Peru, and Uruguay.
Economic importance. No economic damages registered.
Family Lygaeidae
This species has been also recorded for Brazil, Paraguay, and Uruguay (Dellapé & Henry, 2020).
Comments: Although N. simulans is an almost ubiquitous species in Argentina, it is commonly confounded or mixed with populations of other species in the genus, and with species of the closely related genus Xyonysius (Tables III and IV).

Table IV. Order, Family, and species collected in abandoned fruit orchard, spontaneous vegetation and poplar shelterbelt in peri-urban and rural farms. * = new records.

"Portulaca". The larvae mine Portulaca sp. leaves, and adults also feed on the leaves. Considered a pest in several countries, it is a potential pest in Argentina.
Feeding habits. Phytophagous. Plant associations. It is a primary pest of vegetables, found in many wild and cultivated hosts (more than 80).
Larvae destroy the tender young crown leaves of carrots and turnips. Adults often cause extensive damage by feeding on the leaves of small "tomato" and "potato" plants (Solanaceae) and by cutting off the stems of plants at ground level. Females reproduce by parthenogenesis.
Lifecycle. Males are unknown or scarce and the species reproduces by parthenogenesis in most of its range. It is common in pastures, shrubs and crops of the Pampean biogeographic province.
Lifecycle. Adults of N. cervinus feed on foliage and larvae feed on roots. Under severe infestations, these weevils can consume the entire leaf, leaving only the midrib. Plants with severe root damage are more vulnerable to other biotic and abiotic factors (e.g., fungal infections with Phytophthora spp.) and may die during periods of drought. Larval damage can be serious on vegetable crops but relatively minor in citrus. Reproduction occurs without fertilization, a phenomenon known as parthenogenesis, except in small native areas.
Plant associations. It shows preference for
Feeding habits. Phytophagous.
Lifecycle. Adults feed at the bases of leaf margins, leaving characteristic "notching". This feeding behavior injures plants seriously only if adults are very numerous. Larvae gnaw at tap roots, the basal parts of stems and the small lateral roots. When feeding is severe, plants turn yellow, wilt and die. Plants on which only a small amount of the cambium layer is eaten usually survive, but produce little or no crop. In lucerne, the larvae usually chew into the taproot and make a furrow along it, and this results in the death of young plants. In "potatoes" damage is more spectacular, as larvae tunnel inside the tubers. The nitrogen fixation rate of Trifolium repens L. is reduced by 92% by N. leucoloma in New Zealand.
Except in some small areas of Argentina, populations of N. leucoloma include only parthenogenetic females.
Geographic distribution. Native to South America (Argentina, southern Brazil, Paraguay and Uruguay) (Lanteri & del Río, 2020). Río Negro is a new province record.
Feeding habits. Phytophagous. The adults feed on shoots and leaves, being particularly injurious to young plants. The larvae live in soil during the whole year, eating the plant's roots.
Tribe Otiorhynchini
Otiorhynchus ovatus (L.)
Common names. Strawberry root weevil.
Geographic distribution. It is native to Europe and has been introduced in Canada, USA, Australia, New Zealand, Chile and Argentina: Chubut, Neuquén, Río Negro and Santa Cruz. It is considered invasive due to its parthenogenetic reproduction and associations with many plant species.
Geographic distribution. It is native to Europe and
Geographic distribution. It is native to Europe (Palaearctic region) and introduced in several places around the world, North America (broadly distributed), Hawaii, Australia, New Zealand, Japan, Malaysia and Russia. In South America it is present in Chile and Argentina: Chubut, Neuquén and Río Negro provinces. In Argentina it was registered for the first time in 2000 (Lanteri et al., 2002). It is considered invasive due to its parthenogenetic reproduction and associations with many plant species.
Subfamily Xylocopinae
Biological comments. Pollinator. This species nests in solid wood and has a parasocial life behavior (Lucia et al., 2017). Xylocopa augusti is a polylectic species; the presence of 18 pollen types from 11 plant families in brood cells of several artificial nests has been recorded (Lucia et al., 2017). This species presents buzzing behavior to collect pollen and was recorded visiting eggplant crops (Álvarez et al., 2014).
ORDER HYMENOPTERA
nest in the soil and present life habits ranging from solitary to eusocial (Michener, 2007; Dalmazzo et al., 2014).
Family Vespidae
Subfamily Vespinae
Vespula germanica (Fabricius)
Geographic distribution. Native to the Palaearctic, and introduced in Australia, New Zealand, North America, South America, South Africa, Ascension Island, Madeira, the Canary Islands and Iceland (Beggs et al., 2011). In Argentina it was first recorded by Willink (1980), and it is distributed from the north of the province of Mendoza to the south of the province of Tierra del Fuego, and from the Andes to the Atlantic Ocean (Masciocchi & Corley, 2013; Sola et al., 2015).
Biological comments. Eusocial and with generalist predator behavior, it can negatively affect natural ecosystems and economic activities, including beekeeping, horticulture and tourism (Masciocchi & Corley, 2013).
DISCUSSION
Among the species listed in the present work, 74% are herbivorous, of which 55% are pests; within this percentage, only 35% correspond to horticultural pests and the rest to pests of cereals and fruit trees. The remaining 45% of herbivorous species are not pests and can act as alternative prey for predator populations of interest for biological control. Among the non-herbivores (26%), three species are of interest for biological control and five for pollination. It is worth mentioning that only some of the insect families captured in the study are published in the present work. Even so, the species listed herein show the importance of the vegetation areas surrounding the crops. The predatory species were found mainly in the patches of vegetation with greater structural complexity, containing herbaceous, shrub and arboreal species (the abandoned fruit orchard in the rural area and the spontaneous vegetation patch in the peri-urban area), which also present a greater number of species that represent a source of pollen and/or nectar for insects. Other studies, analysing bees, true bugs, and carabids separately in each landscape, confirmed that diversity patterns in mosaic agricultural landscapes are strongly determined by the interplay of species' dispersal abilities and landscape structure (Steffan-Dewenter & Tscharntke, 2002; Thomas, 2000). Something similar is observed with pollinator species. Floral resource availability is considered a major driving force that directly regulates the abundance and diversity of wild bee communities (Potts et al., 2003; Roulston & Goodell, 2011).
ACKNOWLEDGMENTS
The authors thank Instituto Nacional de Tecnología Agropecuaria (INTA), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET) and Comisión de Investigaciones Científicas de la Provincia de Buenos Aires (CIC). This work was supported by INTA and was performed under a specific agreement between INTA and Facultad de Ciencias Naturales y Museo, Universidad Nacional de La Plata.
"year": 2021,
"sha1": "960c705a50bebe987a33d7bf9eb4f6d9e7680189",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.25085/rsea.800106",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5c42274f7730253ab11ac3236230e84d559ceaa6",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
The Synthesis and Biological Evaluation of Aloe-Emodin-Coumarin Hybrids as Potential Antitumor Agents
A series of novel aloe-emodin–coumarin hybrids were designed and synthesized. The antitumor activity of these derivatives was evaluated against five human tumor cell lines (A549, SGC-7901, HepG2, MCF-7 and HCT-8). Some of the synthesized compounds exhibited moderate to good activity against one or more cell lines. Particularly, compound 5d exhibited more potent antiproliferative activity than the reference drug etoposide against all tested tumor cell lines, indicating that it had a broad spectrum of antitumor activity and that it may provide a promising lead compound for further development as an antitumor agent by structural modification. Furthermore, the structure–activity relationship study of the synthesized compounds was also performed.
Introduction
Aloe-emodin (1,8-dihydroxy-3-hydroxymethyl-anthraquinone), a representative natural anthraquinone compound, is isolated from traditional medicinal plants such as Rheum palmatum L. and Aloe vera L. (Figure 1). Modern pharmacological studies have shown that aloe-emodin exhibits a broad range of bioactivity profiles, including antitumor [1-5], anti-inflammatory [6,7], antiviral [8], antimicrobial [9] and antifibrotic effects [10]. It is noteworthy that its antitumor activity has attracted special attention. Some studies have shown that aloe-emodin exerts its antitumor effect by inhibiting proliferation and inducing apoptosis of certain cancer cells [11-15]. However, aloe-emodin itself is not sufficiently potent to be applied as an antitumor drug, and the development of derivatives with improved therapeutic efficacy is necessary. Coumarin represents an important class of heterocyclic skeletons, widely distributed in diverse natural products. Coumarin-based natural and synthetic derivatives exhibit a wide range of pharmacological properties [16-19], and antitumor activity is one of the most important among them [20-22]. Biological investigations have revealed that coumarin derivatives can exert antitumor efficiency through binding to various biological targets such as sulfatase [23], aromatase [24] and protein kinases [25]. Owing to its privileged structure and impressive antitumor activity, coumarin is often introduced into natural and synthetic compounds in the design of antitumor drugs [26,27]. For example, Belluti et al. adopted a molecular hybridization strategy to produce stilbene-coumarin hybrids, and the hybrid compounds showed higher antitumor activities than the reference compound resveratrol [28]. Guo and co-workers synthesized a series of dihydroartemisinin-coumarin hybrids as potential antitumor agents, and the evaluation of their antitumor activity proved that the synthetic compounds had great antitumor activity against the MDA-MB-231 and HT-29 cell lines [29]. The above research indicates that the hybridization of coumarin with other antitumor compounds is an effective strategy in the search for new antitumor agents, and the resulting hybrid compounds often have more potent antitumor activity than the parent compounds. This stimulated us to design and synthesize hybrids of aloe-emodin and coumarin as antitumor agents by a molecular hybridization strategy, to improve the affinity and efficacy of aloe-emodin.
Herein, a series of aloe-emodin-coumarin hybrids were synthesized, and their antiproliferative activity was evaluated against a panel of human tumor cell lines using etoposide as a reference. The structure-activity relationship of the derivatives is also discussed.
Chemistry
The synthetic strategy employed for the synthesis of the aloe-emodin-coumarin derivatives is depicted in Scheme 1. O-propargyl and O-butynyl coumarin derivatives 2a-m were obtained in a single step by the reaction of the corresponding hydroxycoumarin with propargyl bromide or 4-bromobut-1-yne in the presence of K2CO3 in DMF. The treatment of commercially available aloe-emodin with CBr4 and PPh3 in tetrahydrofuran afforded bromide 3 in 94% yield [30]. The intermediate bromide 3 was then reacted with NaN3 through nucleophilic substitution to provide azido derivative 4, which was finally subjected to the click reaction with the O-propargyl or O-butynyl coumarin derivatives (2a-m) in the presence of copper(I) thiophene-2-carboxylate in dichloromethane to afford the target products (5a-m) [31]. To further investigate the structure-activity relationship of 1,8-disubstitution on the anthraquinone core of compound 5d (the most active compound among derivatives 5a-m), a series of 5d analogs 7a-e were synthesized. The methylation or benzylation of 4 with methyl iodide or benzyl bromide in the presence of K2CO3 gave the dimethylated or dibenzylated intermediates 6a and 6b, respectively. The diacetylated intermediate 6c and the dibenzoylated intermediate 6d were obtained by acylation of 4. The sulfonylation of 4 with p-toluenesulfonyl chloride gave the disulfonylated intermediate 6e. Finally, 6a-e reacted with 2d under the catalysis of copper(I) thiophene-2-carboxylate to give 7a-e. The structures of the target compounds are shown in Table 1 and were identified by HRMS, 1H-NMR and 13C-NMR spectral analysis (1H-NMR and 13C-NMR spectra of the synthesized compounds are available in the Supplementary Materials).
Biological Results and Discussion
All of the target compounds were evaluated for their antiproliferative activities against five cancer cell lines (A549, SGC-7901, HepG2, MCF-7 and HCT-8) with etoposide as a positive reference using the MTT assay. To determine the selectivity, the cytotoxic activities of several compounds with good antitumor activities were also evaluated against the Hk-2 normal cell line (Human Kidney-2). The results are summarized in Table 2.
Compared to the parent compound aloe-emodin, some compounds displayed promising antiproliferative activity against one or more cell lines. Among them, compound 5d exhibited the most potent antiproliferative activity against all the tested cell lines. It showed 3-9-fold increased activity compared to the positive control, etoposide. In addition, 5f also exhibited strong potency against all five cell lines, which was comparable or superior to that of etoposide. It is worth noting that five synthesized compounds (5a, 5d, 5f, 5g and 5j) exhibited moderate to good activities against all the tested cell lines, which indicated that these derivatives possessed a broad spectrum of antitumor activity. The results of the cytotoxic activities of 5a, 5d, 5f, 5g and 7c against the Hk-2 cell line showed that the synthetic compounds also displayed similar cytotoxic activity compared with the tested tumor cell lines, which did not show obvious selectivity. The preliminary structure-activity relationship (SAR) study indicated that the linking position of coumarin moiety had an effect on their antiproliferative activity. The derivatives linked at the 3-, 4-and 7-position of coumarin were more active than others linked at the 6-position. In the same series of derivatives (containing the same linking position of coumarin), both the position and electron donor-acceptor properties of the substituents on coumarin significantly affected the activity. For the derivatives linked at the 4-position of coumarin, the chloro group in the 5-position of coumarin (5d) showed better activity than others, and the derivatives bearing a methyl (5f) or methoxy (5g) group in the 6-position of coumarin showed slightly weaker activity than 5d. It was noted that the replacement of the chloro with the fluoro group (5e) in the 5-position of coumarin resulted in the loss of activity. Regarding the derivatives linked at the 6-position of coumarin, the substituent at the 3-position of coumarin (5k) was better than that at the 4-position (5l). Moreover, to investigate the influence of the linker length to activity, 5m with an extended linker length was then synthesized, and the activity of it showed equal to or slightly less than that of 5a against four tumor cell lines (SGC-7901, HepG2, MCF-7 and HCT-8). The results indicated that extending the linker length had no significant effect on activity.
The effect of the substituent at the 1-and 8-position on an anthraquinone core was also investigated. The results suggested that the substituent size and steric properties had a remarkable effect on their antiproliferative activity. 1, 8-diacetyl-substituted derivative 7c displayed similar or slightly decreased potency to 1, 8-unsubstituted derivative 5d against four tumor cell lines (SGC-7901, HepG2, MCF-7 and HCT-8). However, after replacement of the acetyl group with the bulky benzoyl and benzenesulfonyl moieties, compounds 7d and 7e exhibited weak or no activity. A similar phenomenon could also be found by the replacement of the methyl (7a) group with the benzyl (7b). The results suggested that the introduction of the bulky substituents at the 1-and 8-position on anthraquinone was detrimental for activity.
(Notes to Table 2: a IC50 values are shown as mean ± SD from the average of three replicates; b IC50: concentration that causes a 50% reduction in cell growth; "-": not detected.)
General Information
NMR spectra were recorded on a Bruker AVANCE III 600 or a Bruker AVANCE III 500 NMR spectrometer instrument (Bruker, Rheinstetten, Germany). Solvent signals (DMSO-d 6 : δ H = 2.50 ppm/δ C = 39.52 ppm) were used as references. High-resolution mass spectra (HRMS) were recorded on a Waters SYNAPT G2 HDMS (Waters, Milford, MA, USA). The reactions were monitored by thin-layer chromatography on plates (GF 254 ) supplied by Yantai Chemicals (Yantai, China). Silica gel column chromatography was performed using 200-300 mesh silica gel supplied by Tsingtao Haiyang Chemicals (Tsingtao, China). Unless otherwise noted, all common reagents and solvents were obtained from commercial suppliers (Beijing InnoChem Science & Technology Co., Ltd., Beijing, China) without further purification.
General Procedures for O-Propargyl and O-Butynyl Coumarin Derivatives (2a-m)
3-bromoprop-1-yne or 4-bromobut-1-yne (2.0 mmol) and anhydrous K 2 CO 3 (1.5 mmol) were added to a solution of the corresponding hydroxycoumarin (1.0 mmol) in DMF (5 mL). The reaction was heated to 60 • C for 2 h. Upon completion, the reaction was cooled down to room temperature, and then the reaction mixture was diluted with water and extracted three times with ethyl acetate. The organic layers were dried with dry Na 2 SO 4 , they were filtered, and the filtrate was removed under reduced pressure. The residue was purified by column chromatography to give compounds 2a-m (Yield: 40~96%). CBr 4 (15.34 g, 46.25 mmol) was added portion-wise to a solution of PPh 3 (12.13 g, 46.25 mmol) in THF (50 mL) at room temperature. The mixture was left stirring for 10 min, and then aloe-emodin (5.0 g, 18.5 mmol) was added. After being stirred at room temperature for 6 h, the reaction mixture was filtered, and the filtrate was evaporated to give a residue that was subjected to column chromatography (petroleum ether:CH 2 Cl 2 = 1:1) to obtain compound 3 (5.83 g, 94%). 1 NaN 3 (117 mg, 1.8 mmol) was added to a solution of compound 3 (500 mg, 1.5 mmol) in dry CH 3 CN (10 mL) at room temperature. The reaction mixture was heated to 60 • C for 3 h. Upon completion, the reaction was cooled down to room temperature, and then the reaction mixture was diluted with water and extracted three times with CH 2 Cl 2 . The organic layers were dried with dry Na 2 SO 4 , they were filtered, and the filtrate was removed under reduced pressure. The residue was purified by column chromatography (petroleum ether:CH 2 Cl 2 = 1:1) to give compound 4 (375 mg, 85%). 1 (5) Compound 2 (0.24 mmol) and copper(I) thiophene-2-carboxylate (0.08 mmol) were added to a solution of compound 4 (0.2 mmol) in CH 2 Cl 2 (10 mL). After being stirred at room temperature for 6 h, the reaction mixture was concentrated, and the residue was purified by silica gel column chromatography to give compound 5. Iodomethane (2.0 mmol) or benzyl bromide (1.0 mmol) and anhydrous K 2 CO 3 (2.0 mmol) were added to a solution of 4 (0.2 mmol) in DMF (3 mL). The reaction was heated to 60 • C for 4 h. Upon completion, the reaction was cooled down to room temperature, and then the reaction mixture was diluted with water and extracted three times with CH 2 Cl 2 . The organic layers were dried with dry Na 2 SO 4 , they were filtered, and the filtrate was removed under reduced pressure. The residue was purified by column chromatography to give compound 6a or 6b. | 2022-09-23T15:33:57.128Z | 2022-09-20T00:00:00.000 | {
"year": 2022,
"sha1": "eb4433ccde8e677bcf41d00d70f9986876730b21",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/19/6153/pdf?version=1663900607",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb3646a35853919632b78dc60654f92b2fb7b91c",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": []
} |
255957229 | pes2o/s2orc | v3-fos-license | Two forms of CX3CL1 display differential activity and rescue cognitive deficits in CX3CL1 knockout mice
Fractalkine (CX3CL1; FKN) is a chemokine expressed by neurons that mediates communication between neurons and microglia. By regulating microglial activity, CX3CL1 can mitigate the damaging effects of chronic microglial inflammation within the brain, a state that plays a major role in aging and neurodegeneration. CX3CL1 is present in two forms, a full-length membrane-bound form and a soluble cleaved form (sFKN), generated by a disintegrin and metalloproteinase (ADAM) 10 or 17. Levels of sFKN decrease with aging, which could lead to enhanced inflammation, deficits in synaptic remodeling, and subsequent declines in cognition. Recently, the idea that these two forms of CX3CL1 may display differential activities within the CNS has garnered increased attention, but remains unresolved. Here, we assessed the consequences of CX3CL1 knockout (CX3CL1-/-) on cognitive behavior as well as the functional rescue with the two different forms of CX3CL1 in mice. CX3CL1-/- mice were treated with adeno-associated virus (AAV) expressing either green fluorescent protein (GFP), sFKN, or an obligate membrane-bound form of CX3CL1 (mFKN) and then subjected to behavioral testing to assess cognition and motor function. Following behavioral analysis, brains were collected and analyzed for markers of neurogenesis, or prepared for electrophysiology to measure long-term potentiation (LTP) in hippocampal slices. CX3CL1−/− mice showed significant deficits in cognitive tasks for long-term memory and spatial learning and memory in addition to demonstrating enhanced basal motor performance. These alterations correlated with deficits in both hippocampal neurogenesis and LTP. Treatment of CX3CL1−/− mice with AAV-sFKN partially corrected changes in both cognitive and motor function and restored neurogenesis and LTP to levels similar to wild-type animals. Treatment with AAV-mFKN partially restored spatial learning and memory in CX3CL1−/− mice, but did not rescue long-term memory, or neurogenesis. These results are the first to demonstrate that CX3CL1 knockout causes significant cognitive deficits that can be rescued by treatment with sFKN and only partially rescued with mFKN. This suggests that treatments that restore signaling of soluble forms of CX3CL1 may be a viable therapeutic option for aging and disease.
Keywords: Fractalkine, CX3CL1, Neuroinflammation, Microglia, Aging, Neurodegeneration, Cognition, Long-term potentiation, Neurogenesis

Background

Microglia, myeloid-derived macrophage-like cells, are the resident immune cells of the central nervous system (CNS). Upon detection of an environmental disturbance, microglia become activated and may transition through several activation states in order to eliminate immunogens and/or promote neuroprotection. Indeed, microglia are highly active cells that exist in multiple states, constantly surveying the environment and responding to signals from neurons and other glial cells [1,2]. As pleiotropic cells, microglia constantly sense and respond differently to their environment depending on the stimuli they encounter. When responding to an insult, microglia release a number of factors that can be inflammatory or cytotoxic, such as interleukin (IL)-1β, IL-6, tumor necrosis factor (TNF)-α, and a number of reactive oxygen and nitrogen species, to neutralize immunogens. Although beneficial in the short term, when prolonged, this form of microglial activation can also promote cellular stress and compromise the health of neural tissue, leading to neuronal damage, neurodegeneration, and subsequent deficits in cognitive or motor function.
To prevent a state of chronic inflammation, microglia are regulated by a number of factors including CD200, CD22, CD47, and fractalkine. Fractalkine (CX3CL1; FKN) is a chemokine that is expressed predominately by neurons in the CNS, with lower levels of expression occurring in astrocytes, and it has been shown to play an important neuroprotective role through the regulation of microglial activity [3,4]. Specifically, CX3CL1 is known to decrease microglial production of inflammatory mediators by binding to its cognate receptor (CX3CR1) on the surface of microglia (reviewed by [5]). CX3CL1 is constitutively expressed as a membrane-bound protein, which can be cleaved by proteases, such as a disintegrin and metalloproteinase (ADAM) 10 or 17, to generate a diffusible, soluble form of the protein (sFKN) [6,7]. Under normal physiological conditions in the periphery, membrane-bound CX3CL1 has been shown to play a role in the recruitment and adhesion of infiltrating leukocytes [8]. sFKN, on the other hand, acts as both a chemoattractant involved in cellular migration and a neuroprotective signaling molecule that helps maintain microglia in a quiescent state [9,10]. While membrane-bound CX3CL1 may also bind receptors on the microglial cell surface, recent research suggests that the anti-inflammatory activity of CX3CL1 in the brain is mediated predominately by sFKN [3,9,11].
The importance of the CX3CL1/CX3CR1 signaling axis for aging and disease is also well documented. Mice lacking the CX3CR1 receptor display enhanced susceptibility to inflammatory challenge with lipopolysaccharide (LPS) and 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), chemical models of systemic inflammation and Parkinson's disease, respectively [12]. Similarly, CX3CR1 knockout also accelerates disease progression in the G93A mutant Cu,Zn-superoxide dismutase (SOD1) mouse model of amyotrophic lateral sclerosis [12]. We have also shown that mice deficient in CX3CR1 demonstrate several behavioral deficits in both hippocampal- and cerebellar-dependent learning tasks, such as contextual fear conditioning and rotarod, respectively [13]. This effect correlates with increased levels of IL-1β and other inflammatory cytokines and can be blunted by blockade of IL-1β signaling. Furthermore, postnatal CX3CR1−/− mice demonstrate significant abnormalities in synaptic function, such as an increased number of synapses, indicative of impairments in synaptic pruning by microglia, and electrophysiological disturbances consistent with immature synaptic function and impaired development of functional circuits [14]. These alterations in brain connectivity carry into adulthood and correlate with behavioral deficits [15]. Levels of CX3CL1 have been shown to be reduced in aged animals and in CSF from Alzheimer's disease patients [16,17]. This suggests that perturbations in the CX3CL1/CX3CR1 axis may have a significant impact on cognitive function in aging and disease, and that restoring CX3CL1/CX3CR1 signaling may be a viable therapeutic approach to treat these conditions. Indeed, intrastriatal administration of the chemokine domain of CX3CL1 significantly preserved tyrosine hydroxylase immunoreactivity following injection of 6-hydroxydopamine, and this preservation was accompanied by decreased numbers of activated microglia [18]. Moreover, overexpression of the soluble form of CX3CL1 has been shown to mitigate neurodegeneration induced by overexpression of alpha-synuclein in a model of Parkinson's disease, and has also been shown to reduce tau phosphorylation and improve cognition in a model of tauopathy [11,19,20]. The benefits of CX3CL1 administration have also been observed more generally in aged animals, as treatment with recombinant CX3CL1, comprising only the chemokine domain, significantly increased neurogenesis and reduced microglial activation in aged rats [16]. The therapeutic potential of CX3CL1 in Alzheimer's disease is more complex, however; knockout of CX3CR1 appears to be detrimental in mouse models of tauopathy, but beneficial in amyloid-expressing mice [21-24].
To date, most studies assessing CX3CL1 activity as it relates to aging and disease have focused predominately on sFKN signaling or signaling of a truncated soluble form containing only the chemokine domain; however, more recent work has begun to take into account the activity of full-length membrane-bound CX3CL1 as well [9,11,25]. Emerging evidence suggests that these two forms may display differential effects on aging and neurodegenerative disease processes, and that these differences in activity may be highly context-specific. For example, it has been reported that, similar to CX3CR1 deficiency, amyloid precursor protein/presenilin-1 (APP/PS1) mice lacking CX3CL1 showed reduced amyloid pathology, and expression of obligate sFKN in this context did not improve or exacerbate this effect [25]. Furthermore, deficiency in membrane-bound CX3CL1 specifically resulted in enhanced microglial activation and tau phosphorylation. On the other hand, in models of Parkinson's disease, it was shown that membrane-bound CX3CL1 had no effect on overall disease progression and pathology while sFKN administration significantly improved motor function and preserved TH-positive neurons in the substantia nigra [9,11]. These studies suggest that membrane-bound CX3CL1 and sFKN may display differing degrees of therapeutic efficacy depending on disease context; however, the individual roles of membrane-bound CX3CL1 and sFKN on motor function and cognition in a normal physiological setting have not yet been elucidated and may shed light on the therapeutic benefits and functions of each form of CX3CL1.
In this study, we confirm that CX3CL1 deficiency was sufficient to induce cognitive impairment. Furthermore, we used CX3CL1 knockout (CX3CL1 −/− ) mice to evaluate the differential abilities of both a mutated, obligate membrane-bound form of CX3CL1 (mFKN) and sFKN to rescue deficits caused by suppressed CX3CL1 signaling. To our knowledge, our results are the first to demonstrate that loss of CX3CL1 leads to significant cognitive impairment, in good agreement with our previous observations for CX3CR1, and to define the differing activities of mFKN and sFKN in this context.
AAV Production
Recombinant AAV serotype PhP.B (rAAV) vectors expressing either mFKN or sFKN (GI 114431260) were cloned using PCR on cDNA isolated from mouse brain as previously described [9]. sFKN protein expressed using this vector comprises amino acids 1-336, which includes both the chemokine domain and the mucin-like stalk. mFKN protein expressed using this vector comprises all 395 amino acids of the full-length CX3CL1 protein with arginine-to-alanine substitutions at positions 337 and 338 (R337A and R338A) to prevent cleavage by ADAM10/17 into the soluble form. The vector includes the AAV2 terminal repeats and the chicken beta-actin (CBA) promoter for mRNA transcription of mFKN and sFKN. Both sFKN and mFKN were tagged with hemagglutinin (HA) at the C-terminus for easy detection of exogenous protein. rAAV particles were quantified using a modified dot blot protocol as described by Burger and Nash [26] and are expressed as vector genomes (vg)/mL.
Animals
The following work using animals was conducted according to the NIH guidelines for animal use and was approved by the IACUC of the University of South Florida College of Medicine. CX3CL1−/− mice (Merck Sharp and Dohme Corp.) were obtained with a material transfer agreement and maintained in a colony with WT littermates at the University of South Florida. Genotyping was outsourced to a commercially available service (Transnetyx Inc., Cordova, TN). Only male mice were used for experiments. Mice were treated at 2 months of age with a single tail vein injection of rAAV expressing either green fluorescent protein (GFP), mFKN, or sFKN at a concentration of 7 × 10^12 vg/mL, followed by behavioral assessment between 3 and 4 months of age. Animals were maintained on a 12-h light/dark cycle with ad libitum access to food and water.
Behavioral testing

Open field
General locomotion and exploratory activity were assessed by observing the animals in a novel environment. The mice were placed in a 40-cm square box and allowed to navigate the space for 15 min. Distance traveled around the arena was recorded and quantified with ANY-Maze software (Stoelting).
Rotarod
Mice were tested on the rotarod apparatus (UgoBasile) in order to examine innate motor coordination and motor learning and performance over time. This machine consists of a 3-cm diameter textured rod, initially rotating at 4 rpm when the mice begin testing, and accelerating to 40 rpm within 5 min. Trials were terminated once the mouse fell off the beam onto a platform, denoting the latency to fall. The mice completed 4 trials a day with 30-min intertrial interval, over 2 days for a total of 8 trials.
Fear conditioning
Mice were placed in a 17 × 17-cm plexiglass box within a soundproofed chamber and allowed to acclimate to the novel environment for 3 min, while locomotion and baseline freezing behavior were evaluated by the ANY-Maze software. Freezing behavior in the program was operationally defined as 2 or more consecutive seconds of motionless inactivity and was corroborated by hand-scored data. White noise and a gentle fan were used inside the chamber to reduce attention to noise cues that may be present outside of the testing room. A 30-s 90-dB tone (cued stimulus) was delivered to the apparatus at 3 min, followed by a foot shock (0.5 mA), which was administered during the last 2 s of the tone, so that the two stimuli terminated simultaneously. This stimulus block was repeated at 5 min, and freezing behavior was assessed until the 7th minute of the test.
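As a rough illustration of the operational definition above (freezing = at least 2 consecutive seconds of motionless inactivity), the sketch below scores a per-frame motion trace; the frame rate, motion threshold, and function names are assumptions for illustration, not ANY-Maze settings.

```python
import numpy as np

def freezing_fraction(motion, fps=30, motion_thresh=0.05, min_bout_s=2.0):
    """Score freezing as runs of >= min_bout_s consecutive seconds of
    sub-threshold motion, mirroring the operational definition above.
    `motion` is a per-frame motion index; fps and both thresholds are
    illustrative assumptions."""
    still = np.asarray(motion) < motion_thresh
    min_len = int(min_bout_s * fps)
    frozen = np.zeros(still.shape, dtype=bool)
    run_start = None
    for i, s in enumerate(np.append(still, False)):  # sentinel closes the last run
        if s and run_start is None:
            run_start = i
        elif not s and run_start is not None:
            if i - run_start >= min_len:
                frozen[run_start:i] = True
            run_start = None
    return frozen.mean()  # fraction of frames spent freezing
```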
Mice were then re-exposed to the context at 24 h and 2 weeks post-training in order to observe the freezing behavior after training. Association of the context with an aversive stimulus is mediated by the hippocampus, and freezing behavior is representative of hippocampal function. Mice were placed back into the conditioning chamber with all of the cues present during training, but received no foot shock. Freezing behavior was monitored and analyzed at 14 days post training.
Mice were also tested in an altered-context test in which all salient cues were removed except for the tone. Freezing behavior was observed in a novel context with and without the tone to determine the association between the tone and the shock. This association is mediated by the amygdala.
Barnes maze
Barnes maze testing was conducted after 6 pm, during the animals' dark cycle, and under bright lights to encourage mobility throughout testing. Moreover, a continuous 2500-Hz tone was played throughout both the training and testing phases to further encourage mobility and escape. Behavior in the arena was recorded and analyzed using the ANY-Maze software. Training consisted of 4 trials daily for 4 days. Each trial was terminated when the mouse completely escaped through the target hole or after 3 min had elapsed. After each trial, the mouse was allowed to remain in the covered target hole for 30 s, and the tone was silenced upon entry into the escape chamber. A probe trial was conducted 24 h after training by replacing the escape chamber with a false bottom to prevent entry. Recall was assessed by comparing the number of entries into each hole around the maze.
Electrophysiology
Following behavior experiments, a cohort of mice (n = 4/group) was euthanized, and the hippocampus was dissected out for field potential LTP recordings as previously described [27,28]. The brain was removed rapidly and placed in oxygenated ice-cold cutting solution with the following composition: 110 mM sucrose, 60 mM NaCl, 3 mM KCl, 1.25 mM NaH2PO4, 28 mM NaHCO3, 5 mM glucose, 0.6 mM ascorbate, 0.5 mM CaCl2, and 7 mM MgCl2. Hippocampal slices were cut to 400 μm thickness on a vibratome with the bath filled with ice-cold cutting solution. The slices were then incubated in 50% cutting solution and 50% artificial cerebrospinal fluid (ACSF) with the following composition: 125 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4, 25 mM NaHCO3, 25 mM glucose, 2 mM CaCl2, and 1 mM MgCl2, with constant 95% O2/5% CO2, for 10 min to equilibrate. Slices were then transferred onto the nylon mesh of the recording chamber (Automate Scientific) and allowed to recover for 1 h with a constant supply of oxygenated ACSF (rate 1 mL/min). The recording chamber was maintained at 30°C ± 0.5°C. Following recovery, the Schaffer collaterals arising from the CA3 region of the hippocampus were stimulated using electrodes made of formvar-coated nichrome wire delivering biphasic stimulus pulses (1-15 V, 100 μs duration, 0.05 Hz). The delivery of the stimulation was controlled by the pClamp 9.0 software (Molecular Devices) using a stimulus isolator (model 2200; A-M Systems) and a Digidata 1322A interface (Molecular Devices). Field excitatory post-synaptic potentials (fEPSPs) were recorded from the stratum radiatum in the CA1 region of the hippocampus using glass electrodes filled with ACSF (resistance 1-4 MΩ). The signals obtained were amplified using a differential amplifier (model 1800; A-M Systems), filtered at 1 kHz, and digitized at 10 kHz. The input-output curve was determined by stimulating the slices in 0.5 V increments from 0 to 15 V. Fifty percent of the maximum fEPSP, determined using the input-output curve, was used as the baseline stimulus intensity in all the experiments. Paired-pulse facilitation (PPF) was performed to measure short-term plasticity; the slices were stimulated at 50% of the maximum intensity with sequential pulses every 2 s up to 30 s. For LTP field potential recordings, the slices were stimulated with a theta burst protocol containing two trains of four-pulse bursts at 100 Hz separated by 20 s, repeated six times with an intertrain interval of 10 s. For analysis, the last 20 min of the fEPSP recording was averaged and compared.
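A minimal sketch of the analysis step described last (fEPSP slopes expressed relative to the pre-induction baseline, with the final 20 min averaged for comparison); variable names, the t = 0 induction convention, and window lengths are illustrative assumptions.

```python
import numpy as np

def ltp_percent_baseline(slopes, times_min, last_window_min=20.0):
    """Express fEPSP slopes as percent of the pre-induction baseline and
    average the final `last_window_min` minutes, as in the analysis above.
    Assumes theta burst delivery at t = 0, so sweeps with negative times
    form the baseline period."""
    slopes = np.asarray(slopes, dtype=float)
    times_min = np.asarray(times_min, dtype=float)
    baseline = slopes[times_min < 0].mean()          # mean pre-induction slope
    percent = 100.0 * slopes / baseline              # percent of baseline
    tail = percent[times_min >= times_min.max() - last_window_min]
    return percent, tail.mean()                      # trace and last-20-min mean
```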
Tissue isolation
Mice received lethal intraperitoneal injections of sodium pentobarbital. Once deeply anesthetized, they were transcardially perfused with 0.1 M PBS. The brains were removed and then cut in half along the midline. One hemisphere was chilled in ice-cold PBS before areas of interest were rapidly dissected out and flash frozen in liquid nitrogen. The other hemisphere was drop-fixed in 4% paraformaldehyde for 24 h, which was then replaced with a 30% sucrose solution for cryoprotection.
ELISA quantification of CX3CL1
Flash-frozen brain tissue, excluding the hippocampus and cortex, was homogenized using an electric tissue homogenizer in a 1:10 weight-to-volume ratio of ice-cold RIPA buffer (Millipore) containing protease inhibitors and EDTA (Pierce). Lysates were then centrifuged at 10,000×g at 4°C for 15 min, and the supernatant was collected and analyzed to determine protein concentration using a BCA assay (Pierce). Lysates were then analyzed using an ELISA kit with antibodies directed towards the N-terminus of the CX3CL1 protein, such that all forms of the CX3CL1 protein were detected. Samples were loaded in triplicate at a concentration of 50 μg total protein per well and incubated overnight at 4°C. Following incubation, the manufacturer's (R&D Systems) suggested protocol was followed. Optical density values for each ELISA plate were measured on a plate reader (BioTek), and sample concentrations for total CX3CL1 were calculated based upon the supplied standard curve.
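The final step, reading sample concentrations off the supplied standard curve, can be sketched as below; log-linear interpolation is one simple choice and only an assumption here (R&D Systems kits are often fit with a four-parameter logistic instead).

```python
import numpy as np

def conc_from_standards(od_samples, od_standards, conc_standards):
    """Estimate CX3CL1 concentrations from standard-curve optical densities
    by log-linear interpolation (an illustrative choice, not the kit's
    prescribed fit). Standards are assumed nonzero and monotonic in OD."""
    od_standards = np.asarray(od_standards, dtype=float)
    order = np.argsort(od_standards)
    log_conc = np.interp(np.asarray(od_samples, dtype=float),
                         od_standards[order],
                         np.log(np.asarray(conc_standards, dtype=float))[order])
    return np.exp(log_conc)
```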
Immunohistochemistry
Immunohistochemical analysis was conducted using 40-μm-thick sections, taking every 6th sagittal section of the right hemisphere. In order to ensure sampling of the entire hippocampus, the sections collected included those immediately medial and lateral to the hippocampus. Free-floating sections were treated with 3% hydrogen peroxide in methanol to remove endogenous peroxidase activity. They were blocked with 10% normal serum corresponding to the species each secondary antibody was raised in (horse or goat) with 0.1% Triton X-100 diluted in PBS. Primary antibodies to doublecortin (DCX; Santa Cruz, SC-8066; 1:150) or Ki67 (Novocastra, NCL#Ki67p; 1:500) were diluted in 3% serum with 0.1% Triton X-100 and incubated overnight at 4°C, oscillating at 60 rpm. Secondary biotinylated antibodies were diluted 1:500 and 1:1000 in PBS containing 3% normal serum with 0.1% Triton X-100 for DCX and Ki67, respectively, and incubated for 2 h at room temperature. Avidin-biotin complex was used to amplify signal (Vector Labs), and diaminobenzidine (Sigma) was used for color development.
Stereology
Cells labeled positively for either DCX or Ki67 were quantified in the subgranular zone (SGZ) of the dentate gyrus (DG) on a Nikon Eclipse 600 microscope using the optical fractionator method of unbiased stereology and Stereo Investigator software (MicroBrightField). A grid size of 225 × 225 and a counting frame of 150 × 150 were employed for DCX, and a grid size and counting frame of 100 × 100 for Ki67 staining, such that at least 200 cells were counted for each animal. Anatomical structures were outlined using a 10×/0.45 objective, and cells were counted using a 40×/0.95 objective.
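The optical fractionator estimate behind these counts amounts to dividing the raw count by the product of the sampling fractions; the sketch below uses the section sampling fraction of 1/6 implied by the sectioning above, derives the area fraction from the frame and grid sizes, and assumes a thickness sampling fraction of 1 (the raw count shown is made up).

```python
def optical_fractionator(raw_count, ssf, asf, tsf=1.0):
    """Unbiased stereology estimate: N = sum(Q-) / (ssf * asf * tsf),
    where ssf, asf, tsf are the section, area, and thickness sampling
    fractions, respectively."""
    return raw_count / (ssf * asf * tsf)

# Illustrative DCX example: every 6th section, 150 x 150 frame on a 225 x 225 grid.
n_dcx = optical_fractionator(raw_count=210, ssf=1 / 6, asf=(150 * 150) / (225 * 225))
print(round(n_dcx))  # ~2835 cells for this made-up raw count
```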
Results
CX3CL1 is expressed in brain tissue following tail vein injection of AAV

Approximately 2 months following tail vein injection of the rAAV vectors expressing GFP or CX3CL1, brain tissue was isolated and homogenized to assess expression of CX3CL1. Expression of exogenous sFKN or mFKN in the brain was evaluated using ELISA. Expression was observed in CX3CL1−/− animals that received rAAV expressing sFKN or mFKN and in WT animals, while CX3CL1−/− mice that received rAAV expressing GFP did not display any positive immunoreactivity (Fig. 1).
Mice receiving AAV-mFKN demonstrated expression at levels similar to those observed in WT mice expressing endogenous CX3CL1, while mice that received the vector expressing sFKN displayed levels that were approximately twice those of WT controls.

CX3CL1−/− mice demonstrate impaired associative learning and long-term memory that is disparately impacted by mFKN and sFKN

Contextual and cued fear conditioning was used to assess associative learning and long-term memory in CX3CL1−/− mice. Mice were subjected to a standard two-shock training protocol as described above, and freezing behavior was assessed before and after each shock for a duration of 7 min. All groups of mice performed similarly in this training paradigm (Fig. 2a). As we determined that no differences in contextual memory between WT mice and CX3CL1−/− mice were present after 24 h (data not shown), mice were allowed 2 weeks to rest, during which no testing took place. After 2 weeks, mice were placed back into the context in which they initially experienced a foot shock and monitored for freezing behavior over the course of 3 min. At this later time point, CX3CL1−/− mice that received GFP showed significantly less freezing behavior than their WT counterparts (Fig. 2b). Mice that were administered the AAV-mFKN also showed a similar decrease in freezing behavior compared to WT controls. On the other hand, mice administered AAV-sFKN showed increased freezing behavior that was not significantly different from that of WT mice, although the increase over CX3CL1−/− mice treated with GFP did not reach significance (Fig. 2b).
Freezing in response to a conditioned stimulus in a novel context was also observed after 2 weeks. Mice were allowed 3 min to acclimate to the novel context before presentation of the conditioned stimulus (tone) for three additional minutes. All mice displayed normal freezing behavior in response to the conditioned stimulus, and no significant differences were observed between groups (Fig. 2c).
CX3CL1−/− mice display impairments in the Barnes maze test for spatial memory that are corrected by treatment with mFKN and sFKN

Cognition was also assessed by evaluating spatial learning and memory in WT and CX3CL1−/− mice using a Barnes maze task. Mice were subjected to 4 days of training during which they were taught to locate an escape pod beneath one of the holes on the perimeter of the maze (Fig. 3b). All mice learned the task at a similar rate regardless of treatment or genotype (Fig. 3a).
On the fifth day, the escape pod was replaced by a false bottom, and mice were allowed to explore the maze for 1 min. The number of times each mouse poked its head into each hole was then observed. WT mice explored the target hole more often than any other hole in the maze, with the number of head pokes decreasing as distance from the target hole increased. mFKN- and sFKN-treated animals displayed a similar trend (Fig. 3b, c). CX3CL1−/− mice that were administered GFP displayed a trend towards decreased exploration of the target hole compared to WT animals, although this difference did not reach significance (Fig. 3c). Mice that received AAV-GFP explored the three adjacent holes on either side of the target (zones − 1 and + 1) the most and made significantly more head pokes in the − 1 zone than their WT counterparts. Mice that were treated with either mFKN or sFKN showed a trend towards increased exploration of the target zone, similar to WT mice, and showed significantly less exploration in the − 1 (mFKN and sFKN) and + 1 (mFKN) zones than mice receiving GFP (Fig. 3b, c).
CX3CL1 −/− mice display enhanced motor performance that is differentially affected by mFKN and sFKN
To determine if CX3CL1 knockout or treatment with sFKN or mFKN affected motor function, mice were assessed by accelerating rotarod over a period of 2 days. On day 1 (trials 1-4), no difference in motor performance was observed between WT mice and CX3CL1−/− mice administered AAV expressing either GFP or sFKN; however, CX3CL1−/− mice treated with AAV expressing mFKN demonstrated increased latency to fall in comparison to WT mice (trials 1-4) and GFP mice (trial 1). On day two of testing (trials 5-8), CX3CL1−/− mice treated with GFP displayed significantly longer latency to fall in comparison to WT controls (trials 5 and 7). Treatment with mFKN and sFKN differentially impacted motor coordination, with mFKN significantly increasing latency to fall in comparison to WT mice (trials 5 and 7) and showing a trend towards enhanced coordination compared to CX3CL1−/− mice treated with GFP. On the other hand, CX3CL1−/− mice treated with sFKN performed similarly to WT controls (Fig. 4a). Latency to fall was then compared for trials 1 and 8 to assess overall improvement in motor coordination, as an indicator of motor learning. All mice showed significant improvement in motor coordination and appeared to learn the task at similar rates despite increased baseline motor coordination in mFKN-treated animals (Fig. 4b).
Mice were also subjected to an open field test to determine if CX3CL1 knockout impacted spontaneous locomotion, which may in turn impact rotarod performance and fear conditioning. All mice showed similar levels of exploration in the open field arena as measured by distance traveled, suggesting that neither CX3CL1 knockout nor treatment with sFKN or mFKN affects spontaneous activity (Fig. 4c).

Fig. 1 FKN is expressed in brain tissue following tail vein injection of adeno-associated viral vector (AAV). Quantitative assessment of FKN expression in brain homogenates. Animals were sacrificed, and brain tissue was collected for analysis by ELISA. CX3CL1−/− mice treated with GFP displayed negligible levels of FKN expression in brain, while mice injected with AAV expressing mFKN displayed expression levels similar to those of WT mice. Animals receiving AAV expressing sFKN demonstrated high levels of expression in comparison to WT controls. Data were analyzed by one-way ANOVA (n = 4, F(3, 13) = 22.85; p < 0.0001) with Tukey's post hoc test. *p < 0.05, **p < 0.01, and ***p < 0.001

We have previously shown that CX3CR1−/− mice show cognitive dysfunction in hippocampal-dependent tasks that correlates with decreased LTP [13]. Given the similar cognitive deficiencies observed in CX3CL1−/− mice in the current study, we next evaluated LTP to determine if these mice display altered synaptic plasticity. LTP was induced in hippocampal slices by theta burst stimulation (five four-pulse bursts at 200 Hz separated by 20 s) following baseline recordings conducted over a period of 20 min. Changes in fEPSP slope were then monitored for a duration of 60 min and expressed as a percent of baseline. In comparison to WT controls, CX3CL1−/− animals treated with AAV-GFP showed impaired LTP and returned to baseline levels, as shown in the analysis of the last 20 min (Fig. 5c). In contrast, stimulation of the Schaffer collaterals resulted in a strong LTP response in slices from the sFKN-treated CX3CL1−/− mice, similar to that observed for WT animals (Fig. 5a, c). Though the slices from the mFKN-treated CX3CL1−/− mice showed robust LTP maintenance on average, the individual fEPSP signals recorded from the stratum radiatum of CA1 were quite variable, as noted in the large error bars and shown in Fig. 5b, suggesting that sFKN and mFKN may not have equivalent actions. The efficacy of neurotransmitter release, measured using an input-output curve, was closer to that of WT mice for sFKN-treated CX3CL1−/− mice, whereas the mFKN-treated CX3CL1−/− mice were similar to GFP-injected CX3CL1−/− mice. These data indicate that CX3CL1−/− mice show significant impairment in hippocampal plasticity, and treatment with both sFKN and mFKN rescued the impairment, although to different extents.
CX3CL1−/− mice show deficits in neurogenesis that are rescued by treatment with sFKN, but not mFKN

We have previously shown that CX3CR1−/− mice show significant deficits in hippocampal neurogenesis. To evaluate if CX3CL1−/− mice show a similar decrease, we used unbiased stereology to quantify proliferating cells within the SGZ of the DG as indicated by Ki67 staining. CX3CL1−/− mice receiving AAV expressing GFP showed a significant decrease in Ki67-positive (Ki67+) cells in comparison to WT controls. This deficit was partially rescued by treatment with sFKN, while treatment with mFKN had no effect (Fig. 6a). The number of DCX-positive (DCX+) cells in the SGZ was also quantified by stereology as a marker of neurogenesis. Similar to the trend observed with Ki67, mice that were administered GFP showed significantly fewer DCX+ cells in comparison to WT controls. Treatment with mFKN had no effect on this deficit, while treatment with sFKN restored neurogenesis to levels comparable to WT controls (Fig. 6b).
Discussion
Our previous work was the first to demonstrate that CX3CR1 plays a physiological role in cognition and memory [13]. In the current study, we further demonstrate that genetic knockout of its ligand, CX3CL1, also produces physiological consequences similar to those observed with receptor knockout. In particular, CX3CL1 knockout significantly impaired hippocampal-dependent learning and memory processes in both a fear conditioning task for long-term memory and a Barnes maze task to assess spatial learning and memory (Figs. 2 and 3). Interestingly, in the Barnes maze, the deficit was a widening of the spatial search pattern during the probe trial, indicating impaired spatial mapping that is correlated with activity in the dentate gyrus [31]. Impaired memory correlated with deficits in hippocampal neurogenesis, as demonstrated by a significant decrease in both Ki67+ and DCX+ neurons within the SGZ of CX3CL1−/− animals in comparison to their WT counterparts (Fig. 6). Moreover, recordings from isolated hippocampal slices indicated that CX3CL1−/− mice display a marked deficit in LTP maintenance (Fig. 5). Collectively, these data suggest that cognitive dysfunction in these mice may be the result of impaired synaptic plasticity and reduced neurogenesis. While the phenotype caused by CX3CL1 knockout seems to be somewhat different from that demonstrated by CX3CR1−/− mice, these data are in excellent agreement with our previous findings and those of others.

Fig. 2 a Mice were subjected to a standard two-shock training protocol for fear conditioning. Shocks were administered at 3 min and 5 min, and freezing behavior was assessed before and after each shock for a duration of 7 min. All groups of mice performed similarly in this training paradigm. Data were analyzed using two-way ANOVA for repeated measures (n = 8, F(18, 288) = 1.486, p = 0.09). b After 2 weeks, mice were placed back into the context in which they initially experienced a foot shock and monitored for freezing behavior over the course of 3 min. CX3CL1−/− mice that received AAV-GFP showed significantly less freezing behavior than their WT counterparts. Mice that were administered the AAV-mFKN also showed a similar trend in decreased freezing behavior. Mice administered AAV-sFKN showed a trend towards increased freezing behavior that was more similar to WT mice than mice treated with GFP; however, this trend did not reach significance. Data were analyzed by one-way ANOVA (n = 8, F(3, 65) = 4.210; p < 0.010) with Tukey's test. *p ≤ 0.05. c Mice were allowed 3 min to acclimate in a novel context before presentation of the conditioned stimulus (tone) for three additional minutes. Freezing was monitored for the duration of the test. All mice displayed normal freezing behavior in response to the conditioned stimulus, and no significant differences were observed between groups. Data were analyzed by two-way ANOVA for repeated measures (n = 8, F(3, 44) = 1.132; p = 0.347)

Fig. 3 CX3CL1−/− mice display impairments in the Barnes maze test for spatial memory that are corrected by treatment with mFKN and sFKN. a Mice were assessed for spatial learning and memory using a Barnes maze task. Mice were subjected to 4 days of training during which they were taught to locate an escape pod beneath one of the holes on the perimeter of the maze. All mice learned the task at a similar rate regardless of treatment or genotype. Data were analyzed by two-way ANOVA for repeated measures (n = 8, F(9, 153) = 1.876, p = 0.059). b On the fifth day, the escape pod was replaced by a false bottom, and mice were allowed to explore the maze for 1 min. The figure shows a map of the Barnes maze zones and representative heat maps for each treatment group. The maze was split into eight zones for scoring purposes as indicated. Representative heatmaps of one animal from each treatment group were generated to illustrate the different exploration strategies observed for each group during the probe test, and indicate the relative amount of time spent at different locations around the maze. c Quantitative assessment of the number of times each mouse poked its head into each hole of the Barnes maze. WT mice explored the target hole more often than any other hole in the maze, with the number of head pokes decreasing as distance from the target hole increased. mFKN- and sFKN-treated animals displayed a similar trend. CX3CL1−/− mice that were administered GFP displayed a trend towards decreased exploration of the target hole compared to WT animals, although this difference did not reach significance. Mice that received AAV expressing GFP explored the three adjacent holes on either side of the target (zones − 1 and + 1) the most and made significantly more head pokes in the − 1 zone than their WT counterparts. Mice that were treated with either mFKN or sFKN showed a trend towards increased exploration of the target zone, similar to WT mice, and showed significantly less exploration in the − 1 (mFKN and sFKN) and + 1 (mFKN) zones than mice receiving GFP. Data were analyzed using two-way ANOVA for repeated measures (n = 8, F(21, 329) = 2.482, p < 0.001) with Tukey's test. *p < 0.05, **p < 0.01, and ***p < 0.001 for each respective zone
In addition to examining the consequences of CX3CL1 knockout on cognitive function, we also evaluated the ability of both mFKN and sFKN to rescue the effects of CX3CL1 deficiency. Using rAAV to express different forms of the CX3CL1 protein, we determined that sFKN showed a definitive trend towards improving performance in a contextual fear conditioning task for long-term memory, while mFKN treatment did not alter the effects of FKN knockout in this test (Fig. 2). This may suggest that sFKN activity is particularly important for executing hippocampal-dependent associative learning and memory tasks. Further, when spatial learning and memory were assessed by Barnes maze, we similarly noted that animals treated with sFKN displayed a search pattern when seeking the target hole during the probe trial that was significantly altered relative to that of their GFP-treated counterparts. Indeed, these mice tended to spend more time in the target zone and significantly less time searching zones adjacent to the target when compared to GFP-treated mice (Fig. 3), signifying that sFKN activity may be broadly important for hippocampal-dependent memory tasks. Additionally, although it did not improve performance in contextual fear conditioning, administration of rAAV expressing mFKN did enhance performance in the Barnes maze in a manner similar to that of sFKN (Fig. 3). This observation could indicate a specific role for mFKN in spatial memory formation, such as an ability to enhance function within the dentate gyrus, independent of neurogenesis.
While both sFKN and mFKN appear to display some activity in enhancing hippocampal-dependent functions in CX3CL1−/− mice, it is noteworthy that only sFKN appears to rescue hippocampal neurogenesis in these animals. Treatment with AAV-sFKN partially restored expression of both Ki67 and DCX in the SGZ, indicative of increased neurogenesis, suggesting that its ability to mitigate cognitive deficits in CX3CL1−/− mice could be dependent on this activity. However, mFKN did not appear to have any effect on neurogenesis in this region despite its ability to improve spatial memory (Fig. 6). In spite of this discrepancy, both sFKN and mFKN appear to partially restore LTP (Fig. 5). While it has been established that neurogenesis can play an important role in facilitating LTP in mice, it has also been observed that mice deficient in hippocampal neurogenesis develop compensatory mechanisms to sustain LTP [32]. Although mFKN-treated CX3CL1−/− mice showed LTP maintenance on average, examination of individual signals (Fig. 5b) revealed varied fEPSPs post theta burst, indicating inconsistent maintenance of LTP. Collectively, this could suggest that mFKN may play a role in facilitating the formation of such compensatory mechanisms in CX3CL1−/− mice in order to partially restore LTP; however, mFKN signaling may not be sufficient in and of itself to reliably sustain this function.

Fig. 4 CX3CL1−/− mice display enhanced motor performance that is differentially affected by mFKN and sFKN. a Mice were assessed by accelerating rotarod over a period of 2 days. On day 1 (trials 1-4), no difference in motor performance was observed between WT mice and CX3CL1−/− mice administered AAV expressing either GFP or sFKN; however, CX3CL1−/− mice treated with AAV expressing mFKN demonstrated superior motor performance in comparison to WT mice (trials 1-4) and GFP mice (trial 1). On day two of testing (trials 5-8), mice treated with GFP displayed significantly enhanced motor performance in comparison to WT controls (trials 5 and 7). Treatment with mFKN and sFKN differentially impacted motor coordination, with mFKN significantly enhancing motor performance in comparison to WT mice (trials 5-8) and showing a trend towards enhanced coordination compared to CX3CL1−/− mice treated with GFP. On the other hand, CX3CL1−/− mice treated with sFKN displayed a trend towards decreased motor coordination in comparison to mice treated with GFP, and performed more similarly to WT controls. Data were analyzed using two-way ANOVA for repeated measures (n = 8, F(3, 101) = 4.826, p < 0.01) with Tukey's test to compare differences between groups. *p < 0.05 and **p < 0.01 in comparison to WT controls for each trial. b All mice showed significant improvement in motor coordination between trials 1 and 8. Data were analyzed by two-way ANOVA for repeated measures (n = 8, F(1, 101) = 84.61, p < 0.0001) with Sidak's test to compare differences between trials within each treatment group. *p < 0.05 in comparison to trial 1. Slopes are not significantly different, as determined by linear regression (F(3, 202) = 0.8176, p = 0.485), suggesting that all mice learned the task at a similar pace. c Mice were observed using an open field paradigm to assess spontaneous locomotion. No differences in total distance travelled were observed between any of the treatment groups. Data were analyzed by one-way ANOVA (n = 8, F(3, 57) = 2.632, p = 0.0586)
While our observations on the effect of CX3CL1 knockout and restoration on cognitive function seem to be in good agreement with our prior findings in CX3CR1−/− mice, the effects of CX3CL1 knockout on motor learning and function were different from those previously observed in CX3CR1−/− mice. Indeed, CX3CL1 knockout appeared to enhance motor performance as assessed by a rotarod task, in direct contrast to knockout of CX3CR1, which significantly impairs motor function [13]. Despite improved motor performance, however, motor learning, which was measured as the rate at which mice improved in the rotarod task over time, did not show any differences when comparing CX3CL1−/− mice to their WT counterparts (Fig. 4b). Indeed, although CX3CL1−/− animals showed a slight trend towards increased motor learning, this difference did not reach significance when compared to the other treatment groups.

Fig. 5 c Mean slope of the fEPSP was calculated for the last 20 min of the monitoring period, confirming that CX3CL1−/− mice treated with GFP show significant deficits in LTP that are partially, but significantly, ameliorated by treatment with either mFKN or sFKN. Data were analyzed by one-way ANOVA (n = 11, F(1.5, 15.03) = 1696, p < 0.0001) with Tukey's test. ***p < 0.001. d The input/output curves for CX3CL1−/− mice treated with sFKN were closer to those of WT mice, whereas those for CX3CL1−/− mice treated with mFKN were closer to those of CX3CL1−/− mice treated with GFP. Slopes are significantly different as determined by linear regression (n = 30, F(3, 112) = 170, p < 0.0001), suggesting that the presynaptic input and postsynaptic output for CX3CL1−/− mice treated with GFP and mFKN differed from those of WT and CX3CL1−/− mice treated with sFKN
The effects of restoring mFKN and sFKN signaling on rotarod performance were also evaluated in CX3CL1 −/− mice. Interestingly, all mice, regardless of treatment, learned the task at a similar rate, indicating no differences in motor learning between groups (Fig. 4b); however, mFKN and sFKN treatment had opposing effects on the overall motor performance. While mFKN-treated mice behaved more similarly to GFP-treated CX3CL1 −/− mice, displaying significantly enhanced motor performance and endurance in comparison to WT mice, particularly on day two of the rotarod task, treatment with AAV-sFKN altered motor performance such that it was more similar to that of WT controls (Fig. 4a). Open field observations did not reveal any significant differences in spontaneous locomotor activity between mFKN-treated, sFKN-treated, CX3CL1 −/− , or WT mice, suggesting that improvements in motor performance were not due to hyperactivity (Fig. 4c). While the process underlying these unexpected results is not clear, it is possible that enhanced motor performance may be due to peripheral effects on tissues outside the CNS such as skeletal muscle or circulating macrophages. Additionally, it is also possible that loss of CX3CL1 activity during development in specific areas of the brain involved in motor function, such as the striatum and cerebellum, may have consequences for overall motor performance. More specifically, CX3CL1 is expressed in high levels within the striatum [33], and loss of this protein may impact the development and maturation of neural pathways associated with motor function. In this context, restoring sFKN signaling may normalize the function of such pathways, while mFKN signaling does not appear to influence this process. In similar fashion, it has recently been observed that CX3CL1 signaling is necessary for activity-dependent synaptic remodeling to occur in the cortex following sensory lesion induced by whisker cutting in mice, and that inhibition of ADAM10, one of the proteases responsible for cleavage of full-length CX3CL1 into sFKN in the brain, impairs this process [30]. This supports the idea that membrane-bound forms of CX3CL1 may not play a significant role in the remodeling of synaptic circuits, a process that could be involved here in normalizing motor function, and suggests instead that this synaptic remodeling may be predominately mediated by sFKN signaling.
Collectively, our data suggest that CX3CL1 signaling plays an important role in maintaining normal cognitive function in mice and demonstrate that a loss of CX3CL1 signaling could underlie the development of cognitive impairment. Moreover, we demonstrate that membrane-bound CX3CL1 and sFKN display differential activities on cognitive function that could affect their suitability as therapeutic targets. As perturbed CX3CL1 signaling has been observed in both aging and disease, the CX3CL1/CX3CR1 axis has garnered significant attention as a potential target for the treatment of several neurodegenerative diseases including Alzheimer's disease, Parkinson's disease, ALS, and multiple sclerosis, as well as ischemic stroke [9, 11, 12, 17-25, 29, 34-36]. However, until recently, the importance of considering the differential functions of different forms of the CX3CL1 protein had not been taken into account. With this in mind, our data could have significant implications for the development of treatments targeting the CX3CL1/CX3CR1 axis, as they suggest that sFKN has a much greater and more consistent impact on mitigating cognitive deficits than membrane-bound forms of the CX3CL1 protein, and thus may be a better therapeutic candidate for treating diseases with a significant cognitive component. However, there are still significant gaps in our knowledge regarding the functions of these two forms of the CX3CL1 protein, as well as limitations to the current study that must be taken into account.

Fig. 6 CX3CL1−/− mice show deficits in neurogenesis that are rescued by treatment with sFKN, but not mFKN. a Unbiased stereology was used to quantify proliferating cells within the subgranular zone (SGZ) of the dentate gyrus as indicated by staining for Ki67. CX3CL1−/− mice receiving AAV expressing GFP showed a significant decrease in Ki67-positive (Ki67+) cells in comparison to WT controls. This deficit was partially rescued by treatment with sFKN, while treatment with mFKN had no effect. Data were analyzed by one-way ANOVA (n = 6, F(3, 29) = 26.41, p < 0.0001) with Tukey's test. b The number of DCX-positive (DCX+) cells in the SGZ was also quantified by stereology as a marker of neurogenesis. Mice that were administered GFP showed significantly fewer DCX+ cells in comparison to WT controls. Treatment with mFKN had no effect on this deficit, while treatment with sFKN restored neurogenesis to levels comparable to WT controls. Data were analyzed by one-way ANOVA (n = 6, F(3, 30) = 20.14, p < 0.0001) with Tukey's test. **p < 0.01 and ***p < 0.001
For example, the differences observed between sFKN and mFKN activity in hippocampal-dependent behavioral tasks could be due to differences in the overall availability of the two proteins. As CX3CL1 is not expressed by every neuron following rAAV-PhP.B administration, the inability of mFKN to freely diffuse to nearby neurons could significantly impact its activity in comparison to sFKN. Moreover, although equal concentrations of mFKN and sFKN rAAV were administered to the mice, the expression of the sFKN protein we observed was approximately 2-fold higher than that observed for mFKN (Fig. 1). Thus, it cannot be ruled out that higher expression levels of sFKN contributed to its greater and more consistent effects on cognitive function, neurogenesis, and LTP in comparison to mice treated with mFKN, which showed expression comparable to WT levels (Fig. 1). This could also suggest that over-expression of sFKN is required to induce positive alterations in cognition in the context of aging or disease; however, further study is needed to test this hypothesis. Furthermore, it is important to note that levels of sFKN detected in WT brain tissue are higher than those observed for membrane-bound forms of the protein, suggesting that sFKN could be more biologically active even at physiological concentrations [9]. As the ELISA used to detect FKN from brain homogenates in this study detects both forms of the protein present in WT animals, the data presented here for WT mice include the total levels of both sFKN and membrane-bound CX3CL1, with sFKN being the predominant species, while the data for both sFKN- and mFKN-treated animals represent levels of only one form of the protein. Therefore, it is likely that mFKN levels in rAAV-treated animals are also over-expressed in comparison to those of membrane-bound CX3CL1 found in WT animals, though not to the extent of sFKN. With this consideration in mind, it is possible that even overexpression of mFKN may not be sufficient to positively and consistently influence cognition in the context of CX3CL1 depletion. Furthermore, while sFKN shows appeal as a therapeutic agent given its broad range of effects on cognitive processes, several studies to date have highlighted the need to better characterize the effects of sFKN, which contains the entirety of the mucin-like stalk, versus truncated versions of the soluble CX3CL1 ligand comprising only the chemokine domain [37]. These studies have indicated that different versions of the soluble ligand can produce vastly different outcomes in the context of both neuropathic pain and Alzheimer's disease, likely linked to their ability to elicit different changes in microglial phenotype [19,20,25,38,39]. This distinction has likely been a source of significant variation among studies evaluating the potential of the CX3CL1/CX3CR1 axis as a target for therapeutic development and illustrates the need for further study to define the differential roles of all forms of the CX3CL1 protein.
Conclusions
Our work demonstrates a role for CX3CL1 in both cognitive and motor function and suggests that loss of CX3CL1 signaling is sufficient to induce cognitive impairment that is likely linked to deficits in both hippocampal neurogenesis and LTP. Further, we demonstrate that sFKN and an obligate membrane-bound form of CX3CL1, mFKN, display differential activities in the context of cognitive function. However, while the current study provides evidence that sFKN more consistently and reliably influences cognition than membrane-bound forms of the CX3CL1 protein, and illustrates the therapeutic potential of sFKN as a target for aging and disease, additional considerations are required for future development. | 2023-01-18T14:28:19.480Z | 2020-05-14T00:00:00.000 | {
"year": 2020,
"sha1": "e5a7c1c851a26087f5c39f5778cc5341a6a036cd",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12974-020-01828-y",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "e5a7c1c851a26087f5c39f5778cc5341a6a036cd",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": []
} |
9996851 | pes2o/s2orc | v3-fos-license | A Modified KZ Reduction Algorithm
The Korkine-Zolotareff (KZ) reduction has been used in communications and cryptography. In this paper, we modify a very recent KZ reduction algorithm proposed by Zhang et al., resulting in a new algorithm, which can be much faster and more numerically reliable, especially when the basis matrix is ill conditioned.
I. INTRODUCTION
For any full column rank matrix $A \in \mathbb{R}^{m \times n}$, the lattice $L(A)$ generated by $A$ is defined by

$L(A) = \{Az : z \in \mathbb{Z}^n\}$.   (1)

The columns of $A$ form a basis of $L(A)$. For any $n \geq 2$, $L(A)$ has infinitely many bases, and any two of them are connected by a unimodular matrix $Z$, i.e., $Z \in \mathbb{Z}^{n \times n}$ and $\det(Z) = \pm 1$. Specifically, for each given lattice basis matrix $A \in \mathbb{R}^{m \times n}$, $AZ$ is also a basis matrix of $L(A)$ if and only if $Z$ is unimodular; see, e.g., [1]. The process of selecting a good basis for a given lattice, given some criterion, is called lattice reduction. In many applications, it is advantageous if the basis vectors are short and close to orthogonal [1]. For more than a century, lattice reduction has been investigated by many people, and several types of reductions have been proposed, including the KZ reduction [2], the Minkowski reduction [3], the LLL reduction [4] and Seysen's reduction [5].
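As a small sanity check of the unimodularity condition just stated (integer entries and det(Z) = ±1), here is a sketch; the tolerance handling is an assumption to cope with floating point input.

```python
import numpy as np

def is_unimodular(Z, tol=1e-9):
    """Return True if Z is an integer matrix with det(Z) = +-1, i.e.,
    if A @ Z generates the same lattice as A."""
    Z = np.asarray(Z, dtype=float)
    integral = np.all(np.abs(Z - np.round(Z)) < tol)
    return bool(integral and abs(abs(np.linalg.det(Z)) - 1.0) < tol)
```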
Lattice reduction plays an important role in many research areas, such as cryptography (see, e.g., [6]), communications (see, e.g., [1], [7]) and GPS (see, e.g., [8]), where, for a given $y \in \mathbb{R}^m$, the closest vector problem (CVP) and/or the shortest vector problem (SVP) need to be solved:

$\min_{z \in \mathbb{Z}^n} \|y - Az\|_2$,   (2)

$\min_{z \in \mathbb{Z}^n \setminus \{0\}} \|Az\|_2$.   (3)

The most commonly used lattice reduction is the LLL reduction, which can be computed in polynomial time under some conditions and has some nice properties; see, e.g., [9] for some recent results. In some communication applications, one needs to solve a sequence of CVPs, where the $y$'s are different but the $A$'s are identical. In this case, instead of using the LLL reduction, one usually uses the KZ reduction [2], since sphere decoding for solving these CVPs becomes more efficient, although the KZ reduction costs more than the LLL reduction.
In this paper, we will propose a new KZ reduction algorithm, which improves the basis expansion method proposed in [13]. Like [13], we assume floating point arithmetic with fixed precision is used in the computation. Numerical results indicate that the modified algorithm can be much faster and more numerically reliable.
The rest of the paper is organized as follows. In section II, we introduce the LLL and KZ reductions. In section III, we introduce our modified KZ reduction algorithm. Some simulation results are given in section IV to show the efficiency and numerical reliability of our new algorithm. Finally, we summarize this paper in section V.
In this paper, boldface lowercase letters denote column vectors and boldface uppercase letters denote matrices. For a matrix $A$, let $a_{ij}$ be its $(i, j)$ element and $A_{i:j,k:\ell}$ be the submatrix containing elements with row indices from $i$ to $j$ and column indices from $k$ to $\ell$. Denote $e_1 = [1, 0, \ldots, 0]^T$, whose dimension depends on the context.
II. LLL AND KZ REDUCTIONS

Assume that $A$ in (1) has the QR factorization

$A = Q \begin{bmatrix} R \\ 0 \end{bmatrix}$,   (4)

where $Q \in \mathbb{R}^{m \times m}$ is orthogonal and $R \in \mathbb{R}^{n \times n}$ is upper triangular. After the QR factorization of $A$, the LLL reduction [4] reduces the matrix $R$ in (4) to $\bar{R}$ through the QRZ factorization:

$R Z = \bar{Q} \bar{R}$,   (5)

where $\bar{Q} \in \mathbb{R}^{n \times n}$ is orthogonal, $Z \in \mathbb{Z}^{n \times n}$ is unimodular, and $\bar{R} \in \mathbb{R}^{n \times n}$ is upper triangular and satisfies the following conditions:

$|\bar{r}_{ik}| \leq \tfrac{1}{2} |\bar{r}_{ii}|, \quad 1 \leq i < k \leq n$,   (6)

$\delta\, \bar{r}_{k-1,k-1}^2 \leq \bar{r}_{k-1,k}^2 + \bar{r}_{kk}^2, \quad k = 2, 3, \ldots, n$,   (7)

where $\delta$ is a constant satisfying $1/4 < \delta \leq 1$. The matrix $AZ$ is said to be LLL reduced. Equations (6) and (7) are referred to as the size-reduced condition and the Lovász condition, respectively. Similarly, after the QR factorization of $A$, the KZ reduction reduces the matrix $R$ in (4) to $\bar{R}$ in (5), where $\bar{R}$ satisfies (6) and

$\bar{r}_{kk}^2 = \min_{x \in \mathbb{Z}^{n-k+1} \setminus \{0\}} \|\bar{R}_{k:n,k:n}\, x\|_2^2, \quad k = 1, 2, \ldots, n$.   (8)

The matrix $AZ$ is said to be KZ reduced. Note that if a matrix is KZ reduced, it must be LLL reduced for $\delta = 1$.
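A direct transcription of conditions (6) and (7) as a checker for an upper-triangular R may help fix the definitions; this is a sketch, with a small tolerance added for floating point.

```python
import numpy as np

def is_lll_reduced(R, delta=0.75, eps=1e-12):
    """Check the size-reduced condition (6) and the Lovasz condition (7)
    on an upper-triangular matrix R (0-based indexing)."""
    n = R.shape[1]
    for k in range(1, n):
        for i in range(k):
            if abs(R[i, k]) > 0.5 * abs(R[i, i]) + eps:        # condition (6)
                return False
        if delta * R[k - 1, k - 1] ** 2 > R[k - 1, k] ** 2 + R[k, k] ** 2 + eps:  # (7)
            return False
    return True
```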
III. A MODIFIED KZ REDUCTION ALGORITHM
In this section, we first introduce the KZ reduction algorithm given in [13], then propose a modified algorithm.
A. The KZ Reduction Algorithm in [13]

From the definition of the KZ reduction, the reduced matrix $\bar{R}$ satisfies both (6) and (8). If the QRZ factorization in (5) gives $\bar{R}$ satisfying (8), then we can easily apply size reductions to $\bar{R}$ so that (6) holds. Thus, in the following, we will only show how to obtain $\bar{R}$ such that (8) holds.
The algorithm needs $n - 1$ steps. Suppose that at the end of step $k - 1$, one has found an orthogonal matrix $Q^{(k-1)} \in \mathbb{R}^{n \times n}$, a unimodular matrix $Z^{(k-1)} \in \mathbb{Z}^{n \times n}$ and an upper triangular $R^{(k-1)} \in \mathbb{R}^{n \times n}$ such that

$R Z^{(k-1)} = Q^{(k-1)} R^{(k-1)}$,   (9)

where for $i = 1, \ldots, k - 1$,

$(r^{(k-1)}_{ii})^2 = \min_{x \in \mathbb{Z}^{n-i+1} \setminus \{0\}} \|R^{(k-1)}_{i:n,i:n}\, x\|_2^2$.   (10)

At step $k$, like [1], [13] uses the LLL-aided Schnorr-Euchner search strategy [14] to solve the SVP:

$x^{(k)} = \arg\min_{x \in \mathbb{Z}^{n-k+1} \setminus \{0\}} \|R^{(k-1)}_{k:n,k:n}\, x\|_2$.   (11)

Then, unlike other KZ reduction algorithms, [13] finds the unimodular matrix by expanding $R^{(k-1)}_{k:n,k:n} x^{(k)}$ to a basis for the lattice $\{R^{(k-1)}_{k:n,k:n} x : x \in \mathbb{Z}^{n-k+1}\}$. Specifically, [13] first constructs a unimodular matrix

$\bar{Z}^{(k)} \in \mathbb{Z}^{(n-k+1) \times (n-k+1)}$ with $\bar{Z}^{(k)} e_1 = x^{(k)}$,   (12)

and then finds an orthogonal matrix $\bar{Q}^{(k)} \in \mathbb{R}^{(n-k+1) \times (n-k+1)}$ such that

$R^{(k-1)}_{k:n,k:n} \bar{Z}^{(k)} = \bar{Q}^{(k)} \bar{R}^{(k)}$,   (13)

with $\bar{R}^{(k)}$ upper triangular. Based on (9) and (13), we define

$Q^{(k)} = Q^{(k-1)} \operatorname{diag}(I_{k-1}, \bar{Q}^{(k)})$,   (14)

$Z^{(k)} = Z^{(k-1)} \operatorname{diag}(I_{k-1}, \bar{Z}^{(k)})$,   (15)

$R^{(k)} = \begin{bmatrix} R^{(k-1)}_{1:k-1,1:k-1} & R^{(k-1)}_{1:k-1,k:n} \bar{Z}^{(k)} \\ 0 & \bar{R}^{(k)} \end{bmatrix}$.   (16)

Here $Q^{(k)}$ is orthogonal, $R^{(k)}$ is upper triangular and $Z^{(k)}$ is unimodular. Then, combining (9) and (13), we obtain

$R Z^{(k)} = Q^{(k)} R^{(k)}$.   (17)

At the end of step $n - 1$, we get $R^{(n-1)}$, which is just $\bar{R}$ in (5). To see why (8) holds, note that the first column of $R^{(k-1)}_{k:n,k:n} \bar{Z}^{(k)}$ is $R^{(k-1)}_{k:n,k:n} x^{(k)}$, so $|r^{(k)}_{kk}| = \|R^{(k-1)}_{k:n,k:n} x^{(k)}\|_2$ is exactly the minimum in (11); since the later steps modify only the trailing submatrices, this property is preserved at every subsequent step, and (8) holds at the end.
In the following, we introduce the process, proposed in [13], of obtaining the unimodular matrix Z̃^{(k)} in (12). (There are some other methods to find Z̃^{(k)}; see, e.g., [15, p. 13].) Suppose that z = [p, q]^T ∈ Z^2 and gcd(p, q) = d; then there exist two integers a and b such that ap + bq = d. Obviously,

U = [ p/d  −b ;  q/d  a ]  (20)

is unimodular, and it is easy to verify that U^{−1} z = d e_1.
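A small Python sketch of this construction may help; ext_gcd and make_U are our own hypothetical helper names, and normalizing the gcd to be nonnegative is an added assumption:

def ext_gcd(p, q):
    # Extended Euclid: returns (d, a, b) with a*p + b*q = d = gcd(p, q) >= 0.
    if q == 0:
        return (abs(p), 1 if p >= 0 else -1, 0)
    d, a1, b1 = ext_gcd(q, p % q)
    return (d, b1, a1 - (p // q) * b1)

def make_U(p, q):
    # The 2-by-2 unimodular matrix of (20): U^{-1} [p, q]^T = d e_1.
    d, a, b = ext_gcd(p, q)
    return [[p // d, -b], [q // d, a]], d

U, d = make_U(12, 8)                            # U = [[3, 1], [2, 1]], d = 4
assert [U[0][0] * d, U[1][0] * d] == [12, 8]    # U maps d*e_1 back to z

Since det U = (ap + bq)/d = 1, U is indeed unimodular, and its first column is z/d.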
From (11), we can conclude that gcd(x^{(k)}_1, . . . , x^{(k)}_{n−k+1}) = 1; otherwise, dividing x^{(k)} by its gcd would give a shorter nonzero lattice vector. After getting x^{(k)}, Z̃^{(k)} can be obtained by applying a sequence of 2-by-2 unimodular transformations of the form (20) to transform x^{(k)} to e_1, i.e., (Z̃^{(k)})^{−1} x^{(k)} = e_1 (see (12)). Specifically, they eliminate the entries of x^{(k)} from the last one to the second one. The resulting algorithm for finding Z̃^{(k)} is described by Algorithm 1, and the corresponding KZ reduction algorithm is described by Algorithm 2.
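The whole expansion step can be sketched in Python (our own illustration of the idea behind Algorithm 1, not the authors' code); the exact integer arithmetic via dtype=object and the helper names are assumptions, and the q == 0 early skip anticipates the optimization discussed in Section III-B:

import numpy as np

def ext_gcd(p, q):
    # Iterative extended Euclid: a*p + b*q = d = gcd(p, q) >= 0.
    a0, b0, a1, b1 = 1, 0, 0, 1
    while q != 0:
        t = p // q
        p, q = q, p - t * q
        a0, a1 = a1, a0 - t * a1
        b0, b1 = b1, b0 - t * b1
    return (p, a0, b0) if p >= 0 else (-p, -a0, -b0)

def expand_to_basis(x):
    # Unimodular Z with Z e_1 = x (equivalently Z^{-1} x = e_1), assuming
    # the gcd of the entries of x is 1. Entries are eliminated from the
    # last one up to the second one, mirroring Algorithm 1.
    x = list(x)
    n = len(x)
    Z = np.eye(n, dtype=object)                # exact integer arithmetic
    for i in range(n - 2, -1, -1):
        p, q = x[i], x[i + 1]
        if q == 0:                             # gcd(p, 0) = p, so U = I_2
            continue
        d, a, b = ext_gcd(p, q)
        U = np.array([[p // d, -b], [q // d, a]], dtype=object)
        Z[:, i:i + 2] = Z[:, i:i + 2] @ U      # accumulate the 2x2 factor
        x[i], x[i + 1] = d, 0
    return Z

Z = expand_to_basis([3, -7, 0, 5])
assert list(Z[:, 0]) == [3, -7, 0, 5]          # first column of Z is the input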
Here we make a remark. Algorithm 2 does not show how to form and update Q, as Q may not be needed in applications. If an application does need Q, we can obtain it from the QR factorization of AZ after Z has been computed, which is more efficient.
B. Proposed KZ Reduction Algorithm
In this subsection, we modify Algorithms 1 and 2 to get a new KZ reduction algorithm, which can be much faster and more numerically reliable.
First, we make an observation on Algorithm 2 and make a simple modification. At step k, if x^{(k)} = ±e_1 (see (11)), then obviously the basis expansion algorithm, i.e., Algorithm 1, is not needed, and we can move on to step k + 1. We will come back to this point later.
In the following, we make some major modifications. Before doing so, we recall the following basic fact, which can be found in the literature: for any two integers p and q, the time complexity of finding two integers a and b such that ap + bq = d ≡ gcd(p, q) by the extended Euclid algorithm is bounded by O(log_2(min{|p|, |q|})) if fixed precision is used.
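A quick way to see this bound at work is to count the division steps; consecutive Fibonacci numbers realize the classical worst case (a standard fact, not a claim from [13]):

import math

def euclid_steps(p, q):
    # Counts the division steps of the plain Euclid algorithm on |p|, |q|.
    p, q, steps = abs(p), abs(q), 0
    while q:
        p, q = q, p % q
        steps += 1
    return steps

print(euclid_steps(832040, 514229))   # F_30, F_29: 28 steps
print(math.log2(514229))              # ~18.97, so steps = O(log min{|p|, |q|})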
In Algorithm 2, after finding x^{(k)} (see (11)), Algorithm 1 is used to expand R^{(k−1)}_{k:n,k:n} x^{(k)} to a basis for the lattice {R^{(k−1)}_{k:n,k:n} x : x ∈ Z^{n−k+1}}. There are some serious drawbacks to this approach. Sometimes, especially when A is ill-conditioned, some of the entries of x^{(k)} may be so large that they fall beyond the range of consecutive integers in a floating-point system (i.e., integer overflow occurs), very likely producing wrong results. Even if integer overflow does not occur in storing x^{(k)}, a large x^{(k)} may still cause problems. One problem is that the computational time of the extended Euclid algorithm will be long, according to the complexity result just mentioned. The second problem is that updating Z and R in lines 4 and 5 of Algorithm 1 may cause numerical issues: large x_i and x_{i+1} are likely to produce large elements in U, so integer overflow may occur in updating Z, and large rounding errors are likely to occur in updating R. Finally, R is likely to become more ill-conditioned after the updating, making the search process for solving the SVPs in later steps expensive.
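In IEEE double precision, the largest consecutive representable integer (the "flint" in MATLAB's warning quoted in Section IV) is 2^53, so the overflow hazard can be illustrated as follows (a sketch; the threshold test is ours, not the authors'):

FLINTMAX = 2.0 ** 53   # every integer in [0, 2^53] is exact in a float64

def overflows_double(x):
    # True if some entry of x cannot be stored exactly as a float64 integer.
    return any(abs(v) > FLINTMAX for v in x)

print(overflows_double([691989751, 2]))   # False: still in the exact range
print(overflows_double([2**53 + 1]))      # True: silently rounds as a float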
In order to deal with the large-x^{(k)} issue, we look at line 4 in Algorithm 2, which uses the LLL-aided Schnorr-Euchner search strategy to solve the SVP. Specifically, at step k, to solve (11), the LLL reduction algorithm is applied to R^{(k−1)}_{k:n,k:n}:

(Q̄^{(k)})^T R^{(k−1)}_{k:n,k:n} Z̄^{(k)} = R̄^{(k)},  (21)

where R̄^{(k)} is LLL-reduced. Then, one solves the reduced SVP by the Schnorr-Euchner search strategy:

z^{(k)} = arg min_{z ∈ Z^{n−k+1}\{0}} ||R̄^{(k)} z||_2.  (22)

The solution of the original SVP is x^{(k)} = Z̄^{(k)} z^{(k)} (see (23)), since the two problems range over the same lattice {R^{(k−1)}_{k:n,k:n} x : x ∈ Z^{n−k+1}}. Thus, before doing the expansion, we update Q^{(k)}, R^{(k)} and Z^{(k)} by using the LLL reduction (21) (see (24)-(25)). Now we do the expansion. We construct a unimodular matrix Ẑ^{(k)} ∈ Z^{(n−k+1)×(n−k+1)} whose first column is z^{(k)}, and find an orthogonal matrix Q̂^{(k)} that restores the upper triangular form. Then, we update Q̌^{(k)}, Ř^{(k)} and Ž^{(k)} as before (cf. (14)-(16)), and we obtain the QRZ factorization of R in the same form as (17) at step k. Unlike x^{(k)} in (11), which can be arbitrarily large, z^{(k)} in (22) can be bounded. Actually, by using the LLL reduction properties and the fact that z^{(k)} solves the SVP for the LLL-reduced R̄^{(k)}, we can show the following result, Theorem 1, which bounds the entries of z^{(k)} in terms of the dimension and the parameter δ in the LLL reduction (see (7)).
Because of space limitations, we omit its proof. Now we discuss the benefits of the modification. First, since R̄ is LLL reduced, there is a very good chance, especially when R is well-conditioned and n is small (say, smaller than 30), that z^{(k)} = ±e_1 (see (22)); this was observed in our simulations. As stated before, the basis expansion is not needed in this case, and we can move on to the next step. Second, the entries of z^{(k)} are bounded according to Theorem 1, while the entries of x^{(k)} are not; our simulations indicated that the former are smaller, often much smaller, than the latter. Thus, the serious problems with using x^{(k)} for basis expansion mentioned before can be significantly mitigated by using z^{(k)} instead.
To further reduce the computational cost, we look at the basis expansion process at step k of Algorithm 2. After z^{(k)} is obtained, Algorithm 1 is used to find a sequence of 2-by-2 unimodular matrices of the form (20) to eliminate its entries from the last one to the second one. We noticed in our simulations that z^{(k)} often has a lot of zeros, and we would like to exploit this to make the basis expansion process more efficient. Specifically, if z = [p, q]^T ∈ Z^2 with q = 0, then gcd(p, q) = p and U = I_2 in (20); thus, in this case we do not need to do anything and can move on to eliminate the next element of z^{(k)}. The resulting modified KZ reduction algorithm is described in Algorithm 3.
IV. NUMERICAL TESTS
In this section, we compare the performance of the proposed KZ reduction algorithm, Algorithm 3, with Algorithm 2. All the numerical tests were done in MATLAB R2014b on a desktop computer with an Intel(R) Xeon(R) CPU W3530 @ 2.80 GHz × 4. The MATLAB code for Algorithm 2 was provided by Dr. Wen Zhang, one of the authors of [13]. The parameter δ in the LLL reduction was chosen to be 1.
We first give an example to show that Algorithm 2 may not even give an LLL-reduced matrix (for δ = 1), while Algorithm 3 does.
Example. Let A be the 5 × 5 test matrix of this example (its explicit entries are omitted here). [Algorithm 3, lines 4-7: compute the LLL reduction of R_{k:n,k:n} (see (21)) and update R, Z (see (24)-(25)); solve min_{z ∈ Z^{n−k+1}\{0}} ||R̄_{k:n,k:n} z||_2^2 by the Schnorr-Euchner search strategy to get the solution z; if z = ±e_1, then k = k + 1.] It is easy to check that the R produced by Algorithm 2 is not LLL reduced (for δ = 1), since r_33^2 > r_34^2 + r_44^2. Moreover, the matrix Z obtained by Algorithm 2 is not unimodular, since its determinant is −3244032, which was computed exactly with Maple. The reason for this is that A is ill conditioned (its condition number in the 2-norm is about 1.0 × 10^5) and some of the entries of x^{(k)} (see (11)) are too large, causing severe inaccuracy in updating R and integer overflow in updating Z (see lines 4-5 in Algorithm 1). In fact, x^{(4)} = [691989751, 2]^T.
The condition numbers in the 2-norm of R(k : 5, k : 5) obtained at the end of step k = 1, 2, 3, 4 of Algorithm 2 are respectively 2.9 × 10^8, 1.5 × 10^15, 6.2 × 10^18 and 1.1 × 10. A question one may raise is: if A is updated by the unimodular matrices produced in the process (i.e., Z is not explicitly formed), is AZ LLL reduced? We found that it still is not, by looking at the R-factor of the QR factorization of AZ. For Algorithm 3, although we cannot verify whether R is KZ reduced, we can verify that it is indeed LLL reduced. All of the solutions of the four SVPs are e_1 (note that the dimensions are different); thus, no basis expansion is needed. The condition numbers in the 2-norm of R(k : 5, k : 5) obtained at the end of step k = 1, 2, 3, 4 of Algorithm 3 are respectively 2.1, 1.9, 1.6 and 1.4.

Now we consider two more general cases for comparing the efficiency of the two algorithms:
• Case 1. A = randn(n, n), where randn(n, n) is a MATLAB built-in function that generates a random n × n matrix whose entries follow the normal distribution N(0, 1).
• Case 2. A = U D V^T, where U and V are random orthogonal matrices obtained from the QR factorization of random matrices generated by randn(n, n), and D is an n × n diagonal matrix with d_ii = 10^{3(n/2−i)/(n−1)}.

In the numerical tests, for each case and each fixed n we performed 200 runs, generating 200 different A's. Figures 1 and 2 display the average CPU time over the 200 runs versus n = 2 : 2 : 20 for Cases 1 and 2, respectively. In both figures, "KZ" and "Modified KZ" refer to Algorithms 2 and 3, respectively. Figure 2 gives results for only n = 2 : 2 : 10, because for n ≥ 12 Algorithm 2 often did not terminate within ten hours.
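The two test cases can be reproduced, for example, as follows in Python/NumPy (a sketch mirroring the MATLAB description above; the seed is arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def case1(n):
    # Case 1: entries i.i.d. standard normal, like MATLAB's randn(n, n).
    return rng.standard_normal((n, n))

def case2(n):
    # Case 2: A = U D V^T with random orthogonal U, V and
    # d_ii = 10^{3(n/2 - i)/(n - 1)}; by construction cond_2(A) = 10^3.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    i = np.arange(1, n + 1)
    D = np.diag(10.0 ** (3 * (n / 2 - i) / (n - 1)))
    return U @ D @ V.T

print(np.linalg.cond(case2(10)))   # ~1.0e3, independent of n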
In Case 1, Algorithm 2 sometimes did not terminate within half an hour; we simply discarded such an instance and performed one more run. The number of such instances was much smaller than for Case 2.
From Figures 1 and 2, we can see that Algorithm 3 is faster than Algorithm 2 for Case 1 and much faster for Case 2. Also, when running Algorithm 2 we got the warning message "Warning: Inputs contain values larger than the largest consecutive flint. Result may be inaccurate" several times, for both Cases 1 and 2, but this never happened with Algorithm 3. Thus Algorithm 3 is more numerically reliable.
V. SUMMARY AND COMMENT
In this paper, we modified the KZ reduction algorithm proposed by Zhang et al. in [13]. The resulting algorithm can be much faster and more numerically reliable.
The modified basis expansion strategy proposed in this paper can be applied in designing algorithms for the Minkowski reduction (see, e.g., [13]) and the block KZ reduction (see [12] and [16]). | 2015-04-20T08:29:35.000Z | 2015-04-20T00:00:00.000 | {
"year": 2015,
"sha1": "eb028341f7ce95dab16588cfa42e1fbc9755c029",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.05086",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "3eae0b2d5bf9f43f07ff04cb7c0a33b6de9db6bd",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
239469818 | pes2o/s2orc | v3-fos-license | Does Oxygen Content Play a Role in Spontaneous Closure of Perimembranous Ventricular Septal Defects?
(1) Background: the impact of a series of laboratory parameters (haemoglobin, haematocrit, foetal haemoglobin, peripheral oxygen saturation, iron, transferrin, ferritin, and albumin) on the spontaneous healing of perimembranous ventricular septal defects was tested. (2) Methods: one hundred and seven patients were enrolled in the study (57% males; mean age 2.1 ± 0.4 years) and were subsequently subdivided into two groups: self-healing (n = 36) and in need of intervention (n = 71). Self-healing subjects were defined on the basis of an absence of residual shunts at color Doppler across the previous defect. (3) Results: no statistically significant differences were reported in the size of perimembranous ventricular septal defects between the two groups (p = n.s.). Conversely, the prevalence of anaemia was significantly higher in those requiring intervention than in the self-healing group (p < 0.03), while haemoglobin, iron, ferritin, and albumin levels were lower (p < 0.001, p < 0.05, p < 0.02, p < 0.007, respectively). In multivariable linear regression analysis, only haemoglobin and albumin were found to be associated with spontaneous closure (p < 0.005 and p < 0.02, respectively). In multiple logistic regression analysis, haemoglobin independently increased the probability of self-healing of perimembranous ventricular septal defects (p = 0.03). All patients needing an interventional closure of perimembranous ventricular septal defects presented with haemoglobin < 12.7 g/dL. (4) Conclusion: the self-resolution of perimembranous ventricular septal defects seems to rely on numerous factors, including oxygen content, which is likely to promote cell proliferation as well as tissue regeneration. Haemoglobin blood concentration seems to influence the natural history of perimembranous ventricular septal defects, and improvement of anaemia by supplementation of iron intake might represent a simple and reliable method to promote self-healing.
Introduction
Ventricular septal defect, manifested either as an isolated event or in conjunction with other cardiac abnormalities in syndromic and non-syndromic patients, is by far the most frequently encountered congenital heart defect (CHD) after bicuspid aortic valve in clinical practice, accounting for approximately 20% of all diagnoses when isolated [1,2]. Numerous classifications for ventricular septal defects have been proposed, although it is indisputable that perimembranous ventricular septal defect is the most frequently observed subtype in children (Figure 1), whilst a muscular presentation is most common in newborns.
At birth, approx. 8% of detected ventricular septal defects are perimembranous, i.e., involving both the membranous septum and the adjacent muscular area [3]. Diagnosis of ventricular septal defect is usually simple, being mainly based on echocardiography, a widely used technique that also facilitates follow-up of the defect [4].
Numerous studies have demonstrated self-healing of perimembranous ventricular septal defects in 10-30% of cases, thus avoiding the potential peri-procedural complications associated with repairs performed by means of conventional surgery or transcatheter-occluding devices [5,6].
Anaemia is frequently observed in both children and adults affected by all types of CHD, including ventricular septal defects, at all ages [7,8]. Renal impairment, abnormal iron metabolism, malnutrition, and circulatory congestion contribute independently toward the occurrence of anaemia in CHD [9].
This study aimed to assess the impact of a series of laboratory parameters on spontaneous full healing of perimembranous ventricular septal defect: haemoglobin, haematocrit, foetal haemoglobin, peripheral oxygen saturation, iron, transferrin, ferritin, and albumin blood levels.
Patients in the Study
In this retrospective study, medical records of patients affected by perimembranous ventricular septal defect attending the Paediatric Cardiology Unit of the University of Cagliari (Italy) from January 1986 to February 2006 were examined. No additional subjects could be evaluated due to closure of the above-cited unit. Following a review of hardcopy medical records and electronic health records, one hundred and seven patients were included in the study (57% males), ranging from 1 day of life to 6 years (mean age 2.1 ± 0.4 years). Clinical and echocardiographic characteristics of study participants are listed in Table 1. Inclusion criteria were: presence of a perimembranous ventricular septal defect without other CHD, with the exception of transient patent ductus arteriosus, small atrial septal defect and/or one or more minor muscular ventricular septal defects; when surgical closure was indicated due to a large volume left-to-right shunt (Qp/Qs > 2:1), in the presence of significant pulmonary arterial hypertension (pulmonary arterial pressure > 50% systemic), clinical signs of congestive heart failure, banding of the pulmonary artery, or when closure of the perimembranous ventricular septal defect was deemed clinically necessary by the treating physician. Exclusion criteria comprised: ventricular septal defect exceeding 10 mm and/or a ventricular septal defect size/aortic diameter ratio higher than 2/3 (in which case spontaneous closure is improbable); patients presenting with multiple severe muscular ventricular septal defects and/or complex CHD; presence of ongoing bacterial infections/sepsis; haematological disorders (mainly haemolytic disease of the newborn); insufficient laboratory findings. Patients were subdivided into two groups: self-healing (n = 36) and those in need of intervention (n = 71).
Diagnosis was formulated and echocardiographic follow-up implemented. An HP/Philips Sonos 5500 (Amsterdam, The Netherlands) echo machine coupled with two probes (3-5 MHz and 8-10 MHz) was used. Patients diagnosed with perimembranous ventricular septal defect were followed up until either the hole in the interventricular septum closed spontaneously or surgical closure was performed. The net defect was measured, which may be smaller than the true size due to tricuspid pouch formation. The self-healing group was defined on the basis of an absence of residual shunts at color Doppler across the former perimembranous ventricular septal defect.
Written informed consent was waived owing to the retrospective nature of the research, as per Italian Law. The research was formally approved by the internal Ethics Committee of the University of Cagliari (PG/2015/1859) and conducted in accordance with the Helsinki declaration.
Laboratory Tests
The following blood parameters, capable of affecting tissue oxygenation, were evaluated on a blood sample taken from an antecubital vein (Tables 2-4). For haemoglobin, normal values in children vary according to the child's age, sex, and race; anaemia is defined in the presence of a haemoglobin value at or below the 2.5th percentile for age, race, and sex [9,10].

Table 2. Laboratory parameters tested in the study.
1. Haemoglobin: an iron-containing oxygen-carrier protein present in red blood cells. As a general rule, oxygen is transported in the blood in two forms: dissolved in plasma and red-blood-cell water (around 2%) and reversibly bound to haemoglobin (about 98%). Haemoglobin may be saturated with up to four oxygen molecules (oxyhaemoglobin) or desaturated, carrying no oxygen molecules (deoxyhaemoglobin). Its affinity for oxygen may impair or enhance the release of oxygen to tissues.
2. Haematocrit: the fractional volume of blood occupied by red blood cells, with levels depending on the age and, following adolescence, sex of the individual (Table 3). As a general rule, a rise in haematocrit increases the oxygen concentration in the arteries and its delivery to tissues. However, the latter may decrease in the presence of haemoconcentration and polycythaemia as a result of decreased venous return and cardiac output, respectively. As a compensatory mechanism, low flow velocity extends the transit time of red cells through the capillary network, thus helping to preserve tissue oxygenation. On the contrary, even when oxygen content is decreased by haemodilution (low haematocrit and normal blood volume), opposite mechanisms may help preserve tissue oxygenation through increased cardiac output and blood flow to the organs, owing to lower blood viscosity, reduced total peripheral resistance, and increased venous return.
3. Foetal haemoglobin: at birth, 50-95% of haemoglobin in an infant born at term is foetal haemoglobin, although levels decline rapidly over the first six months of life as synthesis of adult-type haemoglobin is activated and synthesis of foetal haemoglobin stops. However, foetal haemoglobin has been detected in the blood of adults (<1% of all haemoglobin). The most striking difference from adult haemoglobin is that foetal haemoglobin binds oxygen with higher affinity, thus facilitating the capture by the foetus of oxygen from the mother's bloodstream. A series of genetic abnormalities may prevent the switch to adult haemoglobin synthesis, resulting in hereditary persistence of foetal haemoglobin into adulthood. This condition is usually asymptomatic and may at times alleviate the severity of certain haemoglobinopathies and thalassaemias, which are not uncommon in Sardinia, i.e., the Italian region where the research was carried out.
4. Oxygen saturation: the amount of oxygen travelling through the body with red blood cells. In humans, normal oxygen saturation ranges from 95% to 100%; levels below 90% are considered low and result in hypoxaemia. Blood oxygen levels of less than 80% (cyanosis) may affect organ development and function and should be promptly addressed. Oxygen saturation is usually measured by means of pulse oximetry.
5. Serum iron levels: a laboratory test measuring the quantity of transferrin- and serum-ferritin-bound iron (approx. 90% and 10%, respectively) present in blood. Approximately 65% of total body iron is bound to haemoglobin molecules, as part of a heme group, in red blood cells, with approx. 5% present in myoglobin molecules. Almost 30% of total body iron is stored, mainly in the liver, bone marrow, and spleen, as ferritin or haemosiderin. A lack of iron may result in the onset of anaemia.
6. Transferrin: a group of blood plasma glycoproteins that bind iron and regulate free iron levels in biological fluids. Transferrin synthesis occurs mainly in the liver. Increased plasma transferrin levels are frequently observed in patients affected by iron-deficiency anaemia; as plasma transferrin levels increase, the percentage of transferrin iron saturation decreases concomitantly.
7. Ferritin: an intracellular protein responsible for the storage and controlled release of iron. Liver stores of ferritin represent the primary body reserve of iron and protect against iron deficiency. In the presence of low ferritin levels, iron deficiency may ensue, potentially resulting in anaemia. Low serum ferritin is a highly specific laboratory test used to detect iron-deficiency anaemia.
Table 3. Haemoglobin concentration (g/dL): normal values. [Table 4. Haematocrit: normal values; e.g., adult women 38-46%.] The above blood parameters were examined every 3-6 months until ventricular septal defect self-healing was achieved or surgical/interventional procedures were undertaken, and their averaged values were used for statistical analysis.
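For orientation, the standard textbook estimate of arterial oxygen content (a physiology formula, not one computed in this paper) makes the link between haemoglobin, saturation, and tissue oxygen delivery concrete:

def arterial_o2_content(hb_g_dl, sao2_frac, pao2_mmhg=100.0):
    # Textbook estimate: CaO2 = 1.34 * Hb * SaO2 + 0.003 * PaO2  [mL O2/dL]
    return 1.34 * hb_g_dl * sao2_frac + 0.003 * pao2_mmhg

print(arterial_o2_content(14.0, 0.98))   # ~18.7 mL/dL: normal haemoglobin
print(arterial_o2_content(10.0, 0.98))   # ~13.4 mL/dL: anaemia cuts O2 content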
Statistical Analysis
The results obtained in the study population (n = 107) were analysed, and the findings obtained in the self-healing group were subsequently compared with those from the group requiring intervention using the parametric Student's t-test, as the sample was normally distributed. Normality was assessed by means of the Kolmogorov-Smirnov test. Multivariate analysis was also performed to analyse more than one outcome variable; the model included the hematic variables that differed in a statistically significant way between the two subgroups (i.e., haemoglobin, iron, ferritin, albumin), along with age and gender. For the purposes of this paper, statistical significance was set at p < 0.05. Commercially available computer software (SPSS version 22.0, SPSS Inc., Chicago, IL, USA) was used for all analyses.
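A minimal Python sketch of this pipeline is given below; the data are synthetic stand-ins (the study data are not public), and the variable names are our own assumptions:

import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothetical stand-in data: hb = haemoglobin (g/dL), alb = albumin (g/dL),
# healed = 1 if the defect closed spontaneously.
rng = np.random.default_rng(1)
hb = rng.normal(12.5, 1.5, 107)
alb = rng.normal(4.0, 0.5, 107)
healed = (hb + rng.normal(0, 1, 107) > 12.7).astype(int)

# Parametric Student's t-test between the two groups, as in the paper:
t, p = stats.ttest_ind(hb[healed == 1], hb[healed == 0])

# Multiple logistic regression of self-healing on haemoglobin and albumin:
X = sm.add_constant(np.column_stack([hb, alb]))
fit = sm.Logit(healed, X).fit(disp=0)
print(p, fit.params)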
Results
The findings obtained in the self-healing group vs. the group requiring intervention are summarized in Table 5. The time required by the two groups to reach their outcome differed in a statistically significant way. Since no statistically significant differences were detected in the dimensions of the ventricular septal defect between the two groups, the need for an early intervention was likely due to a drop in pulmonary vascular resistance, which led to a significant left-to-right shunt and in turn to pulmonary overflow and heart failure. Furthermore, anaemia was significantly more prevalent amongst subjects requiring intervention than in the self-healing group (p < 0.03), while haemoglobin, iron, ferritin, and albumin levels were lower (p < 0.001, p < 0.05, p < 0.02, p < 0.007, respectively).
In multivariable linear regression analysis, only haemoglobin and albumin featured an association with spontaneous closure of the perimembranous ventricular septal defect (p < 0.005 and p < 0.02, respectively). Multiple logistic regression analysis revealed that haemoglobin independently raised the probability that self-resolution of the ventricular septal defect would be achieved (p = 0.03).
Haemoglobin levels in all patients in the group needing intervention were less than 12.7 g/dL (Figure 2).
Figure 2. All the patients in the study whose perimembranous ventricular septal defect did not heal by itself had a haemoglobin level less than 12.7 g/dL.
Discussion
In clinical practice, the most commonly observed CHD is represented by ventricular septal defect, of which a considerable percentage displays a tendency to decrease in size and to self-heal. Numerous mechanisms of spontaneous closure of perimembranous ventricular septal defect have been described previously [19].
The present study excluded both muscular ventricular septal defects and other subtypes. The former present with a completely different natural history compared with the perimembranous form, i.e., spontaneous healing in 85%-90% of cases due to a progressive muscularization of the left ventricle, while for other subtypes the rate of spontaneous healing is very low [20].
Furthermore, perimembranous ventricular septal defects with dimensions exceeding 10 mm and/or a ventricular septal defect size/aortic diameter >2/3 ratio were excluded from the study due to the improbability of spontaneous closure occurring [21]. This might explain why our findings highlighted no significant differences in ventricular septal defect size between the two groups studied.
A slight prevalence of the disease amongst males was detected in our cohort. Generally speaking, no significant sex-related differences in ventricular septal defect prevalence are reported in literature [22].
Numerous anatomical parameters have been put forward as potential independent predictors influencing the spontaneous healing of ventricular septal defects. However, more detailed knowledge should be acquired, and the biological mechanisms implicated better elucidated [23]. Accordingly, a series of hematic factors potentially underlying the spontaneous resolution of perimembranous ventricular septal defects were examined.
Anaemic infants have ventricular septal defects more frequently than those with normal haemoglobin levels [24]. Haemoglobin is capable of promoting tissue regeneration, cell proliferation, and wound healing, as these processes rely heavily on oxygenation. There is no doubt that an adequate supply of nutrients and oxygen to regenerating cells is crucial for their survival and functional maintenance [25]. The natural healing of perimembranous ventricular septal defects involves a series of different mechanisms, including tissue growth from the remnant membranous septum or tricuspid valve and adhesion of tricuspid valve leaflets [19]. Tissue regeneration seems to be influenced by haemoglobin-related tissue oxygenation in a number of clinical scenarios [26]. Moreover, correct blood viscosity represents another factor of importance in promoting the adhesion of the tricuspid valve leaflet to the tissue surrounding perimembranous ventricular septal defects [19]. Blood viscosity is intricately linked to haemoglobin content: the higher the haemoglobin level, the higher the hematic viscosity, and vice versa [27]. Anaemia, and the consequently reduced blood viscosity, may exert a negative influence on the above-stated adhesion, thus impinging on the natural resolution of the ventricular septal defect [28]. Moreover, the left-to-right shunt across the ventricular septal defect may lead to a volume overload of the lungs, as expressed by a Qp/Qs ratio higher than 1.1, with consequent further dilution of haemoglobin [29]. Major and minor forms of thalassaemia and other types of anaemia are widespread in Sardinia, i.e., the Italian region where the research was carried out, which may have negatively affected the number of self-healing patients in our cohort [30]. Our research showed a significantly lower prevalence of anaemia in the self-healing group than in those needing intervention. Our findings also suggest a threshold of 12.7 g/dL in haemoglobin content: all patients in whom self-healing failed had levels below this value. Overall, as confirmed also by the multivariate analysis, haemoglobin blood concentration seems to influence the natural history of ventricular septal defects, and improvement of anaemia through supplementation of iron intake might represent a simple and reliable method of promoting the spontaneous healing of perimembranous ventricular septal defects [31].
Albumin levels were significantly higher in the self-healing group than in patients requiring intervention. In a previous study, hypoalbuminemia, commonly observed in patients with CHD, was found to be associated with an increased risk of death, even after adjustment for disease complexity, functional class, and other risk factors [32]. Low albumin levels are likely caused by acute and/or chronic heart disease due to systolic or diastolic left ventricular dysfunction, which are not uncommon in ventricular septal defect patients [33,34]. Some animal models have also suggested a role of albumin in promoting early wound healing [35,36].
This study is undoubtedly hampered by several limitations: (1) the study was of a single-centre retrospective design and therefore contingent on the inherent biases associated with this type of study (missing data; referral and selection bias); (2) over the period examined, the majority of surgical procedures were delayed compared with current standard decision making, and a smaller number of interventional perimembranous ventricular septal defect closures were performed [37], which may have slightly influenced our findings; (3) other unevaluated factors of an epigenetic nature might have influenced the results [38]; in this setting, artificial intelligence and machine learning may represent a potential way to detect important predictors of perimembranous ventricular septal defect self-resolution, and the superior performance of machine learning in detecting haemoglobin-related and genetic predictors of cardiovascular endpoints has already been shown in some papers [39,40]; (4) anaemia, or conversely polycythaemia, and ventricular septal defect are not uncommon in Trisomy 21 patients, although the possible effect of Down syndrome itself on self-healing was not investigated [41,42]; (5) recently, a scoring system aimed at predicting spontaneous healing of perimembranous ventricular septal defect, mainly based on anatomical factors, was proposed, although it was not tested in this study [43].
Conclusions
Overall, the self-resolution of perimembranous ventricular septal defect seems to rely on numerous different factors, including oxygen content, which likely promotes cell proliferation as well as tissue regeneration. Accordingly, in the presence of low oxygen saturation at a high altitude, ventricular septal defects are more likely to remain open [44]. Furthermore, more in-depth studies are needed to better clarify this intriguing finding [45].
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-10-18T17:11:28.220Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "9037537139be2285d091a0911f793d477b98f601",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/8/10/881/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bcec3c043d74703fa70783d44d1628b76395cd14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237861333 | pes2o/s2orc | v3-fos-license | Effect of tourniquet on skeletal muscle ischaemia and return of function: A prospective randomized clinical study
Objectives: The purpose of this prospective randomized trial was to examine the effect of tourniquet use on skeletal muscle damage and return of function after closed tibial interlocking nailing. Methodology: 74 patients who underwent closed tibial nailing were randomly allocated to the use of an inflated pneumatic tourniquet (group 1, n = 37) or no tourniquet (group 2, n = 37). Patients with compound tibia fractures, compartment syndrome, multiple fractures, polytrauma, and infection were excluded. The primary outcome measures were pain, measured with a visual analog scale (VAS); return of activity, measured by the ability to perform straight leg raising (SLR) postoperatively; and serum creatine phosphokinase (CPK) levels, monitored preoperatively and 24 hours postoperatively. Results: The 2 groups did not differ in baseline demographics. Mean duration of surgery was similar in both groups. There was a threefold increase in mean CPK in group 1 when the duration of surgery was < 90 minutes and a fivefold increase when the duration exceeded 90 minutes, but only a twofold increase of CPK in group 2, which was not affected by the duration of surgery. The difference in the mean pain scores between the two groups was statistically significant from the 2nd to the 4th postoperative day (P-value < 0.05). The mean days to regain SLR in patients with duration of surgery more than 90 minutes were significantly higher than in patients with duration of surgery less than 90 minutes in group 1 only. Conclusions: It is much safer not to use a tourniquet where surgery is going to be prolonged beyond 90 minutes. Early postoperative pain is lesser and return of function is faster in patients operated on without a tourniquet as compared to patients with prolonged surgery with a tourniquet.
Introduction
A tourniquet is a constricting or compressing device used to control venous and arterial circulation to an extremity for a period of time. Operations on the extremities are made easier by the use of a tourniquet [1]. The tourniquet is a potentially dangerous instrument that must be used with proper knowledge and care [2]. In some procedures a tourniquet is a luxury, whereas in others, such as delicate operations on the hand, it is a necessity. Tibia shaft fractures are commonly treated with closed reduction and internal fixation with an interlocking intramedullary nail. However, there is no consensus regarding the use of a tourniquet during intramedullary nailing for tibia fractures. There are lacunae in the existing literature regarding the degree of muscle damage occurring during intramedullary nailing of tibia fractures and its relation with the use of a tourniquet and the duration of surgery. Many factors have been implicated in the development of post-tourniquet complications, but the duration of tourniquet application and the tourniquet pressure are of the most importance. Two hours of tourniquet time is generally accepted as a safe duration for the human lower limb; however, experimental evidence to support this clinical aphorism is sparse [3]. Creatine phosphokinase (CPK) is an intracellular enzyme expressed by various tissues and cell types, including muscles. Due to the relatively small molecular size of CPK, it leaks out of ischaemic muscle cells into the circulation. CPK catalyzes the conversion of creatine, consuming adenosine triphosphate (ATP), to create phosphocreatine (PCr) and adenosine diphosphate (ADP) [4].
The release of intracellular enzymes into the extracellular space is considered by some to be the earliest reliable biochemical sign of ischaemic cellular damage to the muscles [5]. Walter and Smith [6] stated that increased serum activities of creatine phosphokinase (CPK), aspartate aminotransferase (AST), and lactate dehydrogenase (LDH) occur with muscle injury. Of these three, CPK is generally the most sensitive, with peak activity reached within about 6-12 h following the insult. CPK has a relatively short half-life (about 2-4 h), and activity returns rapidly to normal following cessation of myodegeneration or necrosis [6]. Oda, Tanaka et al. analysed 372 patients with crush syndrome caused by the Hanshin-Awaji earthquake and found that peak serum CPK concentration increased with the number of crushed extremities; they concluded that the degree of CPK elevation correlates with the degree of muscle injury [7]. To date, most studies have focused on the benefits of tourniquets in extremity surgery, while their effect on post-operative rehabilitation remains sparsely studied [1,2,5]. This study aims to evaluate tourniquet-induced muscle injury in intramedullary nailing of tibial shaft fractures. Our primary hypothesis was that the use of a pneumatic tourniquet would result in delayed return to activities, as well as increased serum CPK values, since tourniquet use causes muscle ischaemia and weakness. The outcome variables of our study included postoperative pain and postoperative return of active straight leg raise (SLR). We analysed the association of these outcomes with the use of a tourniquet, the tourniquet time, the duration of surgery, and the change in CPK levels. The purpose of this prospective randomized trial was to examine the effect of tourniquet use on rehabilitation, return to activities, and muscle damage.
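A one-line model makes the washout claim concrete; this is our own illustration assuming simple first-order decay with a 3 h half-life (the midpoint of the 2-4 h range quoted above):

def cpk_fraction_remaining(hours, half_life_h=3.0):
    # First-order washout: fraction of peak CPK activity left after `hours`
    # once myodegeneration has stopped.
    return 0.5 ** (hours / half_life_h)

print(cpk_fraction_remaining(24.0))   # ~0.004: a persistently high 24 h CPK
                                      # therefore reflects ongoing release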
Materials and Methods
This institution-based randomized prospective comparative study was conducted in the Department of Orthopaedics of a tertiary care university teaching hospital after approval of the Institutional Ethics Committee. 80 patients with tibial shaft fractures were enrolled according to the following criteria. Inclusion criteria: adult patients (above 18 years of age); closed tibia shaft fractures; fit for closed reduction and internal fixation with intramedullary interlocking nailing. Exclusion criteria: compound tibia fractures; compartment syndrome; multiple fractures other than the tibia fracture being managed operatively; polytrauma; ischaemic heart disease; peripheral vascular disease; sickle cell disease; fractures being treated by open reduction and plate-and-screw fixation; brain injury/stroke; convulsions; delirium tremens; dermatomyositis or polymyositis; electric shock; myocarditis; pulmonary infarction; muscular dystrophies; myopathy; infection. Patients were informed about the nature and objective of the study, and written informed consent was taken before recruiting them into the study. Preoperative counselling and pre-operative anaesthetic assessment were done. Pre-operative radiographic assessment using plain radiographs of the involved leg in anteroposterior and lateral views was obtained. Six patients were excluded from the study as they required an open reduction or because additional injuries were detected later, leaving 74 patients in the final study. Venous blood samples were collected for assessment of serum total creatine phosphokinase (CPK) levels: the 1st sample one day before surgery and the 2nd sample 24 hours after surgery. Anaesthesia: all surgeries were performed under spinal anaesthesia (subarachnoid block); no local or epidural analgesia was given in any case.
Randomization: patients included in the study were randomized into two groups using the alternate-allocation method, as follows: 1. Group 1: intramedullary tibia nailing with the use of a tourniquet (n = 37 patients). 2. Group 2: intramedullary tibia nailing without the use of a tourniquet (n = 37 patients). Tourniquet used: a single-bladder pneumatic tourniquet cuff, of a size suitable for the size of the limb, was applied to the mid-thigh of the limb to be operated on. Cotton soft-roll padding was used beneath the tourniquet cuff. The tourniquet cuff, after application, was additionally wrapped with a crepe bandage to prevent it from slipping. The cuff was connected to a microprocessor-controlled pneumatic tourniquet device. Before the skin incision, the limb was elevated and exsanguinated using a sterile Esmarch's bandage. Immediately after the exsanguination, the tourniquet pressure was raised to 280 to 300 mm Hg. The tourniquet timer was started at the time of inflation of the tourniquet and was stopped at its deflation.
Statistical analyses
The data on demographic parameters like age and gender were obtained for patients in the two treatment arms and were expressed in terms of numbers and percentages. The mean and standard deviation of the age were also obtained, as was the distribution of patients as per the duration of surgery and the tourniquet use. The CPK levels of patients in both treatment arms before and after surgery were obtained and expressed in terms of mean, standard deviation, and median, and the correlation of CPK levels with the duration of surgery was assessed. The number of days to straight leg raising post-surgery and the VAS (pain score) were expressed in terms of mean, standard deviation, and median, and were compared between the two groups using multivariate ANCOVA. The significance level was set at P < .05.
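As a hedged sketch of this pipeline (with synthetic stand-in data, since the trial data are not public), the per-outcome t-tests and a joint multivariate test might look as follows in Python:

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "group":    ["tourniquet"] * 37 + ["no_tourniquet"] * 37,
    "cpk_rise": np.r_[rng.normal(690, 250, 37), rng.normal(200, 100, 37)],
    "slr_days": np.r_[rng.normal(3.4, 0.6, 37), rng.normal(2.5, 0.5, 37)],
    "vas_day3": np.r_[rng.normal(6.0, 1.0, 37), rng.normal(5.0, 1.0, 37)],
})

# Two-sample Student's t-test per outcome, significance at P < .05:
t, p = stats.ttest_ind(df.cpk_rise[df.group == "tourniquet"],
                       df.cpk_rise[df.group == "no_tourniquet"])

# Joint test across the three outcomes (cf. the paper's multivariate step):
mv = MANOVA.from_formula("cpk_rise + slr_days + vas_day3 ~ group", data=df)
print(p)
print(mv.mv_test())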
Observations & Results
Baseline characteristics were similar in both groups. In group 1, the mean duration of surgery was 77.68 ± 20.65 minutes, and in group 2 it was 86.03 ± 20.75 minutes. The difference in the mean duration between the two groups was statistically insignificant, with a P-value of 0.087 (P > 0.05).
CPK Level:
In Group 1, the mean CPK level before surgery was 211 ± 71.1, while after surgery the mean level was 903.78 ± 268.09. The difference between the pre- and post-surgery CPK levels was significant, with P-value < 0.0001. In Group 2, the mean CPK level before surgery was 191.76 ± 71.94, and post-surgery the mean level was 389.49 ± 108.04. The difference between the pre- and post-surgery CPK levels was statistically significant (Table 1). As regards comparing post-surgery CPK levels between the two arms, the difference was statistically significant, with P-value 0.000 (P < 0.05).
SLR:
In Group 1, the mean duration to regain SLR was 3.41 ± 0.599 days, whereas in Group 2 it was 2.51 ± 0.511 days, and the difference between the two groups is statistically significant (Table 1). On applying MANOVA, we observed that SLR is also affected by the duration of surgery, with P-value 0.000 (P < 0.05); i.e., the mean scores in patients with duration of surgery more than 90 minutes were significantly higher than the mean scores in patients with duration of surgery less than 90 minutes (Table 1).
VAS score: the difference in the mean scores between the two groups was statistically significant from the 2nd to the 4th day, as indicated by P-values < 0.05. The difference was statistically insignificant on the 5th day, with P-value > 0.05 (Figure 1). The mean scores in patients with duration of surgery more than 90 minutes were significantly higher than the mean scores in patients with duration of surgery less than 90 minutes in both groups.
Discussion
The use of the tourniquet has been known for ages; however, the side effects of using a tourniquet for surgical procedures have been only sparsely studied in the recent literature, and that too only after the advent of total knee replacement. There has been constant confusion about whether a tourniquet should be used as a routine or not. The review of the literature suggests that a tourniquet used for more than 90 minutes causes postoperative complications like oedema, infection, deep vein thrombosis, problems of wound healing, and impaired functional recovery of the quadriceps muscle [3,12]. Keeping this in mind, we have studied the effect of a tourniquet, as indicated by creatine phosphokinase levels, on the functional recovery of muscle and joint. We observed that the level of CPK depends on both tourniquet application and the duration of the surgery. On applying MANOVA, we observed that there was a threefold increase in mean CPK when the duration of surgery was less than 90 minutes and a fivefold increase when the surgical duration exceeded 90 minutes in tourniquet group 1 (Table 2 and Figure 2), but only a twofold increase in CPK in group 2, not affected by the duration of surgery. This implies that it is much safer not to use a tourniquet where surgery is going to be prolonged. The mean CPK level was also affected by the velocity of trauma: we observed that the mean preoperative CPK score in both groups was lower in simple fractures, where the duration of surgery was less than 90 minutes, compared with the mean preoperative score in high-velocity trauma, where the duration of surgery was more than 90 minutes, but this increase was not statistically significant. Looking at the primary outcome measure, pain, we observed that there was a significant increase in the mean pain score in group 1 as compared to group 2 between days 2 and 5 postoperatively. The difference in the mean scores between the two groups was statistically significant from the 2nd to the 5th postoperative day, as indicated by P-values < 0.05, and became statistically insignificant from the 6th to the 7th postoperative day, with P-value > 0.05. There is also a correlation between the postoperative mean pain score and the duration of surgery, as the difference is statistically significant. Tsarouhas A et al. [9] observed that in arthroscopic meniscectomy, where the duration of surgery and the tourniquet time are usually less than 30 minutes, the difference in pain perception between the tourniquet and non-tourniquet groups was not statistically significant, which is in contrast to our study, where we observed that early postoperative pain was significantly less in patients who were operated on without a tourniquet in comparison with patients operated on with a tourniquet. Tai T W et al. and Ledin H et al. [10,11] noted similar findings in their studies. As far as straight leg raising is concerned, when the tourniquet was used the difference in the time required for straight leg raising between the two duration categories (less than 90 minutes and more than 90 minutes) was statistically significant (P < 0.05). This indicates that the return of SLR was much more delayed in patients where the tourniquet was used for more than 90 minutes, which corresponds with the findings of Li B et al. and Abdel-Salem et al. [8,12]. When the tourniquet was not used, the difference in the time required for straight leg raising between the two duration categories (less than 90 minutes and more than 90 minutes) was statistically insignificant, with a P-value of 0.1198 (P > 0.05).
The mechanism behind the reduced range of motion is unclear. However, the postoperative thigh pain might reflect muscle injury due to physical damage, as well as reperfusion injury, both of which might cause a degree of muscle fibrosis, as stated by Ledin et al. [10]. We observed a significantly greater number of complications like infection, induration, and swelling in the tourniquet group. Twenty-three percent of patients in the tourniquet group developed wound infection compared with only 5.8% of the non-tourniquet group (p < 0.05). The increased incidence of postoperative swelling and ecchymosis could be attributed to postoperative anoxic inflammatory oedema after prolonged tourniquet application, as stated by Silver R et al., Bannister GC, and Li B et al. [8,13,14]. Our study has certain limitations. First, the slightly small sample size, with smaller subgroups for the various time intervals, could have failed to eliminate sampling errors. An a priori power analysis was performed: for a repeated-measures analysis, including between-subjects and within-subject interactions, between 2 independent groups with a minimum effect size of 0.20 and at least 2 consecutive measurements, a total sample size of 80 was calculated to have greater than 80% power to address the test hypothesis; we could recruit only 74 subjects. Second, the different fracture patterns of the tibia influenced the muscle injury, the duration of surgery, and the resultant postoperative outcomes, which were not accounted for in this study. Third, the surgery was performed by various surgeons, who had a varying influence on soft tissue handling and resultant muscle injury.
Conclusion
Our study showed that tourniquet use for more than 90 minutes during tibial nailing has a negative effect on postoperative pain and return of function. This tourniquet-induced muscle damage is detectable in the systemic circulation in the form of raised CPK levels, together with a significantly greater number of complications like infection, induration, and swelling in the tourniquet group.
"year": 2021,
"sha1": "98e141e16ab3e9ecd98fc48e5909e931e2e19dd7",
"oa_license": null,
"oa_url": "https://www.orthopaper.com/archives/2021/vol7issue3/PartD/7-3-26-952.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "61df653692bc4b8bb1464807c92c2672208b4e4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54971379 | pes2o/s2orc | v3-fos-license | A Novel SIW H-Plane Horn Antenna Based on Parabolic Reflector
A new type of H-plane horn antenna based on the parabolic reflector principle is proposed in this paper. The parabolic reflector is constructed with the substrate integrated waveguide technology and is used for generating a large radiation aperture and a uniform phase distribution at the aperture, yielding a fan beam with very narrow beamwidth in H-plane. A feeding source composed of a probe and an inductive-post reflector is designed as the feed, which can transmit a unidirectional incident wave toward the parabolic reflector. Two metallic strips with post arrays are designed as a transition for the matching between the horn aperture and the free space; they also work as a director to realize unidirectional radiation and improve the front-to-back ratio. The antenna has the advantages of narrow beam, compact size, and easy integration, and the operation bandwidth is from 27 GHz to 35.5 GHz. The experimental results show good agreement with the simulated results.
Introduction
The rectangular waveguide horn has been widely used in microwave and antenna applications due to its advantages such as high gain, wide bandwidth, versatility, low standing-wave ratio, ease of excitation, and simplicity in fabrication [1]. However, its application in planar integration circuits is limited due to its large volume, heavy weight, and cost. This problem is resolved by the new technology of substrate integrated waveguide (SIW) [2,3], which has low cost and low loss and is easy to integrate with planar circuits. Therefore, it has developed rapidly since it was proposed and has been widely used in various types of microwave and millimeter-wave components [4][5][6].
Based on the SIW structure, an H-plane horn was proposed in [7], which evolves directly from the conventional H-plane horn based on rectangular waveguide. In [8], the dimension of the antenna is successfully reduced by modifying the profile of the SIW horn to be superelliptical. However, the beamwidth is relatively large for all these designs due to the limited effective aperture in H-plane [1], which is unfavorable for some applications of H-plane horn antennas, such as radar. Thus, construction of a horn antenna array is required in order to achieve narrower beamwidth in H-plane [9], but this brings the drawback of a complex feeding network that occupies more space. Another method is to use metamaterial to generate a uniform phase distribution at the horn aperture to achieve high gain and narrow beam [10,11], but the complicated structure is difficult to design and the bandwidth is limited by the characteristics of the metamaterial.
The parabolic reflector antenna is a conventional type of antenna with the excellent advantages of high gain and wide bandwidth [12,13]. It operates based on the principle of geometrical optics, which creates a uniform phase distribution over a large aperture, yielding high gain and wide bandwidth. However, it suffers from large volume, is bulky, and cannot be integrated with planar circuits. In [14], an SIW parabolic reflector is used to feed a two-dimensional slot array antenna to realize multibeam radiation, and offset feeding is employed for the parabolic reflector. Compared with offset feeding, frontal feeding presents more difficulties, since unidirectional illumination of the reflector is difficult to realize by frontal feeding in a planar structure. In [15], a method of frontal feeding is used for a parabolic reflecting wall in the design of a surface-wave antenna with wide bandwidth from 6.1 GHz to 18 GHz, which is proposed for conformal mounting on aircraft surfaces. The surface-wave launcher is excited by a coaxial probe, and a metallic capacitive post is used for improving the impedance matching performance. However, as the authors claimed, the uniformity of the phase distribution at the aperture is not very good due to the superposition of the wave reflected by the parabolic wall and the wave transmitted directly from the probe, because the capacitive post cannot reflect the wave transmitted from the probe. We noticed that the parabolic reflecting principle has great potential in planar antennas, not only for the two antenna types in [14,15], but also for the planar horn antennas mentioned above. So in this paper a novel H-plane horn antenna is proposed, in which an SIW parabolic reflector is employed to realize a narrower fan beam and higher gain in H-plane than the conventional H-plane horn antennas mentioned in [7][8][9], without constructing an antenna array. In addition, an inductive post is adopted and placed near the probe, acting as a reflector to reduce the wave transmitted directly from the probe and hence to improve the uniformity of the phase distribution at the aperture. This design has potential applications in millimeter-wave communication and radar, with the properties of high direction accuracy and easy integration with planar circuits and devices.
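The geometric-optics property invoked here is easy to verify numerically: for a parabolic wall, the path from the focus to the wall plus the run back parallel to the axis has the same length for every ray, so all rays arrive at the aperture in phase. The sketch below uses the focal length quoted later in the text (f = 30.3 mm) and an arbitrary aperture plane x_a:

import numpy as np

f, x_a = 30.3e-3, 60.0e-3                 # focal length and aperture plane (m)
y = np.linspace(-40e-3, 40e-3, 9)         # sample points on the wall
x = y**2 / (4 * f)                        # parabolic profile y^2 = 4 f x
path = np.hypot(x - f, y) + (x_a - x)     # focus -> wall + wall -> aperture
print(np.ptp(path))                       # ~0: equal path length for all rays

Analytically, the focus-to-wall distance is x + f, so the total path is the constant x_a + f.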
Another important problem for planar horn antennas is the mismatch between the horn aperture and the free space, which also influences the radiation properties. There are some methods for improving the matching performance, such as loading a dielectric lens [7,9], printed parallel plates [16], or grating blocks [8] at the horn aperture. In this paper, a new transition structure is proposed for improving the matching performance between the horn aperture and the free space.
In summary, the antenna proposed in this paper has the following advantages: (1) it works based on the principle of parabolic reflection, so it has a narrow beamwidth in H-plane and a relatively wide bandwidth; (2) it has a low profile, can be fabricated with the PCB process, and hence is easy to integrate with millimeter-wave planar circuits; (3) no complex feeding network is needed, so a lot of space can be saved, making the design simpler. This paper is organized as follows: in Section 2, the configuration of the proposed antenna is described and the operation principle is analyzed; in Section 3, the simulated and experimental results are given and discussed.
Configuration and Theory
The overall configuration of the proposed antenna is shown in Figure 1, implemented on a Rogers RT5880 substrate with dielectric constant εr = 2.2 and thickness h = 1.575 mm. The antenna is composed of three parts: the parabolic reflector, the feeding structure, and the transition. In the parabolic reflector part, SIW technology is adopted to construct an array of via holes in a parabolic shape. The diameter of each via is d1, and the profile of the parabolic reflecting wall can be expressed as y² = 4fx, where f is the focal length (f = 30.3 mm). The value of f should not be too small, for the purpose of constructing a large radiation aperture. Also, a large value of f can reduce the power density along the center line, hence making the illumination on the parabolic reflector more uniform and reducing the return loss. The advantage of the parabolic shape is that the wave front emitted from the focal point is transformed from a cylindrical wave into a plane wave, reaching a uniform phase distribution at the aperture. Therefore, the effective aperture can be significantly enlarged compared with that of common types of H-plane horn. The feeding structure consists of a coaxial probe and a metallic post, as shown in Figure 1(b). The coaxial probe is the inner conductor of a 2.92 mm airline connector, inserted into the substrate to a depth h1. The distance between the probe and the parabolic reflecting wall is equal to the focal length f. The metallic post penetrates the entire substrate and acts as a shunt inductive reactance whose value is related to the diameter of the post. This metallic inductive post plays the role of a reflector, reflecting the wave transmitted directly from the probe in the +x direction and yielding a unidirectional incident wave toward the parabolic reflector. Therefore, the adoption of the inductive post can effectively reduce the influence of the wave transmitted directly from the probe on the phase distribution at the aperture, which is beneficial for constructing a uniform phase distribution and hence generating a very narrow beamwidth in the H-plane.
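To visualize the reflector geometry, a minimal sketch is given below (not from the paper; the half-aperture and via count are assumed purely for illustration). It lays out via-hole centers along the parabolic wall y² = 4fx with the focal length f = 30.3 mm quoted above:

import math

f = 30.3e-3            # focal length from the text, in meters
half_aperture = 60e-3  # assumed half-height of the reflecting wall (illustrative)
n_vias = 41            # assumed number of via holes (illustrative)

for i in range(n_vias):
    y = -half_aperture + i * (2 * half_aperture) / (n_vias - 1)
    x = y * y / (4 * f)  # parabola with vertex at the origin and focus at (f, 0)
    print(f"via {i:2d}: x = {x * 1000:6.2f} mm, y = {y * 1000:6.2f} mm")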
A transition structure is loaded at the aperture of the parabolic reflector to improve the impedance matching between the horn aperture and free space, as shown in Figure 1. It is composed of two pairs of metallic strips on the top and bottom of the substrate and two arrays of metallic posts along the strips. The two strips have different widths, and the metallic posts connect the top and bottom strips. This transition also acts as a wave director, leading the wave to propagate in the +x direction. The performance of the transition can be optimized by adjusting the widths w1 and w2 of the strips, the diameter d4, and the period of the posts, in order to obtain better matching performance.
Field Distribution and Return Loss
The field distributions inside the substrate are shown in Figures 2(a) and 2(b), obtained without and with the reflective post in the feeding structure, respectively. By comparing the two figures, we find that the phase distribution at the aperture is more uniform with the reflective post than without it. This is because the reflective post presents an inductive reactance and can reduce the wave transmitted by the probe in the +x direction, which would otherwise interfere with the wave reflected by the parabolic reflector. Therefore, by using the reflective post, a more uniform phase distribution at the aperture can be achieved, making the reflective post a key element of the feeding structure. The spacing between the reflective post and the coaxial probe needs to be optimized carefully in order to obtain optimal unidirectional illumination of the parabolic reflector. In addition, the diameter d3 of the reflective post and the depth h1 of the probe inserted into the substrate need to be optimized to achieve good feeding performance.
All of these factors have significant effects on the uniformity of the phase distribution at the aperture. The optimal parameters, obtained through extensive simulations, are as follows: h1 = 1.3 mm, d3 = 1.6 mm, and a post-to-probe spacing of 3 mm.
Figure 3 shows the reflection coefficient S11 of the antenna with and without the inductive reflective post in the feeding structure. It can be seen that a bandwidth of 27 GHz to 35.5 GHz is obtained in both cases, demonstrating that the adoption of the reflective post has no obvious effect on the bandwidth of the design. The transition at the aperture of the parabolic reflector in Figure 1(a) is adopted to improve the matching between the aperture and free space. We note that in [16] the transition is composed of two identical parallel plates without posts. In this paper the two strips have different widths for more flexible adjustment and, more importantly, two rows of metallic post arrays are employed along the strips for better performance. Figure 4 shows the comparison of the reflection coefficient S11 of the antenna with and without the post arrays along the strips. It can be seen that the reflection coefficient of the antenna can be significantly reduced by mounting the metallic post arrays along the strips, and a −10 dB bandwidth from 27 GHz to 35.5 GHz can be achieved.
The fabricated prototype of the proposed H-plane horn antenna is shown in Figure 5. In Figure 6 the experimental result for the reflection coefficient S11 of the antenna is given, showing good agreement with the simulated result. Also, it can be observed that the proposed horn antenna with the reflective post has higher gain than that without the reflective post. This is because the reflective post reduces the wave transmitted by the probe in the +x direction and hence generates a more uniform phase distribution at the aperture, as discussed in Section 3.1.
Radiation Properties of the Proposed Antenna
The results of the simulated and measured peak gain of the proposed horn antenna are presented in Figure 8. It can be seen that the simulated peak gain increases from 12.7 dBi to 18 dBi when the frequency changes from 27 GHz to 36 GHz. The measured results of peak gain match well with the simulated results. In addition, the simulated peak gain when the reflective post is not employed in the feeding structure is given for comparison. It can be seen that the peak gain with the reflective post is higher than that without the reflective post over the whole working frequency range.
The simulated and measured results for the 3 dB beamwidth in the H-plane of the proposed antenna are shown in Figure 9. As depicted in Figure 9, the 3 dB beamwidth in the H-plane is small, decreasing from 4.5° to 2.9° as the frequency changes from 27 GHz to 36 GHz. The measured results match well with the simulated results at the testing frequencies of 30 GHz, 32 GHz, and 34 GHz.
Conclusion
A novel SIW H-plane horn antenna is proposed in this paper, in which a parabolic reflector structure is adopted to construct a uniform phase distribution at the aperture of the antenna in order to realize a narrow beamwidth in the H-plane. A new transition structure with metallic post arrays is used at the aperture for matching with free space. The proposed antenna achieves a bandwidth from 27 GHz to 35.5 GHz. The radiation pattern and the main beam are very stable over this bandwidth, and a very narrow beam in the H-plane is achieved. This design has the advantages of compact size, simple feeding, narrow H-plane beamwidth, and stable radiation properties, and is suitable for integration with millimeter-wave planar circuits.
Figure 1: Configuration of the proposed H-plane horn antenna. (a) Top view. (b) Side view (not to scale).
Figure 2: Field distributions inside the substrate: (a) without the reflective post in the feeding structure, (b) with the reflective post in the feeding structure.
Figure 3: Comparison of the reflection coefficient S11 of the antenna with and without the reflective post in the feeding structure.
Figure 4: Comparison of S11 of the antenna with and without two rows of metallic post arrays in the transition.
The measured and simulated radiation patterns are shown in Figure 7. It can be seen that the antenna has a fan beam, and the radiation patterns in both the E- and H-planes are very stable as the frequency changes within the bandwidth.
Figure 7: Measured and simulated radiation patterns of the H-plane horn antenna with the reflective post and the simulated radiation patterns without the reflective post. (a) E-plane at 30 GHz. (b) H-plane at 30 GHz. (c) E-plane at 32 GHz. (d) H-plane at 32 GHz. (e) E-plane at 34 GHz. (f) H-plane at 34 GHz.
Figure 8: Measured and simulated peak gain of the H-plane horn antenna with the reflective post and the simulated peak gain without the reflective post versus frequency.
Figure 9: Measured and simulated 3 dB beamwidth of the H-plane horn antenna versus frequency. | 2018-12-08T00:42:34.531Z | 2016-08-25T00:00:00.000 | {
"year": 2016,
"sha1": "e649ed3f24ef13702a59f9b27fd0136c14a54944",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ijap/2016/3659230.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e649ed3f24ef13702a59f9b27fd0136c14a54944",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
209491723 | pes2o/s2orc | v3-fos-license | Patient-Derived Cells to Guide Targeted Therapy for Advanced Lung Adenocarcinoma
Adequate preclinical models and model establishment procedures are required to accelerate translational research in lung cancer. We streamlined a protocol for establishing patient-derived cells (PDC) and identified effective targeted therapies and novel resistance mechanisms using PDCs. We generated 23 PDCs from 96 malignant effusions of 77 patients with advanced lung adenocarcinoma. Clinical and experimental factors were reviewed to identify determinants of PDC establishment. PDCs were characterized by driver mutations and in vitro sensitivity to targeted therapies. Seven PDCs were analyzed by whole-exome sequencing. PDCs were established at a success rate of 24.0%. Utilizing cytological diagnosis and tumor colony formation can improve the success rate up to 48.8%. In vitro response to a tyrosine kinase inhibitor (TKI) in PDCs reflected patient treatment response and contributed to identifying effective therapies. The combination of dabrafenib and trametinib was potent against a rare BRAF K601E mutation. Afatinib was the most potent EGFR-TKI against uncommon EGFR mutations including L861Q, G719C/S768I, and D770_N771insG. Aurora kinase A (AURKA) was identified as a novel resistance mechanism to olmutinib, a mutant-selective, third-generation EGFR-TKI, and inhibition of AURKA overcame the resistance. We presented an efficient protocol for establishing PDCs. PDCs empowered precision medicine with promising translational value.
Statistical analysis. In univariate analysis, Fisher's exact test and the Mann-Whitney U test were applied to investigate associations between PDC establishment and clinical or experimental variables. In multivariate analysis, a multivariate logistic regression model was used.
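A minimal sketch of this kind of univariate test is shown below (illustrative only; the 2 × 2 counts are hypothetical placeholders, not the study's actual contingency table):

from scipy.stats import fisher_exact

# Rows: cytology-positive (M+) vs cytology-negative (M-) effusions
# Columns: PDC established vs not established (hypothetical counts)
table = [[20, 21],
         [3, 52]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(odds_ratio, p_value)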
Results
Positive cytological diagnosis of malignancy and tumor colony formation impact PDC establishment. A total of 23 PDCs were established from malignant effusions of advanced lung adenocarcinoma at a success rate of 24.0%. Established PDCs were free of stromal cells by light microscopy, strongly positive for EpCAM (an epithelial cell marker), could be frozen/thawed, and propagated at least 10 times (Supplementary Table 3 and Supplementary Fig. 3A) 7,21,22 .
Previous studies have shown that several factors, including genetic alterations, impact the success rate of patient-derived xenograft model establishment, whereas little is known about establishing PDCs from advanced lung adenocarcinoma [23][24][25]. To address this question, we reviewed the association of factors with PDC establishment. Positive cytological diagnosis of malignancy (M+) and tumor colony formation in the initial primary culture (TCF+) were strongly correlated with PDC establishment in univariate analysis (OR = 8.3654, P < 0.001; OR = 22.0772, P < 0.001) as well as in multivariate analysis (OR = 4.8336, P = 0.0239; OR = 14.1733, P = 0.0131) (Table 1 and Supplementary Fig. 1). As expected, high concordance was observed between the M+ and TCF+ groups in malignant effusions (Fig. 1A). The success rate was high in the M+/TCF+ subgroup (20/41, 48.8%), implying that these factors may be a powerful indicator of successful model establishment (Fig. 1B). A major reason for failure of model establishment was a paucity of tumor cells in samples (62/73; 84.9%), followed by tumor cell senescence (11/73; 15.1%).

Characteristics of PDCs. We characterized PDCs by direct sequencing (n = 23) and whole-exome sequencing (WES) (n = 7) (Table 2, Supplementary Table 4, and Supplementary Fig. 4). Fourteen EGFR-mutant cell lines were generated from EGFR-mutant tumors progressing on first- (n = 8), second- (n = 1), or third-generation EGFR-TKIs (n = 5). Routine genetic testing of rebiopsy samples at recurrence was available for 9 patients with EGFR-mutant NSCLC. Notably, the EGFR genotypes detected in the rebiopsy samples were concordant with those in the corresponding PDCs (Table 2). Three PDCs originating from ALK-positive NSCLC maintained EML4-ALK fusion genes. Five ROS1-fusion PDCs generated from ROS1-positive NSCLC maintained various ROS1 fusion genes (SLC34A2-ROS1, CD74-ROS1, and TPM3-ROS1) (Table 2). WES identified BRAF K601E as a driver mutation in YU-1070 cells, which were derived from NSCLC without druggable genomic alterations (Supplementary Table 4 and Supplementary Fig. 4A). These results demonstrate that PDCs largely maintain the known patient driver mutations.
Extensive passaging may result in genetic drift of cell lines 26,27 . To investigate this issue, we analyzed 5 PDCs at early and later passages using direct sequencing (YU-1092, YU-1096, YU-1152, and YU-1097) or WES (YU-1094). The mutation allele frequency (MAF) of EGFR mutations was preserved up to approximately 30 passages (Supplementary Fig. 4B). Furthermore, somatic mutations and copy number variations were stably maintained between passages (Supplementary Fig. 4A, C, and D). These results suggest that driver mutations and tumor-related genes are stably maintained, at least in the tested PDCs.
Next, we compared the in vitro sensitivity to TKIs in PDCs with treatment responses in the clinic. Ten patients in our study received subsequent TKI therapy after PDC establishment (5 osimertinib, 3 first-generation EGFR-TKI, 2 entrectinib). Twelve PDCs established from these patients were screened with the TKI the patients were treated with (Fig. 2A). Two of the five patients with EGFR-mutant NSCLC were positive for the EGFR T790M mutation, a marker of sensitivity to osimertinib, and received clinical benefit from osimertinib therapy, achieving a partial response (PR) and relatively long progression-free survival (PFS) 3 . The two corresponding PDCs (YU-1090 and YU-1073) exhibited in vitro sensitivity to osimertinib. Three PDCs (YU-1093, YU-1152, and YU-1094) generated from patients who were treated with osimertinib and had progressive disease as the best response were resistant to the drug (Fig. 2B). Three patients who received first-generation EGFR-TKI treatment did not achieve a partial response and had short PFS (n = 3). Accordingly, 4 corresponding PDCs (YU-1088, YU-1099, YU-1095, and YU-1091) were not responsive to gefitinib (Fig. 2C). Two patients with ROS1-positive NSCLC received entrectinib. One patient experienced a partial response with a PFS of 6.5 months, and the corresponding PDC (YU-1080) was sensitive to entrectinib (Supplementary Fig. 3B). The other patient displayed cardiac toxicity on entrectinib therapy [not evaluable according to RECIST (Response Evaluation Criteria In Solid Tumors)] and switched to crizotinib. PFS on crizotinib was 4.2 months, indicating intrinsic resistance to the therapy (Fig. 2D). YU-1082 and YU-1083 cells were established from this patient before the start of crizotinib therapy and were resistant to the drug in vitro. A similar pattern was observed for YU-1085 cells, which were established from the patient after crizotinib therapy (Fig. 2D and E). Together, these data suggest that PDCs may reflect patient treatment response to TKIs.
PDCs can guide the selection of potentially effective therapy in oncogene-driven lung adenocarcinoma. BRAF mutations are found in 1-3% of lung adenocarcinomas 2 . The two main types of BRAF mutations, V600E and non-V600E, are associated with different clinicopathological features of lung adenocarcinoma and exhibit different therapeutic responses to BRAF-targeted agents 1,28 . While dabrafenib alone or in combination with trametinib has demonstrated promising efficacy in BRAF V600E-mutant NSCLC, appropriate treatment paradigms are still under investigation for non-V600E mutations 29,30 .
To identify an effective therapy for the treatment of non-V600E BRAF-mutant NSCLC, we tested the efficacy of single-agent and combination targeted therapies in YU-1070 cells harboring a BRAF K601E mutation. YU-1070 cells were highly resistant to vemurafenib, dabrafenib, and trametinib (Supplementary Fig. 5A). On the other hand, treatment with trametinib sensitized YU-1070 cells to dabrafenib (Fig. 3A). The combination of dabrafenib with trametinib induced c-Raf phosphorylation and completely blocked ERK phosphorylation (Fig. 3B). These data demonstrate that the BRAF K601E mutation may respond to dabrafenib/trametinib combination therapy.
Most NSCLC patients harboring common EGFR mutations, such as deletions in exon 19 or the L858R mutation in exon 21, respond dramatically to EGFR-TKIs. However, there is a paucity of data regarding the activity of EGFR-TKIs in NSCLC harboring uncommon EGFR mutations, such as G719X, L861Q, and S768I, alone or in combination with each other, which occur in approximately 10% of EGFR-mutant NSCLC 31 .
EGFR exon 20 insertions are among the rarer EGFR mutations (approximately 9% of EGFR-mutant NSCLC patients), and treatment for these mutations remains elusive without an approved inhibitor 32,33 . To identify optimal EGFR-TKIs, we investigated YU-1074 cells harboring the EGFR D770_N771insG mutation (Fig. 3C). Afatinib potently inhibited the growth of YU-1074 cells, whereas osimertinib was less effective than afatinib (Fig. 3C). Together, these data suggest that afatinib may be the most effective of the EGFR-TKIs tested for these uncommon EGFR mutations.
EGFR C797S mutation is one of the most commonly reported mechanisms of acquired resistance to third-generation EGFR-TKIs 5 . EGFR T790M mutation in cis to C797S mutation confers resistance to third-generation EGFR-TKIs as well as first-generation EGFR-TKIs 34 . A combination of brigatinib and cetuximab has been introduced to overcome the C797S-mediated resistance 35 . We aimed to evaluate EGFR-TKI efficacies in YU-1097 cells harboring an EGFR exon 19 del/T790M/C797S mutation (T790M in cis to C797S). YU-1097 cells were resistant to single-agent gefitinib, afatinib, osimertinib, and brigatinib ( Supplementary Fig. 5B). Notably, YU-1097 cells were highly sensitive to the combination of brigatinib and cetuximab (Fig. 3E). The drug combination synergistically suppressed phosphorylation of AKT and ERK (Fig. 3F). These results show that the triple mutation may respond to the brigatinib/cetuximab combination therapy.
AURKA as a potential therapeutic target in EGFR-mutant NSCLC resistant to third-generation EGFR-TKIs. Mechanisms of resistance are heterogeneous in patients progressing on third-generation EGFR-TKIs, posing a challenge to clinical decision making for these patients 36,37 . Using our clinically relevant cell lines, we aimed to provide therapeutic strategies in this setting. In our panel of PDCs resistant to third-generation EGFR-TKIs, WES revealed genetic alterations (EGFR C797S, MET amplification, PIK3CA amplification, and PTEN loss) associated with osimertinib resistance [36][37][38]. However, no known genetic alteration associated with drug response was observed in YU-1089 cells (Fig. 4A, Supplementary Fig. 4A and E). First-, second-, and third-generation EGFR-TKIs failed to inhibit the growth of YU-1089 cells (Fig. 4B). The EGFR-TKIs suppressed phosphorylation of EGFR and ERK but had no effect on phosphorylation of AKT (Fig. 4C).
To overcome the EGFR-independent mechanism of olmutinib resistance in YU-1089 cells, we compiled a panel of 79 investigational or FDA-approved drugs targeting a wide range of kinases (Supplementary Table 5). We then performed drug combination screening on YU-1089 cells with olmutinib and each drug in the panel to nominate potent drug combinations. The screening identified 41 drugs with synergistic effects (CI < 1). The strongest synergy was observed with tozasertib, which targets Aurora kinases (Fig. 4D) 39 .
We next characterized the synergistic effect of combined EGFR and Aurora kinase inhibition. The combination of olmutinib with tozasertib potently inhibited colony formation of YU-1089 cells compared with either agent alone (Fig. 4E). The robust synergism was confirmed in a 5 × 5 dose-response matrix using the Chou-Talalay method, yielding a combination index (CI) value of 0.029 at 50% growth inhibition. Furthermore, the combination of olmutinib with tozasertib synergistically decreased phosphorylation of AKT and ERK and increased expression of apoptotic markers in YU-1089 cells. Comparable antitumor synergy was also shown by a combination of olmutinib with alisertib, a highly selective Aurora A kinase inhibitor under clinical development (CI = 0.196 at 44% growth inhibition), and by a combination of osimertinib with tozasertib (CI = 0.189 at 52% growth inhibition) (Supplementary Fig. 7A) 40 . Recently, Shah et al. showed that AURKA confers resistance to third-generation EGFR-TKIs in NSCLC and that inhibition of AURKA can resensitize the tumor to EGFR-TKIs 41 . Thus, we tested whether this drug combination strategy is applicable to other osimertinib-resistant PDCs. However, the osimertinib/alisertib combination was less potent in YU-1095, YU-1096, and YU-1097 cells than in YU-1089 cells (Supplementary Fig. 7B). These differential responses to combined EGFR and AURKA inhibition may be due to differences in AURKA expression 41,42 . Supporting this hypothesis, AURKA expression was lower in PDCs that were not responsive to the drug combination (Supplementary Fig. 7C) 41,42 . Together, these results suggest that Aurora kinase A may be an actionable therapeutic target to overcome acquired resistance to third-generation EGFR-TKIs in EGFR-mutant NSCLC.
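For reference, the Chou-Talalay combination index quoted above is the standard quantity (general background, not restated from the paper):

$$\mathrm{CI} = \frac{D_1}{(D_x)_1} + \frac{D_2}{(D_x)_2},$$

where D_1 and D_2 are the doses of the two drugs applied in combination to reach a given effect level x, and (D_x)_1 and (D_x)_2 are the doses of each drug alone producing the same effect; CI < 1 indicates synergy, CI = 1 an additive effect, and CI > 1 antagonism.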
Discussion
In this study, we established 23 PDCs that represent molecularly heterogeneous subsets of advanced lung adenocarcinoma. Among them, cell lines of ROS1 fusions with various fusion partners, uncommon EGFR mutations, a resistant C797S mutation, and a rare BRAF mutation were included ( Table 2) 5,43 . To the best of our knowledge, there are no commercially-available NSCLC cell lines endogenously harboring these mutations. Using novel cell lines, we presented effective therapeutic strategies which may inform future clinical decision making.
Selection of appropriate tumor specimens is important for successfully establishing patient-derived models 23,24 . Previous studies have shown that tumor cellularity in malignant effusions of NSCLC is highly variable, ranging from 0.1% to 90% 44 . Furthermore, cytological diagnosis of malignant effusions can be misleading because of potential mimics such as reactive mesothelial cells 45 . To use malignant effusions as starting material, there is an urgent need to optimize establishment procedures. Our findings provide evidence that both positive cytological diagnosis of malignancy (M+) and tumor colony formation (TCF+) are crucial to establishing PDCs from malignant effusions. Indeed, using M+/TCF+ malignant effusions can increase the success rate of PDC establishment by approximately 2-fold (24.0% vs 48.8%). Additionally, M−/TCF−, M−/TCF+, and M+/TCF− malignant effusions (55/96, 57.3%), which have a low potential for establishing PDCs (3/55; 5.5%), can be excluded in a stepwise manner, thereby substantially reducing the time and effort needed for sample processing and subsequent long-term culture. Carter et al. have shown that tumor cellularity in malignant effusions of advanced NSCLC is not correlated with sample volume 44 . Accordingly, we observed that sample volume did not impact cytological diagnosis (P = 0.42372) or PDC establishment (P = 0.58232) (Table 1 and Supplementary Fig. 9). Although the difference was not statistically significant (OR = 0.5904, P = 0.3064) (Table 1), we observed a higher success rate in EGFR wild-type cases (31.0%) than in EGFR-mutant cases (20.9%). Similarly, John et al. and our group reported a negative correlation between EGFR mutations and NSCLC PDX model establishment from surgical resections, which may reflect the favorable prognostic value of EGFR mutations [46][47][48]. Interestingly, we noted tumor cell senescence in some M+/TCF+ primary cultures (11/41; 26.8%) between passages 4 and 7. Despite high tumor purity, 5 PDCs became senescent between passages 10 and 23, whereas the other PDCs propagated stably over serial passage (Supplementary Table 3). These results show that some advanced lung adenocarcinomas (16/41; 39.0%) may depend for optimal growth on niche factors that are not provided by R10 medium or autocrine signaling. Notably, a recent study utilized Wnt, FGF7, and FGF10 to establish NSCLC organoid models 49 . The success rate for organoids was higher than the success rates for PDX or PDC models, implying that these specific factors may be associated with the niche factor dependency observed in this subset of advanced lung adenocarcinoma 48,49 . Direct comparison between these patient-derived models may provide insight into the tumorigenesis of NSCLC and the therapeutic potential of targeting these niche factors and related signaling pathways.

Figure 3 legend (fragment): … # p < 0.05 vs the value at the indicated comparison (n = 3). (F) YU-1097 cells were treated with the indicated concentrations of brigatinib alone or in combination with cetuximab for 6 hours. Cell lysates were immunoblotted with the indicated antibodies. (A and C) Cell viability was measured by CellTiter-Glo. Data are presented as the mean ± SEM (n = 3). (B, D, and F) Immunoblots are representative of 3 independent experiments. The full-length blots can be found in Supplementary Fig. 6.
To demonstrate clinical relevance, we tested the efficacy of single-agent or combination targeted therapies in our PDCs harboring a BRAF K601E mutation and uncommon EGFR mutations (L861Q, G719C/S768I, D770_N771insG). Our data suggest that the BRAF K601E mutation may respond to a combination of dabrafenib and trametinib in a manner similar to the BRAF V600E mutation 29 . Indeed, the drug combination has demonstrated efficacy in a PDX model of BRAF K601E-mutated melanoma 50 . Generally, NSCLC with uncommon EGFR mutations is known to be less sensitive to first-generation EGFR-TKIs 31,51,52 . Similar to our findings in a PDC harboring the L861Q mutation, afatinib has shown lower IC50 values than first- or third-generation EGFR-TKIs in genetically engineered Ba/F3 cells 31,53,54 . To our knowledge, we are the first to report the in vitro efficacy of EGFR-TKIs against the EGFR G719C/S768I mutation, and we demonstrated that afatinib was the most potent among the EGFR-TKIs tested against this mutation. These previous findings and ours corroborate the clinical activity of afatinib in patients with the uncommon mutations, with an overall response rate (ORR) of 71.1% and a median PFS of 10.7 months 32 .
However, we noted that the IC50 values of osimertinib in YU-1092 cells and YU-1074 cells were comparable to the reported mean plasma concentration in patients receiving osimertinib (≈120 nmol/L), suggesting potential activity of osimertinib against these mutations 55 . Consistent with our preclinical findings, osimertinib was shown to achieve an ORR of 60% in 5 patients with NSCLC harboring uncommon EGFR mutations (G719X, G719X/S768I, and L861Q) 56 . Additionally, nazartinib, a third-generation EGFR-TKI, also demonstrated preclinical activity against major variants of EGFR exon 20 insertions (D770_N771insSVD, V769_D770insASV, and H773_V774insNPH) 57 . More recently, a patient with lung adenocarcinoma harboring an EGFR exon 20 insertion (S768_D770dup) was shown to respond to osimertinib 58 . Together, patients with NSCLC harboring an EGFR L861Q or D770_N771insG mutation may respond to osimertinib.
To date, heterogeneous mechanisms of osimertinib resistance have been reported 5 . Our data suggest that EGFR C797S-mediated resistance can be overcome by a combination of brigatinib and cetuximab, consistent with the previous finding 35 . A recent study has shown that overexpression of AURKA and its upstream activator TPX2 confers resistance to osimertinib and rociletinib 41 . Indeed, we found that combined inhibition of EGFR and AURKA is efficacious in YU-1089 cells, which were established from a patient tumor progressing on olmutinib (Fig. 4). It is plausible that YU-1089 cells responded to tozasertib and alisertib because of elevated expression of AURKA 41,42 .
We observed that EGFR-TKI treatment in EGFR-mutant PDCs increases total EGFR protein (Fig. 3D,F and 4C). Previous studies and ours imply that this phenomenon may be common among EGFR-TKI-resistant cell lines, although the molecular mechanism behind it remains unclear 34,35 . It is well established that inhibition of receptor tyrosine kinase (RTK) signaling pathways causes temporary relief of RTK-dependent negative feedback mechanisms, resulting in a rebound in RTK expression or downstream signaling activation 59,60 . EGFR signaling is regulated by various EGFR-inducible negative regulators such as LRIG1, MIG6, SOCS4, and SOCS5 61 . Furthermore, LRIG1 and MIG6 are overexpressed in EGFR-mutant NSCLC cell lines and function as negative regulators of EGFR signaling [61][62][63]. These findings may suggest a possible involvement of EGFR-inducible negative regulators in EGFR upregulation after EGFR-TKI treatment. Further studies are required to investigate the mechanisms of the EGFR rebound and its relationship to EGFR-TKI resistance 59,60 .
This study had several limitations. Previous studies have demonstrated that long-term culture of patient-derived models results in the accumulation of somatic mutations and subclonal selection. Occasionally, these genetic drifts may functionally impact drug sensitivity 26,27 . PDCs in our study varied in their growth rates and in the time taken to achieve high tumor purity (Supplementary Table 3). We observed that the majority of PDCs at early passages (1 to 8, median passage number of 3) were contaminated with fibroblasts (0%-51.9%, median value of 3.94%), in line with previous findings (Supplementary Table 1) 8 . Because of differential trypsinization, fibroblasts did not overgrow the tumor cells, although fibroblast contamination generally resulted in additional cell passaging and a delay in functional testing (Supplementary Tables 1 and 3). In particular, drug testing in 1 PDC was only available after 20 passages, which may not represent the patient tumor well. Therefore, improved culture conditions should be tested in M+/TCF+ malignant effusions to accelerate tumor growth and shorten the turnaround time for functional assays. We also acknowledge that the presented therapeutic strategies should be validated in prospective clinical studies.
In summary, we streamlined a protocol for establishing PDCs and showed that these PDCs can be valuable preclinical platforms for designing therapeutic strategies.
Data availability
Materials and data are available upon reasonable request to corresponding authors. | 2019-12-28T15:04:39.133Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "7785fe88124017833a16fcd0040a8739d716fbdd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-56356-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7785fe88124017833a16fcd0040a8739d716fbdd",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125470336 | pes2o/s2orc | v3-fos-license | Approximate expressions for solutions to two kinds of transcendental equations with applications
In a broad spectrum of physics and engineering applications, transcendental equations have to be solved in order to determine their roots. Exact and explicit algebraic expressions for the solutions of such equations are, in general, impossible. Analytical approximate solutions to two kinds of transcendental equations with wide applications are presented. These approximate root formulas are systematically established by using the Padé approximant and show high accuracy. As an application of the proposed approximations, a highly accurate expression for the effective mass of the spring in a spring-mass system is obtained. The method described in this paper can also be applied to other transcendental equations in physics and engineering applications.
Introduction
The determination of roots of transcendental equations is a problem commonly encountered in a broad spectrum of physics and engineering applications. However, it is difficult to obtain analytical approximate root formulas for such equations. Though a wide variety of root-finding algorithms are available to achieve solutions to any desired degree of accuracy, analytical approximate solutions, which, in contrast to purely numerical solutions, provide the explicit dependence of the roots on the physical parameters of the problem, are always desirable and preferable.
Consider the following two kinds of transcendental equations with various applications. The first kind of equation is

x tan x = a. (1)

This equation arises from the solution of the longitudinal vibration problem of a uniform bar with one fixed end and one attached-mass boundary condition [1]. A similar equation comes from the solution of the buckling problem of a uniform column with one free and one elastically hinged supported end, subjected to an axial compressive load [2]. The infinite series solution to the one-dimensional transient chemical diffusion problem under certain boundary conditions [3] is also related to this equation. The second kind of equation is

|sin x| = x/β, (2)

where the nth positive root x_n satisfies nπ/2 < x_n < (n + 1)π/2. This equation comes from the problem of a particle moving in a finite square well potential, where the energy eigenvalues are its roots [4]. With the fabrication of quantum wells [5], the experimental observation of revivals and super-revivals [6], and the progress of so-called 'ghost orbit spectroscopy' [7], square wells now describe realistic physical systems and phenomena, and the need for an explicit solution goes beyond solving simple textbook problems of quantum mechanics.
Researchers are interested in the positive roots of these equations because only these are related to the physical quantities. Note that exact analytical solutions to equations (1) and (2) have been absent until now. A method for formulating an expression for the roots of any analytic transcendental function was presented in [8]. The method
is based on Cauchy's integral theorem and uses only basic concepts of complex integration. Numerical evaluation of the solutions requires a complex Fourier transform. However, the computational efficiency of this procedure would not be expected to rival that of traditional approximate root-finding techniques [8]. Recently, Luo et al [9] constructed an analytical approximate solution to equation (1) by rewriting its series expansion solution [10] as a ratio of polynomials using a second-order Padé approximant. However, their results showed large errors. Based on algebraic approximations of trigonometric functions, it is possible to transform a class of transcendental equations into approximate, tractable algebraic equations [4,11,12]. As the algebraization used in those papers is, to a certain extent, an ad hoc procedure, this approximation must be used with caution in order to avoid the appearance of spurious roots or of roots with overly large errors [12].
In this paper, highly accurate approximate expressions for the solutions to equations (1) and (2) are systematically constructed by exploring the periodic properties of the functions tan x and sin x and by applying the Padé approximant [13][14][15] to them. These approximations are valid for small as well as large values of the parameters. Furthermore, as an application of the proposed approximate expressions, a highly accurate expression for the effective mass of the spring in a spring-mass system is also obtained.
Preliminaries
A Padé approximant is the 'best' approximation of a function by a rational function of given order: under this technique, the approximant's power series agrees with the power series of the function it is approximating. The Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge; it thus has abundant applications in physics and engineering.
Given a function f(x) and two integers p ≥ 0 and q ≥ 1, the Padé approximant of order [p/q] is the rational function [13]

R_[p/q](x) = (a_0 + a_1 x + … + a_p x^p) / (1 + b_1 x + … + b_q x^q).

The Padé approximant is unique for given p and q; that is, the coefficients a_0, a_1, …, a_p, b_1, …, b_q can be uniquely determined. It is known that in many cases a higher accuracy of approximation is achieved for small integers p and q; thus the degrees of both the numerator and the denominator are set to be small hereafter, so that analytical approximate roots of the transcendental equations can be obtained.
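To make the construction concrete, here is a minimal sketch (not from the paper; it assumes the taylor and pade helpers of the mpmath library) that builds a [3/2] Padé approximant of tan x from its Taylor coefficients about x = 0:

from mpmath import mp, taylor, pade, polyval, tan

mp.dps = 30
coeffs = taylor(tan, 0, 5)     # Taylor coefficients of tan(x) up to degree 5
p, q = pade(coeffs, 3, 2)      # numerator (degree 3) and denominator (degree 2)
# mpmath returns coefficients in ascending order; polyval expects descending order
approx = lambda x: polyval(p[::-1], x) / polyval(q[::-1], x)
print(approx(1.0), tan(1.0))   # approx(1.0) ≈ 1.5556 versus tan(1) ≈ 1.5574

The resulting approximant is (15x − x³)/(15 − 6x²), whose power series agrees with that of tan x through the x⁵ term.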
Highly accurate approximate expressions of solutions to transcendental equation (1)

3.1. Derivation of the first approximate expressions for all roots

Based on the periodic property of the function tan x, a rational approximate expression for tan x on the interval [−π/2, π/2] is first introduced; simple approximate root formulas for equation (1) in terms of the parameter a are then established.
Note that, from the graphs of the functions on the two sides of equation (1), each positive root of equation (1) lies within a single branch of tan x; the change of variable in equation (4) maps the (n + 1)th root to a root y of the transformed equation (5). Solving for y from equation (7) and using equation (4) yields the first approximation to the (n + 1)th root of equation (1), equation (8). In particular, for n = 0, equation (8) gives the first approximation to the first root of equation (1), equation (9). Substitution of equation (10) into equation (5) produces equation (11), which can be written as equation (12). Solving the quadratic equation for y² in equation (12) and using equation (4) with n = 0 gives the second approximation to the first root of equation (1), equation (13). Equation (11) is a quartic one for n > 0, and expressions for its roots are lengthy and complex. We therefore seek approximate solutions as follows. Note that −π/2 < y < π/2, so higher powers of y can eventually be neglected in equation (11). Keeping the constant, linear and quadratic terms, and neglecting the cubic and quartic ones in equation (11), gives equation (14) with positive solution y20. Taking y20 as the initial guess value, applying the Newton method to equation (11), iterating one step, and noticing that y20 satisfies equation (14) give the second approximations to the positive roots of equation (11). Finally, based on equation (4), the second approximate expression for the (n + 1)th root of equation (1) is equation (17).
Results
For a given value of the parameter a, the roots of equation (1) can be calculated numerically by using the Newton method. The corresponding analytical approximations to these roots can be obtained from equations (8), (13) and (17), respectively. Relative errors are then calculated against these numerically exact roots. Here, the relative error of the ith analytical approximation x_ni to the (n + 1)th root of equation (1) is defined as e_ni = |x_ni − x_n^N| / x_n^N × 100%, where x_n^N denotes the (n + 1)th root obtained by using the Newton method.
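For readers who wish to reproduce the benchmark values, a minimal sketch follows (not from the paper; bisection is used here instead of the Newton method, purely for simplicity and guaranteed convergence). It exploits the fact that the (n + 1)th positive root of x tan x = a lies in the interval (nπ, nπ + π/2), on which f(x) = x tan x − a changes sign:

import math

def root_xtanx(a, n, tol=1e-12):
    # Bracket the (n + 1)th positive root of x*tan(x) = a inside (n*pi, n*pi + pi/2),
    # where f(x) = x*tan(x) - a increases from -a (< 0) to +infinity.
    lo = n * math.pi + 1e-9
    hi = n * math.pi + math.pi / 2 - 1e-9
    f = lambda x: x * math.tan(x) - a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

a = 1.0
for n in range(3):
    print(n, root_xtanx(a, n))   # ≈ 0.8603, 3.4256, 6.4373 for a = 1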
For a = 1, the relative errors of the approximate roots in equations (8), (13) and (17) computed by the proposed method are shown in table 1. For comparison purposes, the relative errors of the approximate roots, equations (7) and (12b), in [9] are also listed in the same table. Note that Luo et al [9] used the series expansion solutions [10] to equation (1) to construct the corresponding Padé approximants of order [2/2]. It can be seen from table 1 that, except for the first approximation to the first root, the accuracy of the proposed approximate roots is much higher than that in [9].
In general, researchers are more interested in the case a ≤ 1 [3]. Relative errors of the two approximate expressions given in equations (8), (13) and (17) for the first three roots of equation (1) are shown in figures 1-3, respectively. These figures indicate that, compared with the first approximation in equation (8), the second approximations in equations (13) and (17) provide more accurate results. For a ≤ 1, the maximum relative errors of the first and second approximations to the first root are less than 0.662% and 0.000 176%, respectively; the maximum relative errors of the first and second approximations to the second and third roots are 0.001 09% and 0.000 0309%, and 0.000 0292% and 0.000 000 296%, respectively. For the nth root with n ≥ 4, the accuracies of the approximate roots in equations (8) and (17) are higher than those for the third root. It should be pointed out that, for a ≤ 1, the expression in equation (8) can provide a high-quality approximation to the nth root.

Table 1. Roots of equation (1) with a = 1 and the relative errors of the approximate roots proposed in this paper and in [9].
Table 1 columns: n, exact roots, and relative errors (%) of the approximations in equations (8) and (13).

Based on the results above, the second approximate roots in equations (13) and (17) show excellent accuracy, and they are valid for small as well as large values of the parameter a.

3.4. Determination of the effective mass of the spring for a spring-mass system

As an application of the proposed approximation, the effective mass of the spring for a spring-mass system will be established. When the longitudinal vibration of a uniform bar with one fixed end and one attached-mass boundary condition [1,16,17] is considered, equation (1) reads x tan x = a with x = ωl√(ρ/E), where ω, l, A, ρ and E are the natural frequency, length, cross-sectional area, density and modulus of elasticity of the bar, respectively, M is the attached mass, and a ≡ m_s/M is the ratio of the mass m_s = ρAl of the bar to the attached mass.
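For context, the classical one-third result invoked just below follows from a standard Rayleigh-type energy argument (textbook material, not taken from this paper): if the velocity of a spring element at distance x from the fixed end is assumed to vary linearly as (x/l)v, its kinetic energy is

$$T_s = \int_0^l \frac{1}{2}\,\rho A\left(\frac{x}{l}\,v\right)^2 \mathrm{d}x = \frac{1}{2}\left(\frac{\rho A l}{3}\right)v^2 = \frac{1}{2}\,\frac{m_s}{3}\,v^2,$$

so the system oscillates as if an extra mass m_eff = m_s/3 were added to the end mass M.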
The introduction of the correction to the spring oscillations due to including the mass m_s of the spring has led to much research; see, for example, [1,16,17] and the references cited therein. When the vibration of the bar is reduced to that of a spring-mass system with spring stiffness K = EA/l and an end mass equal to M + m_eff, the effective mass m_eff needs to be determined. Based on equations (1), (9), (13) and (19), two approximations to the first frequency of the spring-mass system can be expressed in closed form. Note that, in [16,17], the frequency of the spring-mass system was given in terms of K and the total mass M + m_eff. Based on the assumption that the velocity of a spring element located a distance x from the fixed end varies linearly with x, and on the use of the Rayleigh method, the effective mass of the spring is found to be one-third of the mass of the spring [1,16,17]. The proposed first approximate frequency ω_s now yields the effective mass of the spring in the spring-mass system for any given mass ratio a.

Highly accurate approximate expressions of solutions to transcendental equation (2)

4.1. Derivation for highly accurate approximate expressions for all roots

Note that, from the graphs of the functions z = f(x) in equation (2) and z = x/β, for each positive integer n, equation (2) has a single root in the corresponding interval.

Table 2. Roots of equation (2) with β = 15 and the relative errors of the approximate roots proposed in this paper and in [4]. | 2019-04-22T13:11:30.873Z | 2018-05-09T00:00:00.000 | {
"year": 2018,
"sha1": "198687983600a65a4c9f594cb51edbe734ea9fcf",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2399-6528/aac0e8",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "1b2a9a47df13b67e0103c2fb3366eaa3f436d93b",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
148618435 | pes2o/s2orc | v3-fos-license | Digital Games in the Science Classroom: Leveraging Internal and External Scaffolds during Game Play
We have developed a disciplinarily integrated game (DIG) to support students in interpreting, translating, and manipulating across formal representations in the domain of Newtonian kinematics. In this study, we seek to understand what game play looks like in a classroom context, with particular attention given to how students leverage internal and external scaffolds to progress through the game and deepen their conceptual knowledge. We investigate the following questions: (1) In what ways do students interact with the game, with each other, and with their teacher when they play SURGE Symbolic in a classroom environment? (2) How do game scaffolds, both within and outside of the game, support or impede student learning and game play? (3) What are the implications of these observations for teachers and game designers? We found that although most students used internal scaffolds in some way to assist their game play, many found that these scaffolds were insufficient to get through challenges. They quickly sought help from external resources available to them outside the game to help them advance in the game. The source of information they needed to make progress came from various people or resources outside the game, what we are calling “knowers.”
Introduction
Interpreting, translating, and manipulating across formal representations is central to scientific practice and modeling [1][2][3]. We have developed a disciplinarily integrated game (DIG) such that players' actions in the game focus on iteratively developing and manipulating formal representations as the core game mechanics [4,5]. These formal representations are computational and mathematized representations of focal science phenomena. Through playing a DIG, students investigate key conceptual relationships in the domain while also developing facility with the representations and inscriptions themselves [6]. Supporting students in manipulating and transforming across representations, however, is challenging in terms of having students build connections between the formal representations and the patterns of events that they represent [7][8][9][10]. This study extends work from our previous pilot study [11] and used a DIG in the context of Newtonian kinematics. This DIG is designed to engage students in the manipulation of both formal representations and events (in this case, patterns of motion) to control one another in order to develop a deeper conceptual understanding of physics concepts. In this study, we seek to understand what game play looks like in a classroom context, with particular attention given to how students leverage internal and external scaffolds to progress through the game and deepen their conceptual knowledge.
Internal and external scaffolds
The notion of scaffolding was originally conceived as a process by which teachers, other adults, or peers provide assistance to help learners with tasks that are normally beyond their reach [12]. Scaffolds can be directly embedded into an activity in order to provide learners with "just in time" resources that help students overcome a challenge at the moment they encounter it. In games for learning, these internal scaffolds can take the form of hints, questions, terminology, character dialogs, feedback screens, etc. Researchers have found that internal scaffolds can support students in developing science content and skills [13]. It is important that internal scaffolds go beyond helping students simply progress to the next level in the game and also engage them in learning from their efforts on a level so that they can then apply that newfound knowledge to subsequent game levels [14]. In the context of this paper, internal scaffolds are supports that are embedded within the game, designed to foster students' conceptual understanding of Newtonian kinematics and intended to help students make connections between the informal context of the game and the formal physics vocabulary and representations used to describe the motion of objects. These scaffolds include help screens and hints when a student fails to solve a level, as well as tutorials that provide new information (e.g., explanations of formal physics representations like graphs and dot traces) for future levels.
In addition to internal scaffolds embedded within the game environment, external scaffolds can also be used to enhance students' learning [15]. In the context of games, external scaffolds may come from a teacher or peers in the form of scientific explanations of concepts in the game or advice on when to perform certain actions in a game. Additionally, providing collaboration opportunities between peers during game play can serve as a productive external scaffold that allows students to leverage discourse in order to negotiate meaning of concepts and actions within games, construct ideas, resolve conflicts, etc. [16,17]. In the context of the current study, although most students used internal scaffolds in some way to assist their game play, many also found that these scaffolds were not enough to get through challenges. They quickly sought help from external resources available to them outside the game to help them advance in the game. The source of information they needed to make progress came from various people or resources outside the game, what we are calling "knowers." Students chose to seek help from the teacher, each other, and even other tools and materials to reason through the math and science needed to pass a level.
Despite the well-known affordances of digital games for students' conceptual development, these games have not been widely adopted in secondary science classrooms [15]. A primary goal of this project was to explore what real game play with DIGs looks like in the classroom context to improve our understanding of how students engage with the game and the science concepts embedded within the game. In addition to exploring the impact of the new worldview levels, we were also particularly interested in exploring how students overcome challenges in the game when they have access to a classroom of peers and a teacher. Honey and Hilton [15] call for research to "investigate how best to integrate games into formal learning contexts…this should include how internal scaffolds in the game and external scaffolds provided by a teacher, mentor, peers, or other instructional resources support science learning" (p. 124). It was of interest to us to explore when, who, and how students solicit support to succeed with their personal game play goals. This exploratory study addresses this research need by investigating the following questions: (1) In what ways do students interact with the game, with each other, and with their teacher when they play SURGE Symbolic in a classroom environment? (2) How do game scaffolds, both within and outside of the game, support or impede student learning and game play? (3) What are the implications of these observations for teachers and game designers?

SURGE Symbolic

SURGE Symbolic (Figures 1 and 2) is the prototypical DIG template that we used in this study, and is the result of the evolution of design, research, and thinking chronicled in Clark et al. [4,5]. Whereas earlier versions of SURGE supported reflection on the results of game play through formal representations as a means to support strategy refinement, the formal representations were not the medium through which players planned, implemented, and manipulated their game strategies. Earlier versions of SURGE provided vector representations, for example, to help students understand what was happening and how they might adjust their control strategy, but these formal representations only communicated information that a player might or might not use. The challenges and opportunities in a given game level, however, were communicated through the layout of elements in the game world, not in the formal representations. Similarly, the player's controls for executing a strategy were also independent of the formal representations. Thus, while attending to the formal representations might help a player succeed in a level, earlier SURGE games did not use formal representations as the medium through which challenges and opportunities were communicated to the player, nor did earlier SURGE games use diagrammatic formal representations as the medium of control. In SURGE Symbolic, we made Cartesian graphs of position and velocity over time the medium through which the player controlled their avatar in the game, and we also made those same types of Cartesian graphs the medium through which the game communicated goals and challenges to the players.
Students in SURGE Symbolic play from the perspective of the space navigator, Surge. Game play is divided into levels, each focused on a specific navigational challenge or Newtonian concept. Students must move Surge forward or backward on her space board to find the appropriate position or velocity to navigate Surge to the exit portals, represented by a purple box, while avoiding electricity zones, represented as orange boxes (see Figure 1). Surge's path is traced onto a graph representing the magnitude of position and velocity over time. Students use an interface to set up their strategies by dragging blocks that contain segments of a graph to create a graph specifying position or velocity versus time. When students have finished designing their path, they must click the "Run" button to launch their plan. As the plan is launched, the players can watch as the plan unfolds in terms of the game character's motion on the map as well as in the formal representations.
In this study, we used two different versions of SURGE Symbolic. These two versions provided different types of internal scaffolds to students. The purpose of this study was not so much to compare which version of the game "worked better," but rather to explore how students made use of these various internal scaffolds as well as external scaffolds in the classroom, such as the teacher and the other students. The two versions of the game were very similar and differed only in the way that students controlled the game character and generated the graphs to design their path on a subset of the levels. In each level, students attempted to create a path for Surge to avoid the electricity zones and make it to the exit portal. In one version, called block level (BL), students first used a toggle button to set Surge's initial position (Figure 1). Students then constructed position or velocity graphs to control Surge with "blocks" representing segments of the graph. Students could organize the blocks in any order to create a graph, and students could swap blocks in and out of the graph. In the second version, called worldview (WV), players used the block interface on most levels, but used a different control interface on a subset of levels. In the worldview subset of levels, players clicked and dragged Surge to create a sequence of positions to which Surge would move over the course of the level (Figure 2). Thus, the player specified where Surge should be at each second during the level. In both versions, the level goals and parameters (starting position, exit portal position, required velocities) were identical for each level so that students playing each version had to pass the same challenges in order to progress in the game. While the BL version exclusively had students drag blocks to create a path, the WV version used both block levels and worldview levels. Whereas the block interface focuses on connections between graphs, the WV interface was designed to help students develop understandings of the connections between the graphs and the motion of the avatar, Surge, herself. Through designing the worldview interface, we therefore hoped to help students develop a more intuitive understanding of the nature of motion communicated by the various relationships in the Cartesian graphs.
Setting and participants
The study was conducted in a suburban school with 98 students in six sections of an STEM class spanning seventh grade (N7 = 51) and eighth grade (N8 = 47). The teacher of the class, Mrs. L, was a veteran teacher and had worked with the research team in previous years to pilot earlier versions of the game in her classroom. During the study, she walked around the classroom to assist students as needed with questions about the game or science concepts. The research team addressed only technical difficulties that students encountered during game play and recorded field notes of student game play and discourse. They intentionally did not provide assistance to students regarding conceptual questions or hints on how to solve levels.
The study lasted for six consecutive school days. Students took a pretest on the first day to assess prior understanding of concepts such as position and velocity, as well as interpretation of position-time and velocity-time graphs of an object's motion. They played the game for the next three and a half days. At the end of game play on the fifth day, students took a posttest that was identical to the pretest. On day 6, students talked with the research team about their perceptions of and struggles with the game and questions on the posttest.
Before the study began, Mrs. L's classroom was arranged in traditional rows of desks all facing the front of the classroom. For this study, Mrs. L chose to rearrange the desks into groups of four that were facing each other. As students entered the classroom on the first day of the study, they were allowed to self-select into groups. Approximately half of the groups were assigned to play the BL version, and the other half were assigned to play the WV version. Students in each group were encouraged to talk to each other for help, but groups were not allowed to talk to other groups, since they were playing different versions of the game. Table 1 shows the number of students and groups in each class period.
Data collection
Data collected for this study included pre-post scores and detailed daily field notes. Due to school district rules, video data collection was not allowed, so the researchers circulated around the room during game play taking copious notes of student game play, including student talk, teacher talk, observations of silent game play, and level completion for each student at the end of each class period. One researcher primarily focused on interactions between the teacher and students, while the second researcher focused on student talk among the groups. The researchers compared notes at the end of each day, discussed any observed patterns, and subsequently adjusted the focus of observations for the following day. On day 3, one of the researchers talked with each group in five of the six classes about how they worked through the struggles encountered in the game and how they worked together as a group. No group interview data were collected in period 6 because students in this class discovered a previously unknown bug in the game that allowed them to "complete" levels without actually solving them. Several students in this class used the bug to make progress in the game instead of soliciting help to work through challenges. Because this shortcut prevented them from working together in the same way as the other classes, no group interview data were collected in this class period.
Data analysis
Data analysis included quantitative comparison of pre-post scores using paired t-tests and qualitative analysis of the field notes using inductive thematic analysis [18,19]. Qualitative data analysis began by reviewing all field notes in order to become familiar with the data. We first coded the data to identify all instances of student talk and teacher talk recorded in the data. Since we were particularly interested in how students interacted with each other during game play and how they were making sense of the game and embedded physics concepts using the designed scaffolds, we made a second pass to identify instances when students were observed using or talking about the internal scaffolds or science concepts embedded in the game and when students were interacting with each other during game play. We also closely analyzed student responses to the researcher's question about types of help they used in the game and how the groups interacted with each other.
In order to identify themes in the data, we developed an initial open coding scheme using the constant comparative method [20] and iteratively applied these codes to the data, revising codes and grouping codes together as needed. Specific codes included such things as use of the level map, use of help screens, student talk about mechanics of the game (e.g., how do you reset the level?), student talk about the help within the game (e.g., what does that map show?), student talk about concepts in the game (e.g., the farther apart the dots, the faster Surge goes), teacher talk about concepts or game mechanics, attempts to seek help from a peer, and attempts to seek help from the teacher. Once the codes were applied to the data through an iterative process, we searched for themes among the codes. Themes that emerged from this analysis centered on the use of both internal and external scaffolds to make sense of the game and progress to higher levels. These themes are examined in greater detail in the following section.
Analysis and findings
To provide a backdrop for our analysis of how students made use of internal and external scaffolds during game play, we first analyze the pre-post scores. We then proceed to the focal analyses of how students made use of the internal and external scaffolds.
Analysis of pretest and posttest scores showed that students made gains in conceptual understanding of formal representations after playing the game. Paired t-tests compared pretest scores (M = 47.0, SD = 19.9) to posttest scores for all students (M = 53.5, SD = 21.7). These results showed that students made significant, albeit modest, gains in the posttest scores (t(97) = 3.79, p < 0.001) with an effect size of 0.31, suggesting that the game did indeed help students develop a better understanding of the target physics concepts. Independent t-tests were also conducted to compare changes in pre-post scores for the BL group (M = 6.1, SD = 17.0) and the WV group (M = 6.8, SD = 17.0), as well as changes in pre-post scores for seventh graders (M = 5.9, SD = 14.2) and eighth graders (M = 7.1, SD = 19.6), but no significant differences were found for either comparison (t(96) = −0.19, p = 0.85 for the BL and WV groups; t(96) = −0.36, p = 0.72 for the seventh and eighth grade groups). These results suggest that while the game helped students develop a better understanding of the target physics concepts, the differences in interface and representational design between the BL and WV groups did not significantly affect pre-post scores. Additionally, seventh and eighth graders experienced similar amounts of growth in the pre-post scores. Table 2 shows the mean scores for each subgroup.
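As a rough illustration of the reported analyses, here is a hedged sketch on synthetic scores. The raw data are not available, the BL/WV split is assumed to be roughly half of the 98 students, and the chapter does not state which effect-size formula was used; mean gain over the pooled SD of pre and post scores is shown as one common convention.

```python
# Hedged sketch of the reported tests on synthetic data; exact numbers differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(47.0, 19.9, size=98)          # synthetic pretest scores
post = pre + rng.normal(6.5, 17.0, size=98)    # synthetic gains

t, p = stats.ttest_rel(post, pre)              # paired t-test, df = 97
pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
effect_size = (post - pre).mean() / pooled_sd  # ~0.3 for these parameters

# Independent t-test on gains, assuming a roughly even BL/WV split (49/49).
bl_gains = rng.normal(6.1, 17.0, size=49)
wv_gains = rng.normal(6.8, 17.0, size=49)
t2, p2 = stats.ttest_ind(bl_gains, wv_gains)
print(round(t, 2), round(p, 4), round(effect_size, 2), round(t2, 2), round(p2, 2))
```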
Against this quantitative backdrop, we now shift the analysis to students' use of internal and external scaffolds during game play. We will illustrate the themes that emerged through this analysis by focusing on one group of four eighth grade boys in period 2. This group was chosen because the four members all interacted with the game in different ways, seeking help from different sources, yet interacted very actively with each other. All of the group members were also very verbal and articulated their group interactions in great detail, thus allowing for robust data collection in the field notes. We will use this group to provide examples of how its members differentially leveraged internal and external scaffolds to help them progress through the game, and we will also include data from other groups to provide examples of additional instantiations of these themes.
The members of our focus group were Dylan, Connor, Preston, and Grant (pseudonyms). The four boys approached their game play in different ways. Table 3 shows the performance of each group member on the pretest and posttest, as well as the levels completed on each day, as a way to show the different rates of progression through the game. Dylan was the first group member (and the first person in all six class periods) to successfully complete all 66 levels in the game, and Connor finished all the levels shortly after he did. Both Dylan and Connor had participated in a research study the previous year with the same teacher that involved playing a different but related game, so their quick progression through the levels could be attributed, in part, to their prior exposure to a conceptually integrated game involving graphing and physics concepts. However, the design of the game for this study was significantly different from the game they played the previous year and presented conceptual challenges for them to navigate, as evidenced by their use of internal and external scaffolds during game play. Grant and Preston had no prior experience with games similar to this one.
Figure 3 shows a diagram of the seating positions of this group. All four desks were facing toward each other to facilitate group conversation. While Dylan interacted with the group on multiple occasions, he was largely quiet as he intensely focused on solving his own levels as quickly as possible. In contrast, Grant, Preston, and Connor were consistently talking during game play, sharing ideas with each other, asking each other questions, and making general comments about their impressions of the game.
Internal scaffolds
In the context of this paper, internal scaffolds are supports embedded within the game that are designed to foster students' conceptual understanding of Newtonian kinematics and to help students make connections between the informal context of the game and the formal physics vocabulary and representations used to describe the motion of objects. These scaffolds include help screens and hints (Figure 4) when a student fails to solve a level, as well as tutorials that provide new information (e.g., explanations of formal physics representations like graphs and dot traces) for future levels. Students had the option of using these supports at any time during game play, and we observed numerous students utilizing these in-game scaffolds when seeking help to pass levels, as evidenced by numerous observations of students reading help screens before starting levels, clicking on hints after failing a level, or making comments about the help screens. Some students continued to use these internal scaffolds exclusively as their help source throughout their game play. However, many students found these internal scaffolds to be insufficient when the game grew more complex.
One internal scaffold that many students appeared to use productively was the level map, which could be toggled on and off by students (Figure 5). This map showed Surge's required initial starting position, as well as a dot trace for determining the initial velocity necessary for Surge to avoid a deadly electric field. Observation of student game play showed that students quickly learned how to determine the initial starting position from the map. However, many had difficulty interpreting the distance between dots as an indicator of Surge's speed and translating that speed to a meaningful slope on the graph. Field notes indicated that students who only used the level map to obtain the starting position and did not use the dot traces to determine velocity often adopted an unproductive "guess and check" strategy to solve the level. For example, one student in period 2 was observed attempting level 19 numerous times.
The student was seen viewing the level map and then moving Surge to the correct starting position, thus indicating a correct interpretation of the required starting position from the map. However, he then appeared to guess as to which initial speed he needed to start, trying various speed blocks and failing the level several times before finally discovering the correct block to use through trial and error. While this student successfully determined the starting position using the map, he failed to notice or interpret any information about the required starting speed from the same map. Another student commented in the debrief session after game play that "the [map] wasn't very helpful. It told you the start position, but didn't say how fast it should be going each time," indicating that the student failed to understand that the dot traces could be used to determine the correct starting speed. Yet another student was overheard commenting on the map, saying, "It says to notice the dots in the map. There are no dots on the map!" Apparently, this student never even noticed the dots on the map, much less made a connection between the dots and velocity.
Others who used the map for both starting position and initial velocity appeared to solve levels more quickly and demonstrated a stronger understanding of the targeted concepts in the game. An example of this usage can be seen in our focus group when Grant attempted to solve level 19. He skipped through the explanation screens at the beginning of the level, looked at the map, and correctly moved Surge to a starting position of 8 m. However, he chose an incorrect initial velocity block, causing Surge to crash and fail the level. Reading the hint block, Grant exclaimed, "Too slow! What?!?!" He then enlisted the help of another member of his group, Connor, to figure out how to solve the level. Connor came over to Grant's chair to look at his screen and asked, "Where does the dot trail start?" Grant answered, "8." Then Connor replied, "Good. How are the dots spread out?" In this exchange, Connor tried to get Grant to notice the dot traces and use the spacing between the dots to determine whether he needed to start at a slow speed or a faster speed. In another example, Connor used the dot traces to calculate the initial velocity he needed for a level. While looking at the level map showing dots spaced far apart and thinking aloud, he said, "That [spacing] is like big. 4 per second-ish." He then chose the 4 m/s block and successfully solved the level. Connor demonstrated his understanding of how to use the spacing between the dots to determine his initial velocity, seemingly understanding that the farther apart the dots are, the faster the speed.
Dylan also demonstrated using the map for information on both starting position and velocity. In one instance, he noticed Grant struggling to solve a level. Dylan looked at Grant's level map and told him to start at 4. Then Dylan said, "The dots are close together. 4 s. You need to make her go forward in 4 s." In this exchange, Dylan demonstrated that he was using the map for information on both starting position and velocity. Not only did Dylan and Connor finish all the game levels before anyone else in the class, they also scored high on the posttest at the end, demonstrating strong conceptual understanding through their game play performance and their scores on the posttest. While Dylan started with a high pretest score, Connor had one of the larger gains across all the classes, possibly suggesting that his interaction with the game helped him make sense of the science embedded within it. His gains could also possibly be attributed to his close proximity and access to Dylan, a student in his group who had a more sophisticated conceptual understanding of the science and math ideas in the game, as evidenced by his high pretest score.
Unlike other members in the class, Connor had the opportunity to tap into Dylan's expertise and use him as an external scaffold during game play.
External scaffolds
Although most students used internal scaffolds in some way to assist their game play, many also found that these scaffolds were not enough to get through challenges. They quickly sought help from external resources available to them outside the game to help them advance. The source of information they needed to make progress came from various people or resources outside the game, what we are calling "knowers." Students chose to seek help from the teacher, each other, and even other tools and materials to reason through the math and science needed to pass a level.
We noticed that the type of knower students sought out and the nature of the questions they asked varied by student. Students who primarily wanted to pass levels but did not care as much about the reasoning behind their success or failure tended to solicit assistance that would help them advance in the moment on a particular level. Students who were motivated by this type of help possessed what we termed a "game play orientation." Students who wanted to make sense of the rules and strategies behind Surge's movements often sought help to reason through the math and physics in the game. We refer to this as a "sense-making orientation." We have observed related patterns in studies with other games (e.g., [17]).
An example of these orientations can be seen in the focus group. Preston's game-play orientation was evident from the beginning, when he approached the game from the perspective of a kid playing a video game in an informal setting. He quickly jumped into the storyline of the game and was primarily concerned about saving "fuzzies" and unlocking new settings (i.e., "Oh, there's SNOW!!"). At some point in his game play, however, he got somewhat frustrated that he could not do more with the rescued fuzzies. He made the least progress in his group and actually showed losses in his posttest score. He never tried to make sense of the math or science behind the game levels, admitting in the whole-class debrief at the end of the study, "I'm going to be honest. After taking the post-test, I didn't feel I learned anything. I was focused on beating the game." Dylan, however, demonstrated a sense-making orientation, as illustrated on day 2 when he worked through a level and failed. We observed him reading the feedback, presumably to reanalyze his approach. After he thought about it, he said to the computer, "Oh, OK," and succeeded on his next attempt. Dylan's repeated attempts to integrate the feedback indicate that he was trying to make sense of Surge's movements and using the game feedback to inform his next strategy.
With either orientation, students identified different knowers in the learning environment when they wanted help to make progress toward their goal. We conjecture that an individual student's orientation toward the game influenced the nature of the questions asked and the type of knower they sought for help. In this study, we surveyed each student who was present on day 3 of game play and asked how they typically got help when they encountered a sticking point in the game. All students first responded that they started with the internal scaffolds within the game. When the game did not provide the help they needed, they identified four categories of external scaffolds (what we are calling, in this case, knowers) that they turned to for further help. While some students explained that they had a chain of knowers they could go to next if one failed, Table 4 identifies the first knower each student would seek when the game's internal scaffolds no longer provided the help they needed.
Game as knower
Students who identified the "game as knower" were students who showed little or no use of external scaffolds. They leaned on the internal scaffolds to help them figure out what to do: they would revisit earlier levels of the game to help with later levels, repeatedly consult help screens or hints, or write down notes from the help screens. There was little audible or visible evidence of the use of this scaffold, but students self-reported these behaviors when speaking with researchers. In the focus group, both Dylan and Preston were classified into this category, as they rarely, if ever, sought help from an external scaffold and relied on the game to help them progress through levels. However, since Dylan possessed a sense-making orientation while Preston demonstrated a game-play orientation, their interactions with the internal scaffolds looked different, as described previously. Students in other groups showed their reliance on the game as knower in a couple of different ways. One boy said that when he gets stuck, he generally "tries a couple of levels before" his current level to review any instructions that he might have gone through too quickly or to see if the earlier levels could provide guidance that he did not pay enough attention to earlier. Another student described how he would copy and paste the help tips into an internet browser so he could flip back to see them when he got stuck. For some students, these strategies suggest they were applying knowledge of games outside the classroom to be successful with this game. For example, students who are familiar with games may expect help features within the game to provide all the assistance necessary to succeed with game play. One boy had such high expectations that the game was going to help him advance that he "sat a full period on one level trying to use the game to help get through the level." He thought it should have been easy and was "too embarrassed" to ask for any external help to get through the level. He kept reading the help in the game but did not understand what it was telling him to do. Neither the students in his group nor Mrs. L knew that he was stuck for a full class period.
Self as knower
The "game as knower" strategy is closely connected to the "self as knower" strategy and is sometimes impossible to tease apart.There were a few students who showed visible evidence beyond the screen that they were doing something more in the game than just getting through levels.Students who fell in this category often used the feedback and help from the game to then integrate their prior knowledge of math and physics before deciding on their next strategy.These studentsworkedthroughthechallengeontheirown,oftensucceedingwithjustthehelpsuggestions within the game.Yet, there were times they tapped into external tools or representational resources, such as paper and pencil to extend graph lines, to help them with their sensemaking.Thus, the students who fell into the "self as knower" category and made their math and science thinking visible were students who had a sensemaking orientation.For example, while a level was running, one student moved her arms in gestures that mirrored the graph lines that were being generated in the game.When Surge crashed, she froze her arms, and reacted, "Oh…The down is not right.I've got to keep going down, down, down.Gosh!"She reshaped her arms while talking this through to imagine what move she needed to try next.A few students used paper to do some inscriptional work with mathematical notations that were not in the game or modified representations in the game by using extra paper to stretch graph lines to better obtain data.For example, Connor used an index card to mark a dot on one card that he held up to a graph, and then he moved that card up to the graph above it, demonstrating his strategy of coordinating information across graphs to inform his next move.While it is difficult to identify students who rely on their "self as knower" versus the "game as knower," we did notice that the students who did extend their reasoning beyond the game often advanced to higher levels in the game more quickly and had either greater gains or higher posttest scores.
Peer as knower
Students who used a "peer as knower" identified someone in their group to ask for help to solve levels. Not just any member of the group could serve as the knower, however. Students had multiple reasons for selecting different peers. Sometimes, the same student would call on different peer knowers for different reasons. For example, Grant positioned Connor and Dylan as his knowers at different times. Initially, when he asked for help, he adopted a game-play orientation to get through a level. He asked Dylan how to get through a level and was happy with a response to "start at 4." Yet, when that did not work moving forward and he realized that he was still getting stuck in the same part of each level, he took on more of a sense-making orientation, asking Dylan, "How do you know where to start in general?" Here, he was trying to uncover the strategy behind getting started with each level, not just the number where he needed to start in order to pass that particular level. Later, when Grant realized that Dylan was so far ahead and working with the game in his own way, he called on Connor when he needed help making sense of the dot trace map. This was interesting because Preston and Dylan were the peers closest in proximity (see Figure 3), but Grant recognized that the two of them had their own game play strategies that would be interrupted if he asked a question. He knew Connor was ahead of him, but closer to his own level than Dylan, so he asked Connor to physically walk around to his computer and help him.
Here, we see an example where the group member who was the farthest along in the game (Dylan in this case) was not always positioned as the knower in the group. Many students responded that they preferred to get help from a group member who was just slightly ahead of them in the game instead of the student who was several levels ahead.
In this group, Grant selected his peer knower based on what he needed: a quick response to get through a level, or someone who could spend more time with him to make sense of the level. Other students had different reasons for calling on different peers. Some students identified a partner as knower. Early on in game play, there were some groups in which two or more members explicitly decided to work through each level together. They often repositioned their desks so they could more easily see each other's screens, or they engaged in more talk-alouds while they were making decisions on how to proceed in a level. These groups tended to move relatively slowly through game play, making sure that each member of the partnership was progressing. For example, there were times when one member had a lucky guess that allowed him to pass the level, but he struggled to get his partners through the same level because he did not have sound reasoning. Still, he would not move on to the next level until his other partners were with him. In general, students who worked in partnerships would hang back with the slowest member to work together to identify the inputs that would yield successful outputs.
Other groups just said "anyone in the group" would serve as their knower. In these groups, each individual worked independently. If one student got stuck, he or she said something like, "Did anyone get through level 21? How did you do it?" Any member of the group could respond at any time. Sometimes it was an individual who had recently completed the level. Other times, it was a student who was willing to take the time to work through a level with someone else.
Finally, some groups had one student who would serve as the knower for anyone in the group.
In one group, when the students were asked who they go to for help, they all identified the same individual in the group. He also identified himself as the group knower. When pressed on playing this role, he responded, "It is just what I do." Apparently, in this class and others, he is perceived as a "knower" by his peers, and he is comfortable with this role.
We also observed what we termed "reluctant peer knowers": someone a group member calls on for help, but who is reluctant to help because he or she would rather play his or her own game. Reluctant knowers often gave short tips like "Start at 4," with no reasoning, as Dylan did for Grant in an earlier example. They would provide some assistance, but it was often terse and more of a cheat. Reluctant knowers sometimes took a group member's computer and did the level for them, or simply turned their own computer around to show a level solution to the other member. If their help did not work, they most often did not spend any more time trying to help the student who had asked for assistance.
Whether reluctant or not, the peer as knower was the external help that most students turned to when they got stuck in the game. It seems that working in groups, or just having the ability to talk to peers when needed, is an external scaffold to game play that students are comfortable using for assistance. Students respond well to peer feedback, and this keeps them engaged in game play by helping them work through obstacles collaboratively.
Teacher as knower
A final category included students who identified their "teacher as knower" and repeatedly asked Mrs. L for help. This group included students who first tried to work with the internal scaffolds of the game, still got stuck on a level, and immediately turned to Mrs. L to help them work through the challenge. These students rarely consulted with their group members, preferring instead to work exclusively with Mrs. L. Students in this category indicated that they went to Mrs. L because she could "explain it better" than anyone else or because they liked to problem-solve with her. In our focus group, no one solicited help from Mrs. L, but we did see a few examples of this from other groups, particularly in the earlier levels of game play. It was clear early on that Mrs. L did not have quick answers to help students get through levels, but she was willing to pull up a chair and work alongside students to beat a level. One girl happened to be in a group where the other members all missed a day, and she ended up being many levels ahead of her peers. Because she did not feel she had a peer in her group who could help her with the game, she went straight to Mrs. L for help.
Mrs. L knew enough about the game to get through the early levels easily. Still, when students asked for help on these early levels, she did not give them a direct answer to their questions. Instead, she pressed their reasoning and encouraged them to try different options. Whether their attempts failed or succeeded, she then encouraged them to reflect on what they did to make sense of the failure or success. She spent a lot of time with students when they requested help in the early levels. As the levels got more complex, she tended to hover over a student and watch their game play decisions, but she asked fewer questions and, on the whole, interacted much less frequently with students. This could be one reason why more students did not call on her for additional help and instead chose another strategy, either within the game or asking a peer for help.
Discussion
While digital games for learning have been shown to support students' conceptual development in science, they have not been widely adopted in classrooms. This paper explores how students used the internal scaffolds in the game and the external scaffolds provided by other knowers in the classroom to successfully play a game for learning physics. A key finding, while not surprising, was that not all students engage in game play in the same manner. Some students approached the game from a game-play orientation, where they were simply focused on solving levels, while others approached the game from a sense-making orientation, where they sought to truly understand the target concepts and formal representations.
In this study, most students started within the game to find the help they needed to advance through levels. When they ran into a need for further help, their orientation influenced whom they turned to and for what reason. Whether they relied on their own ability to make sense of the game, requested help from a peer, or solicited Mrs. L's help, this classroom game play experience provided students with opportunities to locate the help they needed when they needed it. External resources, from paper to peers, were accessible to all students. Even the desk arrangement, changed from traditional rows in their typical class sessions to groupings of four for the game sessions, seemed to be an invitation for students to work together. Communication among peers was clearly encouraged.
These findings have important implications for how teachers design a game play learning environment in their classrooms. Physically, different arrangements of furniture, particularly when computers are used, imply different kinds of participant structures. Teachers should consider what kind of access they want their students to have to external scaffolds, including their peers, during game play. Teachers also need to be aware of the different orientations that students may adopt during game play and of how a student's stance may influence how they attend to the conceptual underpinnings of the game. This could mean that teachers take an active role during game play to facilitate connections between the game and science learning, either through intentional discourse with groups or in whole-class instruction interspersed throughout the game play period. This kind of discourse can be invaluable for fostering the necessary science thinking during game play. Research has shown that students can get very distracted while doing a particular activity and forget to attend to what they are supposed to be understanding [21]. In game play, the same stance can emerge, particularly for students who have a game-play orientation like Preston's and who are very concerned with beating the game but not with learning along the way. Teachers can play a role in facilitating the game-to-science content connection for students. Allowing students to work in groups, even if they are playing the game individually, encourages students to talk, which allows their thinking to become visible [22]. Teachers can use this talk as a formative assessment opportunity to identify ideas that need to be addressed and questions that can be asked to deepen student thinking [23].
Leveraging effective teaching practices with the affordances of digital games for learning can potentially lead to rich, meaningful student engagement with scientific ideas.
We believe this paper also has implications for the designers of future games for learning. In our study, we noted the crucial role that external scaffolds played in most students' game play experience. While some students relied exclusively on internal scaffolds and their own prior knowledge and resources, many students preferred to seek help from a peer or Mrs. L. We think game designers should consider the social capital present in most classrooms and leverage the discourse that will likely emerge when games are played in a traditional K-12 classroom. This could take the form of online discussion forums or embedded videos that serve as a virtual teacher and explain challenging concepts within the game.
This study has provided an examination of what DIG game play looked like in real classrooms. While much remains to be learned, this study showed that the DIG itself provided a goal (whether that goal was just to beat a level or a deeper goal of making sense of the character's motions and learning more about math and science in order to pass a level). Progress toward the goal was interrupted by challenges, also presented by the game. When these challenges proved too difficult to overcome, students sought help, and this is where the game extended its reach into the classroom context. Understanding more about when students are reaching for help, who they are turning to, and what kind of help they are seeking can help game designers and teachers learn how to design effective learning environments that support game play in secondary science classrooms. Essentially, while much research on the design of scaffolding in games for learning has focused on internal scaffolds, future research on external scaffolds may prove much more productive, with the added bonus of potentially even greater generalizability.
Figure 1. Anatomy of an introductory block level.
Figure 2. Anatomy of an introductory worldview level.
Figure 3. Group seating positions of focus group students, including the direction desks are facing.
Figure 4. Sample help screen after a student fails a level.
Figure 5. Level map showing starting position and dot trace.
Table 1. Number of students and groups playing each version of SURGE Symbolic.
Table 2. Mean scores for pretest and posttest.
Table 3. Levels completed at the end of each day of game play and pre-post performance for the focus group.
Table 4. Type of knowers used by number of students per period.
Digital Games in the Science Classroom: Leveraging Internal and External Scaffolds during Game Play. http://dx.doi.org/10.5772/intechopen.72071 | 2018-12-22T06:20:36.279Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "78518bcd6935d605b0423f3f72a00158a70cfa5e",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/57919",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "65353fcbe28e4b2e8876f2b2aa1c4ced736f15a2",
"s2fieldsofstudy": [
"Education",
"Computer Science",
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
86444690 | pes2o/s2orc | v3-fos-license | Twelve-month kinetics of circulating fibrosis-linked microRNAs (miR-21, miR-29, miR-30, and miR-133a) and the relationship with extracellular matrix fibrosis in dilated cardiomyopathy
Introduction: A single measurement of any biomarker may not reflect its full biological meaning. The kinetics of fibrosis-linked microRNAs and their relationship with extracellular matrix (ECM) fibrosis in dilated cardiomyopathy (DCM) have not been explored. Material and methods: We evaluated 70 consecutive DCM patients (48 ±12.1 years, left ventricular ejection fraction 24.4 ±7.4%). All patients underwent right ventricular endomyocardial biopsy in order to quantify ECM fibrosis and measure collagen volume fraction (CVF). Circulating microRNAs (miR-21-5p, miR-29b, miR-30c-5p, and miR-133a-3p) were measured with quantitative polymerase chain reaction (PCR) at baseline and at 3 and 12 months. Results: Based on the biopsy results, two groups of patients were identified: with (n = 24, 34.3%) and without (n = 46, 65.7%) ECM fibrosis. Except for a single measurement of miR-29b at 3 months (DCM with fibrosis: 6.03 ±0.72 vs. DCM without fibrosis: 6.4 ±0.75 ΔCq; p < 0.05), the baseline, 3-, and 12-month kinetics of microRNAs did not differ between the two groups. Moreover, 12-month microRNA kinetics did not differ between patients with new-onset DCM (duration < 6 months; n = 35) and chronic DCM (> 6 months; n = 35). Only miR-29 at 3 months correlated with CVF (r = –0.31; p < 0.05), whereas the other microRNAs did not correlate with CVF at either 3 or 12 months. Conclusions: Regardless of ECM fibrosis status or duration of the disease, 12-month patterns of circulating microRNAs are similar in DCM. Correlations between microRNAs, measured at 3 and 12 months, are lower than expected. In this study, regardless of the time point, circulating microRNAs were not able to differentiate between DCM patients with versus without fibrosis.
Introduction
Extracellular matrix (ECM) fibrosis is one of the key mechanisms that leads to heart failure (HF) [1]. Despite contemporary treatments in dilated cardiomyopathy (DCM) and HF, there is little beneficial effect on ECM fibrosis, which, in fact, frequently progresses [2]. Extracellular matrix fibrosis involves numerous parallel and intertwined mechanisms resulting in amplification or inhibition of fibrosis at various levels. In addition to well-characterized mechanisms, such as TGF-β-dependent pathways, cytokines, growth hormones, and regulatory proteins, there is increasing interest in fibrosis regulation via non-coding RNAs [3,4]. MicroRNAs are short, non-coding RNA sequences that regulate gene expression at the post-transcriptional level by targeting the 3'-untranslated region of mRNA sequences. Recent studies indicate that microRNAs control a variety of cellular processes essential to the heart, including fibrosis [5,6]. Circulating microRNAs are present in all blood components, including plasma, platelets, erythrocytes, and nucleated blood cells. They are remarkably stable in plasma and are resistant to harsh external conditions, such as boiling, low or high pH, long-term storage as well as internal conditions, e.g. they are protected from endogenous RNase activity [7]. The selection of microRNAs for the current study was based on research, performed predominantly in animal models, that established a link between miR-21, miR-29, miR-30, and miR-133 and cardiac fibrosis [8].
The biomarker-based approach in chronic diseases, such as HF, is slowly gaining momentum. Studies have revealed that certain biomarkers, including B-type natriuretic peptide, C-reactive protein, and troponins, may help to predict clinical outcomes, distinguish high-risk subgroups, or even guide therapy [9]. However, the majority of studies utilized only a single (one time point) measurement of the biomarker and, as such, our understanding of the kinetic patterns of biomarkers is rudimentary [10]. The relationships between either circulating or myocardial microRNAs and ECM fibrosis in various cardiac conditions, including DCM and HF, have been previously explored [11,12]. However, only a few studies have investigated the dynamic nature of circulating microRNAs in cardiac diseases, and none have specifically addressed the relationship between ECM fibrosis and the kinetics of microRNAs. Therefore, we aimed to investigate the long-term kinetics of fibrosis-linked microRNAs in DCM patients stratified according to the duration of the disease and fibrosis status.
Study population
The study was approved by the institutional review board and the ethics committee, and all patients provided written informed consent prior to enrollment. Dilated cardiomyopathy was diagnosed according to the guidelines of the European Society of Cardiology [13], and HF status was defined according to New York Heart Association (NYHA) criteria. In order to be enrolled, patients with class I-III NYHA HF had to remain clinically stable for at least 2 weeks prior to beginning the study. Our hospital is a tertiary referral center for advanced HF and cardiomyopathies. All DCM patients who were recruited for this study were evaluated either in the outpatient or inpatient setting, including transfers from referring hospitals. As one of the main aims of this study was to explore whether fibrosis-related biomarkers are related to the duration of disease, our goal was to include an equal number of patients with new-onset and chronic DCM (35 subjects per group). The recruitment of patients with chronic DCM (duration > 6 months) took less time than for those with recent DCM (duration ≤ 6 months). Therefore, the chronic DCM subgroup was recruited in less than 10 months, and an additional 5 months was required to complete the recruitment of the remaining new-onset DCM patients for a total of 35. Two hundred patients were screened in total. New-onset HF was defined as a duration of symptoms less than or equal to 6 months (group 1), and chronic DCM (group 2) was defined as a duration of symptoms greater than 6 months. The duration of HF symptoms was measured as the time which had elapsed from the beginning of typical HF symptoms (such as dyspnea on exertion or at rest, paroxysmal nocturnal dyspnea, orthopnea, palpitations, and/or edemas) to the index hospitalization or ambulatory visit to the cardiology clinic. Assessment of the patients' clinical status, transthoracic echocardiograms, and blood sampling were performed at baseline, 3, and 12 months. A study flow diagram is presented in the figure. The control subjects were recruited from a pool of healthy hospital workers, families of hospital personnel, or those who responded to a recruitment call. The baseline characteristics of the control group have already been presented in a previous paper [11].
Echocardiography
Echocardiography was performed according to the recommendations of the American and European Associations of Cardiovascular Imaging [14]. Transthoracic echocardiograms were obtained on commercially available equipment (Vivid 7 GE Medical System, Horten, Norway) with a phased array 1.5-4 MHz transducer. Conventional M-mode, 2-dimensional and Doppler parameters were calculated. All measurements were obtained from the mean of 3 beats for patients in sinus rhythm, and 5 beats for those in atrial fibrillation. Chamber diameters, areas, and volumes were normalized for body surface area.
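The body-surface-area indexing mentioned above is simple arithmetic. Below is a minimal sketch assuming the Mosteller formula, since the paper does not state which BSA formula was used; the example values are invented.

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area in m^2 (Mosteller); shown as one common choice,
    since the formula used in the study is not stated."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def indexed(value, height_cm, weight_kg):
    """Normalize a chamber volume (ml), area, or diameter to BSA."""
    return value / bsa_mosteller(height_cm, weight_kg)

# Hypothetical example: a 210-ml LV end-diastolic volume in a 178-cm, 85-kg
# patient indexes to roughly 102 ml/m^2.
print(round(indexed(210.0, 178.0, 85.0), 1))
```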
Endomyocardial biopsy
Endomyocardial biopsy (EMB) procedures were performed via a femoral or jugular vein approach [15]. Long (104 cm), flexible, 7 French disposable biopsy forceps with small jaws (Cordis, Johnson & Johnson Co, Miami Lakes, FL, USA) were used. Relying on years of experience, both in the biopsy technique and in state-of-the-art pathology assessment, we have found that five good-quality myocardial samples are adequate for numerous laboratory analyses. Consequently, we acquired five samples in the majority of cases. However, in 5 patients, for reasons including technical difficulties and the length of the procedure, we obtained only 3-4 cardiac samples. The presence or absence of fibrosis, as well as the degree of fibrosis, was determined qualitatively by an experienced pathologist who had been blinded to the clinical data. Specimens for fibrosis assessment were stained with Masson's trichrome; fibrotic areas stained blue and normal muscle fibers stained red. We defined ECM fibrosis as the disproportionate accumulation of fibrillar collagen in intermuscular spaces previously devoid of collagen. Patients were diagnosed as either fibrosis-positive or fibrosis-negative. Collagen volume fraction (CVF) was assessed by quantitative morphometry in biopsy sections stained with the collagen-specific picro-sirius red. For each cardiac sample, a total of 10 fields were analyzed with a ×40 objective lens. CVF was expressed as the percentage of the red-stained area per total myocardial tissue area (%) [11].
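The CVF computation itself reduces to an area fraction. The sketch below assumes the collagen-positive pixels have already been segmented (e.g., by color-thresholding the picro-sirius red stain), which is the nontrivial step; all names and values are illustrative.

```python
import numpy as np

def collagen_volume_fraction(collagen_mask, tissue_mask):
    """CVF (%) = red-stained (collagen) area / total tissue area x 100,
    computed per field; the study averaged 10 fields per sample at x40."""
    return 100.0 * collagen_mask[tissue_mask].mean()

# Toy field: every pixel is tissue, and 8 of 40 rows are collagen-positive.
tissue = np.ones((40, 40), dtype=bool)
collagen = np.zeros((40, 40), dtype=bool)
collagen[:8, :] = True
print(collagen_volume_fraction(collagen, tissue))  # 20.0 (%)
```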
Circulating microRNAs
Levels of microRNA were measured by qPCR. The technique has been previously described [16]. RNA was extracted from 100 μl of plasma using a mirVana kit (Life Technologies) following the manufacturer's protocol. Two microliters of extracted RNA was used to perform reverse transcription with a TaqMan Advanced MicroRNA cDNA Synthesis Kit (Life Technologies). Samples of cDNA were diluted 10× before the qPCR reaction. qPCR was conducted on 384-well plates with TaqMan Advanced MasterMix and TaqMan Advanced Assays targeting hsa-miR-21-5p, hsa-miR-29b-3p, hsa-miR-30c-5p, hsa-miR-133a-3p, and hsa-miR-26a-5p. Fifteen-microliter reactions were prepared with a Bravo pipetting station (Agilent Technologies), and the real-time reaction was run and read on the CFX384 Real Time PCR Detection System (Bio-Rad). Three candidate reference microRNAs (miR-15b, miR-16, and miR-423) were selected based on our previous experience with qPCR in plasma samples. A coefficient of variation and M-value were calculated for these microRNAs. The best combination of reference microRNAs was miR-15b and miR-16 (CV = 0.23 and 0.2, respectively). Mean Cq values were normalized to the geometric mean of hsa-miR-15b-5p and hsa-miR-16-5p, which were selected as relatively stable controls in pilot experiments. Normalized data were expressed for each sample as ΔCq, defined as the difference between the Cq value of the microRNA of interest and the geometric mean of miR-15b and miR-16 for that particular sample.
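A minimal sketch of the normalization just described, with invented Cq values; the reference value is the geometric mean of the two housekeeping microRNAs' Cq values.

```python
import math

def delta_cq(cq_target, cq_mir15b, cq_mir16):
    """deltaCq = Cq(target) - geometric mean of Cq(miR-15b) and Cq(miR-16)."""
    reference = math.sqrt(cq_mir15b * cq_mir16)  # geometric mean of two values
    return cq_target - reference

# Hypothetical plate readings: a lower Cq means more template, so a lower
# deltaCq indicates a higher relative abundance of the target microRNA.
print(round(delta_cq(28.4, 22.1, 22.5), 2))  # ~6.1, in the range reported above
```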
Statistical analysis
The normality of the distribution of variables was assessed with the Shapiro-Wilk test. Levels of biomarkers at different time points were compared using the Wilcoxon test for paired samples; a nonparametric test was used because the differences between time points were not normally distributed (according to the Shapiro-Wilk test). Differences between the two groups in the levels of miR-21, miR-29, etc. at a single time point, as well as differences in changes between time points, were assessed with the Mann-Whitney-Wilcoxon two-sample test. A nonparametric procedure was chosen due to the lack of normality in the groups compared. All results were considered statistically significant when p was < 0.05. All analyses were conducted in R software, version 3.3.2 (R Foundation for Statistical Computing, Vienna, Austria). Table I shows the baseline characteristics of the study population. The majority of patients were
Kinetics of circulating microRNAs in dilated cardiomyopathy with fibrosis
There was no statistically significant difference in miR-21 values between baseline, 3-, and 12-month measurements.
Comparison of 3- and 12-month microRNAs between dilated cardiomyopathy patients and the control group
A comparison of baseline values of circulating microRNAs between DCM patients and the control group was presented in a previous paper [11]. Three- and 12-month microRNA comparisons between those two groups are presented in Table II. All microRNAs, both at 3 and 12 months, significantly differed between DCM patients and the controls.
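For illustration, the following hedged sketch reproduces the testing pipeline from the Statistical analysis section on synthetic ΔCq values; the study itself used R 3.3.2, and group sizes and means here loosely follow the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
fibrosis_3m = rng.normal(6.03, 0.72, size=24)     # DCM with fibrosis, miR-29b
no_fibrosis_3m = rng.normal(6.40, 0.75, size=46)  # DCM without fibrosis
baseline = fibrosis_3m + rng.normal(0.0, 0.4, size=24)

print(stats.shapiro(fibrosis_3m - baseline))            # normality of differences
print(stats.wilcoxon(baseline, fibrosis_3m))            # paired, within-group kinetics
print(stats.mannwhitneyu(fibrosis_3m, no_fibrosis_3m))  # two independent groups
```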
Kinetics of circulating microRNAs in dilated cardiomyopathy without fibrosis
The comparisons of circulating microRNAs measured at baseline and at 3 and 12 months in DCM patients without fibrosis are presented in Figure 3. The value of miR-21 significantly decreased from baseline to 3 months.
Comparison of the kinetics of circulating microRNAs between new-onset and chronic dilated cardiomyopathy
MicroRNA values are provided above, and the p-values are shown in Table III. Twelve-month kinetics are depicted in Figure 2. Analyses revealed no statistically significant difference in microRNA levels at either 3 or 12 months between new-onset and chronic DCM patients.
Comparison of the kinetics of circulating microRNAs between dilated cardiomyopathy with and without fibrosis
MicroRNA values are presented above, and the p-values are included in Table IV. The illustration of these comparisons is presented in the corresponding figure.
Correlations between circulating microRNAs at 3 and 12 months with collagen volume fraction
Out of the four microRNAs, only miR-29 at 3 months inversely correlated with CVF (r = −0.31; p < 0.01). There was no relationship between any other microRNA at any time point and fibrosis (Table V).
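As a toy illustration of this correlation analysis on synthetic data (the paper does not state whether a Pearson or Spearman coefficient was used, so both are shown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
cvf = rng.uniform(2.0, 25.0, size=70)                   # CVF, %
mir29_3m = 6.2 - 0.03 * cvf + rng.normal(0.0, 0.7, 70)  # weak inverse trend

print(stats.pearsonr(mir29_3m, cvf))   # linear correlation
print(stats.spearmanr(mir29_3m, cvf))  # rank correlation
```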
Discussion
The main findings of the study can be summarized as follows. First, 12-month kinetics of miR-21, miR-29, miR-30 and miR-133 are similar in DCM patients stratified according to disease duration and fibrosis status. Second, except for one significant difference of miR-29 at 3 months between DCM patients with and without fibrosis, the 12-month patterns of the remaining microRNAs do not differ in patients with new-onset and chronic DCM and patients with and without fibrosis. Thus, no distinct microRNA pattern, regardless of the duration of disease and fibrosis status, has been identified. Finally, correlations between 4 microRNAs, measured at 3 and 12 months, and ECM fibrosis are lower than expected.
The majority of studies on the biological role of microRNAs, either in humans or laboratory animals, relied only on a single (one time point) measurement. As such, there is a dearth of knowledge regarding the pattern of microRNAs in cardiac diseases. The few studies that have explored this subject concentrated on either short-term (hours and days) or longer-term (months) kinetics. Most of those studies focused on the kinetic release of microRNAs in urgent cardiac conditions, such as acute myocardial infarction (AMI) or HF exacerbation requiring urgent hospitalization. Although more than a dozen microRNAs have been associated with cardiac fibrosis, it is unknown whether these associations are maintained over a longer period and whether fibrosis determines any specific pattern of microRNAs.
The investigators of the Bio-SHIFT project performed repeated measurements of multiple microRNAs, specifically miR-1254, miR-22-3p, miR-423-5p, miR-486-5p, miR-320a, miR-345-5p, and miR-378a-3p, in 263 outpatients with HF [17]. The authors explored the associations of the temporal patterns of these microRNAs with adverse events and discovered that the temporal pattern of miR-22-3p was inversely and independently associated with the primary endpoint, a combination of HF hospitalization, cardiovascular mortality, cardiac transplantation, and LVAD implantation [17]. Our studies have previously reported the absence of associations between baseline circulating microRNAs, specifically miR-21, miR-26, miR-29, miR-30, and miR-133a, and CV outcomes. However, myocardial miR-133a was found to be an independent predictor of the combined endpoint of CV death and urgent HF hospitalization [12]. In another study, Koyama et al. investigated short-term kinetics (from admission to hospital day 7) of a panel consisting of 125 microRNAs in 42 patients who were urgently admitted due to HF exacerbation [18]. The authors detected several fluctuations of circulating microRNAs, including miR-122-5p, miR-143-3p, miR-196-5p, and miR-200c-5p. Out of this large panel, miR-122-5p was found to be the most abundant and correlated with changes in serum liver function markers (aspartate aminotransferase and alanine aminotransferase) that reflect liver damage [18]. Although Leistner et al. did not study longitudinal changes of microRNAs, they explored the trans-coronary (measured in the aorta and coronary sinus) gradient of microRNAs in 52 patients with stable coronary artery disease (CAD) [19]. The authors observed correlations between plaque burden (assessed with optical coherence tomography) and the trans-coronary gradient of miR-126-3p, miR-145-5p, miR-155-5p, and miR-29b-3p. Of note, the trans-coronary gradient of miR-29b-3p correlated significantly with plaque fibrosis [19]. The correlation of miR-29b with fibrosis (plaque fibrosis and ECM fibrosis) strengthens our knowledge of the pro-fibrotic role of miR-29. Studies from our group as well as others have shown much stronger correlations between microRNAs and ECM fibrosis, e.g., baseline miR-26 and miR-30 correlated with collagen volume fraction (r = 0.48 and r = −0.58; p < 0.01, respectively) [11,20]. Although we applied exactly the same methodology of microRNA analysis, we did not observe similar correlations between ECM fibrosis and the aforementioned or other microRNAs under study, albeit measured at 3 and 12 months. There is no clear explanation for this profound discrepancy other than that the kinetics of "pro-fibrotic" microRNAs fluctuate for as yet undetermined reasons. This possibility should be taken into account in order to properly interpret the value of any microRNA as a biomarker.
Unfortunately, no distinct pattern of any microRNA under study was observed, regardless of the duration of disease and, more importantly, regardless of fibrosis status. It should be noted that we applied invasive techniques for the assessment of ECM fibrosis. If we had used non-invasive methods of fibrosis evaluation, such as cardiac magnetic resonance imaging, we might have observed somewhat different results [21]. Although repeat cardiac biopsies are rarely performed, Mlejnek et al. have recently provided data suggesting that such an approach may be safe and justified [22].
Despite initial enthusiasm regarding the potential utility of circulating microRNAs as biomarkers, including markers of fibrosis, the absence of any strong relationship between the microRNAs under study and cardiac fibrosis calls into question their relevance as markers of fibrosis [23]. Furthermore, patients with and without fibrosis had almost identical microRNA kinetics. This observation can be viewed as another argument against their utility as biomarkers.
There are several potential limitations to the present study. One limitation may be the size of the cohort: the larger the cohort, the greater the validity of the study. However, this study was conducted at a single center with a dedicated cardiomyopathy clinic and a recruitment period of over 12 months. All patients underwent RV endomyocardial biopsy and were prospectively followed. As such, our single-center study size may be viewed as robust. The number of microRNAs which were assessed is relatively small, as all measurements were performed by quantitative PCR; the capabilities of next-generation sequencing offer much larger panels of microRNAs. As with any study with endomyocardial biopsy, sampling error and the patchy distribution of fibrosis should always be taken into account.
In conclusion, regardless of ECM fibrosis status and duration of disease, 12-month patterns of circulating microRNAs, specifically miR-21, miR-29, miR-30 and miR-133, are similar in DCM. Thus, neither the presence or absence of fibrosis nor new-onset versus chronic DCM is characterized by a distinct microRNA pattern. Correlations between microRNAs, measured at 3 and 12 months, are lower than expected and do not reflect findings previously observed at baseline measurements. The study sheds new light on the biology of microRNAs in the context of ECM fibrosis in DCM and calls into question the potential application of microRNAs as fibrosis-specific biomarkers. | 2019-11-22T01:34:11.088Z | 2019-11-18T00:00:00.000 | {
"year": 2019,
"sha1": "3405929bb4970d3329658e15919d4ffe8b57980d",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.5114/aoms.2019.89777",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd695fadba17746365cf6702b05bb8f34f164384",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209236364 | pes2o/s2orc | v3-fos-license | Submucosal tunneling endoscopic resection of a subepithelial lesion assisted by EUS miniprobe
A 68-year-old asymptomatic woman was involved in an ongoing research study evaluating the performance of a circulating tumor DNA-based blood test for early cancer identification (DETECT study). An abnormal result led to positron emission tomography–CT, which showed focal radiotracer uptake at the gastroesophageal junction (Fig. 1). This was followed by upper endoscopy, which revealed a small subepithelial lesion (SEL) at the esophagogastric junction/gastric cardia (Fig. 2). EUS showed a well-defined 22-mm × 14-mm hypoechoic lesion arising from the muscularis propria layer of the esophagus (Fig. 3).
FNA revealed a spindle-cell neoplasm positive for smooth muscle actin and negative for DOG-1 and CD117, consistent with leiomyoma. Although the results of pathologic examination were consistent with a benign process, resection was recommended by the surgical oncologist, given the tracer uptake on positron emission tomography–CT.
Surgical and endoscopic options were considered, and the submucosal tunneling with endoscopic resection (STER) technique was selected as the modality of choice based on the location of the lesion crossing the gastroesophageal junction. A surgical approach to this lesion likely would have required disruption of the lower esophageal sphincter and predisposed the patient to significant reflux. STER preserves the lower esophageal sphincter and allows the gastric and esophageal attachments at the hiatus to be maintained.
A Dual Knife J (Olympus, Center Valley, Pa, USA) was used to create a mucosotomy entry point 5 cm proximal to the lesion and then create a submucosal tunnel. Although the tunnel was taken down to the expected location of the SEL, the lesion could not be seen.
Therefore, a 20-MHz EUS miniprobe was inserted through the working channel of the endoscope while it was in the tunnel, and the tunnel was filled with water. EUS imaging showed the hypoechoic mass, slightly deeper than the depth of the tunneling dissection. Cutting a little deeper in the predicted location revealed the outer capsule of the SEL. Complete resection was then completed with both the Dual Knife and an IT Nano Knife (Olympus, Center Valley, Pa, USA) (Video 1, available online at www.VideoGIE.org).
The intact lesion was 2.7 cm × 1.7 cm × 1.4 cm and was removed from the tunnel with an endoscopic net (Fig. 4). The mucosotomy was closed with clips. A routine esophagram (Fig. 5) with a kidneys, ureters, and bladder view (Fig. 6) was done postprocedurally. No esophageal leak was seen; however, a small amount of free air was seen on the kidneys, ureters, and bladder view. The patient experienced nausea postprocedurally and was observed overnight in the hospital. She was discharged the next day and was followed up in clinic 3 weeks after STER, with no complaints or adverse events from the procedure. The final pathologic diagnosis was leiomyoma, with an immunohistochemical profile identical to that of the previous FNA biopsy specimen.
DISCUSSION
The STER technique is inspired by both endoscopic submucosal dissection and peroral endoscopic myotomy. 1 The procedure begins with a mucosotomy to create an entry point for the endoscope similar to that for peroral endoscopic myotomy; then, endoscopic submucosal dissection knives and principles are used for creation of the submucosal tunnel. Finally, resection of the tumor is performed.
STER for subepithelial lesions at the esophagogastric junction can be technically challenging because of a narrow lumen and sharp angulations if the lesion extends into the cardia. 2 In addition, if the SEL is exophytic and points away from the mucosa, its location may not be readily apparent in the tunnel. In our case, we faced these challenges and found that use of the 20-MHz EUS miniprobe was essential to accurately guide deeper dissection and resection. For the management of subepithelial lesions in the digestive tract, miniprobe EUS imaging may be a useful adjunctive technique for precise localization of the lesion.
DISCLOSURE
Dr Diehl is a consultant for Olympus and Boston Scientific. All other authors disclosed no financial relationships relevant to this publication. | 2019-11-22T00:39:11.828Z | 2019-11-18T00:00:00.000 | {
"year": 2019,
"sha1": "4067b0ec76f2351f58bedbeb22265c3007a3c966",
"oa_license": "CCBYNCND",
"oa_url": "https://www.videogie.org/article/S2468-4481(19)30271-1/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d447163e3ac5d509ce76bc8da59e6e8cf76799b6",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119385370 | pes2o/s2orc | v3-fos-license | Ehrenfest scheme for $P$-$V$ criticality of the $d$-dimensional-AdS black holes surrounded by perfect fluid in Rastall theory
We discuss the $d$-dimensional anti-de-Sitter (AdS) black holes surrounded by perfect fluid in the Rastall theory of gravity characterized by its mass $M$, the field structure parameter $N_s$, the Rastall parameter $\psi$, and the cosmological constant $\Lambda$. We derive the quantities like the mass, and the Hawking temperature of the black holes and demonstrate the effects caused by the various parameters of the theory. We investigate the thermodynamics for $P$-$V$ criticality and phase transition in the extended phase space of the AdS black holes by treating the cosmological constant as pressure and its conjugate quantity as thermodynamic volume. We complete the analogy of this system with the liquid-gas system and study its critical point, which occurs at the point of divergence of the specific heat at constant pressure $C_P$, volume expansion coefficient $\alpha$ and isothermal compressibility coefficient $\kappa_T$. Using these expressions we calculate the Ehrenfest equations and carry out an analytical check. These results resemble the liquid-gas phase transition at the critical point, and hence the analogy between AdS black holes and liquid-gas systems matches the behavior of the van der Waals (vdWs) gas.
I. INTRODUCTION
Einstein's general relativity (GR) has successfully described many phenomena within its regime and has been the most popular and successful theory of gravity. In the framework of GR, the geometry and matter fields are coupled minimally, which results in the covariant conservation of the energy-momentum tensor (EMT). It has been shown that when a non-minimal coupling of the geometry and matter fields occurs, they are affected by their mutual changes [1][2][3][4][5], and hence the covariant conservation of the matter EMT may be violated [6,7]. The idea of covariant conservation based on spacetime symmetries has been established only in the Minkowski flat or weak-field regime of gravity. However, in the strong-gravity domain the actual nature of the spacetime geometry and of the covariant conservation relation is still debated. Taking advantage of this fact, Rastall [6,7] proposed a phenomenological model where the covariant divergence of the EMT would be nonvanishing, of the form $T^{\mu\nu}{}_{;\mu} = \lambda R^{;\nu}$, where $R$ is the Ricci scalar and $\lambda$ is the Rastall coupling parameter. Here, the parameter $\lambda$ measures the potential deviations of Rastall theory from GR and indicates a tendency of curvature-matter coupling in a non-minimal way.
Interestingly, all electrovacuum solutions to Einstein's GR are also solutions in Rastall gravity, and asymptotically they approach Minkowski spacetime. However, all the nonvacuum solutions in Rastall gravity contain the Rastall parameter and are significantly different from the corresponding solutions in GR, thereby making Rastall gravity aesthetically rich [8].
In recent years, the Rastall theory has attracted great attention, and a rich, diversified body of research dedicated to it is available in the literature, including phenomenological results related to both astrophysical [8][9][10][11][12] and cosmological consequences [13][14][15][16][17][18][19]. The static, cylindrically symmetric black hole solutions of Rastall gravity coupled to the U(1) Abelian-Higgs model, which takes into account quantum effects in curved spacetime, have been characterized in a phenomenological way [20]. Moreover, many black hole solutions have recently been investigated within Rastall theory. These include the spherically symmetric black hole solutions of Rastall gravity [8,12], rotating black holes [21,22], their shadow properties [23], and also the thermodynamics and other theoretical aspects of black holes [24][25][26]. Besides, some works comparing the Rastall theory with standard GR have also been presented [27,28]. The quintessence, a perfect dark-energy fluid, has also been incorporated into black hole solutions. Following Kiselev [29], black hole solutions surrounded by the quintessence field have been generalized to higher dimensions in GR [30] as well as in other theories of gravity [31]. The quintessence black hole solutions have also been extended to Rastall theory [8]. Recently, a d-dimensional solution of Rastall theory in the presence of a perfect fluid was obtained [32] and extended to charged anti-de Sitter (AdS) spaces [33].
The purpose of this paper is an analytical study of the extended phase space thermodynamics of the d-dimensional AdS black holes in Rastall theory, together with their P-V criticality. The idea of AdS black hole thermodynamics dates back to the pioneering work of Hawking and Page [34], describing a first-order phase transition between the Schwarzschild-AdS black holes and the thermal AdS space, familiarly known as the Hawking-Page phase transition. Since then, there has been continued interest in the study of the phase transitions of AdS black holes. Black holes in AdS spacetimes behave as genuine thermodynamic systems, as they possess pressure, volume, and temperature, which are central to the study of thermodynamic phenomena and rich phase structure, such as the van der Waals (vdWs) phase transition, P-V criticality, reentrant phase transitions, triple points, isolated critical points, and superfluidity [35][36][37][38][39][40][41][42][43][44][45]. These considerations motivate us to study the AdS black holes as an extended thermodynamic system. On the other hand, the d-dimensional charged AdS black holes also exhibit a vdWs phase transition [38][39][40][41][42][43][44][45]. In the reduced parameter space, the critical phenomena were also found to be charge-independent [38][39][40][41][42][43][44][45]. The critical phenomena of AdS black holes have also been studied in some modified gravity theories, e.g., massive gravity [46], power-law Maxwell fields [47], and Born-Infeld theory [48], and also for regular AdS black holes [49]. Therefore, it is interesting to study the thermodynamic phase transitions and the P-V criticality of the d-dimensional AdS black holes in the presence of a perfect fluid in Rastall gravity. The effect of the Rastall parameter on the thermodynamic quantities will be rather important, since it contributes significantly to them.
Our paper is organized as follows. In the next section, we briefly discuss the d-dimensional AdS black holes in the presence of a perfect fluid in the Rastall theory. In Sec. III, we evaluate the thermodynamic quantities of the black holes and find the expressions for the thermodynamic pressure and volume as thermodynamic variables. In Sec. IV, we use the classical Ehrenfest equations to analytically check the behavior of the AdS black holes at the critical points of the P-V criticality.
II. d-DIMENSIONAL AdS BLACK HOLES SURROUNDED BY PERFECT FLUID
The non-conservation of the EMT in the strong gravity regime, as proposed by Rastall [6,7], leads to the condition $T^{\mu\nu}{}_{;\mu} = \lambda R^{;\nu}$, where $R$ is the Ricci scalar and $\lambda$ is the Rastall parameter. This assumption modifies Einstein's field equations, which can be written as
$$G_{\mu\nu} + \kappa\lambda\, g_{\mu\nu} R = \kappa T_{\mu\nu}, \qquad (1)$$
where $\kappa$ is a coupling constant related to Newton's gravitational constant. If one includes the negative cosmological constant $\Lambda$, the field equations (1) can be recast as
$$G_{\mu\nu} + \Lambda g_{\mu\nu} + \kappa\lambda\, g_{\mu\nu} R = \kappa T_{\mu\nu}. \qquad (2)$$
Taking the trace of Eq. (2), the field equations can be rewritten in the form (3) in terms of an effective EMT $\tilde{T}_{\mu\nu}$, in which $d$ is the spacetime dimension, $\tilde{\Lambda} = \Lambda/\kappa$, $T_{\mu\nu}$ is the EMT of the surrounding quintessence field, and $T$ is its trace. Here and henceforth we consider $\psi = \kappa\lambda$. The EMT of the surrounding quintessence field is given in [30,31], and with these expressions of $T^{\mu}{}_{\nu}$ the components of the effective EMT $\tilde{T}^{\mu}{}_{\nu}$ follow. Black hole solutions in a $d$-dimensional static spherically symmetric spacetime surrounded by a perfect fluid have been analyzed in [32] and then extended to charged AdS spacetimes [33]. Here we briefly discuss the $d$-dimensional static spherically symmetric black holes surrounded by a perfect fluid in AdS spacetime [32,33]. The $d$-dimensional spherically symmetric Schwarzschild-Tangherlini-like line element reads
$$ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Omega^2_{d-2}, \qquad (8)$$
where $d\Omega^2_{d-2}$ is the line element of the $(d-2)$-dimensional unit sphere,
$$d\Omega^2_{d-2} = d\theta_1^2 + \sum_{i=2}^{d-2}\Big(\prod_{j=1}^{i-1}\sin^2\theta_j\Big)\, d\theta_i^2. \qquad (9)$$
The field equations (3) together with the metric (8) lead to a set of independent equations, and from Eqs. (7)-(11) we obtain a master differential equation, Eq. (12), for $f$. Eq. (12) admits the solution (13) [33], with $l$ the curvature radius. The energy density $\rho_s$ is obtained by substituting $f(r)$ from Eq. (13) into Eq. (11), where $m$ and $N_s$ are two integration constants related, respectively, to the black hole mass and to the density of the surrounding perfect fluid; the parameter $m$ is related to the Arnowitt-Deser-Misner (ADM) mass $M$ of the black hole. The spacetime (8) depends not only on the dimension $d$ but also on the state parameter $\omega_s$ of the perfect fluid. In the case $d = 4$ and $1/l^2 = 0$, the metric solution (13) reproduces that of [8], and for $\psi = 0$ the metric function (13) reduces to that of the $d$-dimensional AdS black holes in a quintessence background, which in addition for $1/l^2 = 0$ reproduces the $d$-dimensional quintessence black holes [30]. In the limit $\omega_s = -1$ and $1/l^2 = 0$, the spacetime (8) for $N_s > 0$ takes the form (19), which corresponds to the $d$-dimensional Schwarzschild-Tangherlini-de Sitter black holes [50].
Interestingly, Eq. (19) does not contain the factor $\psi$, and thus the Rastall and Einstein theories merge for $\omega_s = -1$. For $N_s < 0$ and $\omega_s = -1$, the metric (8) reduces to the $d$-dimensional Schwarzschild-Tangherlini-AdS black holes, with $\Lambda_{\rm eff} = N_s + 1/l^2$.
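As a quick consistency check of the Rastall structure discussed above, the following sympy sketch contracts the field equations (2) with the inverse metric in $d$ dimensions and solves for the Ricci scalar; it assumes the field equations in the form $G_{\mu\nu} + \Lambda g_{\mu\nu} + \psi g_{\mu\nu} R = \kappa T_{\mu\nu}$ with $\psi = \kappa\lambda$, exactly as written above, and uses $g^{\mu\nu}G_{\mu\nu} = (1 - d/2)R$.

```python
# Symbolic trace of the Rastall field equations (2) in d dimensions.
import sympy as sp

d, R, Lam, psi, kappa, T = sp.symbols('d R Lambda psi kappa T')

# g^{mu nu} g_{mu nu} = d and g^{mu nu} G_{mu nu} = (1 - d/2) R:
trace_lhs = (1 - d/2)*R + d*Lam + psi*d*R

# Solve the traced equation for the Ricci scalar:
R_sol = sp.solve(sp.Eq(trace_lhs, kappa*T), R)[0]
print(sp.simplify(R_sol))   # (kappa*T - d*Lambda)/(1 - d/2 + d*psi)

# GR limit psi -> 0, Lambda -> 0 recovers R = -2*kappa*T/(d - 2):
print(sp.simplify(R_sol.subs({psi: 0, Lam: 0})))
```

Note that the denominator vanishes at $\psi = (d-2)/(2d)$, which is why the Rastall parameter must stay away from that value for the trace relation to be invertible.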
III. THERMODYNAMICS
In this section, we briefly discuss the thermodynamics of the d-dimensional AdS black holes surrounded by a perfect fluid. The Killing vector $\xi^\mu$ is a null generator of the event horizon, i.e., $\xi^\mu \xi_\mu = 0$, which in turn gives $g_{tt}|_{r=r_+} = g^{rr}|_{r=r_+} = f(r_+) = 0$. This relation determines the event horizon radius, which has a complicated structure and cannot be obtained analytically. The ADM mass $M$, Eq. (21), is obtained by solving $f(r_+) = 0$ in terms of the event horizon $r = r_+$. The surface gravity is defined as $\kappa = \sqrt{-\tfrac{1}{2}\nabla^\mu \xi^\nu \nabla_\mu \xi_\nu}$ and is related to the temperature $T$ through $T = \kappa/2\pi$; the temperature of the black holes in the d-dimensional spacetime is then given by Eq. (22), and the entropy by Eq. (23). In the extended phase space one can identify the cosmological constant with a thermodynamic pressure, which for the metric (13) is given by Eq. (24), and the expression for the mass (21) in terms of the pressure (24) is Eq. (25). The first law of black hole thermodynamics then takes the extended form
$$dM = T\,dS + V\,dP + \Theta_s\,dN_s, \qquad (26)$$
where $S$ is the entropy and $P$ is related to the cosmological constant of the black holes as defined in Eqs. (23) and (24). Here we treat $\Theta_s$ as a generalized force conjugate to the surrounding perfect fluid structure parameter $N_s$, introduced to make the first law consistent with the Smarr-Gibbs-Duhem relation. Thus, on using Eqs. (25) and (26), we obtain $\Theta_s$ in the form (27) [30]. The thermodynamic volume term from Eq. (26) is Eq. (28), which, using Eqs. (24) and (25), can be evaluated explicitly as Eq. (29). The corresponding Smarr-Gibbs-Duhem formula [38,39], obtained using Euler's theorem, is Eq. (30). It is clear that the mass depends on the state parameter $\omega_s$ of the perfect fluid, as it is contained in $\xi$; in the limit $\omega_s \to -1$, the last term on the right-hand side of Eq. (30) simplifies accordingly. Note that if $\Lambda$, and hence $P$, is treated truly as a constant, the second-to-last term in (26) is essentially zero. However, it is not always zero and can have physical consequences: it is commonly believed that in inflationary models $\Lambda$ can be identified as a variable quantity [51,54].
IV. P-V CRITICALITY AND ANALYTICAL CHECK OF CLASSICAL EHRENFEST EQUATIONS IN THE EXTENDED PHASE SPACE
Next, we study the critical phenomena of the black hole in thermodynamic equilibrium. The Hawking temperature (22), when expressed in terms of the pressure (24), takes the form (31). Before expressing Eq. (31) for the pressure, we need to introduce the thermodynamic volume of the vdW-like fluid. In this way one can, in general, express the equation of state relating the pressure to the other thermodynamic quantities, e.g., temperature, volume, and the other parameters specifying the AdS black holes in arbitrary dimensions in the presence of a perfect fluid in Rastall theory, such that $P = P(T, V, Q, N_s)$. One can thus write the fluid volume as discussed in [35,36] in the form (32); in geometric units, $\ell_P = 1$, and we have Eq. (33). Thus, replacing $r_+$ as defined in Eq. (33), we obtain the equation of state (34) from Eq. (31). In the limit $\psi \to 0$, the equation of state (34) reduces to that of the d-dimensional AdS black holes in a quintessence background in Einstein's GR. When both $N_s$ and $\psi$ tend to zero, the equation of state (34) reduces to that of the Schwarzschild-Tangherlini-AdS black holes in higher dimensions [36].
The critical points are determined by the conditions
$$\left(\frac{\partial P}{\partial V}\right)_{T} = 0, \qquad \left(\frac{\partial^2 P}{\partial V^2}\right)_{T} = 0. \qquad (35)$$
The coupled equations (35) can be solved analytically, yielding the critical quantities (36)-(38), where the subscript "c" denotes the values of the physical quantities at the critical points. Eqs. (36), (37) and (38) lead to a universal ratio, Eq. (39). This ratio is independent of the surrounding field parameter $N_s$ but depends on the Rastall coupling constant $\psi$. In Einstein's GR, i.e., when $\psi \to 0$, the ratio recovers that of the d-dimensional AdS black holes in a quintessence background. When $\psi = 0$ and $\omega_s = \frac{d-3}{d-1}$, the universal ratio corresponds, for $N_s = -q^2$, to that of the d-dimensional Reissner-Nordström black holes [36], Eq. (40), and we recover the ratio $\rho_c = 3/8$ in $d = 4$, a universal feature of the vdW fluid. It is seen that the critical parameters, Eqs. (36)-(38), depend on the fluid parameter, thereby reflecting the presence of exotic matter fields, e.g., dark energy, in the critical behavior of the vdW fluid. If we put $\psi = 0$ and $N_s = -q^2$, all these critical quantities reduce to those of the d-dimensional RN-AdS black holes [36].
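To illustrate the procedure encoded in the conditions (35), the following sympy sketch applies them to the ordinary van der Waals equation of state, used here as a stand-in for the black-hole equation of state (34), whose explicit form depends on $d$, $\psi$ and $N_s$; it recovers the universal ratio $\rho_c = P_c v_c/T_c = 3/8$ quoted above.

```python
# Critical point of the van der Waals fluid via the conditions (35).
import sympy as sp

T, v, a, b = sp.symbols('T v a b', positive=True)
eos = T/(v - b) - a/v**2        # P(T, v) with k_B = 1

dP = sp.diff(eos, v)            # (dP/dv)_T = 0
d2P = sp.diff(eos, v, 2)        # (d^2P/dv^2)_T = 0

sol = sp.solve([dP, d2P], [v, T], dict=True)[0]
v_c, T_c = sol[v], sol[T]
P_c = eos.subs({v: v_c, T: T_c})

print(v_c, T_c, sp.simplify(P_c))       # 3*b, 8*a/(27*b), a/(27*b**2)
print(sp.simplify(P_c*v_c/T_c))         # 3/8, the universal vdW ratio
```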
Next, we derive another important thermodynamic quantity of equilibrium physics, the Gibbs free energy of the thermodynamic system. In the extended phase space, the enthalpy of the thermodynamic system is interpreted as the mass of the black hole. Thus, treating the mass as the enthalpy, one obtains the Gibbs free energy, $G = H - TS = M - TS$ [53], in the form (41).
A. Analytical study of classical Ehrenfest equations
To establish a distinct second-order phase transition at the critical points, the Ehrenfest equations have been successfully applied to vdW-like fluids in the extended phase space of black hole thermodynamics for various solutions in Einstein's GR and other modified theories of gravity [35,[55][56][57]. A distinct second-order phase transition of the vdWs fluid is described by the Ehrenfest equations
$$\left(\frac{\partial P}{\partial T}\right)_{S} = \frac{\Delta C_P}{T V \Delta\alpha}, \qquad (42)$$
$$\left(\frac{\partial P}{\partial T}\right)_{V} = \frac{\Delta\alpha}{\Delta\kappa_T}, \qquad (43)$$
where $\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P$ and $\kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T$ are, respectively, the isobaric volume expansion coefficient and the isothermal compressibility coefficient.
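The following minimal numerical sketch shows how the two Ehrenfest slopes and the Prigogine-Defay ratio (introduced below) are computed from the discontinuities $\Delta C_P$, $\Delta\alpha$ and $\Delta\kappa_T$; all numbers are hypothetical, with $\Delta\kappa_T$ chosen so that the two slopes agree.

```python
# Ehrenfest slopes (42)-(43) from hypothetical discontinuities.
T, V = 300.0, 1.0e-3
dCp, dalpha = 5.0, 2.0e-4
dkT = T*V*dalpha**2/dCp            # chosen so the PD ratio equals 1

slope1 = dCp/(T*V*dalpha)          # (dP/dT)_S, first Ehrenfest equation
slope2 = dalpha/dkT                # (dP/dT)_V, second Ehrenfest equation
PD = dCp*dkT/(T*V*dalpha**2)       # Prigogine-Defay ratio = slope1/slope2
print(slope1, slope2, PD)          # equal slopes and PD = 1
```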
At the critical points of the P-V criticality in the extended phase space thermodynamics, we analytically examine the Ehrenfest scheme given by Eqs. (42) and (43). The temperature (31), when expressed in terms of the entropy, takes the form (44). Utilizing Eqs. (23), (31) and (44), we calculate the specific heat at constant pressure, the volume expansion coefficient, and the isothermal compressibility of the black holes. In deriving the expression (47) for $\kappa_T$, we use the cyclic thermodynamic identity of a (P, V, T) system,
$$\left(\frac{\partial P}{\partial V}\right)_T \left(\frac{\partial V}{\partial T}\right)_P \left(\frac{\partial T}{\partial P}\right)_V = -1. \qquad (48)$$
It is to be noted that the denominators of $C_P$, $\alpha$ and $\kappa_T$ share the same factor, indicating that they diverge at the same critical points. Using Eqs. (23), (33) and (36), we obtain Eq. (49), which shows that $C_P$, $\alpha$ and $\kappa_T$ are discontinuous at the critical points $S_c$ and $P_c$.
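A short symbolic check of the cyclic identity (48), here applied to the van der Waals equation of state as an illustrative stand-in for Eq. (34); the implicit-function theorem supplies the two derivatives that are not directly available from $P(T, V)$.

```python
# Symbolic verification of (dP/dV)_T (dV/dT)_P (dT/dP)_V = -1.
import sympy as sp

T, V, a, b = sp.symbols('T V a b', positive=True)
eos = T/(V - b) - a/V**2                # P(T, V)

dP_dV = sp.diff(eos, V)                 # (dP/dV)_T
dP_dT = sp.diff(eos, T)                 # (dP/dT)_V
dV_dT = -dP_dT/dP_dV                    # (dV/dT)_P by implicit differentiation
dT_dP = 1/dP_dT                         # (dT/dP)_V

print(sp.simplify(dP_dV*dV_dT*dT_dP))   # -1, confirming Eq. (48)
```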
Next, we check the validity of the Ehrenfest equations (42)-(43) at the critical point. The definition of the volume expansion coefficient can be rearranged as in Eq. (53); hence, using Eqs. (23) and (29), the R.H.S. of Eq. (42) can be converted to the form (54). From Eqs. (53) and (54), the first Ehrenfest equation is verified at the critical points. Using the defining relations of $\kappa_T$ and $\alpha$, the R.H.S. of Eq. (43) can likewise be written as Eq. (57). In calculating Eq. (57), we have once again used the cyclic rule of the P-V-T system defined in Eq. (48). Therefore, the second Ehrenfest equation is also valid at the critical points, as confirmed by Eq. (57). Hence, both Ehrenfest equations hold at the critical point of the P-V criticality of the d-dimensional Rastall AdS black holes surrounded by a perfect fluid.
Using Eqs. (53) and (57), the Prigogine-Defay (PD) ratio is derived as
$$\Pi = \frac{\Delta C_P\, \Delta\kappa_T}{T_c V_c (\Delta\alpha)^2} = 1. \qquad (58)$$
Since the PD ratio is identically equal to unity, it is proved that the d-dimensional AdS black holes in the perfect fluid background in Rastall theory undergo a distinct second-order phase transition.
The PD ratio was first introduced in [58] and later it was extensively investigated in [59].
The PD ratio can be used to measure the deviation from second-order behavior for systems that do not show behavior similar to that of a vdWs fluid [60,61]. For the vdWs system the PD ratio equals unity, while for a glassy phase transition it lies between 2 and 5 [60][61][62][63].
Eq. (58), together with the validity of the Ehrenfest equations (42) and (43), confirms the second-order phase transition at the critical points. Therefore, for d-dimensional AdS black holes in Rastall gravity, this phase transition is no longer an exception and follows the same features of a vdWs-like liquid-gas system.
V. CONCLUSIONS
In this paper, we have briefly discussed the d-dimensional AdS black holes in the perfect fluid background in Rastall gravity. As limiting cases, when $\omega_s = -1$ and $1/l^2 = 0$, the metric (13) reduces to the d-dimensional Schwarzschild-Tangherlini (A)dS black holes. We have extended the first law in AdS spaces to include the $\Theta_s dN_s$ term in addition to the $V dP$ term. We have studied the thermodynamics of this d-dimensional black hole spacetime and defined a quantity $\Theta_s$ conjugate to $N_s$ in order to be consistent with the Smarr-Gibbs-Duhem relation.
Next, we investigated the extended phase space thermodynamics of the d-dimensional AdS black holes in the perfect fluid background. The definitions of the thermodynamic pressure and volume include the Rastall parameter $\psi$ and hence modify the thermodynamic quantities in AdS spacetime. We employed the classical Ehrenfest scheme to study the nature of the phase transition at the critical points of the P-V criticality. Treating the cosmological constant in AdS spaces as the thermodynamic pressure and its conjugate quantity as the thermodynamic volume allowed us to analyze the classical Ehrenfest equations by calculating the heat capacity at constant pressure $C_P$, the isobaric volume expansion coefficient $\alpha$, and the isothermal compressibility $\kappa_T$. The effect of the Rastall parameter $\psi$ on the derived thermodynamic quantities in the extended phase space was also demonstrated, and we found that they diverge exactly at the same critical points. The PD ratio was also calculated and found to be exactly unity. In the limit $\psi \to 0$, our results reduce to the expressions calculated in Einstein's GR. Therefore, the universal character of the vdW gas in Rastall gravity helps us understand the relation between AdS black holes and liquid-gas systems.
Our results may be important in the context of the AdS/CFT correspondence. An extension to d-dimensional charged AdS black holes and their phase analysis would be a natural addition to this work. A rotating version of these black holes and an investigation of their P-V criticality would also be physically well motivated. | 2019-01-15T20:11:32.000Z | 2019-01-10T00:00:00.000 | {
"year": 2019,
"sha1": "283cc2135ea3726fc873c342875a65c7b1be808a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "283cc2135ea3726fc873c342875a65c7b1be808a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
249274105 | pes2o/s2orc | v3-fos-license | Development and characterization of single domain monoclonal antibody against programmed cell death ligand-1; as a cancer inhibitor candidate
Objective(s): One of the important interactions in controlling the human immune system is the reaction between checkpoint proteins such as programmed cell death-1 (PD-1) and its ligand, PD-L1. These are negative immunoregulatory molecules that promote immune evasion of tumor cells. PD-L1 expression is an immune-mediated mechanism used by various malignant cells in order to down-regulate the immune system. Checkpoint inhibitors (CPIs) are a new class of anti-cancer agents that stimulate immune cells to elicit an antitumor response by blocking the ligand and receptor interactions. The nanobody (Nb), a new type of antibody fragment, has potential as a CPI. Materials and Methods: A female camel was immunized with recombinant PD-L1 protein, a nanobody library was constructed, and a PD-L1-specific Nb was selected. The selected Nb was characterized in terms of affinity, specificity, and binding potency by ELISA, Western blotting, and flow cytometry. Results: The developed nanobody, A22, binds to its cognate target with high specificity and affinity. Western blot and flow cytometry showed that nanobody A22 was able to specifically detect and attach to the human PD-L1 protein on the cell surface and in cell lysate. The MTT assay showed that blocking PD-L1 with the specific Nb had no cytotoxic effect on the growth of A431 and HEK293 cells. Conclusion: The results highlight the potential of the anti-PD-L1 Nb as a novel therapeutic in cancer therapy without undesirable cytotoxicity.
Introduction
One of the important functions of the immune system is the ability to distinguish between normal and abnormal cells in the body via checkpoint proteins. Immune checkpoints are certain molecules on specific immune cells that should be activated (or inactivated) to start an immune response (1,2). These immunoregulatory agents act by limiting T cell activity and developing self-tolerance. The well-characterized checkpoint proteins are programmed cell death-1 (PD-1), programmed cell death ligand-1 (PD-L1), and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4). These proteins prevent excessive inflammation and act as a "switch off" to prevent T cells from attacking normal cells (2). PD-L1 is a human cell surface protein encoded by the CD274 gene, located at position p24.1 on chromosome 9, which binds to the PD-1 protein and has been introduced as the third member of the B7 protein family (3,4). The intracellular part of PD-L1 consists of a short cytoplasmic tail (30 amino acids) that is responsible for signal transduction (3,4). PD-L1 is normally expressed by CD8+ T cells and leads to inhibition of TCR signaling via the SHP1/2 pathway (5,6).
The binding of the T-cell-associated PD-1 protein to its ligand, which is located on macrophages, dendritic cells, and tumor cells, transmits signals that reduce the activity of cytotoxic T cells. In chronic immune responses and tumors, interferon-gamma (IFN-γ) produced by T cells induces the expression of PD-L1 on antigen-presenting cells and tumor cells, followed by down-regulation of the immune responses, which eventually leads to the failure of immune stimulation (7,8). This reduced anti-tumor immune response usually occurs in two ways: i) inactivation of cytotoxic T cells in the tumor microenvironment (5,6) and ii) inhibition of new cytotoxic T cell activation within the lymph nodes (9)(10)(11). A high expression level of PD-L1 allows cancer cells to "trick" the immune system and avoid being attacked as foreign harmful substances. Previous studies have shown that high expression of PD-L1 in tumor cells increases the risk of death by increasing tumor invasion (12). Checkpoint inhibitors (CPIs) are a new class of anti-cancer agents that stimulate immune cells to elicit an anti-tumor response by blocking ligand-receptor interactions. Antibodies have been considered as CPIs as well (1).
Heavy chain antibodies (HCAbs; ~95 kDa), introduced by Hamers-Casterman et al. (13), are naturally devoid of light chains and therefore recognize their cognate antigens by a single variable domain, referred to as VHH or nanobody (Nb). Nbs are approximately 2.5 nm in diameter and 4 nm in length and weigh about 12-15 kDa (13)(14)(15). The biochemical and pharmacokinetic properties of Nbs make these small molecules ideal tools for targeted drug delivery, immunotherapy, and medical diagnosis (16,17). Considering the important role of the PD-1/PD-L1 molecular pathway in the development of various types of cancer, in this study we characterized a PD-L1-specific Nb from an immunized camel Nb library, and the binding and functionality of the obtained Nb were evaluated using in vitro assays.
Immunization procedure
A six-month-old female Camelus dromedarius was immunized subcutaneously (s.c.) with 200 µg of the previously developed recombinant extracellular domain of PD-L1, produced in Escherichia coli, six times at one-week intervals. Freund's complete and incomplete adjuvants were used for the first and booster injections, respectively. Blood samples were taken after each injection and serum was isolated. The immune responses were analyzed by enzyme-linked immunosorbent assay (ELISA). Briefly, 1 µg/ml of recombinant PD-L1 was coated in a Maxisorp 96-well plate overnight at 4 °C. The wells were blocked with phosphate-buffered saline (PBS) supplemented with 2% w/v skim milk and incubated at room temperature (RT) for 1 hr. Serially diluted sera were added to the wells and incubated for 1 hr. The wells were washed 5 times with PBS-Tween20 (0.05%) (PBST) and then rabbit anti-camel antibody (1:2000 dilution) was added to the wells and incubated for 1 hr (22). After the washing step, goat anti-rabbit IgG antibody (HRP) (1:2000 dilution) was added and incubated for 1 hr. The wells were washed again and 3,3',5,5'-tetramethylbenzidine (TMB) was added to the wells and incubated for 15 min in the dark. The reaction was stopped with 2N H2SO4 and the optical density (OD) was measured at 450 nm.
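For illustration, a minimal sketch of how an endpoint titer can be read off from such a serum dilution series; the OD values below and the blank-mean-plus-3SD cutoff convention are assumptions for the example, not data from this study.

```python
# Endpoint titer from a hypothetical ELISA dilution series.
import numpy as np

dilutions = np.array([100, 200, 400, 800, 1600, 3200, 6400])    # reciprocal
od450 = np.array([2.10, 1.95, 1.60, 1.10, 0.55, 0.21, 0.09])    # hypothetical
blank = np.array([0.05, 0.06, 0.04])                            # hypothetical

cutoff = blank.mean() + 3*blank.std(ddof=1)        # positivity threshold
positive = od450 > cutoff
endpoint = dilutions[positive].max() if positive.any() else None
print(f"cutoff OD = {cutoff:.3f}, endpoint titer = 1:{endpoint}")
```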
Immune library construction
The peripheral blood mononuclear cells (PBMCs) were collected using ready-to-use Ficoll density gradient media (Sigma, USA). Total RNA was purified using an RNA extraction reagent and cDNA was then synthesized using oligo dT primers. VHH fragments were amplified using nested primers. The first amplification round was carried out with the CALL001 (5'-gtcctggctgctcttctacaagg-3') and CALL002 (5'-ggtacgtgctgttgaactgttcc-3') primers, corresponding to the leader sequence and the CH2 domain of HcAbs. The amplified fragments (600-700 bp) were extracted from agarose gel and used for the second nested PCR with the A6E (5'-gatgtgcagctgcaggagtctggtggagg-3') and P38 (5'-ggactagtgcggccgctggagacggtgacctgggt-3') primers, containing PstI and NotI restriction sites, respectively. The PCR product (~400 bp) and the pHEN-4 phage display phagemid were digested, gel purified, and ligated using T4 DNA ligase. The recombinant phagemid (pHEN-4-VHH) was electroporated into E. coli TG1 competent cells and cultured on LB agar plates supplemented with the appropriate antibiotic (ampicillin). Approximately 1×10^12 colony forming units (CFUs) of VCSM13 helper phage were added to the TG1 cells (at logarithmic phase, OD600 of 0.4-0.6) and incubated at 37 °C without shaking. After 30 min, kanamycin was added to the culture medium and the culture was incubated overnight at 37 °C with shaking at 250 rpm. The bacterial pellet was collected by centrifugation at 8000×g for 10 min. Recombinant phages were purified from the supernatant of the culture medium using PEG-NaCl solution (20% PEG 6000, 2.5 M NaCl) after one hour of incubation on ice. The phage library was collected by centrifugation at 10,000×g for 15 min.
Biopanning
Phages displaying PD-L1-specific Nbs were enriched using biopanning. Four successive rounds of biopanning were performed on immobilized PD-L1. Briefly, a 96-well plate (NUNC, Denmark) was coated overnight at 4 °C with 10 µg/ml of PD-L1 in sodium bicarbonate buffer (pH 9.6). The negative control wells were coated with 100 µl of the buffer alone. The wells were blocked with 2% skim milk in PBS for 1 hr at RT. About 1×10^11 CFU of the phage library were added to the wells and incubated for 1 hr (RT). The wells were washed 10 times with PBST and bound phages were eluted with 100 mM triethylamine (pH 10.0). Then, 100 µl of 1 M Tris-HCl (pH 8.0) was added as the neutralizer. A ten-fold serial dilution of eluted phages (output phages) was prepared and used for infection of TG1 cells (log phase, OD600 0.4-0.6). The dilution series was cultured on 2×YT agar plates containing ampicillin. The remaining output phages were amplified in TG1 cells and rescued by VCSM13 helper phage for the subsequent round of biopanning. All steps of the consecutive rounds of biopanning were identical except for the washing step, in which the concentration of Tween-20 was increased from 0.05 to 0.5% v/v (0.05, 0.1, 0.2, and 0.5%) to increase the stringency of the biopanning procedure and obtain PD-L1-bound phages with higher specificity and affinity.
Biopanning monitoring by polyclonal phage ELISA
To evaluate the outcome of the biopanning process, polyclonal phage ELISA was performed using output phages. Briefly, a 96-well plate was coated with 1 µg/ml of PD-L1 overnight at 4 °C. After blocking the wells with 2% skimmed milk, 1×10^10 CFU of output phages were added to the wells and incubated for 1 hr (RT). The wells were washed 10 times with PBST, and anti-M13 HRP-conjugated antibody (1:2000) was added and incubated for 1 hr. Then, the wells were washed, and the ELISA reaction was developed using TMB followed by 2N H2SO4. The optical densities were measured at 450 nm.
Screening of the nanobody library
Overall, 19 individual colonies from the fourth round of biopanning were randomly picked and cultured. Expression of the periplasmic Nb-PIII fusion protein was induced with 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) and used for PD-L1 detection in ELISA. ELISA was performed as described above for the polyclonal phage ELISA to analyze the differences between individual colonies in antigen recognition. After the coating and blocking steps, periplasmic extracts (PE) were added to the wells and the presence of Nbs was detected using an anti-HA antibody (1:2000) followed by an anti-mouse HRP-conjugated antibody (1:5000). Colonies giving optical densities three times over the control samples were considered positive.
Expression and purification of selected nanobody
After identification of positive clones through PE-ELISA, sequence analysis was performed; the results were analyzed by BLAST at NCBI and numbered using the IMGT database. Selected vectors were extracted and used as the template for Nb gene amplification with the A6E and P38 primers. The PCR product was gel extracted, digested with BstEII and PstI restriction enzymes, and ligated into the pHEN6c plasmid. The recombinant plasmid (pHEN6c-A22) was transformed into E. coli WK6 cells using heat shock and CaCl2 (1). The expression of the recombinant nanobody was induced with 1 mM IPTG at 28 °C overnight. The periplasmic fraction of the WK6 cells was extracted by osmotic shock and the Nb was purified using Ni-NTA chromatography according to the manufacturer's instructions. The purified Nb was dialyzed against PBS and concentrated using a Vivaspin concentrator (cutoff: 10 kDa). The proteins were analyzed by 15% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) with Coomassie brilliant blue staining. For western blotting, protein bands were transferred onto a nitrocellulose membrane. The membrane was then blocked with 4% skim milk for 2 hr (RT). Then, an anti-histidine HRP-conjugated antibody (1:2000) was added and incubated for 4 hr. The protein bands were visualized using 3,3'-diaminobenzidine (DAB) chromogenic substrate (13,18).
Affinity analysis
The binding constant of Nb was evaluated using Beatty's protocol as described previously (19). A checkerboard assay with serial dilution of PD-L1 as well as Nbs was performed to achieve a saturating concentration of PD-L1 and Nbs.
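A sketch of the affinity calculation follows; the formula below is the one commonly attributed to Beatty et al. (the protocol cited as reference (19)), in which two coating antigen concentrations [Ag] = n·[Ag'] are compared and Ab, Ab' denote the antibody concentrations giving half-maximal OD at each coating. All concentration values are hypothetical illustrations, not measured values from this study.

```python
# Beatty checkerboard affinity estimate (formula as commonly attributed
# to Beatty et al. 1987; treat it as an assumption of this sketch).
def beatty_ka(n: float, ab: float, ab_prime: float) -> float:
    """Affinity constant Ka (M^-1) from half-maximal antibody concentrations (M).

    n        -- ratio of the two coating antigen concentrations, [Ag]/[Ag']
    ab       -- antibody concentration at half-maximal OD for coating [Ag]
    ab_prime -- antibody concentration at half-maximal OD for coating [Ag']
    """
    return (n - 1.0) / (2.0 * (n * ab_prime - ab))

# Hypothetical half-max concentrations at [Ag] = 2*[Ag']:
ka = beatty_ka(n=2.0, ab=5.0e-13, ab_prime=4.0e-13)
print(f"Ka ~ {ka:.2e} M^-1")   # order 10^12 M^-1, the range reported for Nb. A22
```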
Flow cytometry
A431 and HEK293 cell lines were used for flow cytometry analysis. 3×10^5 cells were counted, washed three times with PBS (1% BSA), and incubated with 1 µg of the PD-L1-specific Nb at 4 °C for 1 hr (in a final volume of up to 100 µl). The cells were washed and incubated with 1 µg of rabbit anti-His antibody. After the washing steps, 1 µg of FITC-conjugated anti-rabbit antibody was added and the cells were incubated under the same conditions for 1 hr. The cells were washed and analyzed using a CyFlow cytometer (Partec, Sysmex) (20).
Cytotoxicity assay
The cytotoxicity of the recombinant Nb was evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. 3×10^5 A431 and HEK293 cells were cultured in a sterile 96-well plate in DMEM medium supplemented with 10% FBS. A serial dilution of the Nb (0 to 100 nM) was added to the wells and incubated for 24, 48, and 72 hr at 37 °C under 5% CO2. PBS was added to the control wells and incubated under the same conditions. 50 µl of ready-to-use MTT solution (5 mg/ml) was added to the wells and incubation was carried out for a further 4 hr in the dark. The MTT solution was removed and formazan crystals were solubilized with 100 µl of dimethyl sulfoxide (DMSO). The OD was measured at 570 nm (with 630 nm as a reference wavelength).
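A minimal sketch of the viability computation underlying the MTT readout, with hypothetical OD values (percent viability relative to PBS-treated controls after blank subtraction):

```python
# Percent viability from hypothetical MTT absorbance readings.
import numpy as np

od_treated = np.array([0.82, 0.80, 0.79])   # Nb-treated wells (hypothetical)
od_control = np.array([0.84, 0.81, 0.83])   # PBS control wells (hypothetical)
od_blank = 0.08                             # medium + MTT only (hypothetical)

viability = 100 * (od_treated.mean() - od_blank) / (od_control.mean() - od_blank)
print(f"viability = {viability:.1f}% of control")   # ~100% -> no cytotoxicity
```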
Monitoring of camel immunization procedure
The camel was immunized six times with the recombinant extracellular domain of PD-L1 and the immunization process was monitored by ELISA. As shown in Figure 1, the antibody titer rose after the second injection, indicating a successful immunization process.
Library construction and biopanning
cDNA was synthesized from RNA samples extracted from PBMCs and used for amplification of the gene fragments encoding the variable domains of the heavy-chain antibodies. The first PCR amplified two distinct products (Figure 2a), while the second PCR amplified the VHH sequences (approximately 400 bp) (Figure 2b). The PCR products were ligated into the pHEN4 phagemid and transformed into E. coli TG1 cells. Following transformation, a library of about 4×10^7 transformants was obtained, in which the cloning efficiency, assessed by colony PCR, was 90%.
After each round of biopanning, the library enrichment was qualitatively investigated. The bound phages in both positive (antigen-coated) and negative (uncoated) wells were eluted and used to infect TG-1 cells. After 16 hr of incubation, the number of colonies grown at each dilution step was counted and divided by the number of colonies grown in the corresponding negative well, giving the degree of enrichment. Table 1 shows the results of the enrichment process.
Figure 1. Antibody titration against recombinant PD-L1 after the camel immunization procedure.
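A toy example of the enrichment calculation described above; the colony counts are hypothetical and are not the values of Table 1.

```python
# Fold enrichment per biopanning round: colonies from antigen-coated
# wells divided by colonies from uncoated (negative) wells.
rounds = {1: (120, 60), 2: (450, 75), 3: (2100, 90), 4: (9800, 100)}  # (pos, neg)

for rnd, (pos, neg) in rounds.items():
    print(f"round {rnd}: enrichment = {pos / neg:.0f}-fold")
```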
Polyclonal phage ELISA
The phages obtained after each round of biopanning (output phages) were evaluated by an ELISA assay. It was expected that, with increasing rounds of enrichment, higher optical densities would be obtained due to the increased number of specific phages against the coated antigen. The results showed an upward trend, indicating a successful library enrichment procedure (Figure 3).
Specific nanobody selection
After the fourth round of biopanning, 19 single colonies were selected and screened by PE-ELISA. Six colonies showed positive signals, which were at least 3 times higher than the negative controls (Figure 4). The colony giving the highest optical density (#A22) was selected and its nucleotide sequence was determined.
Expression and purification of anti-PD-L1 nanobody
After subcloning of the gene fragment encoding Nb. A22 into the pHEN6c expression vector, protein expression was performed in TB medium and the Nb was extracted from the periplasmic space by osmotic shock. The yield of protein expression (2 mg/l) was calculated from the absorbance at 280 nm. The purified Nb was evaluated on 15% SDS-PAGE (Figure 5a) and its identity was confirmed by western blotting with an anti-His antibody (Figure 5b).
Affinity and specificity analysis
The affinity of Nb. A22 was evaluated according to Beatty's protocol (19). First, the absorbance of different concentrations of the Nb at two concentrations of antigen was measured. Then, using Beatty's affinity determination formula, the affinity of the Nb was calculated to be 7×10^12 M^-1. The binding capacity and cross-reactivity of Nb. A22 with PD-L1 as well as other antigens were tested by ELISA. The results showed that the PD-L1-specific Nb did not react with other antigens, including PD-1, VEGF, VEGFR2, NRP-1, Ep-CAM, LIV-1, CTLA-4, BSA, and casein (Figure 6).
Flow cytometry analysis
To verify the binding capacity of the PD-L1-specific Nb in detecting the native form of PD-L1 on the cell surface, A431 (PD-L1-positive) and HEK293 (PD-L1-negative) cells were stained with Nb. A22 and analyzed by flow cytometry; Nb. A22 showed strong binding to A431 cells but not to HEK293 cells.
MTT assay
The effect of the PD-L1-specific Nb on the viability of A431 and HEK293 cells was tested by MTT assay. Nb. A22 did not have any effect on cellular viability (Figure 8).
Discussion
Monoclonal antibodies (mAbs) can be produced by several approaches, including hybridoma technology, repertoire cloning, CRISPR/Cas9, and phage and yeast display technologies (21), which can be applied to enhance specificity, stability, therapeutic efficacy, and production capacity. Compared with polyclonal antibodies, which can bind to several epitopes, mAbs are highly specific binding molecules. These specific molecules are essential tools in research, diagnosis, and treatment. Therapeutic mAbs are divided into four groups: murine, chimeric, humanized, and human. Currently, mouse antibodies are not used due to immunological reactions, and fully human antibodies are the most desirable therapeutic molecules due to their origin and lack of side reactions. Among therapeutic mAbs, the most effective drugs include adalimumab (Humira), an mAb used to treat rheumatoid arthritis, and bevacizumab (Avastin), which targets vascular endothelial growth factor (VEGF) and inhibits the growth of blood vessels (19). One of the most important disadvantages of human therapeutic antibodies is the need for mammalian expression systems, which increases the cost of production. An alternative approach is the development of nanobodies, which have attracted much attention due to their small size, which ensures improved tumor penetration, rapid diffusion, fast clearance from the body, binding to hidden epitopes, low immunogenicity, safety for humans, and low cost of production.
Many studies have been conducted, or are underway, on the development of Nbs against different targets at the research, preclinical, and clinical stages (22). One nanobody that has received European Medicines Agency (EMA) and USA Food and Drug Administration (FDA) approval is caplacizumab, designed for the treatment of thrombotic thrombocytopenic purpura by targeting the von Willebrand factor (vWF) (23). The development of nanobodies against tumor antigens is growing rapidly and various targets have been considered. Human epidermal growth factor receptor 2 (HER2) is one of the tumor markers targeted for Nb development. One of the first attempts in the development of anti-HER2 Nbs was conducted by Vaneycken et al. (24), in which a panel of 38 anti-HER2 Nbs was biochemically characterized and preclinically evaluated for use as tracers for imaging of xenografted tumors (25)(26)(27)(28)(29). These nanobodies have a short half-life, which allows rapid clearance of radioactive or other toxic substances. Another application of tumor-specific nanobodies is the targeting of tumor-associated ligands such as VEGF and placental growth factor (PlGF). In the study conducted by Kazemi et al., an anti-VEGF Nb was developed that had high specificity and binding affinity towards both human and mouse VEGF. This Nb potently inhibited human endothelial cell migration and tumor growth in a tumor-bearing mouse model (30).
In the current study, we developed a human PD-L1-specific Nb by the phage display technique. Four consecutive rounds of biopanning were performed to obtain Nbs with the highest affinity and specificity to immobilized PD-L1. The output of the biopanning rounds was checked by polyclonal ELISA, which indicated a successful Nb library enrichment procedure. Screening of the Nb library through PE-ELISA identified the clone yielding the highest optical density in ELISA. The advantage of PE-monoclonal ELISA over monoclonal phage ELISA is that in the former the binding capacity of the soluble Nb (fused to phage protein III) to the immobilized antigen is evaluated, and the results are more reliable. The selected Nb was subcloned into the pHEN6c expression vector and expressed in WK6 cells. Recombinant Nb. A22 was purified in its native structural form from the periplasmic space by osmotic shock. The calculated affinity of the selected Nb was in the picomolar range, which is comparable with previously reported studies (16,31). The results of flow cytometry analysis indicated a strong binding capacity of Nb. A22 to the PD-L1 antigen presented on the A431 cell line in comparison with PD-L1-negative HEK293 cells.
The MTT results showed no cytotoxic effect of the anti-PD-L1 nanobody on A431 and HEK293 cells. A similar study developed a nanobody library from a PD-L1-immunized camel and obtained three anti-PD-L1 nanobodies that differed in amino acids in the CDR regions, with affinities reported in the nanomolar range (32). In another study, Zhang et al. developed an anti-PD-L1 Nb that is a potent inhibitor of the PD-1/PD-L1 interaction with strong antitumor activity in a mouse model (33).
Conclusion
Taken together, in the present study a Nb with high affinity and specificity was obtained using phage display technology from a Nb library developed by PD-L1 immunization of a camel. The selected Nb showed specific binding in ELISA and flow cytometry assays and did not have any cytotoxic effect on treated cells. The results indicate the potential role of the anti-PD-L1 Nb as a promising tool in cancer therapy. The anti-tumor activity of the selected Nb should be further evaluated in in vivo experiments before any firm conclusion is made. | 2022-06-03T05:21:38.595Z | 2022-03-01T00:00:00.000 | {
"year": 2022,
"sha1": "1142b3cd418831e903893e71a8522cf02403596f",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1142b3cd418831e903893e71a8522cf02403596f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
133072208 | pes2o/s2orc | v3-fos-license | Antagonistic coevolution between hosts and sexually transmitted infections
Abstract Sexually transmitted infections (STIs) are predicted to play an important role in the evolution of host mating strategies, and vice versa, yet our understanding of host‐STI coevolution is limited. Previous theoretical work has shown mate choice can evolve to prevent runaway STI virulence evolution in chronic, sterilizing infections. Here, I generalize this theory to examine how a broader range of life‐history traits influence coevolution; specifically, how host preferences for healthy mates and STI virulence coevolve when infections are acute and can cause mortality or sterility, and hosts do not form long‐term sexual partnerships. I show that mate choice reduces both mortality and sterility virulence, with qualitatively different outcomes depending on the mode of virulence, costs associated with mate choice, recovery rates, and host lifespan. For example, fluctuating selection—a key finding in previous work—is most likely when hosts have moderate lifespans, STIs cause sterility and long infections, and costs of mate choice are low. The results reveal new insights into the coevolution of mate choice and STI virulence as different life‐history traits vary, providing increased support for parasite‐mediated sexual selection as a potential driver of host mate choice, and mate choice as a constraint on the evolution of virulence.
Parasite-mediated sexual selection (PMSS) is predicted to lead to the evolution of reproductive strategies that limit the risk of infection from mating (Hamilton and Zuk 1982; Sheldon 1993; Loehle 1997). By avoiding mates with signs of disease, organisms should be able to increase their reproductive success, either because they might choose partners possessing genes that confer resistance to disease (the "good genes" hypothesis; Hamilton and Zuk 1982) or simply because they choose mates that are currently uninfected and hence are a low-risk option (the "transmission avoidance hypothesis"; Loehle 1997). Both hypotheses have been the subject of intense empirical research with varying evidence in support of and against PMSS (Borgia 1986; Borgia and Collis 1989; Clayton 1990, 1991; Hamilton and Poulin 1997; Abbot and Dill 2001; Balenger and Zuk 2014; Jones et al. 2015; Ashby et al. 2019). Empirical studies have explored both sexually and nonsexually transmitted infections, but although one may intuitively expect sexual transmission to be the main driver of PMSS, this is not necessarily the case, and in theory both sexual and nonsexual transmissions may contribute to PMSS. In some cases, females have been found to prefer uninfected males - for example, Clayton (1990) found that female Rock Doves (Columba livia) prefer males without lice (which can be transmitted by physical contact during mating or otherwise, or by a vector), suggesting support for PMSS - whereas in other cases females appear unable to distinguish between infected and uninfected males - for instance, female milkweed leaf beetles (Labidomera clivicollis; Abbot and Dill 2001) and two-spot ladybirds (Adalia bipunctata) do not avoid males with sexually transmitted mites. In parallel, there has been much theoretical interest in understanding the role of parasites, especially sexually transmitted infections (STIs), in the evolution of host mating strategies, and the role of host mating behavior in the evolution of STIs (Thrall et al. 1997, 2000; Knell 1999; Boots and Knell 2002; Kokko et al. 2002; Ashby and Gupta 2013; McLeod and Day 2014; Ashby and Boots 2015). STIs are of particular interest as they are inherently tightly linked to host reproduction, unlike non-STIs, and are more likely to have negative effects on host fecundity (Lockhart et al. 1996). This body of theoretical work has generally predicted that STIs may indeed act as a strong force of selection on host mating strategies.
Although changes in host mating behavior arising from PMSS will in turn affect STI evolution, forming a coevolutionary feedback, almost all theoretical studies only consider one-sided adaptation of either the host or the STI. To date, it appears that only two theoretical studies have considered host-STI coevolution. First, Ashby and Boots (2015) showed that the evolution of mate choice can prevent runaway selection for sterility virulence in STIs, leading to either stable levels of choosiness and virulence or coevolutionary cycling in these traits. Second, Wardlaw and Agrawal (2019) showed how mortality virulence escalates sexual conflict, whereas sterility virulence de-escalates sexual conflict, thus showing how the mode of virulence can qualitatively change host-STI coevolution. Together, these results represent important first steps in understanding host and STI coevolution, but we have only begun to scratch the surface. For example, Ashby and Boots (2015) focused on chronic, fertility-reducing STIs in serially monogamous hosts, which are reasonable assumptions for many host species and STIs: for example, approximately 90% of bird species are thought to be monogamous (Kleiman 1977), and STIs often cause chronic infections, are more likely to cause reductions in fecundity, and typically have less of an impact on mortality than non-STIs (Lockhart et al. 1996;Knell and Webberley 2004). Although serial monogamy is a useful place to start, many species do not form exclusive monogamous partnerships and instead carry out extra-pair copulations or form no lasting sexual partnerships at all (Kleiman 1977;Forstmeier et al. 2014). Because many species are not serially monogamous, it is of particular interest how mate choice coevolves with STIs in other mating systems. For instance, if hosts form ephemeral rather than long-term sexual partnerships, how and when will mate choice evolve? Although the importance of any given ephemeral sexual partnership is lower than under serial monogamy, the accumulation of many sexual partners over the lifetime of the host may still select for mate choice. In addition, many STIs are known to increase mortality-for example, HIV and syphilis in humans, and dourine in equines (Gizaw et al. 2017)-and to cause acute rather than chronic infections (e.g., Chlamydia trachomatis can be cleared by a number of mammalian species; Miyairi et al. 2010). Broadening our understanding of host-STI coevolution therefore requires the development of theory that captures alternative host mating systems and disease outcomes.
Here, I examine a simple model of host-STI coevolution when sexual partnerships are short term (ephemeral) and disease causes variable mortality or sterility virulence, and for different recovery rates. Using evolutionary invasion analysis, I first show how mate choice leads to lower optimal levels of mortality and sterility virulence. I then show how and when mate choice is likely to evolve under different disease characteristics. Finally, I consider host-STI coevolution, showing that coevolutionary cycling is typically more common under sterility virulence, whereas mortality virulence tends to lead to more stable outcomes. I also identify conditions when polymorphism in host mate choice can evolve through evolutionary branching, but this outcome only occurs under a narrow set of conditions. Finally, I examine how costs associated with mate choice, the rate of recovery from infection, and the lifespan of the host impact host-STI coevolution. Combined with previous studies, these results show that PMSS can occur for a broad range of host and STI life-history traits.
Methods
I model the dynamics of an STI in a well-mixed host population, which for simplicity I treat as a single hermaphroditic sex as there is assumed to be no sex-specific variation in disease characteristics. The epidemiological and mating dynamics of a one-host-one-STI system are described by
dS/dt = b(f, g, v) − dS − β[SI] + γI,
dI/dt = β[SI] − (d + α)I − γI,
where S and I are the densities of susceptible and infected individuals, respectively; b(f, g, v) is the host birth rate, which depends on the fecundity of infected hosts relative to uninfected hosts, f, with 0 ≤ f ≤ 1, the strength of mate choice (i.e., how strongly individuals prefer uninfected mates), g ≥ 0, and v, which is used to relate the mode of virulence to mate choice (defined below); d is the natural mortality rate; α is the disease-associated mortality rate; β is the transmission probability per sexual contact; γ is the recovery rate; and [XY] is the mating rate between individuals in classes X ∈ {S, I} and Y ∈ {S, I} (see Table 1 for a full list of parameters and variables). Hereafter, I assume that mortality virulence and sterility virulence may be functions of transmissibility (i.e., α = α(β), f = f(β)), as parasites may need to damage their hosts or use host resources to produce transmission stages, and the more transmission stages produced, the greater the damage is likely to be to the host (see Alizon et al. 2009 and Acevedo et al. 2019 for discussions of the transmission-virulence trade-off hypothesis). For simplicity, I assume linear functions to control the relationships between transmission and virulence such that f(β) = 1 − ηβ for ηβ < 1 and 0 otherwise for sterility virulence, and α(β) = κβ for mortality virulence, with η and κ parameters that define the strength of these relationships. Such functions correspond to a situation where the damage caused to the host is proportional to the number of transmission stages produced by the STI. I restrict my analysis to how one mode of virulence at most varies and affects mate choice by setting η = 0 and/or κ = 0 and by using v = v(β) to relate the current mode of virulence to mate choice, with v(β) = 1 − f(β) in the case of sterility virulence and v(β) = α(β)/κ in the case of mortality virulence.
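To make the trade-off and mate-choice definitions concrete, a minimal Python encoding follows; the parameter values are illustrative only, and the function forms are exactly those defined in the text above.

```python
# Trade-off and mate-choice functions of the model (illustrative parameters).
import numpy as np

eta, kappa, zeta = 0.5, 0.5, 0.1    # trade-off strengths and cost of choice

def f(beta):                 # relative fecundity of infected hosts (sterility)
    return np.maximum(1.0 - eta*beta, 0.0)

def alpha(beta):             # disease-associated mortality (mortality virulence)
    return kappa*beta

def v_sterility(beta):       # virulence proxy for mate choice, sterility case
    return 1.0 - f(beta)

def v_mortality(beta):       # virulence proxy, mortality case: alpha(beta)/kappa
    return alpha(beta)/kappa

def m_S(g):                  # probability of accepting an uninfected mate
    return np.maximum(1.0 - zeta*g, 0.0)

def m_I(g, v, linear=True):  # probability of accepting an infected mate
    mt = 1.0 - g*(v if linear else v**2)    # linear or accelerating response
    return m_S(g)*np.maximum(mt, 0.0)       # m_I = m_S * m~_I

print(m_S(0.5), m_I(0.5, v_sterility(0.8)))
```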
Preferential mating with uninfected hosts is somewhat comparable to the notion of disease causing lower contact rates (e.g., due to decreased movement) in classical evolution-of-virulence theory (Ewald 1983), although there are also a number of other notable differences in the present framework (sexual rather than direct transmission, and a reduction in contact rates affects reproduction). Note that g is a dummy variable, which indicates the "strength of mate choice," whereas m_S(g) and m_I(g, v(β)) are the actual probabilities of accepting an uninfected or infected prospective partner as a mate, respectively. I use a dummy variable for the strength of mate choice so that the host responses to susceptible and infected individuals are correlated. Throughout, it is assumed that the probability of accepting an uninfected individual as a mating partner is at least as large as the probability of accepting an infected individual: m_S(g) ≥ m_I(g, v(β)). Without this assumption, there would never be any advantage to mate choice, as choosier individuals would mate with infected members of the population at a higher rate than less choosy individuals. I therefore set m_I(g, v(β)) = m_S(g)m̂_I(g, v(β)), with m̂_I(g, v(β)) ≤ 1 the mate choice response specific to prospective partners who are infected. In the analysis that follows, I set m_S(g) = 1 − ζg for ζg < 1 and 0 otherwise, where ζ is the cost of mate choice, and either m̂_I(g, v(β)) = 1 − gv(β) (linear response) or m̂_I(g, v(β)) = 1 − gv(β)² (nonlinear response), with both functions restricted to m̂_I(g, v(β)) ≥ 0. I only examine a linear function for m_S(g) because g is a dummy variable, and therefore one only needs to consider linear and nonlinear forms of one of the two correlated mate choice functions. Biologically, the linear and nonlinear functions for m̂_I(g, v(β)) mean that the effects of mate choice either increase proportionately or accelerate with virulence. In other words, the linear function implies that the probability of accepting an infected mate is proportional to the damage caused by the STI, and the nonlinear function implies that hosts are disproportionately choosier when STIs are more virulent or that the STI is increasingly easier to detect. Although empirical evidence for how mate choice varies with virulence is currently lacking, it is plausible that either linear or nonlinear relationships could exist, and therefore varying the shape of these functions is important for a better understanding of potential host–STI coevolutionary dynamics.
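A companion sketch of the mate choice functions as specified above (m_S(g) = 1 − ζg, the linear and nonlinear responses m̂_I, and the product m_I = m_S·m̂_I); again the code is illustrative and the names are mine.

```python
# Mate choice functions as specified in the text (illustrative sketch).
def m_S(g, zeta):
    """Probability of accepting an uninfected mate: 1 - zeta*g, floored at 0."""
    return max(0.0, 1.0 - zeta * g)

def m_I_hat(g, v, nonlinear=False):
    """Infection-specific response: 1 - g*v (linear) or 1 - g*v**2 (nonlinear)."""
    return max(0.0, 1.0 - g * (v ** 2 if nonlinear else v))

def m_I(g, v, zeta, nonlinear=False):
    """Probability of accepting an infected mate: m_S(g) * m_I_hat(g, v)."""
    return m_S(g, zeta) * m_I_hat(g, v, nonlinear)
```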
The mating rates for each combination of the S and I classes include a factor of 2 in the equation for [SI], which appears when there is mating between individuals in different classes and is required to balance the total mating rate, M. Note that for the specific case when m_S(g) = m_I(g, v(β)) = 1 (i.e., there is no mate choice), the total mating rate reduces to M = pN. For the general case, hosts that have mated produce offspring at a total rate that depends on r, the maximum reproduction rate per pair, with the birth rate subject to density-dependent competition governed by the parameter h. The disease-free equilibrium (S, I) = (S*, 0) of this system is viable provided (m_S(g))²pr > d (i.e., the birth rate is higher than the death rate). A newly introduced STI will spread in a susceptible population when the basic reproductive ratio, R₀(g, β), is greater than 1. The above model describes the dynamics when there is only one host type and one STI type in the population. To account for situations where hosts vary in their strength of mate choice and STIs vary in their transmissibility/virulence, I adapt the above monomorphic model for populations that are polymorphic in these traits. The dynamics for n_h hosts, each with strength of mate choice g_i, where i ∈ {1, …, n_h}, and n_p STIs, each with transmissibility β_j and virulence v_j, where j ∈ {1, …, n_p}, are fully described by a system of ordinary differential equations, including the total mating rate between susceptible hosts with trait g_i and all hosts infected by STIs with traits β_j and v_j, and the birth rate for each host type, where for notational convenience circles in subscripts correspond to sums over all host or parasite types (i.e., S• is the sum over all uninfected hosts, I_{i•} is the sum over all hosts of type i that are infected, I_{•j} is the sum over all hosts that are infected with parasite type j, and I_{••} is the sum over all infected hosts). This model is related to the pair formation framework proposed in Ashby and Boots (2015), but there are two key differences. First, the model in Ashby and Boots (2015) assumes there is serial monogamy with separate pools of paired and unpaired individuals. Here, I focus on a single pool of individuals with ephemeral sexual partnerships, which is analogous to having infinite pair dissolution rates and instantaneous reproduction. However, it is not possible to move directly between the models by letting the pair dissolution rates tend to infinity, as this would mean individuals are never in the paired state, and reproduction only occurs while individuals are paired. The second major difference is in the nature of the infection: the model in Ashby and Boots (2015) assumes STIs cause sterility rather than increased mortality, and that individuals are unable to recover once infected. Relaxing these assumptions, by allowing STIs to cause mortality and hosts to clear infection (as is the case for many STIs; e.g., Miyairi et al. 2010; Gizaw et al. 2017), will reveal how a broader range of STI life-history traits affects coevolution with host mate choice. An additional difference in the current model is that the mate choice functions have been generalized to m_S(g) and m_I(g, v(β)) for easier interpretation and so that different functions governing mate choice can be readily explored.
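A minimal simulation sketch of the monomorphic dynamics follows. Because the display equations did not survive extraction, the mating rates below are an assumption chosen to be consistent with the text (per-capita rate p independent of N, a factor of 2 in [SI], and M = pN when there is no mate choice), and the birth term is a simplified stand-in rather than the paper's exact expression.

```python
# Minimal sketch of the monomorphic S/I dynamics (not the paper's code).
# ASSUMED mating rates, chosen to match the text: per-capita rate p is
# independent of N, [SI] carries a factor of 2, and M = p*N without choice.
# The birth term is a simplified stand-in for the paper's expression.
import numpy as np
from scipy.integrate import solve_ivp

p, r, h, d, gamma = 2.0, 1.0, 0.01, 0.5, 0.2        # illustrative values
beta, eta, kappa, zeta, g = 0.5, 1.0, 0.0, 0.05, 1.0

def rhs(t, y):
    S, I = y
    N = max(S + I, 1e-9)                            # guard against N -> 0
    f = max(0.0, 1.0 - eta * beta)                  # fecundity of infected hosts
    alpha = kappa * beta                            # mortality virulence
    v = alpha / kappa if kappa > 0 else 1.0 - f
    mS = max(0.0, 1.0 - zeta * g)
    mI = mS * max(0.0, 1.0 - g * v)                 # linear response
    SS = p * mS * mS * S * S / N                    # [SS]
    SI = 2.0 * p * mS * mI * S * I / N              # [SI], factor of 2
    II = p * mI * mI * I * I / N                    # [II]
    births = r * (SS + f * SI + f * II) * max(0.0, 1.0 - h * N)
    dS = births - d * S - beta * SI + gamma * I
    dI = beta * SI - (d + alpha + gamma) * I
    return [dS, dI]

sol = solve_ivp(rhs, (0.0, 200.0), [50.0, 1.0])
```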
I use evolutionary invasion analysis to determine the long-term trait dynamics of the host and STI (Geritz et al. 1998). This approach assumes that mutations have small phenotypic effects and, for analytic rather than simulated solutions, that mutations are sufficiently rare that the system has reached a stable state before a new mutant emerges. For one-sided adaptation (either host or STI evolution), I numerically solve the one-dimensional fitness gradients to find the singular strategies, as the system is intractable to nonnumerical methods of stability analysis. For host–STI coevolution, I solve the dynamics using simulations to capture nonequilibrium dynamics (i.e., fluctuating selection). In the coevolutionary simulations, host and STI traits are discretized into a finite number of different types, and mutations between adjacent types occur at regular intervals (source code in the Supporting Information).
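The following schematic shows the shape of the coevolutionary simulation loop described above (discretized traits, ecological dynamics between regularly spaced mutation events); the ecological step is left as a placeholder, and all numerical values are illustrative rather than taken from the Supporting Information.

```python
# Schematic of the coevolutionary simulation loop (placeholder ecology).
import numpy as np

g_grid = np.linspace(0.0, 2.0, 21)     # discretized host mate-choice traits
b_grid = np.linspace(0.05, 1.0, 20)    # discretized STI transmissibilities
rng = np.random.default_rng(1)

def run_ecology(host, sti, t_interval):
    """Stand-in for integrating the polymorphic ODEs for t_interval time units."""
    return host, sti

host = np.zeros(g_grid.size); host[0] = 1.0
sti = np.zeros(b_grid.size); sti[5] = 1.0
for step in range(1000):
    host, sti = run_ecology(host, sti, t_interval=50.0)
    for dens in (host, sti):                           # mutate each species
        i = rng.choice(dens.size, p=dens / dens.sum()) # pick a resident type
        j = int(np.clip(i + rng.choice((-1, 1)), 0, dens.size - 1))
        dens[j] += 1e-3 * dens[i]                      # seed an adjacent mutant
```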
PARASITE EVOLUTION
The invasion fitness of a rare mutant strain of the STI (subscript m) in a population at equilibrium N* = S* + I* is positive, w_P(g, β_m) > 0, precisely when R_EFF(g, β_m) > 1, where R_EFF(g, β_m) is the effective reproductive ratio of the STI. Because STI fitness can be written in this form, we know that parasite evolution maximizes the basic reproductive ratio R₀ (Lion and Metz 2018). The STI will evolve in the direction of ∂R₀/∂β until β is maximized at 1, one or both populations are driven extinct, or a singular strategy β* is reached where ∂R₀/∂β|_{β=β*} = 0. Because m_S(g) does not feature in this condition, we do not need to consider the effects of costs of mate choice on STI evolution. In general, m_I(g, v(β)) and f(β) will be decreasing (or constant) functions of β, and α(β) will be an increasing (or constant) function. In the absence of mate choice (m_S(g) = m_I(g, v(β)) = 1), a continuously stable strategy (CSS), analogous to an evolutionarily stable strategy (ESS), can only exist when α(β) is concave up (i.e., mortality virulence accelerates with the transmission probability). In the presence of mate choice, however, a CSS can exist under a broader set of conditions, such as with concave-down mortality–transmission trade-offs and with sterility–transmission trade-offs. This is clear from the equation for R₀(g, β) (eq. 11), which features the product of m_I(g, v(β)) and β (i.e., the product of decreasing and increasing functions of β; Fig. 1A).
To illustrate the above, suppose first that sterility virulence is constant (df/dβ = 0). When there is no mate choice and mortality virulence is a linear function of β with α(β) = κβ, we have ∂R₀/∂β > 0, so the STI will evolve to maximize transmission at β = 1 and virulence at α(1) = κ. If, however, mate choosiness is a linear function of mortality virulence (i.e., increasing damage has a linear effect on the probability of being accepted as a mate), such that v(β) = α(β)/κ and m̂_I(g, v(β)) = 1 − gα(β)/κ for gα(β) < κ and 0 otherwise, then a singular strategy exists (Fig. 1B) and is always evolutionarily stable. Now suppose instead that mortality virulence is constant (dα/dβ = 0) and sterility virulence is a function of the transmission probability such that f(β) = 1 − ηβ, with v(β) = 1 − f(β). If host mate choosiness is a linear function of sterility virulence such that m̂_I(g, v(β)) = 1 − g(1 − f(β)), then the singular strategy occurs at β* = 1/(2gη) (Fig. 1B), which again is always evolutionarily stable. All else being equal, the effects of mate choice on the ecology and evolution of the STI under sterility and mortality virulence are qualitatively similar (Fig. 1). However, because mortality virulence causes an additional reduction in R₀ compared to sterility virulence, due to the presence of α(β) in the denominator (eq. 11), a given level of mate choice will have a greater impact on an STI that causes mortality virulence. This can be seen in Figure 1, where both R₀ and the evolved probability of transmission, β*, of the STI are always lower under mortality virulence, and the region of viability is smaller, compared to STIs that cause sterility virulence.
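The singular strategies above can be recovered numerically by maximizing an R₀ proxy. The text states that R₀ features the product of the infection-specific mate choice response and β, with α(β) in the denominator; the proxy below keeps only those β-dependent factors, assuming (since eq. 11 is not reproduced here) that the omitted prefactors do not depend on β and therefore do not move the maximizer. For the sterility case it recovers β* = 1/(2gη).

```python
# Numerically locating the STI singular strategy by maximizing an R0 proxy.
import numpy as np

d, gamma, g = 0.5, 0.2, 1.0
eta, kappa = 0.8, 0.0                  # sterility virulence case (kappa = 0)

def r0_proxy(beta):
    f = max(0.0, 1.0 - eta * beta)
    alpha = kappa * beta
    v = alpha / kappa if kappa > 0 else 1.0 - f
    m_hat = max(0.0, 1.0 - g * v)      # linear mate choice response
    return beta * m_hat / (d + alpha + gamma)

betas = np.linspace(1e-4, 1.0, 10001)
beta_star = betas[np.argmax([r0_proxy(b) for b in betas])]
print(beta_star, 1.0 / (2.0 * g * eta))   # both ~0.625, i.e. beta* = 1/(2*g*eta)
```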
In summary, host mate choice prevents the evolution of greater mortality or sterility virulence in an acute STI even in the absence of long-term partnerships, but the effects on STIs that cause mortality virulence will tend to be greater, leading to lower disease prevalence and selection for slightly lower transmissibility for a given level of mate choice.
HOST EVOLUTION
The initial dynamics of a rare mutant host (subscript m) in a resident population at equilibrium are given by the mutant's linearized invasion dynamics. Using the next-generation method (see Supporting Information; Hurford et al. 2010), it can be shown that host fitness, w_H, is sign-equivalent to an expression written, for the sake of brevity, in the shorthand notation f_β = f(β), m_S^g = m_S(g), and m_I^{g_m,v_β} = m_I(g_m, v(β)), with d + α(β) + γ similarly abbreviated. The host will evolve in the direction of ∂w_H/∂g until g is minimized at 0, one or both populations are driven extinct, or a singular strategy, g*, is reached where ∂w_H/∂g|_{g=g*} = 0.
Suppose initially that there are no costs of mate choice (m_S(g) = 1) and that mate choice is a linear function of virulence such that m̂_I(g, v(β)) = 1 − gv(β). In this scenario, there may be one or two singular strategies. The singular strategy at g*_1 = (2pβ − d − α(β) − γ)/(2pβv(β)) always exists and corresponds to the point where the host drives the STI extinct. The second singular strategy (g*_2), if it exists, is an evolutionary repeller (i.e., a fitness minimum, so the direction of selection always points away from the singular strategy) with 0 < g*_2 < g*_1, in which case the outcome depends on the initial conditions, with g < g*_2 causing selection against mate choice and g > g*_2 leading to STI extinction due to mate choice (Fig. 2A, B).
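The first singular strategy has the closed form given above, so a one-line evaluation suffices (parameter values are illustrative):

```python
# Evaluating g*_1 = (2*p*beta - d - alpha - gamma) / (2*p*beta*v):
def g_star_1(p, beta, d, alpha, gamma, v):
    return (2 * p * beta - d - alpha - gamma) / (2 * p * beta * v)

print(g_star_1(p=2.0, beta=0.5, d=0.5, alpha=0.0, gamma=0.2, v=0.4))  # ~1.62
```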
If we first suppose that virulence is fixed (i.e., it does not vary with transmission; η = κ = 0), mate choice is likely to evolve for intermediate transmission probabilities (Fig. 2A, B). When the probability of transmission is small, the STI is unable to spread even in the absence of mate choice (R₀ < 1), and if the probability of transmission is close to 1, there may be selection against weak mate choice caused by the evolutionary repeller. This is because disease prevalence is high, and so most attempted matings are with infected individuals, meaning that even weak mate choice dramatically reduces the mating rate for invading host mutants compared to the resident population. If, however, there is already a sufficient level of mate choice in the resident population (i.e., the initial conditions are above the repeller), disease prevalence is sufficiently low to allow runaway selection for mate choice, eventually driving the disease extinct. This pattern is similar regardless of whether virulence has fixed effects on mortality or sterility (Fig. 2A, B). When sterility or mortality virulence is linked to the transmission probability, the dynamics are more complex (Fig. 2C, D). Notably, the threshold for driving the STI extinct is lower at high transmission probabilities because virulence (and hence the effect of mate choice) is also stronger. An evolutionary repeller may exist, but it now occurs for intermediate values of β.
The system is intractable to classical analysis when mate choice also affects prospective partners that are susceptible (i.e., there is a "cost" of being choosy, with m S (g) < 1 for g > 0), and so one must find the evolutionary dynamics using numerical analysis. Although many of the results are qualitatively similar to the no-cost scenario, there are some notable exceptions. In particular, if the host evolves mate choice, then it no longer drives the STI extinct, and is instead likely to reach a CSS with the STI endemic in the population. When virulence is correlated with transmission, the host only evolves mate choice de novo at sufficiently high values of β (Fig. 2C, D). Additionally, there is a very small region of parameter space at intermediate values of β that can yield evolutionary branching, with stable coexistence between two host types: one that exhibits moderate mate choice and the other that does not discriminate against infected mates.
COEVOLUTION
I now consider coevolution of the host and the STI, focusing on how the costs associated with mate choice (ζ), the recovery rate (γ), and the natural mortality rate of the host (d) interact with the mode of virulence (sterility or mortality) and the shape of the mate choice response to infected individuals. The model exhibits the same range of qualitative outcomes under both sterility and mortality virulence (summarized in Table 2): (1) a co-CSS, where STI virulence and host mate choice are at a stable equilibrium (Fig. 3A, B); (2) coevolutionary cycling, whereby host and STI phenotypes fluctuate over time (Fig. 3C, D); and (3) evolutionary branching, leading to dimorphism in host mate choice. Overall, higher costs associated with mate choice (i.e., greater ζ, leading to mistaken avoidance of uninfected individuals) tend to suppress choosiness and allow higher levels of virulence to evolve (Figs. 4A, D and 5A, D), and faster recovery rates have a stabilizing effect on the dynamics (Figs. 4B and 5B), as do both short (high d) and long (low d) host lifespans (Figs. 4C, F and 5C, F). Although both sterility and mortality virulence produce the same range of qualitative coevolutionary outcomes, there are some notable differences between the two scenarios. First, coevolutionary cycling is much more common under sterility virulence than mortality virulence, with the latter more likely to lead to stable equilibria. Even when mortality virulence does produce coevolutionary cycling, the amplitude of the cycles tends to be smaller compared to sterility virulence (Fig. 3C, D). Second, mate choice requires much lower costs (ζ) to evolve when the STI causes mortality virulence (Figs. 4A, D and 5A, D). Third, higher costs cause qualitatively different transitions in the coevolutionary dynamics: from cycling to stable strategies in the case of sterility virulence, and from stable strategies to dimorphism and cycling in the case of mortality virulence (Figs. 4A and 5D). Fourth, although mate choice peaks for intermediate host lifespans in the case of sterility virulence (Fig. 4C, F), under mortality virulence mate choice generally decreases (or, for a narrow window, becomes dimorphic) as host lifespan shortens (as d increases; Fig. 5C, F). These general differences in outcomes are broadly consistent whether mate choice is linearly or nonlinearly related to virulence. Still, there are some notable differences between the linear and nonlinear versions. For example, when greater virulence is associated with an acceleration in mate choice, there is usually a greater potential for coevolutionary cycling under sterility virulence (Fig. 4) and for coevolutionary cycling and evolutionary branching under mortality virulence (Fig. 5).
Discussion
The role of STIs in the evolution of host mating strategies and, in turn, the effects of mating behavior on disease evolution are inherently linked (Hamilton and Zuk 1982; Ashby). Yet despite the large number of theoretical studies on coevolution between hosts and non-STIs, to date theoretical models of STIs have almost exclusively focused on one-sided adaptation rather than coevolution (Thrall et al. 1997, 2000; Knell 1999; Boots and Knell 2002; Kokko et al. 2002; Ashby and Gupta 2013; McLeod and Day 2014; Johns et al. 2019), which limits our ability to understand parasite-mediated sexual selection. Using a theoretical model of host–STI coevolution, I have shown that host mate choice can readily evolve under a broad range of conditions, including when hosts have ephemeral sexual partnerships, STIs cause sterility or mortality virulence, hosts can recover from infection, and across large variations in host lifespan. In addition to showing when mate choice is most likely to evolve, I have also shown when qualitatively different coevolutionary outcomes typically occur (Table 2). Interestingly, coevolutionary cycling (fluctuating selection) in mate choice and STI virulence is much more common when the STI causes sterility virulence than mortality virulence, which may be because reductions in fecundity can cause sudden declines in population size and are generally known to induce oscillatory dynamics (Ashby and Gupta 2014). This suggests that STIs associated with higher mortality, such as dourine in equines (Gizaw et al. 2017), are more likely to lead to stable coevolutionary outcomes. But because STIs typically cause reductions in host fecundity (Lockhart et al. 1996), fluctuating selection may be a more probable outcome overall. Similarly, STIs often, but not always, cause chronic or long-lasting infections, which will promote coevolutionary cycling because lower recovery rates tend to have a destabilizing effect. Higher clearance rates are associated with more stabilizing outcomes, so, for example, we might expect acute STIs such as Chlamydia trachomatis in mammalian species (Miyairi et al. 2010) to produce more stable coevolutionary dynamics than STIs with low or zero clearance rates. Furthermore, fluctuating selection dynamics appear to be limited to hosts with moderate rather than short or long lifespans. It is not entirely clear why fluctuating selection does not occur when host lifespans are taken to extremes, although it is likely to be related to changes in disease prevalence. Other factors may also vary with lifespan, but all else being equal, shorter host lifespans reduce the infectious period and hence disease prevalence and the risk of infection, whereas the converse is true for hosts with longer lifespans. Precisely what corresponds to an "intermediate lifespan" will clearly be system dependent, but overall, such predictions may help guide comparative analyses and other empirical studies toward systems that are more favorable to generating certain types of coevolutionary dynamics.
The model also revealed that evolutionary branching in host mate choice is possible, leading to the stable coexistence of more and less choosy individuals in the population. Hence if choosy individuals are above or below their equilibrium frequency, then they will decrease or increase in frequency, respectively. This only occurs when there are costs associated with being choosy, such that choosier individuals are not only less likely to mate with infected individuals but are also less likely to mate with susceptible individuals, for example, due to false-positive detection of infection among healthy prospective partners. Hosts who are less choosy do not pay this cost but have a higher infection rate, which also increases disease prevalence in the population. This is analogous to trade-offs with host life-history traits that are often associated with resistance or tolerance to infection (Schmid-Hempel 2003). Polymorphism in risky and prudent mating behavior has been identified previously by Boots and Knell (2002), although their model only allowed for variation in the overall mating rate rather than for condition-dependent mate choice, and the mating rate for all infected individuals was the same regardless of whether hosts initially belonged to the risky or prudent mating type. Although the model presented herein can also generate polymorphism, this time in terms of host mate choosiness, it is only predicted to occur under a fairly narrow set of conditions, requiring moderate costs, recovery rates, and host lifespans.
A number of studies have looked at the evolution of host mating preferences in the presence of nonevolving STIs, showing, for example, that STIs can reduce mating skew provided disease prevalence is not too high (Kokko et al. 2002) and that STIs that cause mortality rather than sterility are more likely to drive the evolution of serial monogamy (McLeod and Day 2014). In general agreement with the present study, this body of theory predicts that STIs are likely to be a potent force of selection in host mating system evolution, and that factors such as the mode of virulence can lead to qualitatively different outcomes. Crucially, however, these studies do not account for reciprocal coevolution with the STI. Previous theoretical work on host–STI coevolution appears to be limited to two models. A recent paper by Wardlaw and Agrawal (2019) explored the coevolution of hosts and STIs in the context of sexual conflict, showing that the mode of virulence (sterility or mortality) can lead to contrasting coevolutionary outcomes, analogous to the present study. Specifically, sterility virulence was shown to de-escalate sexual conflict, whereas mortality virulence increased conflict and, furthermore, led to an increase in STI virulence. The other study, Ashby and Boots (2015), is more closely related to the current model, as it too concerns the evolution of host mate choice for preferential mating with healthy hosts. The present study differs from this work both in terms of the mating dynamics (ephemeral sexual partnerships rather than serial monogamy) and the disease characteristics (acute infections causing sterility or mortality, rather than chronic sterilizing infections). Previously, it was unclear whether qualitative coevolutionary outcomes such as fluctuating selection were restricted to the particular host and STI characteristics that were explored in Ashby and Boots (2015), but the current model reveals this not to be the case and, furthermore, predicts when these and other coevolutionary outcomes are most likely to occur. To build on these results, future work should continue to explore the evolution of mate choice under different mating systems, for example, for polygynous or polyandrous hosts, as different mating systems have previously been shown to select for contrasting levels of virulence (Ashby and Gupta 2013). One especially interesting direction for future research that would bridge the gap between the ephemeral partnership and serial monogamy scenarios would be to examine the evolution of mate choice when serially monogamous hosts engage in extra-pair copulations (Forstmeier et al. 2014). This would help to elucidate whether mate choice should be stronger when choosing long- or short-term mating partners.
Examining a broader set of STI characteristics helps to build up a more general picture of host–STI coevolution, which is important given that many STIs do not cause chronic infections or sterility virulence (Lockhart et al. 1996; Miyairi et al. 2010; Gizaw et al. 2017). As discussed above, the mode of virulence has an important impact on the nature of the coevolutionary dynamics, with mortality virulence tending to have a stabilizing effect compared to sterility virulence. Overall, one might expect the benefits of mate choice to be greater if STIs cause sterility rather than mortality, as (1) individuals may be unable to reproduce following infection by a sterilizing STI but may still reproduce if infected by a nonsterilizing STI that increases mortality; and (2) disease prevalence is likely to be higher under sterility virulence, as mortality virulence reduces R₀ by shortening the infectious period (eq. 11), which means that, all else being equal, the risk of infection will be lower under mortality virulence. Recovery from infection will typically reduce the benefits of mate choice, as both disease prevalence and the costs of contracting an infection are lower (since infection is acute rather than chronic). Recovery did not prevent the evolution of mate choice, but it did tend to have a stabilizing effect. As expected, high recovery rates reduce selection for mate choice, but surprisingly the difference in optimal levels of mate choice when infections are of relatively short or long duration is fairly small (Figs. 4B, E and 5B, E), which supports the notion that both acute and chronic STIs can select for mate choice. For simplicity, I assumed that recovery does not lead to immunity from future infection and that the condition of recovered individuals does not differ from those who have yet to experience infection. The former assumption is reasonable for many STIs, which are less likely than non-STIs to result in lasting immunity (Lockhart et al. 1996), but the latter deserves further investigation.
Although it is possible for hosts to fully recover from infection, it is also reasonable to suspect that host condition may remain lower for some time following pathogen clearance, in which case these hosts should have lower mating success than individuals who have never been infected. In future, a simple extension of the current model would be to explore the effects of temporary or permanent reductions in host condition following infection, as this will separate the effects of mate choice into components representing transmission avoidance (i.e., avoiding infectious individuals) and partner fertility (i.e., choosing more fertile partners). Another potentially important extension to the current model would be to split the host population into males and females with different disease outcomes and transmission rates between the sexes. Here, the hosts were treated as a single hermaphroditic sex for the sake of simplicity, but such an extension would add greater realism and expand predictions for an even broader set of STIs with sex-specific characteristics.
To date, many empirical studies have struggled to find evidence that hosts are able to discriminate between individuals with and without STIs (Abbot and Dill 2001; Nahrung and Allen 2004). At first this seems surprising, given that hosts should, in theory, be under strong selection to avoid choosing infected mates. There are a number of possible reasons why this may not always be the case. For example, hosts may simply be unable to detect signs of infection due to physiological constraints. This is not a particularly satisfying or general explanation, because various species have been found to prefer social or sexual partnerships based on visual or olfactory cues relating to infection (Clayton 1990; Willis and Poulin 2000; Moshkin et al. 2012). Instead, it is more likely that hosts may be unable to detect infection due to strong selection on STIs to be inconspicuous or asymptomatic, potentially through low virulence. For instance, sexually transmitted mites in ladybirds and the eucalypt beetle appear to have no negative impact on fertility or mortality under nonstress conditions, which may explain why mites do not appear to affect mate choice in these systems (Nahrung and Clarke 2007; Ashby et al. 2019). It is also possible that STIs can evolve to be asymptomatic or difficult to detect despite being virulent, as is commonly the case with Neisseria gonorrhoeae infections (gonorrhea) in humans (Walker and Sweet 2011). More empirical evidence is needed to determine how STI virulence varies with detectability, while future theory needs to examine coevolution with virulent STIs that may be asymptomatic and thus not easily detectable.
Another possibility is that hosts can sometimes discriminate between infected and uninfected individuals, but the costs of mate choice are too high relative to the costs of infection. In the current model, mate choice only evolves under certain conditions and may not evolve, even when the STI is relatively virulent, conspicuous, or prevalent, if mate choice is intrinsically costly. Clearly, any costs associated with mate choice (e.g., fewer mating opportunities) must be weighed against the potential benefits of avoiding infection. Alternatively, hosts may have more effective forms of defense against STIs, such as post-copulatory grooming or urination to remove parasites (Hart et al. 1987; Nunn 2003). This area has received very little theoretical attention. Finally, it is possible that STIs cause changes in host characteristics such as attractiveness, mating frequency, or choosiness, which may potentially counter or increase selection for mate choice. In principle, such changes could simply be by-products of infection or could be host or STI adaptations to increase fitness by achieving a higher mating rate, potentially leading to sexual conflict (Knell and Webberley 2004; Apari et al. 2014; Johns et al. 2019; Wardlaw and Agrawal 2019). The evolutionary consequences of STIs increasing mating rates have yet to be thoroughly explored and deserve much greater attention in future theoretical work. Similarly, the present model does not specify the cue for infection, or whether this is via a sexually selected trait such as bright plumage that may affect the intrinsic attractiveness of individuals, but this would be a worthwhile distinction to explore theoretically. It is also worth noting that although the present study has focused on STIs, parasites that can be transmitted via other routes may also affect mate choice (e.g., Clayton 1990), but theoretical models of mate choice have yet to consider coevolution with non-STIs.
Given the general lack of predictions and data on host–STI coevolution, there are clearly a number of important avenues for future theoretical and empirical research in this area. Still, the present study, combined with previous theoretical work (Ashby and Boots 2015; Wardlaw and Agrawal 2019), predicts that STIs and host mating systems are readily shaped by coevolutionary feedbacks. More specifically, this study has shown how mate choice and STI virulence are likely to coevolve under different host and STI life-history traits, including variation in the mode of virulence, recovery rates, and host lifespan. Together, these results suggest that parasite-mediated sexual selection is likely to select for mate choice under a broad set of conditions.
The Dirac equation vs. the Dirac type tensor equation
We discuss a connection between the Dirac equation for an electron and the Dirac type tensor equation with ${\rm U}(1)$ gauge symmetry.
Under a linear nondegenerate change of coordinates, where p^μ_ν are real constants, the components of a tensor field transform accordingly, and the aggregates e^{μ_1} ∧ … ∧ e^{μ_k} transform as components of a contravariant tensor field of rank k. In this paper we admit changes of coordinates from the proper orthochronous Lorentz group SO⁺(1, 3) only, i.e., those with matrix P = ‖p^μ_ν‖ satisfying Pᵀ gP = g, det P = 1, p⁰₀ > 0. We also consider nonhomogeneous exterior forms. Denote by Λ the set of all such exterior forms and by Λ_k the sets of exterior forms of rank k. The set Λ₀ is identified with the set of smooth scalar functions R^{1,3} → R. We have Λ = Λ_ev ⊕ Λ_od, where elements of Λ_ev and Λ_od are called even and odd exterior forms, respectively. At any point x ∈ R^{1,3} we may consider the sets Λ, Λ_ev, Λ_od, Λ₀, Λ₁, Λ₂, Λ₃, Λ₄ as linear spaces of dimensions 16, 8, 8, 1, 4, 6, 4, 1, respectively. Basis elements of Λ are 1, e^μ, and e^{μ_1} ∧ … ∧ e^{μ_k} with μ_1 < ⋯ < μ_k, k = 2, 3, 4.
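The dimensions just listed are the binomial coefficients of the rank-4 exterior algebra:

```latex
\dim\Lambda_k=\binom{4}{k},\qquad
\dim\Lambda=\sum_{k=0}^{4}\binom{4}{k}=1+4+6+4+1=16,\qquad
\dim\Lambda_{\mathrm{ev}}=\dim\Lambda_{\mathrm{od}}=2^{3}=8.
```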
Note that from the second rule we get the equalities e^μ e^ν + e^ν e^μ = 2g^{μν}, which appear in the Clifford algebra. In other words, the operation of central product maps Λ × Λ to Λ.
It can be checked that these rules define an associative product; if we formally substitute U = e^μ ∂_μ into (2), then we obtain the corresponding differential operator on exterior forms. By ℓ we denote a volume form. The volume form commutes with all even exterior forms and anticommutes with all odd exterior forms with respect to (w.r.t.) the central product. Suppose that exterior forms H ∈ Λ₁ and I, K ∈ Λ₂ satisfy the relations (3), where [H, I] = HI − IH and {I, K} = IK + KI. Then the exterior forms ℓ, H, I, K are said to be invariant generators of Λ. In particular, in fixed coordinates the exterior forms (4) satisfy (3), are linearly independent at any point x ∈ R^{1,3}, and can be used as basis exterior forms of Λ.
Let us take the exterior form t. The equality t² = t means that t is an idempotent, and we may consider the left ideal I(t) that it generates. It can be checked that the complex dimension of this left ideal is equal to four. The exterior forms t_k = F_k t, k = 1, 2, 3, 4, where the F_k are as in (5), are linearly independent and can be considered as basis elements of I(t).
Let us define the operations of conjugation * and Hermitian conjugation †, where Ū is the exterior form with complex-conjugated components (if U ∈ Λ, then Ū = U). Let us also define a trace of an exterior form as the linear operation Tr : Λ → Λ₀ such that Tr(1) = 1 and Tr(e^{μ_1} ∧ … ∧ e^{μ_k}) = 0 for k = 1, 2, 3, 4. Now we may define an operation (·, ·) : I(t) × I(t) → Λ₀^C. This operation has all the properties of a Hermitian scalar product for U, V, W ∈ I(t) and α ∈ Λ₀^C. This scalar product converts the left ideal I(t) into a four-dimensional unitary space with orthonormal basis t_k, k = 1, 2, 3, 4, i.e., (t_k, t_n) = δ^n_k. Theorem 2 ([2], Theorem 4). If Φ ∈ I(t) is given and Ψ ∈ Λ_ev is an unknown even exterior form, then the equation Ψt = Φ has a unique solution, where Φ has the form Φ = (α_k + iβ_k)t_k and the F_k are defined in (5).
This theorem establishes a one-to-one correspondence between Λ_ev and I(t).
Here γ(U)^n_k are the elements of a four-dimensional square matrix γ(U) (an upper index enumerates rows and a lower index enumerates columns of a matrix). If ∂_μ U = 0, then the elements of the matrix γ(U) are constants; otherwise they are smooth functions R^{1,3} → C. It is easily shown that γ is linear and multiplicative for U, V ∈ Λ, α ∈ Λ₀. Hence the map γ is a matrix representation of Λ such that a central product of exterior forms UV corresponds to the product of matrices γ(U)γ(V). This map depends on the invariant generators ℓ, H, I, K. In particular, if we take the invariant generators (4), then we get the well-known (Dirac) representation of the matrices γ^μ = γ(e^μ).
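Since the matrix display did not survive extraction, the following numerical check uses the standard Dirac representation (an assumption consistent with the text) and verifies the Clifford relation e^μe^ν + e^νe^μ = 2g^{μν} at the matrix level:

```python
# Verifying gamma^mu gamma^nu + gamma^nu gamma^mu = 2 g^{mu nu} I for the
# standard Dirac representation, with metric g = diag(1, -1, -1, -1).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

gamma = [np.block([[I2, Z], [Z, -I2]])] + \
        [np.block([[Z, s], [-s, Z]]) for s in (sx, sy, sz)]
g = np.diag([1.0, -1.0, -1.0, -1.0])

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2.0 * g[mu, nu] * np.eye(4))
print("Clifford relations verified")
```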
Now we may write down an equation, which we call the Dirac type tensor equation (6),
where an even exterior form Ψ ∈ Λ_ev is interpreted as the wave function of an electron, a 1-form A = a_μ e^μ ∈ Λ₁ is identified with the potential of the electromagnetic field, the exterior forms H, I are defined in (3), and m is a real nonnegative constant (the electron mass). Let us take the matrices γ^μ = γ(e^μ), defined with the aid of the map γ, and scalar functions ψ^k : R^{1,3} → C (ψ^k ∈ Λ₀, k = 1, 2, 3, 4) defined by the formula Ψt = ψ^k t_k.
By Theorem 2 we get Ω = 0. This means that the exterior form Ψ ∈ Λ ev satisfies the Dirac type tensor equation (6). This completes the proof.
For the sequel we need the set of even exterior forms Spin(1, 3). This set can be considered as a group w.r.t. the central product. It can be shown that if U ∈ Λ_k and S ∈ Spin(1, 3), then S*US ∈ Λ_k. In particular, S*e^μS = p^μ_ν e^ν, where the p^μ_ν are real constants and the matrix P = ‖p^μ_ν‖ is such that Pᵀ gP = g, det P = 1, p⁰₀ > 0.
Now we may consider the change of coordinates (10) associated with the exterior form S ∈ Spin(1, 3). According to the formulas (9), this change of coordinates is from the group SO⁺(1, 3). Conversely, if we take any change of coordinates (10) from the group SO⁺(1, 3), i.e., with p^μ_ν satisfying (9), then there exists a unique pair of exterior forms ±S ∈ Spin(1, 3) such that p^μ_ν e^ν = S*e^μS.
We claim that the correspondence (7) between the Dirac type tensor equation (6) and the Dirac equation (8) is the same in any coordinates x̃^μ such that the transformation x^μ → x̃^μ is from the proper orthochronous Lorentz group SO⁺(1, 3). Indeed, consider a change of coordinates (10) from the group SO⁺(1, 3) associated with the exterior form S ∈ Spin(1, 3). The exterior forms Ψ, A, H, I from (6) are invariants, and the operators d, δ are invariant under this change of coordinates. Therefore the Dirac type tensor equation has the same form in the coordinates x̃^μ. Consider relation (7). As Ψ, t, t_k are exterior forms, i.e., invariants, the functions ψ^k : R^{1,3} → C must be invariants too. Let us write the Dirac equation (8) in the new coordinates. Substituting R⁻¹γ^μR for γ̃^μ in (11), we get an equivalent equation. Thus, postulating relation (7), we arrive at a system of tensor equations (15), where H ∈ Λ₁, I ∈ Λ₂, Ψ ∈ Λ_ev, A ∈ Λ₁, F ∈ Λ₂, J ∈ Λ₁, and m and α are constants. These values have the following physical interpretation: Ψ is the tensor wave function of the electron, A and F are the potential and strength of the electromagnetic field, respectively, J is the electric current generated by the electron, m is the electron mass, and α is a real constant depending on physical units (the speed of light is equal to 1). It can be shown (see [3]) that if an even exterior form Ψ satisfies (15), then the 1-form J = ΨHΨ* satisfies the equality (19), which is called the charge conservation law for the Dirac type tensor equation. Taking into account the identity δ² = 0, we see that (19) is consistent with equation (17).
Proof. Let us multiply equation (15) by the appropriate factor. Varying the Lagrangian L w.r.t. the components of the exterior form Ψ, we may derive the Dirac type tensor equation.
Knowledge of diabetes among diabetic patients in government hospitals of Delhi
Background: Poor patient knowledge of recommended diabetic self-care practices is a major barrier toward attainment of good glycemic control and prevention of diabetic complications. Materials and Methods: We assessed the knowledge of diabetes self-care practices through a short 7-item pretested questionnaire among diabetes mellitus patients attending special clinics in three government hospitals. Results: The average diabetes knowledge score attained by the patients was 3.79 ± 1.77 (maximum score = 7). Lifetime treatment requirement for diabetes mellitus, plasma glucose levels for good glycemic control, and symptoms of hypoglycemia were correctly reported by 89%, 74%, and 38.5% of the patients, respectively. Low educational status and female gender were significantly associated with poor knowledge of diabetes (P < 0.05). Low level of knowledge of diabetes was a predictor of poor glycemic control but not medication adherence. Conclusion: Knowledge of diabetes in patients attending government hospitals in India is low. Future studies should explore low-cost health education interventions feasible in the Indian health-care context for improving patient knowledge of diabetes.
Introduction
Poor glycemic control in diabetes mellitus (DM) patients is a major public health problem, since it increases the risk of development of diabetic complications and the associated management costs. [1,2] Patient knowledge and awareness of recommended diabetic self-care practices, and of the need to adhere to them, is associated with improved adherence to treatment recommendations, which may promote better glycemic control. [3-5] The objective of this study was to evaluate the diabetes-related health knowledge of Type 2 DM patients following up for treatment in government hospitals.
Materials and Methods
A cross-sectional study was conducted in 385 Type 2 DM patients attending special outpatient department (OPD) clinics in three government hospitals of Delhi from July to November 2013. The study sites were chosen conveniently with respect to logistical feasibility, while study participants were selected through systematic random sampling. Type 2 DM patients who were previously diagnosed with diabetes and undergoing treatment at their existing treating health-care institution for at least 1 year were invited to participate in the study and were enrolled after obtaining written informed consent. A pretested 7-item questionnaire assessed patients' knowledge of their current and desired glycemic control, potential complications of diabetes, recognition of hypoglycemic symptoms, persistence of drug therapy during routine illnesses, the interval for periodic retinal screening, and the recommended frequency and duration of exercise. Patient medical records were used for assessing the validity of patient-reported glycemic control.
Three hundred and forty (88.3%) participants reported having received diabetes-related health education from a designated health-care provider at their respective treatment facility within the previous 3 months. Two hundred and four (53.4%) participants were satisfied with the health education received by them at their treatment facility.
The average diabetes knowledge score attained by the patients was 3.79 ± 1.77 (maximum score = 7). We classified a diabetes knowledge score of ≤3 out of 7 as a low score. We found that, among participants with a duration since diagnosis of DM of ≤5 years, 95 of the 170 (55.8%) reported low knowledge scores. Among the female participants, 115 out of 226 (50.8%) obtained low knowledge scores [Table 1].
A score of ≤3 was found among 76 out of 133 (57.14%) illiterate participants, 60 out of 135 (44.44%) participants with total years of education ≤10 years, and 33 out of 119 (27.73%) participants with total years of education >10 years.
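For illustration, a toy scoring routine matching the classification used above (one point per correct item on the 7-item questionnaire, a total of ≤3 classed as low); the example responses are hypothetical, not study data:

```python
# Toy scoring routine (hypothetical responses, not study data).
def knowledge_score(answers_correct):
    """answers_correct: 7 booleans, one per questionnaire item."""
    score = sum(answers_correct)
    return score, ("low" if score <= 3 else "adequate")

print(knowledge_score([True, True, False, True, False, False, False]))  # (3, 'low')
```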
A strong positive correlation was observed between patient satisfaction with the diabetes-related health education received at the treatment facility and the knowledge score attained (Pearson's correlation: 0.71, P < 0.01).
The lifetime treatment requirement in DM was understood by 343 (89%) participants. The appropriate plasma glucose levels associated with good glycemic control were correctly reported by 285 (74%) participants. A total of 323 (84%) participants were aware of the necessity of adhering to diabetes medication during minor illnesses while 210 (55%) participants correctly stated the minimum weekly recommended physical activity requirements in diabetic patients. However, only 146 (38%) patients correctly identified the symptoms of hypoglycemia. At least two major complications resulting from end-organ damage due to uncontrolled diabetes were correctly reported by 135 (34.5%) participants. The need for an annual retinal screening examination in diabetes was known to 62 (16%) participants.
Poor knowledge of diabetes was found to be a significant predictor of poor glycemic control (P ≤ 0.05). However, there was no significant association between patients' diabetes knowledge scores and their medication adherence.
Discussion
We found inadequate levels of patient knowledge of the control and management of diabetes among Type 2 diabetes patients undergoing treatment in the OPDs of three government hospitals of Delhi. Patients with a low educational level, female patients, and patients on treatment for <5 years were particularly at risk of being deficient in their knowledge of diabetes and its management through appropriate self-care practices. A majority of the participants were unaware of the symptoms of hypoglycemia, and almost one-fourth of the study participants did not know their optimal glycemic control levels.
In the study by Gulabani et al., in a tertiary care center among diabetic patients, 48% of the participants were unaware of the symptoms of hypoglycemia while 37% of the patients lacked awareness of the lifetime treatment requirement in diabetes. [6] The study by Shah et al. in Gujarat also reported 38.2% patients believed that diabetes could be permanently cured. [7] The knowledge of diabetes among our study population was higher regarding these aspects. However, similar to the study by Gulabani et al., we also found significantly lower knowledge scores in women (P < 0.05).
Conclusion
Lack of awareness of diabetes' pathophysiology and self-care practices among diabetic patients is a major challenge in government health-care settings. Better knowledge of diabetes can improve glycemic control and treatment satisfaction in patients. Therefore, health programs should target improvement of diabetes health education levels in those with minimal or no formal education, especially women. In government health facilities with limited health providers and high patient load, innovative methods for imparting diabetic health education to patients which allow for reinforcement even with minimal human intervention should be explored.
Financial support and sponsorship: Nil.
Conflicts of interest
There are no conflicts of interest.
Clinical Challenges With Concentrated Insulins: Setting the Record Straight
The availability of insulins with concentrations greater than the standard 100 units/mL (U-100) concentration (adopted in the United States in 1973) provides additional options for managing diabetes, but these agents may be a source of confusion for many clinicians. Our awareness of such confusion has come about after a number of inquiries from health care providers (HCPs) to Lilly Diabetes, U.S. Medical Affairs, and inaccuracies in recent articles and published guidance. The purpose of this editorial is to bring attention to some of the more common and crucial issues and provide relevant background and clinical evidence to address and clarify misunderstandings and instruct HCPs on the safe and appropriate use of these agents.
Four concentrated insulins are available in the United States. Three are analog insulins, which have been approved in the past 2–3 years: insulin glargine 300 units/mL (IGlar300) (1), insulin degludec 200 units/mL (IDeg200) (2), and insulin lispro 200 units/mL (ILis200) (3). The fourth, human regular insulin 500 units/mL (U-500R) (4), has been commercially available since 1997.
These agents have diverse pharmacokinetic/pharmacodynamic (PK/PD) profiles and were developed to address different challenges of insulin therapy (5). Designing a basal insulin with stable, prolonged action was the rationale for IGlar300 (6). Insulin degludec was also developed as a longer-acting basal agent; the concentrated formulation (IDeg200) may benefit patients with higher insulin requirements (7). The more stable and protracted time-action profile for IGlar300 and insulin degludec (both 100 and 200 units/mL) supports once-daily dosing and may result in reduced hypoglycemia compared to insulin glargine 100 units/mL (IGlar100) (6–18). Rapid-acting prandial ILis200 is delivered in half the volume of the corresponding U-100 formulation, allowing a twofold increase in device capacity (19) and resulting in a longer-lasting pen. U-500R is a prandial/basal agent intended specifically for patients with severe insulin resistance …
1. What concentrated insulins are available, and why were they developed?
These agents have diverse pharmacokinetic/pharmacodynamic (PK/PD) profiles and were developed to address different challenges of insulin therapy (5). Designing a basal insulin with stable, prolonged action was the rationale for IGlar300 (6). Insulin degludec was also developed as a longer-acting basal agent; the concentrated formulation (IDeg200) may benefit patients with higher insulin requirements (7). The more stable and protracted time-action profile for IGlar300 and insulin degludec (both 100 and 200 units/mL) supports once-daily dosing and may result in reduced hypoglycemia compared to insulin glargine 100 units/mL (IGlar100) (6-18). Rapid-acting prandial ILis200 is delivered in half the volume of the corresponding U-100 formulation, allowing a twofold increase in device capacity (19) and resulting in a longer-lasting pen. U-500R is a prandial/basal agent intended specifically for patients with severe insulin resistance (i.e., total daily dose [TDD] >200 units) as insulin monotherapy (4,20).
2. What is the difference between concentrated and U-100 formulations?
Insulin concentration is defined by the number of insulin units per milliliter. The standard concentration, U-100, contains 100 units/mL. Likewise, U-200, U-300, and U-500 contain 200, 300, and 500 units/mL, respectively, thereby reducing the administered volume by two- to fivefold. Injection of lower volumes is a potential benefit of these agents and has been shown to reduce injection site discomfort for U-500R (21). Differences in concentration/volume can have both direct and indirect effects, such as PK/PD effects. These and other distinctions are discussed below.
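The concentration arithmetic is a one-liner; for example, U-500R delivers a given number of units in one-fifth the volume of U-100:

```python
# Volume delivered (mL) = dose (units) / concentration (units/mL).
def injection_volume_ml(dose_units, concentration_u_per_ml):
    return dose_units / concentration_u_per_ml

print(injection_volume_ml(100, 100))  # 1.0 mL with U-100
print(injection_volume_ml(100, 500))  # 0.2 mL with U-500R, one-fifth the volume
```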
3. How is bioequivalence defined, which concentrated insulins are bioequivalent, and what are the clinical implications?
Bioequivalence is defined by PK parameters per regulatory guidance. To establish bioequivalence, 90% confidence intervals (CIs) for area under the curve (AUC) and maximum concentration (Cmax) ratios of comparators must fall within 0.80-1.25 (22,23). The bioequivalent concentrated insulins with respect to their U-100 counterparts are IDeg200 and ILis200. Bioequivalence of these agents was demonstrated in euglycemic clamp studies, including a steady-state study for IDeg200 versus insulin degludec 100 units/mL (IDeg100) (8) and a single-dose study for ILis200 versus insulin lispro 100 units/mL (ILis100) (19). When changing from U-100 to U-200 bioequivalent insulins, that is, from IDeg100 to IDeg200 or from ILis100 to ILis200, the dose remains the same (i.e., unit-for-unit). Likewise, safety and efficacy are expected to be similar.
In contrast, IGlar300 and U-500R are not bioequivalent to their U-100 counterparts. Pharmacokinetic studies comparing IGlar300 to IGlar100 did not support bioequivalence in terms of either AUC or Cmax (6,11,14). Although PK results for U-500R versus human regular insulin 100 units/mL (U-100R) met the above criterion for AUC, confidence intervals for Cmax treatment ratios were outside the acceptable range for bioequivalence (24).
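The regulatory screen described above is easy to encode: both the AUC and Cmax ratio CIs must sit inside 0.80-1.25 (the example CIs below are made up, not trial data):

```python
# Both 90% CIs (AUC and Cmax test/reference ratios) must lie within 0.80-1.25.
def bioequivalent(auc_ci, cmax_ci, lo=0.80, hi=1.25):
    return all(lo <= low and high <= hi for low, high in (auc_ci, cmax_ci))

print(bioequivalent((0.95, 1.08), (0.90, 1.12)))  # True: criterion met
print(bioequivalent((0.92, 1.05), (0.78, 1.10)))  # False: Cmax CI breaches 0.80
```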
4. When is "a unit a unit?"
Unit equivalence, or equipotency, of two insulin preparations can potentially be affected by changes in concentration as a consequence of altered exposure/absorption. The insulin unit has been defined by various international standards, which historically were quantified according to glucose reduction in a fasting rabbit (25). Presently, euglycemic clamp studies characterizing PD time-action profiles are used to assess relative potency, which can be confirmed in Phase 3 efficacy studies. The three concentrated products demonstrating similar exposure between comparators (IDeg200/IDeg100, ILis200/ILis100, and U-500R/U-100R) also showed similar potency (overall glucose infusion) (8,19,24), thus supporting unit equivalence. Alternatively, lower potency for IGlar300 versus IGlar100 was demonstrated in single-dose (P <0.05) and steady-state comparisons (24-hour ratio IGlar300/IGlar100, 0.73 [90% CI 0.56-0.94]) and in randomized clinical trials (RCTs) (9-16). Importantly, it is incorrect to state that any of the concentrated insulins are more potent by virtue of their higher concentration. For example, U-500R is not a more potent form of regular human insulin; it is five times more concentrated (i.e., it delivers the same number of units in one-fifth the volume).
5. How do clinicians initiate/switch to concentrated insulin therapy and titrate doses?
Treatment with concentrated insulins should be individualized, considering overall patient needs and circumstances, as is generally recommended. The basal insulins IGlar300 and IDeg200 are initiated at a TDD of 0.2 units/kg and 10 units/day, respectively, in insulin-naive patients with type 2 diabetes (1,2,7,12). When switching to concentrated insulins in insulin-experienced patients, which may be the more common clinical scenario, the starting dose for IGlar300 and IDeg200 should be the same as the previous total daily basal insulin dose; exceptions include a 20% reduction for patients on twice-daily NPH insulin (when switching to IGlar300) and for pediatric patients >1 year of age (when switching to IDeg200) (1,2,9,10,13,15,16,18,26). It should be anticipated that upward titration may be needed when switching from IGlar100 to IGlar300 to maintain the same level of glucose control (1). Initiation of ILis200 follows the same guidance as that for ILis100, and switching between formulations uses a one-to-one conversion (3). Recommended dose transitions to the prandial/basal insulin U-500R are based on U-100 insulin TDDs: one-to-one dosing for patients with an A1C >8% and a 20% dose reduction for those with an A1C ≤8% (20,27). U-500R is usually administered as insulin monotherapy either two or three times daily, approximately 30 minutes before meals (20,27).
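For exposition only (not clinical guidance; the prescribing information governs actual dosing), the conversion rules quoted above can be written down directly; the `special_case_20pct` flag is my shorthand for the special cases named in the text:

```python
# Illustrative encoding of the quoted conversion guidance -- exposition only,
# not clinical advice; the prescribing information governs actual dosing.
def u500r_starting_tdd(u100_tdd, a1c_percent):
    """One-to-one if A1C > 8%; 20% reduction if A1C <= 8%."""
    return u100_tdd if a1c_percent > 8.0 else 0.8 * u100_tdd

def basal_switch_dose(prior_basal_tdd, special_case_20pct=False):
    """Same total daily basal dose, except a 20% reduction in the special
    cases named in the text (e.g., twice-daily NPH -> IGlar300)."""
    return 0.8 * prior_basal_tdd if special_case_20pct else prior_basal_tdd

print(u500r_starting_tdd(250, a1c_percent=7.5))  # 200.0 units/day
```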
Weekly dose titration of the concentrated basal agents (IGlar300 and IDeg200) is recommended, with a minimum of 3- to 4-day intervals (1,2,7,9,10,12,13,26). ILis200 is titrated identically to ILis100 (3). Titration of U-500R may be performed progressively from weekly to biweekly to triweekly and then extended according to clinical judgment (20).
6. What if a patient needs to switch back to U-100 insulin?
Switching from concentrated insulins to U-100 concentrations has not been broadly studied in phase 3 trials. Switching may be required because of dose reductions, insurance formulary changes, or hospital admissions where formularies may exclude concentrated insulins. In such cases, bioequivalent insulins would be dosed similarly (e.g., 1:1 dosing either between IDeg200 and IDeg100 or between ILis200 and ILis100). When changing from IGlar300 to IGlar100, a dose reduction of approximately 20% is recommended (6,28). However, for inpatients, depending on food intake restrictions, substantial dose reductions may be required for all insulins. For example, with U-500R, expert reviews and case series (29,30) suggest reducing TDDs from home to hospital by at least 50%, and further reduction may be needed for patients who are NPO ("nothing-by-mouth") status (30). Guidelines for switching between formulary insulins, such as concentrated and U-100 formulations, are greatly needed (31).
8. What has been done to reduce the risk of dosing errors with concentrated insulins?
All of the concentrated insulin products are available with dedicated pen devices designed to deliver an accurate dose for each insulin concentration without the need for dose conversion (32). For ease of recognition, the devices differ in appearance from their U-100 counterparts, where available, and some have dosing modifications (1-4). The IGlar300 pen is off-white with a green dose knob and "300 units/mL (U-300)" printed on the pen. It dials in 1-unit increments and delivers a maximum dose/injection of 80 units. The IDeg200 pen is blue with a dark green dose knob and "200 units/mL (U-200)" printed on the pen. This pen dials in 2-unit increments and delivers a maximum dose/injection of 160 units. The ILis200 pen is dark gray with a dark gray dose knob that has a burgundy ring on the end and "200 units per mL (U-200)" printed on the pen. It dials in 1-unit increments and delivers a maximum dose/injection of 60 units. U-500R is available in a pen or 20-mL vial (10,000 units). The U-500R pen is aqua with "U-500" displayed in green. This pen dials in 5-unit increments and delivers a maximum dose/injection of 300 units. A 0.5-mL U-500 insulin syringe with a green needle shield and "U-500" symbol (Becton, Dickinson and Co., Franklin Lakes, N.J.) is available for use with U-500R vials. It delivers 5 units per mark with a maximum dose/injection of 250 units and was approved to replace the use of nondedicated syringes (U-100 insulin and tuberculin [volumetric] syringes), which are no longer approved by the U.S. Food and Drug Administration for use with U-500R vials.
It is important to note that insulin should not be withdrawn from insulin pen devices with a syringe. ILis200 and U-500R have a yellow label on the pen cartridge that states "Do not transfer to a syringe; severe overdose can result." In addition to the instructions for use, each company has educational materials to help patients use these products as intended.
Are all concentrated insulins only for severely insulin-resistant patients with diabetes taking high insulin doses?
A common misconception is that all new concentrated insulins were developed to assist in the management of patients with severe insulin resistance who are taking high daily doses of insulin (>200 units/day). However, RCTs targeting severely insulin-resistant patients have only been performed with U-500R (20,27,33). Additionally, the only concentrated insulins that may reduce injection burden for such patients are those that allow higher maximum dosing via the delivery device (IDeg200, 160 units/injection [2], and the U-500R pen device, 300 units/injection, or the U-500R vial/BD U-500 insulin syringe, 250 units/injection [4,21]).
We hope these comments will provide a better understanding of how available concentrated insulins may be effectively and safely integrated into clinical practice. | 2018-04-03T01:45:35.502Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "587e82f71347c841bb425c2964d61f13373fd912",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc5687105?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "587e82f71347c841bb425c2964d61f13373fd912",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12612455 | pes2o/s2orc | v3-fos-license | The index of centralizers of elements of reductive Lie algebras
same as arXiv:0904.1778
Introduction
In this note k is an algebraically closed field of characteristic 0.
1.1. Let g be a finite dimensional Lie algebra over k and consider the coadjoint representation of g. By definition, the index of g is the minimal dimension of the stabilizers g x , x ∈ g * , for the coadjoint representation: ind g := min{dim g x ; x ∈ g * }.
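In cleaner notation, the definition just quoted can be displayed as follows; this is only a restatement, with the usual description of the coadjoint stabilizer:

```latex
\[
\operatorname{ind}\mathfrak{g} \;:=\; \min_{x \in \mathfrak{g}^{*}} \dim \mathfrak{g}^{x},
\qquad
\mathfrak{g}^{x} \;=\; \{\, y \in \mathfrak{g} \mid x([y,z]) = 0 \text{ for all } z \in \mathfrak{g} \,\}.
\]
```

As recalled below, for a reductive g the minimum is attained on regular elements and equals the rank of g.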
The definition of the index goes back to Dixmier [Di74]. It is a very important notion in representation theory and in invariant theory. By Rosenlicht's theorem [Ro63], generic orbits of an arbitrary action of a linear algebraic group on an irreducible algebraic variety are separated by rational invariants; in particular, if g is an algebraic Lie algebra, ind g equals the transcendence degree over k of the field k(g * ) g of g-invariant rational functions on g * . The index of a reductive algebra equals its rank. For an arbitrary Lie algebra, computing its index seems to be a wild problem. However, there are a large number of interesting results for several classes of nonreductive subalgebras of reductive Lie algebras. For instance, parabolic subalgebras and their relatives, such as nilpotent radicals and seaweeds, are considered in [Pa03a], [TY04], [J07]. The centralizers, or normalizers of centralizers, of elements form another interesting class of such subalgebras, [E85a], [Pa03a], [Mo06b]. The last topic is closely related to the theory of integrable Hamiltonian systems [Bol91]. Let us make this link precise.
From now on, g is supposed to be reductive. Denote by G the adjoint group of g. The symmetric algebra S(g) carries a natural Poisson structure. By the so-called argument shift method, for x in g * , we can construct a Poisson-commutative family F x in S(g) = k[g * ]; see [MF78] or Remark 1.3. It is generated by the derivatives of all orders in the direction x ∈ g * of all elements of the algebra S(g) g of g-invariants of S(g). Moreover, if G.x denotes the coadjoint orbit of x ∈ g * : Theorem 1.1 ([Bol91], Theorems 2.1 and 3.2). There is a Poisson-commutative family of polynomial functions on g * , constructed by the argument shift method, such that its restriction to G.x contains (1/2) dim(G.x) algebraically independent functions if and only if ind g x = ind g.
Motivated by the preceding result of Bolsinov, A.G. Elashvili formulated the conjecture: ind g x = rk g for all x ∈ g * , where rk g is the rank of g. Elashvili's conjecture also appears in the following problem: is the algebra S(g x ) g x of invariants in S(g x ) under the adjoint action a polynomial algebra? This question was formulated by A. Premet in [PPY07, Conjecture 0.1]. After that, O. Yakimova discovered a counterexample [Y07], but the question remains very interesting. As an example, under certain hypotheses, and under the condition that Elashvili's conjecture holds, the algebra of invariants S(g x ) g x is polynomial in rk g variables, [PPY07, Theorem 0.3].
In this paper, we give a proof of Elashvili's conjecture. Namely, we prove: Theorem 1.2. Let g be a reductive Lie algebra. Then ind g x = rk g for all x ∈ g * .
During the last decade, Elashvili's conjecture caught the attention of many invariant theorists [Pa03a], [Ch04], [Y06a], [De08]. To begin with, we describe some easy but useful reductions. Since the g-modules g and g * are isomorphic, it is equivalent to prove Theorem 1.2 for centralizers of elements of g. On the other hand, by a result due to E.B. Vinberg, pointed out in [Pa03a], the inequality ind g x ≥ rk g holds for all x ∈ g. So it only remains to prove the opposite inequality. Given x ∈ g, let x = x s + x n be its Jordan decomposition. Then g x = (g xs ) xn . The subalgebra g xs is reductive of rank rk g. Thus, the verification of Elashvili's conjecture reduces to the case of nilpotent elements. At last, one can clearly restrict oneself to the case of simple g.
We now review the main results obtained so far on Elashvili's conjecture. If x is regular, then g x is a commutative Lie algebra of dimension rk g. So, Elashvili's conjecture is obviously true in that case. Further, the conjecture is known for subregular nilpotent elements and nilpotent elements of height 2 and 3, [Pa03a], [Pa03b]. Recall that the height of a nilpotent element e is the maximal integer m such that (ad e) m ≠ 0. More recently, O. Yakimova proved the conjecture in the classical case [Y06a]. To validate the conjecture in the exceptional types, W.A. de Graaf used the computer programme GAP (cf. [De08]). Since there are many nilpotent orbits in the Lie algebras of exceptional type, it is difficult to present the results of such computations in a concise way. In 2004, the first author published a case-free proof of Elashvili's conjecture applicable to all simple Lie algebras; see [Ch04]. Unfortunately, the argument in [Ch04] has a gap in the final part of the proof, which was pointed out by L. Rybnikov.
Because of the importance of Elashvili's conjecture in invariant theory, it would be very desirable to find a conceptual proof applicable to all finite-dimensional simple Lie algebras. The proof we propose in this paper is new and almost general. More precisely, there remain 7 isolated cases: one nilpotent orbit in type E 7 and six nilpotent orbits in type E 8 have to be considered separately. For these 7 orbits, the use of GAP is unfortunately necessary.
1.2. Description of the paper. Let us briefly explain our approach. Denote by N(g) the nilpotent cone of g. As noticed previously, it suffices to prove ind g e = rk g for all e in N(g). If the equality holds for e, it does for all elements of G.e; we shortly say that G.e satisfies Elashvili's conjecture.
From a nilpotent orbit O l of a reductive factor l of a parabolic subalgebra of g, we can construct a nilpotent orbit of g having the same codimension in g as O l in l and having other remarkable properties. The nilpotent orbits obtained in this way are called induced; the other ones are called rigid. Richardson orbits are the nilpotent orbits induced from a zero orbit. We refer the reader to Subsection 2.3 for more details about this topic. Using Bolsinov's criterion of Theorem 1.1, we first prove Theorem 1.2 for all Richardson nilpotent orbits and then, by an induction process, we show that the conjecture reduces to the case of rigid nilpotent orbits. To deal with rigid nilpotent orbits, we use methods developed in [Ch04] by the first author, and resumed in [Mo06a] by the second author, based on nice properties of Slodowy slices of nilpotent orbits.
In more detail, the paper is organized as follows. We state in Section 2 the necessary preliminary results. In particular, we investigate in Subsection 2.2 extensions of Bolsinov's criterion and we establish an important result (Theorem 2.7) which will be used repeatedly in the sequel. We prove in Section 3 the conjecture for all Richardson nilpotent orbits. Starting with Section 4, we develop a reduction method: we introduce in Section 4 a property (P 1 ), given by Definition 4.3, which turns out to be equivalent to Elashvili's conjecture. By using this equivalence and the results of Section 2, we show in Section 4 that Elashvili's conjecture reduces to the case of rigid nilpotent orbits. From Section 5 on, we handle the rigid nilpotent orbits: we introduce and study in Section 5 a property (P 2 ), given by Definition 5.2. Then, in Section 6, we are able to deal with almost all rigid nilpotent orbits. The remaining cases are dealt with separately, by a different approach, still in Section 6.
Notations.
• If E is a subset of a vector space V , we denote by span(E) the vector subspace of V generated by E. The grassmanian of all d-dimensional subspaces of V is denoted by Gr d (V ).By cone of V , we mean a subset of V invariant under the natural action of k * := k \ {0} and by a bicone of V × V we mean a subset of V × V invariant under the natural action of k * × k * on V × V .
• From now on, we assume that g is semisimple of rank ℓ and we denote by ⟨., .⟩ the Killing form of g. We identify g to g * through ⟨., .⟩. Unless otherwise specified, the notion of orthogonality refers to the bilinear form ⟨., .⟩.
• Denote by S(g) g the algebra of g-invariant elements of S(g). Let f 1 , . . ., f ℓ be homogeneous generators of S(g) g of degrees d 1 , . . ., d ℓ respectively. We choose the polynomials f 1 , . . ., f ℓ so that d 1 ≤ · · · ≤ d ℓ . For i = 1, . . ., ℓ and (x, y) ∈ g × g, we may consider a shift of f i in the direction y: f i (x + ty), where t ∈ k. Expanding f i (x + ty) as a polynomial in t, we obtain (1) f i (x + ty) = f (0) i (x, y) + f (1) i (x, y) t + · · · + f (d i ) i (x, y) t d i , where y → (m!)f (m) i (x, y) is the differential at x of f i at the order m in the direction y. The elements f (m) i as defined by (1) are invariant elements of S(g) ⊗ k S(g) under the diagonal action of G in g × g. Note that f (0) i (x, y) = f i (x) and f (d i ) i (x, y) = f i (y). The family F x consists of the elements f (m) i (x, .) for i = 1, . . ., ℓ and 1 ≤ m ≤ d i ; one says that the family F x is constructed by the argument shift method, as displayed below.
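In standard notation, the shift construction just described reads as follows; this display merely restates (1) and the definition of F x in the notation above:

```latex
\[
f_i(x + ty) \;=\; \sum_{m=0}^{d_i} f_i^{(m)}(x,y)\, t^{m},
\qquad
\mathcal{F}_x \;=\; \bigl\{\, f_i^{(m)}(x,\cdot) \;:\; 1 \le i \le \ell,\ 1 \le m \le d_i \,\bigr\}.
\]
```

Since F x is Poisson-commutative, its restriction to a coadjoint orbit is an isotropic family, which is why (1/2) dim G.x is the relevant threshold in Theorem 1.1.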
• Let i ∈ {1, . . ., ℓ}. For x in g, we denote by ϕ i (x) the element of g satisfying (df i ) x (y) = f (1) i (x, y) = ⟨ϕ i (x), y⟩, for all y in g. Thereby, ϕ i is an invariant element of S(g) ⊗ k g under the canonical action of G. We denote by ϕ (m) i , for 0 ≤ m ≤ d i − 1, the elements of S(g) ⊗ k S(g) ⊗ k g defined by the equality: (2) ϕ i (x + ty) = ϕ (0) i (x, y) + ϕ (1) i (x, y) t + · · · + ϕ (d i −1) i (x, y) t d i −1 . • For x ∈ g, we denote by g x = {y ∈ g | [y, x] = 0} the centralizer of x in g and by z(g x ) the center of g x . The set of regular elements of g is g reg := {x ∈ g | dim g x = ℓ}, and we denote by g reg,ss the set of regular semisimple elements of g. Both g reg and g reg,ss are G-invariant open dense subsets of g.
• The nilpotent cone of g is N(g). As a rule, for e ∈ N(g), we choose an sl 2 -triple (e, h, f ) in g given by the Jacobson-Morozov theorem [CMa93, Theorem 3.3.1]. In particular, it satisfies the equalities: [h, e] = 2e, [h, f ] = −2f, [e, f ] = h. The action of ad h on g induces a Z-grading: g = ⊕ i∈Z g(i), where g(i) = {x ∈ g | [h, x] = ix}. Recall that e, or G.e, is said to be even if g(i) = 0 for odd i. Note that e ∈ g(2), f ∈ g(−2) and that g e , z(g e ) and g f are all ad h-stable.
• All topological terms refer to the Zariski topology.If Y is a subset of a topological space X, we denote by Y the closure of Y in X.
1.4.Acknowledgments.We would like to thank O. Yakimova for her interest and useful discussions; and more particularly for bringing Bolsinov's paper to our attention.We also thank A.G. Elashvili for suggesting Lawther-Testerman's paper [LT08] about the centers of centralizers of nilpotent elements.
Preliminary results
We start in this section by reviewing some facts about the differentials of generators of S(g) g .Then, the goal of Subsection 2.2 is Theorem 2.7.We collect in Subsection 2.3 basics facts about induced nilpotent orbits.
We turn now to the elements ϕ (m) i , for i = 1, . . ., ℓ and 0 ≤ m ≤ d i − 1, defined in Subsection 1.3 by (2). Recall that d i is the degree of the homogeneous polynomial f i , for i = 1, . . ., ℓ. The integers d 1 − 1, . . ., d ℓ − 1 are thus the exponents of g. By a classical result [Bou02, Ch. V, §5, Proposition 3], we have d 1 + · · · + d ℓ = b g , where b g is the dimension of the Borel subalgebras of g. For (x, y) in g × g, we set: V x,y := span{ϕ (m) i (x, y) ; 1 ≤ i ≤ ℓ, 0 ≤ m ≤ d i − 1}. The subspaces V x,y will play a central role throughout the note.
Remark 2.2. (1) For (x, y) ∈ g × g, the dimension of V x,y is at most b g since d 1 + · · · + d ℓ = b g . Moreover, for all (x, y) in a nonempty open subset of g × g, the equality holds [Bol91]. Actually, in this note, we do not need this observation.
(2) By Lemma 2.1(ii), if x is regular, then g x is contained in V x,y for all y ∈ g.In particular, if so, dim [x, V x,y ] = dim V x,y − ℓ.
The subspaces V x,y were introduced and studied by Bolsinov in [Bol91], motivated by the maximality of Poisson-commutative families in S(g). These subspaces have been recently exploited in [PY08] and [CMo08]. The following results are mostly due to Bolsinov, [Bol91]. We refer to [PY08] for a more recent account of this topic.
(i)[PY08, Lemma A1] The subspace V x,y of g is the sum of the subspaces g x+ty where t runs through any nonempty open subset of k.
(ii) [PY08, Lemma A4] The subspace g y + V x,y is a totally isotropic subspace of g with respect to the Kirillov form. Let σ i , for i = 1, . . ., ℓ, and σ be the maps σ i : g × g → k d i +1 , (x, y) → (f (0) i (x, y), . . ., f (d i ) i (x, y)), and σ := (σ 1 , . . ., σ ℓ ) : g × g → k d , where d := (d 1 + 1) + · · · + (d ℓ + 1), respectively, and denote by σ ′ (x, y) and σ ′ i (x, y) the tangent maps at (x, y) of σ and σ i respectively. Then σ ′ i (x, y) is given by the differentials of the f (m) i 's at (x, y) and σ ′ (x, y) is given by the elements σ ′ i (x, y).
Proof.(i) The verifications are easy and left to the reader.
(iii) Suppose that x is regular and suppose that σ ′ (x, y)(v, w ′ ) = 0 for some w ′ ∈ g.Then by (i), v is orthogonal to the elements ϕ 1 (x), . . .,ϕ ℓ (x).So by Lemma 2.1(ii), v is orthogonal to g x .Since g x is the orthogonal complement of [x, g] in g, we deduce that v lies in [x, g].Conversely, since σ(x, y) = σ(g(x), g(y)) for all g in G, the element ([u, x], [u, y]) belongs to the kernel of σ ′ (x, y) for all u ∈ g.So, the converse implication follows.
2.2. On Bolsinov's criterion. Let a be in g and denote by π the map g × G.a → g × k d , (x, y) → (x, σ(x, y)). Remark 2.5. Recall that the family (F x ) x∈g constructed by the argument shift method consists of all elements f (m) i (x, .) for i = 1, . . ., ℓ, 1 ≤ m ≤ d i , see Remark 1.3. By definition of the morphism π, there is a family constructed by the argument shift method whose restriction to G.a contains (1/2) dim G.a algebraically independent functions if and only if π has a fiber of dimension (1/2) dim G.a.
In view of Theorem 1.1 and the above remark, we now concentrate on the fibers of π.For (x, y) ∈ g × G.a, denote by F x,y the fiber of π at π(x, y): Lemma 2.6.Let (x, y) be in g × G.a.
(i) The irreducible components of F x,y have dimension at least 1 2 dim G.a. (ii) The fiber F x,y has dimension 1 2 dim G.a if and only if any irreducible component of F x,y contains an element (x, y ′ ) such that (g y ′ + V x,y ′ ) ⊥ has dimension 1 2 dim G.a. Proof.We prove (i) and (ii) all together.The tangent space T x,y ′ (F x,y ) of F x,y at (x, y ′ ) in F x,y identifies to the subspace of elements w of [y ′ , g] such that σ ′ (x, y ′ )(0, w) = 0. Hence, by Lemma 2.4(ii), But by Lemma 2.3(ii), (g y ′ + V x,y ′ ) ⊥ has dimension at least 1 2 dim G.a; so does dim T x,y ′ (F x,y ).This proves (i).Moreover, the equality holds if and only if (g y ′ + V x,y ′ ) ⊥ has dimension 1 2 dim G.a, whence the statement (ii).
Let p be a proper parabolic subalgebra of g and let l be a reductive factor of p. We denote by p u the nilpotent radical of p. Denote by L the connected closed subgroup of G whose Lie algebra is ad l and denote by P the normalizer of p in G. We shall say that e ∈ N(g) is an induced (respectively rigid, Richardson) nilpotent element of g if the G-orbit of e is an induced (respectively rigid, Richardson) nilpotent orbit of g. Proposition 2.9 ([CMa93], Proposition 7.1.4). If l 1 and l 2 are two Levi subalgebras of g with l 1 ⊆ l 2 , then Ind g l 2 (Ind l 2 l 1 (O l 1 )) = Ind g l 1 (O l 1 ). In other words, Proposition 2.9 says that the induction of orbits is transitive.
Remark 2.10. As a consequence of Proposition 2.9, a nilpotent orbit is always induced, though not necessarily in a unique way, from a rigid nilpotent orbit.
We now use classical results about polarizable elements and sheets to obtain interesting properties of Richardson elements, useful for Section 4. All the results we need to state Theorem 2.11 are contained in [TY05, §33 and §39]. For any x in g, we denote by C(x) the G-invariant cone generated by x and we denote by C(x) the closure of C(x) in g.
Theorem 2.11. Let e ∈ N(g) be a Richardson element. There is a semisimple element s of g satisfying the following two conditions: (1) dim G.e = dim G.s; (2) the subset of nilpotent elements contained in C(s) is the closure of G.e. Since s is a nonzero semisimple element, there exists i in {1, . . ., ℓ} such that f i (s) ≠ 0. For any x in G.s, f j (x) = f j (s) for j = 1, . . ., ℓ. Hence C(s) is contained in the nullvariety in g of the functions: Recall that N(g) is the nullvariety in g of the f j 's. As a consequence, the set In addition, it is a finite union of nilpotent orbits. Hence the subset of nilpotent elements belonging to C(s) is exactly the closure of G.e. Indeed, a nilpotent orbit contained in C(s) different from G.e has dimension strictly smaller than dim G.e, as we saw before. In conclusion, s satisfies the desired conditions (1) and (2).
Remark 2.12.The above proof shows that, for semisimple s as in Theorem 2.11,
Proof of Theorem 1.2 for Richardson nilpotent orbits
Let e be a Richardson nilpotent element of g. Our goal is to prove: ind g e = ℓ. Let s be a semisimple element of g verifying the two conditions of Theorem 2.11 and let x * be a regular element of g such that f j (s) = f j (x * ) for all j = 1, . . ., ℓ. Let i ∈ {1, . . ., ℓ} be such that f i (s) ≠ 0.
is the union of the nilpotent cone and of the cone generated by the closure of G.x * . Proof.
Then τ is a proper morphism and its fiber are finite.So τ is surjective since D(f i ) * and k * × G.x * have the same dimension, whence the statement.
As an abbreviation, we set: For j = 1, . . ., ℓ, recall that σ j is the map: Let B j be the nullvariety of σ j in g × g and let B be the union of B 1 , . . .,B ℓ ; it is a bicone of g × g.Denote by ρ and τ the canonical maps: Let σ 0 be the restriction to (g × g)\B of σ; it has values in k d× .As σ j (sx, sy) = s d j σ j (x, y) for all (x, y) ∈ g × g and j = 1, . . ., ℓ, the map τ •σ 0 factors through ρ.Denote by σ 0 the map from ρ(g × g\B) to P d making the following diagram commutative: and let Γ be the graph of the restriction to ρ(g × C(s)\B) of σ 0 .
Lemma 3.2.The set Γ is a closed subset of P(g × g) × P d .
Proof.Let Γ be the inverse image of Γ by the map ρ × τ .Then Γ is the intersection of the graph of σ and (g . But for all x in g such that f j (x) = 0 for any j, σ(x, e) belongs to k d× .Thus, the open subset U = Z ∩ k d× of Z is convenient and the proposition follows.
Theorem 3.4.Let e be a Richardson nilpotent element of g.Then the index of g e is equal to ℓ.
Proof. Let s be as in Theorem 2.11. Since s is semisimple, g s is a reductive Lie algebra of rank ℓ. So the index of g s is equal to ℓ. Besides, by Theorem 2.7, (1)⇒(6), applied to s, Hence for all z in a dense subset of σ(g × G.s), the fiber of the restriction of σ to g × G.s at z has minimal dimension Recall that Z is the closure of σ(g × C(s)) in k d . We deduce from the above equality that Z has dimension By Proposition 3.3, there exists an open subset U of Z contained in σ(g × C(s)) having a nonempty intersection with σ(g × G.e). Let i be in {1, . . ., ℓ} such that f i (s) ≠ 0. For z ∈ k d , we write z = (z i,j ) 1≤i≤ℓ, 0≤j≤d i for its coordinates. Let V i be the nullvariety in U of the coordinate z i,d i .
Then V i is not empty by the choice of U .Since U is irreducible and since z i,d i is not identically zero on U , V i is equidimensional of dimension 1 2 dim G.e + ℓ.By Theorem 2.11 and Remark 2.12, the nullvariety of is an open subset of g × G.e.So σ(g × G.e) has dimension 1 2 dim G.e + ℓ.Then by Theorem 2.7, (6)⇒(1), the index of g e is equal to ℓ.
The property (P 1 ) and reduction to rigid nilpotent orbits
Let p be a proper parabolic subalgebra of g, let l be a Levi factor of p and denote by p u the nilpotent radical of p.We denote by L the connected subgroup of G whose Lie algebra is ad l.We consider a nilpotent orbit L. e of l and we choose an element e of Ind g l (L.e) which belongs to ẽ + p u .Note that rkl = ℓ and that dim g e = dim l e e by Theorem 2.8.It will be shown in this Section that under the assumption indl e e = ℓ, the index of g e is equal to ℓ (Theorem 4.10).This will enable us to reduce the proof of Elashvili's conjecture to the case of rigid nilpotent orbits.4.1.A preliminary result.Let f 1 , . . ., f ℓ be homogeneous generators of S(l) l of degrees d 1 , . . ., d ℓ respectively, with As a rule, we will write down with a tilde all relative notions to the f i 's.For example, we denote by σ the map where b l is the dimension of Borel subalgebras of l.Recall that the morphism σ was defined in Subsection 2.1.For (x, y) ∈ l × l, denote by Γ x,y the fiber at σ(x, y) of σ and denote by Γ x,y the fiber at σ(x, y) of the restriction of σ to l × l.Lemma 4.1.Let (x, y) ∈ l × l.The fiber Γ x,y is a union of irreducible components of Γ x,y and σ is constant on each irreducible component of Γ x,y .
Proof.Let d ≥ max(d ℓ , d ℓ ) and let us choose t 1 , . . .,t d pairwise different elements in k.Denote by κ and κ the maps from l to k ℓ , respectively.For j = 1, . . ., d, we denote by τ j the map l × l → l, (x ′ , y ′ ) → x ′ + t j y ′ and we denote by χ and χ the maps from l × l to k dℓ , j for all (x ′ , y ′ ) in l × l and all j = 1, . . ., d. Conversely, if F is the fiber of χ at χ(x, y), then for all (x ′ , y ′ ) ∈ F , all j = 1, . . ., d and all i = 1, . . ., ℓ, we have: So, for i = 1, . . ., ℓ, the elements f (m) i (x ′ , y ′ ) are solutions of a linear system whose determinant is a Van der Monde determinant defined by some elements among the t j 's.As these elements are pairwise different, this system has a unique solution.As a consequence, σ is constant on F .In conclusion, F = Γ x,y .The same argument shows that Γ x,y is the fiber of χ at χ(x, y).
The subalgebra S(l) l is a finite extension of the subalgebra generated by the restrictions to l of f 1 , . . .,f ℓ since l contains a Cartan subalgebra of g.So there exists a finite morphism ρ : Then ρ d is a finite morphism and χ = ρ d • χ.As a result, Γ x,y is contained in Γ x,y .Moreover, χ is constant on each irreducible component of Γ x,y since χ(Y ) is finite for any irreducible component Y of Γ x,y .This completes the proof of the lemma.Proposition 4.2.Assume that indl e e = ℓ.Then σ(p × {e}) has dimension 1 2 dim G.e + ℓ − dim p u .Proof.By Theorem 2.7, (1)⇒(6), applied to l and e, we deduce from our hypothesis that σ(l×{ e}) has dimension 1 2 dim L. e + ℓ.On the other hand, Lemma 4.1 tells us that the minimal dimension of fibers of the restriction of σ to l × { e} is the minimal dimension of fibers of the restriction of σ to l × { e}.Hence σ(l × { e}) has dimension 1 2 dim L. e + ℓ as well.For (x, y) ∈ l×l and (v, w) ∈ p u ×p u , we have σ(x+v, y+w) = σ(x, y); so σ(p×{e}) = σ(l×{ e}).In conclusion σ(p × {e}) has dimension 1 2 L. e + ℓ.Now, since dim l e e = dim g e , we have and the proposition follows.
4.2.The property (P 1 ).Let p − be the opposite parabolic subalgebra to p. Denote by p u,− the nilpotent radical of p − , by P − the normalizer of p − in G and by P u,− the unipotent radical of P − .We introduce now a property (P 1 ).It will turn out to be equivalent to the equality indg e = ℓ under the assumption ind l e e = ℓ (Corollary 4.7).
Definition 4.3.We say that e has Property (P 1 ) if for all (g, x) in a nonempty open subset of P u,− × p, we have [g(x), V g(x),e ] ⊥ ∩ p u,− = 0.
Remark 4.4.The image of P u,− × p by the map (g, x) → g(x) is dense in g.Hence, e has Property (P 1 ) if and only if for all y in a nonempty open subset of g, we have [y, V y,e ] ⊥ ∩ p u,− = 0.
We start with a preliminary result.For (g, x) ∈ P u,− × p, set: Note that Σ g,x is a closed subset of P u,− containing g.We denote by T g (Σ g,x ) the tangent space of Σ g,x at g.
Proof.Let v be in p u,− .By Lemma 2.4, (ad v)g belongs to T g (Σ x ) if and only if [v, g(x)] is orthogonal to V g(x),e whence the lemma.
Proposition 4.6.If e has Property (P 1 ) then Moreover, if the following equality holds: then e has Property (P 1 ).
By Lemma 2.4, for v in g, v belongs to the tangent space at x of S g,x if and only if g(v) is orthogonal to V g(x),e or, equivalently, if and only if v is orthogonal to V x,g −1 (e) .Since S 1,x is the fiber at σ(x, e) of the map x → σ(x, e) from p to σ(p × {e} we deduce that for all x in a nonempty open subset of p. So, for all (g, x) in a dense open subset of P u,− × p, we have dim (V ⊥ x,g −1 (e) ∩ p) ≤ dim p − dim σ(p × {e}).Hence, there exists a dense open subset U of P u,− × p satisfying the following condition: Suppose now that e has Property (P 1 ) and show dim ψ(P u,− × p) ≥ dim σ(p × {e}) + dim p u .Since e has Property (P 1 ), we can assume furthermore that U satisfies the following condition: (C 2 ) for all (g, x) ∈ U , we have [g(x), V g(x),e ] ⊥ ∩ p u,− = 0.
Let F be an irreducible component of a fiber of the restriction of ψ to U and let τ be the restriction to F of the projection map P u,− × p → p.By (C 2 ) and Lemma 4.5, the fibers of τ are finite.So there exists a dense open subset O of τ (F ) such that the restriction of τ to τ −1 (O) is an étale covering.Therefore, for x ∈ O and for g ∈ P u,− such that (g, x) ∈ τ −1 (O), {g} × S g,x is a neighborhood of (g, x) in F .Hence for all (g, x) ∈ F , F has dimension dim S g,x .Then, by (C 1 ), we get dim ψ(U ) = ψ(P u,− × p) ≥ dim σ(p × {e}) + dim p u .
Suppose now that the image of ψ has dimension dim σ(p × {e}) + p u and prove that e has Property (P 1 ).In this case, we can assume that the dense open subset U of P u,− × p satisfies (C 1 ) and also the following condition: (C 3 ) for all (g, x) in U , the fiber at ψ(g, x) of the restriction ψ 0 of ψ to U has dimension dim p − dim σ(p × {e}).
As a result, for all (g, x) ∈ U , the subset Σ g,x is finite.Let (g, x) be in U and let v be in p u,− such that v is orthogonal to [g(x), V g(x),e ].Then [v, g(x)] is orthogonal to V g(x),e .So by Lemma 4.5, (ad v)g belongs to the tangent space at g of Σ g,x .Hence v = 0.In other words, e has Property (P 1 ).
The following corollary yields the expected equivalence: Corollary 4.7. Assume that ind l ẽ = ℓ. Then, e has Property (P 1 ) if and only if ind g e = ℓ.
Proof. By Proposition 4.2, σ(p × {e}) has dimension (1/2) dim G.e + ℓ − dim p u and, by Lemma 2.6(i), dim σ(g × {e}) ≤ (1/2) dim G.e + ℓ. Hence by Proposition 4.6, σ(g × {e}) has dimension (1/2) dim G.e + ℓ if and only if e has Property (P 1 ). But Theorem 2.7, (1)⇔(6), says that ind g e = ℓ if and only if σ(g × {e}) has dimension (1/2) dim G.e + ℓ, whence the corollary.
4.3. Reduction to rigid nilpotent orbits. The aim of this section is Theorem 4.10. Let p, p u , l, ẽ, e be as in Subsections 4.1 and 4.2. Recall that e is a nilpotent element which belongs to ẽ + p u . For x in p and for y in N(l), we set: Since the map (x, y) → V x,y is G-equivariant, s x only depends on G.x. As an abbreviation, we denote by Ω y the intersection Ind g l (L.y) ∩ (L.y + p u ). By Theorem 2.8, recall that Ω y is a P-orbit.
(i) If x ∈ Ω y has Property (P 1 ), then so does any element of Ω y .
(ii) For all (z, x) in a nonempty open subset of g reg × Ω y , [z, V z,x ] ⊥ has dimension s x .
(ii) For x in Ω y , set: . Then there are s ′ x elements η 1 , . . .,η s ′ x among the elements ϕ (m) i 's, such that η 1 (z, x), . . .,η s ′ x (z, x) is a basis of V z,x .By continuity, for all (z ′ , x ′ ) in an open subset U of g reg × Ω y containing (z, x), the elements η 1 (z ′ , x ′ ), . . .,η s ′ x (z ′ , x ′ ) are linearly independent.Hence by maximality of s Let y be in N(l).The integer s x does not depend on the choice of an element x in Ω y .So one is justified to set s y := s x for any x in Ω y .Lemma 4.9.Let y, y ′ ∈ N(l) such that L.y is contained in the closure of L.y ′ in l.If x ∈ Ω y has Property (P 1 ) then any element of Ω y ′ has Property (P 1 ) too.
Proof.Let x be in Ω y and assume that x has Property (P 1 ).Suppose that there exists x ′ ∈ Ω y ′ such that x ′ has not Property (P 1 ).We are going to prove that, for all z in a nonempty open subset of g reg , there exists a subspace E of g of dimension s x ′ , contained in [z, V z,x ] ⊥ and such that E ∩ p u,− = 0. Once proved, this will lead us to a contradiction.Indeed, by Remark 4.4, Property (P 1 ) for x means that, for any z in a nonempty open subset of g reg , we have [z, V y,z ] ⊥ ∩ p u,− = 0.
So, let us prove the above statement.From our assumption and by Lemma 4.8(i), no element of Ω y ′ has Property (P 1 ).Remind that s y ′ = s x ′ for all x ′ ∈ Ω y ′ by definition of s y ′ .By Lemma 4.8(ii), for all (z, x ′ ) in a nonempty open subset U of Let z be in the projection of U to g reg and set: Then U z is a dense open subset of Ω y ′ .Denote by T the subset of elements (E, D) of Gr s y ′ (g) × P(p u,− ) such that D is contained in E. Then T is a projective variety.Set: Denote by T 0 the closure of T 0 in g × T .Since T is a projective variety, the projection of T 0 to g is closed.In particular, it contains the closure of Ω y ′ in g.Since L.y is contained in the closure of L.y ′ in l, there exists (E, D) in T such that (x, E, D) belongs to T 0 .For any i = 1, . . ., ℓ, 0 ≤ m ≤ d i − 1, and for all x ′ in U z , ϕ is the subspace generated by the ϕ (m) i (z, x)'s.So, E is convenient since its intersection with p u,− contains the line D and since the projection of U to g reg is an open subset of g reg .
We are now ready to prove the main result of this section: Theorem 4.10. Assume that ind a x = rk a for every reductive subalgebra a strictly contained in g and for all x in a. Then for every induced nilpotent orbit O g in g and for all e in O g , ind g e = ℓ.
Proof. Let l be a Levi subalgebra of g and let O g be the orbit induced from a nilpotent orbit L.ẽ of l. From our assumption, ind l ẽ = ℓ. Choose now e in O g ∩ p; we wish to prove ind g e = ℓ. Let O 0 be the nilpotent orbit of g induced from the zero orbit of l. It is a Richardson orbit by definition. Therefore by Theorem 3.4, for all x in O 0 , ind g x = ℓ. So by Corollary 4.7, all elements of Ω 0 have Property (P 1 ). Since 0 belongs to the closure of L.ẽ in l, any element of Ω ẽ has Property (P 1 ) by Lemma 4.9. So by Corollary 4.7, ind g e = ℓ since e belongs to Ω ẽ by the choice of e.
From this point on, our goal is to prove Theorem 1.2 for rigid nilpotent elements; Theorem 4.10 tells us that this is enough to complete the proof.
The Slodowy slice and the property (P 2 )
In this section, we introduce a property (P 2 ) in Definition 5.2 and we prove that e ∈ N(g) has Property (P 2 ) if and only if ind g e = ℓ. Then, for most rigid nilpotent orbits of g, we will show in the next section that they do have Property (P 2 ). 5.1. Blowing up of S. Let e be a nilpotent element of g and consider an sl 2 -triple (e, h, f ) containing e as in Subsection 1.3. The Slodowy slice is the affine subspace S := e + g f of g, which is a transverse variety to the adjoint orbit G.e. Denote by B e (S) the blowing up of S centered at e and let p : B e (S) → S be the canonical morphism. The variety S is smooth and p −1 (e) is a smooth, irreducible hypersurface of B e (S). The use of the blowing-up B e (S) for the computation of the index was initiated by the first author in [Ch04] and resumed by the second author in [Mo06a]. Here, we use this technique again to study the index of g e . We first describe the main tools extracted from [Ch04] that we need.
For is a nonempty set.In addition, α(x) ⊂ g p(x) for all x ∈ Ω. Definition 5.2.We say that e has Property (P 2 ) if z(g e ) ⊂ α(x) for all x in Ω ∩ p −1 (e).
Remark 5.3.Suppose that e is regular.Then g e is a commutative algebra, i.e. z(g e ) = g e .If x ∈ Ω ∩ p −1 (e), then α(x) = g e since p(x) = e is regular in this case.On the other hand, ind g e = dim g e = ℓ since e is regular.So e has Property (P 2 ) and ind g e = ℓ.
We aim to prove that e has Property (P 2 ) if and only if indg e = ℓ.As a consequence of Remark 5.3, we will assume that e is a nonregular nilpotent element of g.
5.2. On the property (P 2 ). This subsection aims to show: Property (P 2 ) holds for e if and only if ind g e = ℓ. We start by stating that if (P 2 ) holds, then so does Assertion (B) of Theorem 5.1.
Let L g be the S(g)-submodule of elements ϕ ∈ S(g) ⊗ k g satisfying [ϕ(x), x] = 0 for all x in g. It is known that L g is a free module with basis ϕ 1 , . . ., ϕ ℓ , cf. [Di79]. We investigate an analogous property for the Slodowy slice S = e + g f . We denote by S reg the intersection of S and g reg . As e is nonregular, the set S \ S reg contains e.
Lemma 5.4.The set S \ S reg has codimension 3 in S and each irreducible component of S \ S reg contains e.
Proof. Let us consider the morphism G × S → g, (g, x) → g(x).
By a result of Slodowy [Sl80], this morphism is smooth. So its fibers are equidimensional of dimension dim g f . In addition, by [V72], g \ g reg is a G-invariant equidimensional closed subset of g of codimension 3. Hence S \ S reg is an equidimensional closed subset of S of codimension 3.
Denoting by t → g(t) the one-parameter subgroup of G generated by ad h, S and S \ S reg are stable under the action of t −2 g(t) for all t in k * . Furthermore, for all x in S, t −2 g(t)(x) goes to e when t goes to ∞, whence the lemma.
Denote by k[S] the algebra of regular functions on S and denote by L S the k[S]-submodule of elements ϕ of k[S] ⊗ k g satisfying [ϕ(x), x] = 0 for all x in S.
Lemma 5.5. The module L S is a free module with basis ϕ 1 | S , . . ., ϕ ℓ | S , where ϕ i | S is the restriction to S of ϕ i for i = 1, . . ., ℓ.
The following proposition accounts for an important step to interpret Assertion (B) of Theorem 5.1: Proof.Since g x is the orthogonal complement of [x, g] in g, our hypothesis says that ϕ(x) is orthogonal to g x for all x in a nonempty open subset S ′ of S. The intersection S ′ ∩ S reg is not empty; so by Lemma 2.1(ii), ϕ(x), ϕ i | S (x) = 0 for all i = 1, . . ., ℓ and for all x ∈ S ′ ∩ S reg .Therefore, by continuity, ϕ(x), ϕ i | S (x) = 0 for all i = 1, . . ., ℓ and all x ∈ S. Hence ϕ(x) ∈ [x, g] for all x ∈ S reg by Lemma 2.1(ii) again.Consequently by Lemma 5.4, Lemma 5.5 and the proof of the main theorem of [Di79], there exists an element ψ ∈ k[S] ⊗ k g which satisfies the condition of the proposition.
Let u 1 , . . .,u m be a basis of g f and let u * 1 , . . ., u * m be the corresponding coordinate system of S = e + g f .There is an affine open subset Y ⊂ B e (S) with Y ∩ p −1 (e) = ∅ such that k[Y ] is the set of linear combinations of monomials in (u * 1 ) −1 , u * 1 , . . ., u * m whose total degree is nonnegative.In particular, we have a global coordinates system u * 1 , v * 2 , . . ., v * m on Y satisfying the relations: Note that, for x ∈ Y , we so have: So, the image of Y by p is the union of {e} and the complementary in S of the nullvariety of u * 1 .Let Y ′ be an affine open subset of Y contained in Ω and having a nonempty intersection with p −1 (e).Denote by L Y ′ the set of regular maps ϕ from Y ′ to g satisfying [ϕ(x), p(x)] = 0 for all x ∈ Y ′ .Lemma 5.7.Suppose that e has Property (P 2 ).For each z ∈ z(g e ), there exists Proof.Let z be in z(g e ).Since Y ′ ⊂ Ω, for each y ∈ Y ′ , there exists an affine open subset U y of Y ′ containing y and regular maps ν 1 , . . .,ν ℓ from U y to g such that ν 1 (x), . . .,ν ℓ (x) is a basis of α(x) for all x ∈ U y .Let y be in Y ′ .We consider two cases: (1) Suppose p(y) = e.Since e has Property (P 2 ), there exist regular functions a 1 , . . .,a ℓ on U y satisfying for all x ∈ U y ∩ p −1 (e).The intersection U y ∩ p −1 (e) is the set of zeroes of u * 1 in U y .So there exists a regular map ψ from U y to g which satisfies the equality: ] = 0 for all x ∈ U y since α(x) is contained in g p(x) for all x ∈ Ω.
(2) Suppose p(y) = e.Then we can assume that U y ∩ p −1 (e) = ∅ and the map ψ = (u * 1 ) −1 z satisfies the condition: [z − u * 1 (x)ψ(x), p(x)] = 0 for all x ∈ U y .In both cases (1) or (2), we have found a regular map ψ y from U y to g satisfying: [z − (u * 1 ψ y )(x), p(x)] = 0 for all x ∈ U y .Let y 1 , . . .,y k be in Y ′ such that the open subsets U y 1 , . . .,U y k cover Y ′ .For i = 1, . . ., k, we denote by ψ i a regular map from for all i, j.Then there exists a well-defined map ψ z from Y ′ to g whose restriction to U y i is equal to ψ i − ψ i for all i, and such that z − u * 1 ψ z belongs to L Y ′ .Finally, the map ψ z verifies the required property.
Let z be in z(g e ).We denote by ϕ z the regular map from Y to g defined by: Corollary 5.8.Suppose that e has Property (P 2 ) and let z be in z(g e ).There exists Proof.By Lemma 5.7, there exists ], for all x ∈ Y ′ .So the map ψ z is convenient, since u * 1 is not identically zero on Y ′ .The following lemma is easy but helpful for Proposition 5.10: Lemma 5.9.Let v be in g e .Then, v belongs to z(g e ) if and only if [v, g f ] ⊂ [e, g].
Proof.As [x, g] is the orthogonal complement of g x in g for all x ∈ g, we have: But g is the direct sum of g e and [f, g] and [v, g e ] is contained in g e since v ∈ g e .Hence [v, g f ] is contained in [e, g] if and only if v is in z(g e ).
Proposition 5.10.Suppose that e has Property (P 2 ) and let ϕ be in k Proof.Since ϕ is a regular map from Y to g, there is a nonnegative integer d and ϕ and ϕ is a linear combination of monomials in u * 1 , . . ., u * m whose total degree is at least d.By hypothesis on ϕ, we deduce that for all x ∈ S such that u * 1 (x) = 0, ϕ(x) is in [g, x].Hence by Proposition 5.6, there exists ψ in k[S] ⊗ k g such that ϕ(x) = [ ψ(x), x] for all x ∈ S.
For |i| equal to l, the term in of [ ψ(e + τ l+2 v), e + τ l+2 v] is equal to [ψ i(j) , e] + [ψ i , u j ].Since (u * 1 ) d vanishes on the set of k[τ l+2 ]-points of Y whose source is a zero of u * 1 , this term is equal to 0, whence the claim.Recall that Y ′ is an affine open subset of Y contained in Ω and having a nonempty intersection with p −1 (e).
Corollary 5.12.Suppose that e has Property (P 2 ).Let ϕ be in .Since ϕ is a regular map from Y ′ to g, there is m i ≥ 0 such that a m i i ϕ is the restriction to Y ′ of some regular map ϕ i from Y to g.For m i big enough, ϕ i vanishes on Y \ D(a i ); hence ϕ i (x) ∈ [g, p(x)] for all x ∈ Y .So, by Proposition 5.6, there is a regular map ψ i from Y ′ to g such that ϕ i (x) = [ψ i (x), p(x)] for all x ∈ Y ′ .Then for all x ∈ D(a i ), we have ϕ an affine open subset of Y , there exists a regular map ψ from Y ′ to g which satisfies the condition of the corollary.
We are now in position to prove the main result of this section: Theorem 5.13.The equality indg e = ℓ holds if and only if e has Property (P 2 ).
Proof.By Corollary 5.12, if e has Property (P 2 ), then Assertion (B) of Theorem 5.1 is satisfied.Conversely, suppose that indg e = ℓ and show that e has Property (P 2 ).By Theorem 5.1, (A)⇒(B), Assertion (B) is satisfied.We choose an affine open subset Y ′ of Y , contained in Ω, such that Y ′ ∩ p −1 (e) = and verifying the condition of the assertion (B).Let z ∈ z(g e ).Recall that the map ϕ z is defined by (5).Let x be in Y ′ .If u * 1 (x) = 0, then ϕ z (x) belongs to [g, p(x)] by (5).If u * 1 (x) = 0 , then by Lemma 5.9, ϕ z (x) belongs to [e, g].So there exists a regular map ψ from Y ′ to g such that ϕ z (x) = [ψ(x), p(x)] for all x ∈ Y ′ by Assertion (B).Hence we have ] for all x ∈ Y .So α(x) contains z for all x in Ω ∩ Y ′ ∩ p −1 (e).Since p −1 (e) is irreducible, we deduce that e has Property (P 2 ).
5.3.
A new formulation of the property (P 2 ).Recall that Property (P 2 ) is introduced in Definition 5.2.As has been noticed in the proof of Lemma 5.4, the morphism G × S → g, (g, x) → g(x) is smooth.As a consequence, the set S reg of v ∈ S such that v is regular is a nonempty open subset of S. For x in S reg , g e+t(x−e) has dimension ℓ for all t in a nonempty open subset of k since x = e + (x − e) is regular.Furthermore, since k has dimension 1, [Sh94, Ch.VI, Theorem 1] asserts that there is a unique regular map β x : k → Gr ℓ (g) satisfying β x (t) = g e+t(x−e) for all t in a nonempty open subset of k.
Recall that Y is an affine open subset of B e (S) with Y ∩ p −1 (e) = ∅ and that u * 1 , v * 2 , . . ., v * m is a global coordinates system of Y , cf. (4).Let S ′ reg be the subset of x in S reg such that u * 1 (x) = 0.For x in S ′ reg , we denote by x the element of Y whose coordinates are 0, v * 2 (x), . . ., v * m (x).
Lemma 5.14.Let x be in Proof.(i) The map β x is a regular map and [β x (t), e + t(x − e)] = 0 for all t in a nonempty open subset of k.So, β x (0) is contained in g e .
(ii) Since S ′ reg has an empty intersection with the nullvariety of u * 1 in S, the restriction of p to p −1 (S ′ reg ) is an isomorphism from p −1 (S ′ reg ) to S ′ reg .Furthermore, β x (t) = α(p −1 (e + tx − te)) for any t in k such that e + t(x − e) belongs to S ′ reg and p −1 (e + tx − te) goes to x when t goes to 0. Hence β x (0) is equal to α( x) since α and β are regular maps.
Corollary 5.15.The element e has Property (P 2 ) if and only if z(g e ) ⊂ β x (0) for all x in a nonempty subset of S = e + g f .Proof.The map x → x from S ′ reg to Y is well-defined and its image is an open subset of Y ∩p −1 (e).Let S ′′ reg be the set of x ∈ S ′ reg such that x ∈ Ω and let Y ′′ be the image of S ′′ reg by the map x → x.
Then S ′′
reg is open in S reg and Y ′′ is dense in Ω ∩ p −1 (e) since p −1 (e) is irreducible.Since α is regular, e has property (P 2 ) if and only if α(x) contains z(g e ) for all x in Y ′′ .By Lemma 5.14(ii), the latter property is equivalent to the fact that β x (0) contains z(g e ) for all x in S ′′ reg .
(ii) If z(g e ) has dimension 1, then e has Property (P 2 ).
(ii) is an immediate consequence of (i) since ϕ 1 (e) = e by our choice of d 1 .
Remark 5.17.When g is simple of classical type, z(g e ) is generated by ϕ 1 (e), . . .,ϕ ℓ (e) means that z(g e ) is generated by powers of e.For example, this situation always occurs when g has type A or C and occurs in most cases when g has type B or D (cf.[Y06b] and [Mo06c, Théorème 1.1.8]).
Proof of Theorem 1.2 for rigid nilpotent orbits
We intend to prove in this section the following theorem: Theorem 6.1. Suppose that g is reductive and let e be a rigid nilpotent element of g. Then the index of g e is equal to ℓ.
Theorem 6.1 will complete the proof of Theorem 1.2, by Theorem 4.10. As explained in the introduction, we can assume that g is simple. We consider two cases, according to whether g is of classical or exceptional type.
6.1. The classical case. We consider here the classical case. Theorem 6.2. Assume that g is simple of classical type and let e be a rigid nilpotent element. Then z(g e ) is generated by powers of e. In particular, the index of g e is equal to ℓ.
Proof. The second assertion results from Remark 5.17, Corollary 5.16(i) and Theorem 5.13. Furthermore, by Remark 5.17, we can assume that g has type B or D.
Set n := 2ℓ + 1 if g has type B ℓ and n := 2ℓ if g has type D ℓ . Denote by (n 1 , . . ., n k ) the partition of n corresponding to the nilpotent element e. By [Mo06c, Théorème 1.1.8], z(g e ) is not generated by powers of e if and only if n 1 and n 2 are both odd integers and n 3 < n 2 . On the other hand, as e is rigid, n k is equal to 1, n i+1 ≤ n i ≤ n i+1 + 1, and all odd integers of the partition (n 1 , . . ., n k ) have a multiplicity different from 2; cf. [CMa93, Corollary 7.3.5]. Hence, the preceding criterion is not satisfied for e; so z(g e ) is generated by powers of e. Remark 6.3. Yakimova's proof of Elashvili's conjecture in the classical case is shorter and more elementary [Y06a]. The results of Section 5 will serve the exceptional case in a more relevant way.
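The combinatorial argument in the proof above is easy to mechanize; the following sketch (a hypothetical helper, for illustration only) checks, for a partition in types B/D listed in nonincreasing order, the rigidity constraints of [CMa93, Corollary 7.3.5] together with the absence of the obstruction of [Mo06c, Théorème 1.1.8]:

```python
from collections import Counter

def rigid_center_power_generated(parts):
    """For a partition n_1 >= n_2 >= ... >= n_k in types B/D, return True
    when the partition is rigid (smallest part 1, consecutive parts differ
    by at most 1, no odd part of multiplicity exactly 2) and the
    obstruction 'n_1, n_2 both odd and n_3 < n_2' does not occur."""
    parts = sorted(parts, reverse=True)
    k = len(parts)
    rigid = (
        parts[-1] == 1
        and all(0 <= parts[i] - parts[i + 1] <= 1 for i in range(k - 1))
        and all(m != 2 for p, m in Counter(parts).items() if p % 2 == 1)
    )
    if not rigid:
        return False
    n1 = parts[0]
    n2 = parts[1] if k > 1 else 0
    n3 = parts[2] if k > 2 else 0
    return not (n1 % 2 == 1 and n2 % 2 == 1 and n3 < n2)

# e.g. the rigid partition (2, 2, 1) of 5 in type B_2:
print(rigid_center_power_generated([2, 2, 1]))  # -> True
```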
6.2. The exceptional case. In this subsection we let g be simple of exceptional type and we assume that e is a nonzero rigid nilpotent element of g. The dimension of the center of centralizers of nilpotent elements has been recently described in [LT08, Theorem 4]. On the other hand, we have explicit computations for the rigid nilpotent orbits in the exceptional types due to A.G. Elashvili. These computations are collected in [Sp82, Appendix of Chap. II] and a complete version was published later in [E85b]. From all this, we observe that the center of g e has dimension 1 in most cases. In more detail, we have: Proposition 6.4. Let e be a nonzero rigid nilpotent element of g.
(i) Suppose that g has type G 2 , F 4 or E 6 .Then dim z(g e ) = 1.
(ii) Suppose that g has type E 7 .If g e has dimension 41, then dim z(g e ) = 2; otherwise dim z(g e ) = 1.
(iii) Suppose that g has type E 8 . If g e has dimension 112, 84, 76, or 46, then dim z(g e ) = 2; if g e has dimension 72, then dim z(g e ) = 3; otherwise dim z(g e ) = 1. By Corollary 5.16(ii), ind g e = ℓ whenever dim z(g e ) = 1. So, as an immediate consequence of Proposition 6.4, we obtain: Corollary 6.5. Suppose that either g has type G 2 , F 4 , E 6 , or g has type E 7 and dim g e ≠ 41, or g has type E 8 and dim g e ∉ {112, 84, 76, 72, 46}. Then dim z(g e ) = 1 and the index of g e is equal to ℓ.
According to Corollary 6.5, there remain 7 cases; there are indeed two rigid nilpotent orbits of dimension 46 in E 8 . We now handle these remaining cases, proceeding in a different way: we will study technical conditions on g e under which ind g e = ℓ. For the moment, we state general results about the index.
Let a be an algebraic Lie algebra. Recall that the stabilizer of ξ ∈ a * for the coadjoint representation is denoted by a ξ and that ξ is regular if dim a ξ = ind a. Choose a commutative subalgebra t of a consisting of semisimple elements of a and denote by z a (t) the centralizer of t in a. Then a = z a (t) ⊕ [t, a]. The dual z a (t) * of z a (t) identifies with the orthogonal complement of [t, a] in a * . Thus, ξ ∈ z a (t) * if and only if t is contained in a ξ . Lemma 6.6. Suppose that there exists ξ in z a (t) * such that dim (a ξ ∩ [t, a]) ≤ 2. Then ind a ≤ ind z a (t) + 1.
Proof.Let T be the closure in z a (t) * × Gr 3 ([t, a]) of the subset of elements (η, E) such that η is a regular element of z a (t) * and E is contained in a η .The image T 1 of T by the projection from z a (t) * × Gr 3 ([t, a]) to z a (t) * is closed in z a (t) * .By hypothesis, T 1 is not equal to z a (t) * since for all η in T 1 , dim (a η ∩ [t, a]) ≥ 3. Hence there exists a regular element ξ 0 in z a (t) * such that dim (a ξ 0 ∩ [t, a]) ≤ 2. Since t is contained in a ξ 0 , If [t, a] ∩ a ξ 0 = {0} then inda is at most indz a (t).Otherwise, a ξ 0 is not a commutative subalgebra since t is contained in a ξ 0 .Hence ξ 0 is not a regular element of a * , so ind a < dim a ξ 0 .Since dim a ξ 0 ≤ indz a (t) + 2, the lemma follows.
From now on, we assume that a = g e . As a rigid nilpotent element of g, e is a nondistinguished nilpotent element. So we can choose a nonzero commutative subalgebra t of g e consisting of semisimple elements. Denote by l the centralizer of t in g. As a Levi subalgebra of g, l is a reductive Lie algebra whose rank is ℓ. Moreover, its dimension is strictly smaller than dim g. In the preceding notations, we have z g e (t) = z g (t) e = l e . Let t 1 be a commutative subalgebra of l e containing t and consisting of semisimple elements of l. Then [t, g e ] is stable under the adjoint action of t 1 . For λ in t * 1 , denote by g e λ the λ-weight space of the adjoint action of t 1 on g e . Lemma 6.7. Let λ ∈ t * 1 be a nonzero weight of the adjoint action of t 1 on g e . Then −λ is also a weight for this action, and λ and −λ have the same multiplicity. Moreover, g e λ is contained in [t, g e ] if and only if the restriction of λ to t is not identically zero.
Proof. By definition, g e λ ∩ l e = {0} if and only if the restriction of λ to t is not identically zero. So g e λ is contained in [t, g e ] if and only if the restriction of λ to t is not equal to 0, since g e λ = (g e λ ∩ l e ) ⊕ (g e λ ∩ [t, g e ]). The subalgebra t 1 is contained in a reductive factor of g e . So we can choose h and f such that t 1 is contained in g e ∩ g f . As a consequence, any weight of the adjoint action of t 1 on g f is a weight of the adjoint action of t 1 on g e with the same multiplicity. Furthermore, the t 1 -module g f for the adjoint action is isomorphic to the t 1 -module (g e ) * for the coadjoint action. So −λ is a weight of the adjoint action of t 1 on g f with the same multiplicity as λ. Hence −λ is a weight of the adjoint action of t 1 on g e with the same multiplicity as λ, whence the lemma.
(ii) Suppose that g has type E 8 and that g e has dimension 84, 76, or 46. Then, for suitable choices of t and t 1 , Condition (2) of Proposition 6.8 is satisfied.
6.3.Proof of Theorem 1.2.We are now in position to complete the proof of Theorem 1.2: Proof of Theorem 1.2.We argue by induction on the dimension of g.If g has dimension 3, the statement is known.Assume now that indl e ′ = rkl for any reductive Lie algebras l of dimension at most dim g − 1 and any e ′ ∈ N(l).Let e ∈ N(g) be a nilpotent element of g.By Theorem 4.10 and Theorem 6.2, we can assume that e is rigid and that g is simple of exceptional type.Furthermore by Corollary 6.5, we can assume that dim z(g e ) > 1.Then we consider the different cases given by Proposition 6.9.
If either g has type E 7 and dim g e = 41, or g has type E 8 and dim g e equals 112, 72, or 46, then Condition (1) of Proposition 6.8 applies for suitable choices of t and t 1 , by Proposition 6.9. Moreover, if l = z g (t), then l is a reductive Lie algebra of rank ℓ strictly contained in g. So, from our induction hypothesis, we deduce that ind g e = ℓ by Proposition 6.8.
If g has type E 8 and dim g e equals 84, 76, or 46, then Condition (2) of Proposition 6.8 applies for suitable choices of t and t 1 , by Proposition 6.9. Arguing as above, we deduce that ind g e = ℓ. In conclusion, Condition (1) of Proposition 6.8 is satisfied for t := kt and t 1 := span(Bt 1 ).
(2) E 8 , dim g e = 84: In this case, dim l e = 48 and dim t 1 = 3.The matrix A(7) has order 5 and it is singular of rank 4. The order of the other matrices is at most 2.
Theorem 2.8 ([CMa93], Theorem 7.1.1). Let O l be a nilpotent orbit of l. There exists a unique nilpotent orbit O g in g whose intersection with O l + p u is a dense open subset of O l + p u . Moreover, the intersection of O g and O l + p u consists of a single P-orbit and codim g (O g ) = codim l (O l ). The orbit O g only depends on l and not on the choice of a parabolic subalgebra p containing it [CMa93, Theorem 7.1.3]. By definition, the orbit O g is called the orbit induced from O l ; it is denoted by Ind g l (O l ). If O l = 0, then we call O g a Richardson orbit. For example, all even nilpotent orbits are Richardson [CMa93, Corollary 7.1.7]. In turn, not all nilpotent orbits are induced from another one. A nilpotent orbit which is not induced in a proper way from another one is called rigid.
Proof. Recall that a sheet of g is by definition an irreducible component of a set {x ∈ g | dim G.x = m}, for some m ≥ 0. Dixmier sheets are those which contain semisimple elements. By [TY05, Theorem 39.4.8], the set of polarizable elements of g is the union of the Dixmier sheets of g. As e is Richardson, it is polarizable [TY05, Theorem 33.5.6]. Hence, there is a Dixmier sheet S of g containing e. Let s be a semisimple element of g belonging to S. Then C(s) is contained in S and, by [TY05, Proposition 33.5.4], there is a nilpotent element e ′ in C(s) satisfying dim G.e ′ = dim G.s. Moreover, e ′ belongs to S and, by [TY05, Proposition 35.3.5], the set of nilpotent elements belonging to S is precisely G.e ′ , whence G.e = G.e ′ . So, G.e is the subset of nilpotent elements of C(s) whose orbit has dimension dim G.e.
"year": 2010,
"sha1": "e803856e1202c75d70f65a8572c12b49b09b13ba",
"oa_license": "CCBY",
"oa_url": "https://ems.press/content/serial-article-files/26082",
"oa_status": "GOLD",
"pdf_src": "ArXiv",
"pdf_hash": "35c3f066f03e50ab2411cd475662bea2fa8a74d7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
253569828 | pes2o/s2orc | v3-fos-license | Research Article A Covert-Aware Anonymous Communication Network for Social Communication
To effectively protect the communication content and communication behavior of social networks, anonymous communication technologies are widely used. However, the anonymous communication networks represented by Tor and I2P lack covertness in the design of the control plane, which leaks important behavioral characteristics of users in the process of accessing anonymous communication networks. Therefore, network monitors can analyze users' communication behavior by tracking these characteristics. In this paper, the concept of covert measurement is proposed. On this basis, a software-defined anonymous communication network architecture is presented, which considers the covertness of both the control plane and the data plane of the anonymous communication network. According to the theoretical analysis and experimental results, the anonymous communication network architecture proposed in this paper has better anonymity and usability than traditional anonymous communication networks, such as Tor.
Since the measurement method based on Shannon entropy [8] in information theory was proposed in 2002, many different measurement methods have been derived from information theory and entropy, such as normalized entropy [16], Rényi entropy [17], and conditional entropy [18]. Later, some scholars proposed methods based on time [11], game theory [19], and differential privacy [20] to measure the anonymity of anonymous communication networks. In 2018, Das et al. [21] proposed the anonymity trilemma for anonymous communication networks, emphasizing that of the three factors (anonymity, delay overhead, and traffic overhead) at most two can be achieved simultaneously, and used their evaluation model to evaluate anonymous communication networks based on onion routing.
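For concreteness, the entropy-based measure mentioned here works as follows: if an attacker assigns probabilities p_1, . . ., p_N to the N possible senders, the anonymity of the system is the Shannon entropy of that distribution, and the normalized variant of [16] divides by its maximum (these are the standard definitions):

```latex
\[
H(X) \;=\; -\sum_{i=1}^{N} p_i \log_2 p_i ,
\qquad
d \;=\; \frac{H(X)}{H_{\max}} \;=\; \frac{H(X)}{\log_2 N} \in [0,1],
\]
```

so that d = 1 exactly when all senders look equally likely to the attacker.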
Covert Access and Detection.
Covert access includes obfuscation, encryption, and fields disguised as those of normal protocols to eliminate the traffic characteristics of software access. This paper introduces the technologies used in common anonymous communication networks, such as Tor. 1. Meek: its principle is to use a nonprohibited protocol as a tunnel and to pass the Tor traffic inside the tunnel. It uses domain fronting technology [22], utilizing HTTPS and CDNs to bypass censorship. Meek detection is mainly based on machine learning. Shahbar and Zincir-Heywood [23] used a decision tree classifier to analyze the time span, the number and repeatability of connections, the amount of data transmitted and the number of connections established, or the packet size, the number of bytes sent, and the maximum packet size; the traffic can then be identified after training. Qureshi et al. [24] indicated that the total duration of TCP connections differs between normal HTTPS and Meek, and that the length distribution of the TCP payload also differs. Zhao et al. [25] studied Tor traffic classification using state-of-the-art algorithms, including J48, J48Consolidated, BayesNet, JRip, OneR, and REPTree. In addition, the entropy characteristic of Meek also has a certain effect in detection when using machine learning.
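A minimal sketch of the kind of flow-feature classifier described in [23] is given below; the feature rows, labels, and thresholds are illustrative placeholders, and a real deployment would train on large labeled traces:

```python
# Minimal sketch of flow-feature classification in the spirit of [23];
# feature rows and labels are illustrative placeholders.
from sklearn.tree import DecisionTreeClassifier

# one row per flow: [duration_s, n_connections, bytes_sent, max_packet_size]
X_train = [
    [32.0, 4, 18000, 1460],  # labeled "meek"
    [1.5, 1, 2400, 1200],    # labeled "https"
    [29.5, 5, 21000, 1460],  # labeled "meek"
    [0.8, 1, 1100, 900],     # labeled "https"
]
y_train = ["meek", "https", "meek", "https"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

print(clf.predict([[30.0, 4, 19500, 1460]]))  # -> ['meek']
```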
The Obfs family includes obfs2, obfs3, obfs4, and ScrambleSuit [26]. At present, obfs4 is the most commonly used. Its principle is to encrypt the traffic so that it looks like random bytes, to avoid blacklist-based fingerprint detection. Obfs4 can combat active probing attacks [11] with key negotiation, preventing censors from discovering bridges through connection probing. In terms of detection, Wang et al. [27] found that joint detection based on entropy detection and simple heuristic algorithms (such as length detection) can identify Obfs traffic. Other detection methods include packet length detection and truncated sequential probability ratio testing.
Covert Traffic Detection.
At present, traffic detection technology [28] can be divided into the following four categories: (1) semantic-based detection, (2) entropy-based detection, (3) machine learning-based detection [25], and (4) combined detection [29] of DPI and firewall, which can reconstruct the complete traffic, analyze specific protocols, identify keywords in packets, and actively detect suspicious servers to avoid false positives. The development of software-defined networking also provides a new optimization scheme for traditional anonymous communication networks, such as deploying a software-defined anonymous communication protocol of the network layer on the autonomous-domain routers of network service providers (such as LAP [30] and PHI [31]). These new anonymous communication networks separate the control plane from the forwarding plane, making the information transmission path programmable.
Threat Models
This paper assumes that there is large-scale supervision of ISPs in the network and that the relevant monitoring platform of each cloud platform server has been exposed to a supervisor. However, supervisors do not have direct control over individual hosts. Therefore, nodes in the Internet are defined in this paper as the following four different types: client nodes used for users to access the network, nodes used for forwarding information in anonymous communication networks (hereafter referred to as controlled nodes), malicious nodes controlled by supervisors (hereafter referred to as malicious nodes), and dazed third-party nodes.
For the client node, in the environment described in this paper, the supervisor can only see the traffic at the entrance and exit of the node but cannot obtain control authority over the node. Therefore, the supervisor detects whether users use anonymous communication networks and tries to obtain users' communication relationships and other information through wiretapping, recording, replay, traffic analysis, and other means.
For the controlled nodes, due to their wide distribution, this paper assumes that the supervisor can only monitor and analyze the controlled node state and its incoming and outgoing traffic in a certain physical area. However, it cannot obtain all the node states in the anonymous communication network. Nevertheless, the supervisor can add malicious nodes under its control to the anonymous communication network to carry out a man-in-the-middle attack or a Sybil attack.
For malicious nodes, this paper only considers the case in which a large number of malicious nodes controlled by supervisors are concentrated in one physical region and a small number are distributed in other regions. Beyond that, supervisors can only access data from some Internet infrastructure providers. Because anonymous communication networks are distributed all over the world and their design rules require that the nodes in a path do not reside in the same country or region, the case in which all traffic on a transmission path is tracked is not considered.
In this paper, the dazed third-party nodes mainly refer to Internet infrastructure platforms with massive numbers of users and data files, such as web storage for storing media files, social platforms for publishing information, and various Git repositories for hosting code. This article assumes that the custodian has the same access rights as an ordinary user and does not have access to users' usage records on these dazed third parties.
Tor.
Tor is a widely deployed and popular anonymous communication network and its main purpose is to prevent attackers from identifying communication parties or associating communication links with a single user. Tor is based on the P2P network architecture and uses the onion routing protocol. Its data are transmitted through a series of uncontrolled voluntary nodes in the Internet; that is, there are controlled nodes, malicious nodes, and dazed third-party nodes in the Tor network.
Tor works as follows: clients build a link by selecting entry, intermediate, and exit nodes. The Tor client obtains the current Tor network consensus file from Tor's authoritative directory servers. This file contains basic information, such as the IP address, bandwidth, and location of each forwarding node in the current Tor network and the services supported by the node, and this information is updated every hour. The client selects three nodes from the nodes listed in the consensus file. For randomly selected nodes, the selection probability is approximately proportional to the bandwidth weight of the node. When creating and using links, the layered encryption of onion routing ensures that each forwarding node only knows the previous hop and the next hop in the link, and no single forwarding node can associate the client with the destination [32].
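The bandwidth-weighted selection just described can be sketched as follows; this is a simplification of our own (real Tor additionally applies positional weights and family/subnet constraints), not Tor's actual code.

import random

def pick_circuit(relays, weights, hops=3):
    # Draw `hops` distinct relays, each with probability roughly proportional
    # to its consensus bandwidth weight, yielding [entry, middle, exit].
    relays, weights = list(relays), list(weights)
    path = []
    for _ in range(hops):
        i = random.choices(range(len(relays)), weights=weights, k=1)[0]
        path.append(relays.pop(i))
        weights.pop(i)  # a relay never appears twice in one circuit
    return path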
To improve the security and anonymity of the service, Tor clients use different access protection mechanisms when they access the Tor network. In the client access process, Tor adopts a series of security mechanisms, such as bridge nodes [33], Meek covert channel construction, Obfs obfuscation, and FTE encryption, to protect user traffic from supervision during access. The protection mechanism used during access is randomly selected by users, and the probability of a node being chosen for link establishment is proportional to its bandwidth [34]. To be an optional node, a forwarding node must meet a number of selection criteria to ensure good performance and increase the cost of being attacked. The selection criteria are as follows: first, the forwarding node must be measured by Tor's bandwidth measurement system, which takes two weeks [23]. Second, the forwarding node must have enough bandwidth to make its weight reach at least 2000; the bandwidth value measured by Gerry Wan et al. [35] is approximately 35.5 Mbit/s. Third, the forwarding node must always be online to be considered stable. Fourth, forwarding nodes must remain online long enough to be considered familiar nodes.
Anonymous Communication Network Based on Software Definition.
This paper proposes a software-defined anonymous communication network that can achieve good covertness. The network architecture is displayed in Figure 1. An access method based on public Internet services is used in the user access stage. In the data transmission stage, the data pass through two parts: an isolated network and a core network. The isolated network consists of a control center with multiple Internet infrastructures (dazed third parties). The core network consists of a control center and several controllable forwarding nodes distributed all over the world.
User Access.
This stage is jointly completed by the client, the anonymous communication network (controller and access agent), and the dazed third party, realizing the process of establishing an implicit communication relationship with the dazed third party and obtaining response data files under the condition that the client and the servers in the anonymous communication network are unaware of each other. The specific process is demonstrated in Figure 2.
The working mechanism of the client, access agent, and control center can be described by the following Algorithm 1.
The client in Figure 2 obtains the temporary address of the registry in an out-of-band mode, such as SMS or hiding the key information in TikTok [36], and obtains the identity identifier, the public and private key pair, and the list of accessible access agents from the registry before accessing the communication network. After that, it waits until it obtains the response from the access agent and then downloads the real resource.
The access agent forwards content between the client and the controller (Algorithm 2).
For the controller, when it receives a request from an access agent, it verifies the message and uploads the real resource to web storage. The address of this storage is then sent to the access agent (Algorithm 3).
Before accessing the anonymous communication network, the client obtains the temporary address of the registry in out-of-band mode and obtains the identity identifier, the public and private key pair, and the list of accessible access agents from the registry. When accessing the anonymous communication network, the client sends an access request message to the access agent. After receiving the access request message, the access agent forwards it to the control center server. After receiving the access request message, the control center server packages the control information corresponding to the access request and saves it to third-party web storage. After receiving the reply message from the authoritative directory server, the access agent returns the reply message to the client and notifies the client to read the control information from the specified third-party storage node. After receiving the response message from the access agent, the client reads the control information from the specified third-party storage node.
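A minimal sketch of the client side of this flow is given below; the registry, access_agent, and web_storage objects and their methods (get_identity, request, fetch) are hypothetical stand-ins of ours, since the paper does not spell out the Algorithm 1 primitives here.

def client_access(registry, access_agent, web_storage):
    # 1. Bootstrap out of band: the registry address arrives via SMS, media
    #    steganography, etc.; then fetch identity, key pair, and agent list.
    identity, keypair, agents = registry.get_identity()
    # 2. Send the access request to an access agent, which relays it to the
    #    control center server.
    reply = access_agent.request(identity, keypair)
    # 3. The reply names a third-party storage node; read the control
    #    information (and later the real resource) from there.
    control_info = web_storage.fetch(reply.storage_address)
    return control_info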
Isolated Network.
Through file exchange rather than data streaming, with anonymity of information based on file names and content encryption based on file encryption, the isolated network realizes a transmission method based on basic public Internet services. It implements anonymous communication for users through asynchronous communication, fragmentation, and traffic-screening mechanisms and ensures the covertness of the traffic data from the client before it enters the anonymous communication network.
At this stage, the data sent by the client are transmitted to each dazed third-party platform in the form of files, and then the corresponding A-nodes in the core switching network obtain the files from the dazed third-party platforms. Due to the public nature of the third parties, the supervisor cannot simply identify the controlled nodes and the transmission traffic.
Core Network.
The core network borrows the idea of software definition and adopts the form of a controller and controlled forwarding nodes to realize programmable nodes and forwarding paths. To realize the covertness of network communication, the system uses file exchange instead of message exchange to realize asynchronous communication. After leaving the isolated network, the data to be transmitted are synchronized to each intermediate node in the form of files, and the intermediate nodes transmit the data to the receiver along the specified forwarding path. The two parties do not directly transmit encrypted traffic.
(1) Architecture. The system proposed in this paper consists of N nodes and K console servers and is shown in Figure 3.
As a communication user, a node also provides file storage and forwarding services for the anonymous communication of other nodes. Each node maintains N folders and N − 1 backup files, where the N folders correspond to each node i. The console server is the core of the system; it controls the IP addresses of all nodes and determines whether each node participates in the communication process. The sender can then set the forwarding route through the console server before communication. The console server can also control whether a node performs file synchronization. Due to the controller's large throughput, the system needs to use multiple controllers to prevent the supervisor from tracing the source.
The nodes running in the Internet are silent at first and can be activated by the controller server for the communication process.
(2) Route Selection. Routes are selected either by the controller choosing m controlled nodes (m < N − 1) or by the sender selecting m nodes to form the path R = (n1, n2, . . ., nm). In addition, the design of this route has the following constraints: it must pass through different countries; it must pass through different VPS manufacturers; and at least three controlled nodes must be traversed. The control center server then sends synchronization configuration commands to each node, as depicted in Figure 4. (3) Information Transmission. As shown in Figure 5, node A synchronizes information to the nodes in the forwarding path in the form of a file (for example, the exchanged keys), encrypted based on wildcard identity-based encryption [37].
The nodes in the path synchronize the information in turn until node B receives the file and returns a receipt identifier in the same way. In this process, the traffic observed by the supervisor is that node A communicates with another node C and node B communicates with another node D.
Finally, A and B complete the communication process in the core switching network. During the whole process, both A and B perform file synchronization operations with multiple nodes, masking the real traffic transfer information. Third parties cannot track specific data traffic.
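A minimal sketch of the route selection and hop-by-hop synchronization described in this subsection follows; the node schema ({"id", "country", "vps"}) and the deliver callable are illustrative stand-ins of ours, since the paper does not fix these interfaces.

import random

def select_route(nodes, m=3):
    # Pick m controlled nodes for path R = (n1, ..., nm) under the stated
    # constraints: pairwise-distinct countries, pairwise-distinct VPS
    # providers, and at least three hops.
    assert m >= 3
    pool = list(nodes)
    random.shuffle(pool)
    route, countries, vendors = [], set(), set()
    for node in pool:
        if node["country"] in countries or node["vps"] in vendors:
            continue
        route.append(node)
        countries.add(node["country"])
        vendors.add(node["vps"])
        if len(route) == m:
            return route
    raise ValueError("not enough diverse nodes to satisfy the constraints")

def synchronize(route, file_bytes, deliver):
    # Pass the file from node to node along the path until the receiver
    # obtains it; deliver(src, dst, data) stands in for folder synchronization.
    for src, dst in zip(route, route[1:]):
        deliver(src, dst, file_bytes)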
Security Analysis.
The background of the proposed system is to build an anonymous communication system implemented by controllable nodes at the application layer in an uncontrolled network. It shields all information below the application layer. The controller-side handling of access requests (Algorithm 3) is as follows:

Input: request, SK_controller, PK_AA, webStorageList
Output: address
(1) message ← dec(request.data, SK_controller)
(2) K ← message.K
(3) if verify(message, PK_AA) == False then
(4)     return error
(5) address ← null
(6) for webStorage in webStorageList do
(7)     if checkAvailable(webStorage) then
(8)         address ← webStorage
(9)         break
(10) return address

(1) Security. In terms of security, this article considers several common attacks: Sybil attacks, man-in-the-middle attacks, and DoS attacks. A Sybil attack refers to the situation in which a few entities in a P2P network control the majority of nodes by obtaining multiple false identities, so that the network is no longer peer-to-peer. In this system, since the console server is trusted, the Sybil attack scenario is that a node is controlled by the attacker and all synchronized files are obtained by the attacker. In fact, what distinguishes this system from other P2P networks is that the console server is a trusted central control node that can control and monitor the abnormal traffic of all nodes and notify the node user when there is an abnormality. Abnormal nodes are quickly separated from the network, ending the Sybil attack. A man-in-the-middle attack means that the information of the communicating parties is intercepted and forwarded by the attacker. However, this system not only uses TLS 1.3 to encrypt the traffic at the network layer but also applies digital signatures and encryption to valid information at the application layer; therefore, only the receiver can successfully decrypt it, which prevents man-in-the-middle attacks. According to Abhishta [38], the possible DoS attacks in this system occur during the communication process when a node is maliciously controlled and, before the console server takes it offline, sends a large amount of malicious data to other nodes, which causes the network bandwidth to be occupied so that other normal forwarding services cannot be performed.
In the information transmission of Section 3.1, this paper has proposed that the system sends a maximum time limit during precommunication. Therefore, when the sender node in the network does not receive the flag information returned by a node within the maximum time limit, the console server sends data to ensure that all nodes discard the malicious data, thereby preventing DoS attacks. Besides that, cost and effectiveness are two targets for any communication network. Naiwei Liu [39] proposed a method for TrustZone that we could adopt in the same way to evaluate the cost and effectiveness of our nodes.
(2) Antitraceability. Since this article implements routing in the form of file forwarding using software in an uncontrolled network environment, it can better resist traditional network-level traceability attacks, including passive traceability and active traceability. In passive traceability, all types of attacks derive from correlation attacks. A correlation attack occurs when the attacker can control the nodes of the anonymous channel and can observe the ingress and egress traffic of the anonymous channel at the same time. The attacker can then compare the traffic packets and their sequence within a certain time delay and analyze the corresponding information to achieve a traceability effect. Correlation attacks require that both ends of the communication be under control; in large-scale network confrontations, the networks where the sender and receiver nodes are located are within the supervision of the supervisor, and the traffic at the network layer is monitored and correlated by the supervisor. In our system, the sender and the receiver only appear once in the point-to-point communication at the application layer, the amount of payload data they carry is small, and most of the data are encrypted and transmitted through other nodes. Therefore, within a certain time delay, the supervisor cannot associate the traffic of the two communicating parties from the massive traffic, which guarantees noncorrelation. Active traceability is mainly based on network watermarking attacks. A network watermarking attack means that when traffic enters the anonymous channel, the network supervisor inserts specific watermark information into the traffic, and when the traffic is received by the receiver, the two are correlated, thereby destroying anonymity. According to the watermark form, such attacks can be divided into four forms based on content, delay, packet length, and rate. The common point of this type of attack is that its object is a network stream. Therefore, the injected watermark is inevitably lost in the multiple asynchronous forwardings by multiple nodes in different physical environments. The supervisor cannot obtain the relationship between a node and the console server, which guarantees the noncorrelation of the system. In addition, because different paths are used to forward valid data each time, the supervisor cannot identify the real recipient, thus ensuring anonymity.
The Modeling Method
The goal is to quantify the covertness of an anonymous communication network, which consists of the covertness of client access and the covertness of traffic in the network. Therefore, this paper indicates the need to detect both client programs and traffic covertness. The covertness detection model can be represented by a covertness block diagram, which is similar to a malware detection model [40]. This block diagram is a logical graphical description method that determines the probability of covert behavior by probability analysis of each available datum and obtains a relative covertness score based on this probability. Therefore, as long as the general characteristic data collection is stipulated, the covertness of the client access stage and data transmission stage of any anonymous communication network can be evaluated.
In the general model, the detection program collects all commonly available data. Based on the threat modeling in this paper, the data collected by the detection program will not be tampered with by attackers.
Covertness Block Diagram.
The covertness block diagram is defined as a path from left (start state) to right (end state). Each node in the path corresponds to a condition, according to which a probability P_i can be determined. Thus, the probability of reaching node i on the entry path is the product of the probabilities of the conditions up to that node, P(i) = P_1 · P_2 · … · P_i. The order of the nodes along the path is determined by sorting the judgment probabilities of the collected data from small to large.
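Under our reading of the block-diagram definition (independent conditions whose probabilities multiply along the path), the computation can be sketched as follows; the 1 − P score mapping is illustrative, since the text does not spell out the exact scoring function.

from itertools import accumulate
from operator import mul

def path_probabilities(node_probs):
    # Cumulative probability of reaching each node i on the entry path:
    # P(i) = P_1 * P_2 * ... * P_i.
    return list(accumulate(node_probs, mul))

def covertness_score(node_probs):
    # Higher end-to-end detection probability means lower covertness.
    return 1.0 - path_probabilities(node_probs)[-1]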
Access Covertness.
To observe users accessing anonymous communication networks, observers first need to observe egress traffic data. As seen from the related work in Section 2, mainstream domain fronting technology now mainly relies on large Internet cloud service providers, such as Microsoft, Amazon, and Cloudflare. Therefore, the domain names, DNS query records, and IP addresses accessed by the outbound traffic become critical observability indicators. Second, to further check whether the host where the client resides shows covert access behavior, the detection program periodically samples and scans the node within a certain time T after detecting the pre-egress traffic to see whether there is traffic with the same destination. If there is such traffic, the corresponding P_i indicates that the node has no covert access.
Covertness of Transmission.
In the transmission stage, the traffic and the performance status of each node before and after user data enter the anonymous communication network should be considered simultaneously. The specific parameters involved in the access and data transmission of an anonymous communication network are listed in Table 1.
Based on the above methods, the model constructed in this paper is displayed in Figure 6.
The corresponding equation is given as equation (3).
Experiment
By building and deploying the system, this chapter shows the results of the basic performance tests, including the response of the control center, the forwarding delay, and the throughput of the core network. In addition, the covertness of the system and of Tor has been measured.
Response Time of the Control Center.
The response of the control center mainly includes the delay of flow table switching, the response time of distribution, and the response of node state acquisition. The data collected in this section are the differences between the time a database record is read and the time the web system executes the operation. The test results show that the response time of these activities in the actual test is less than 5 seconds and in most cases no more than 3 seconds, which can be regarded as a real-time response.
Forwarding Delay.
The forwarding delay of the core network is the time required by both sides of the communication from sending data to receiving data. The sender splits the original data into different slices and sends them to the receiver, and the receiver restores the original data. The test results of this section are shown in Table 2. It can be seen from the experimental results that when the number of slices increases, the delay is considerably reduced. This is because the scheme described in this paper transmits file units. When the file size is less than the size of the data transmitted in unit time, the transmission queue can maintain fast parallel transmission. When the file size is too large, each file is transmitted in a single queue. Therefore, when a business scenario needs to reduce delay, the requirements can be customized through packet fragmentation.
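The slicing mechanism credited above for the delay reduction can be sketched as follows; the equal-size split is our assumption for illustration.

def split_into_slices(data, n_slices):
    # Split the payload into n roughly equal slices that can be transmitted
    # as independent files in parallel.
    size = max(1, -(-len(data) // n_slices))  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def reassemble(slices):
    # The receiver restores the original data from the received slices.
    return b"".join(slices)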
Forwarding Throughput of the Core Network.
The forwarding throughput of the core network is the data forwarding volume of each node in the process from sending data to receiving data. The sender splits the original data into a fixed number of pieces and sends them to the receiver through multiple links, and the receiver restores the original data. The test results of this section are shown in Table 3.
It can be seen from the experiment that when the number of links increases, the load of each link remains relatively balanced for each receiver. Therefore, it is not difficult to see that high concurrency can be achieved through multiple links. When a business scenario needs to improve transmission efficiency, the demand can be met by increasing the number of routes.
Security Lower Bound Assessment.
Because the anonymous communication network described in this paper adopts the idea of software definition, the requirements can be customized according to different scenarios. Therefore, this section attempts to obtain different levels of covertness scores by adjusting the number of nodes and carries out experimental tests on them. The specific method is as follows: in the system described in this paper, a file with a data size of 10 MB is forwarded from an overseas node to a domestic node. On the basis of ensuring that each data exchange on the link in the transmission stage occurs between nodes in two different countries, the covertness score of a set of anonymous communication networks can be calculated by manually adjusting the number of nodes traversed by the client traffic. We calculate the covertness score according to the covertness test method above (Figure 7).
It is easy to see that the data transmission delay increases linearly with the number of relay nodes. From the global perspective, the more nodes there are on the link, the greater the probability that the forwarding behavior exhibits the same characteristics.
Therefore, when the number of nodes is greater than 5, the covertness score does not increase substantially. In the current scenario, when using the anonymous communication network scheme in this paper, the security lower bound of the number of relay nodes is 5.
Covertness Comparison.
The anonymous communication network described in this paper transmits data through files, while Tor and other anonymous communication systems transmit data in the form of streams. Considering that the link selection of Tor cannot be set manually and locally, this paper simulates the Tor network through Shadow on a server during the covertness comparison and runs the anonymous communication network client described in this paper on a CentOS 7 virtual machine according to Tor's simulated communication log, restoring the communication relationships of the simulation log. Then, we collect the data of the two communication processes and measure the covertness. To add more systems to the comparison, this paper selected several systems that can be simulated on an intranet. As an anonymous file-sharing system with high latency, Freenet can be deployed through Docker. Besides that, both PrivaTegrity, which is based on cMix [41], and DiceMix could be built and evaluated.
In this experiment, we can adjust the probability of detecting characteristic traffic in the egress traffic by adjusting the number of redundant segments (1/2/4/8) when the client program sends data. At the same time, curl is used to send requests to the domain names of major Internet cloud service providers in different ratios to set the probability of the indicators in another I/O. The data graph obtained at the end of this experiment is as follows.
When using the anonymous communication network described in this paper, we send the same picture to another server located abroad through the designated client.
We run Wireshark on the host of the CentOS 7 virtual machine to capture the traffic of the virtual machine program and the network card, simulating the cloud server operator and the defender detecting its egress traffic and service status. We run tcpdump on the controlled nodes to simulate the supervisor monitoring each node.
Finally, by comparing the transmitted communication behavior in the simulation log many times, the covertness score comparison between the anonymous communication network described in this paper and Tor is obtained, as shown in Figure 8. It can be seen from the figure that the covertness evaluation method can evaluate the covertness of onion-routed and mix-based anonymous communication networks. In addition, in some specific network scenarios, software-defined anonymous communication networks can obtain higher covertness scores than Tor. Thanks to its high latency, Freenet obtains the highest covertness score.
Conclusions and Discussions
This paper focuses on the construction of an anti-eavesdropping anonymous communication network system under uncontrolled conditions with various traffic characteristic detection environments and analyzes the problems existing in all stages of anonymous communication networks, from access to data transmission. This paper proposes an anonymous communication network system based on the idea of software definition. Because there is currently no good quantitative evaluation method for these problems, this paper proposes a method to measure covertness and uses this method to compare the proposed system with the onion network to demonstrate the effectiveness of the system with respect to covertness.
What is more, there are still many details to be improved. Besides information in the transport layer or application layer, the optimized firewall anomaly resolution proposed by Fulvio Valenza [42] could make specific rules change more quickly. In addition, the total number of our channels can be increased. For example, Sherifdeen Lawa [43] introduced micro-frontends, which could be used to deploy the microservices faster and more flexibly [14].
Data Availability
All relevant data used to support the findings of the study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest to report regarding the present study | 2022-11-17T16:09:00.444Z | 2022-11-15T00:00:00.000 | {
"year": 2022,
"sha1": "55bcb5a76e882f05258c08ed4b65e42d19c42617",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/scn/2022/2255047.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "72263a6e4d3def589371931e662955f45ca45327",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
67788977 | pes2o/s2orc | v3-fos-license | Nordic Innovative Trials to Evaluate osteoPorotic Fractures (NITEP) Collaboration: The Nordic DeltaCon Trial protocol—non-operative treatment versus reversed total shoulder arthroplasty in patients 65 years of age and older with a displaced proximal humerus fracture: a prospective, randomised controlled trial
Introduction The proximal humerus fracture (PHF) is one of the most common fractures in the elderly. The majority of PHFs are treated non-operatively, while 15%–33% of patients undergo surgical treatment. Recent randomised controlled trials (RCTs) and meta-analyses have shown that there is no difference in outcome between non-operative treatment and locking plate or hemi-arthroplasty. During the past decade, reverse total shoulder arthroplasty (RTSA) has gained popularity in the treatment of PHF, although there is a lack of RCTs comparing RTSA to non-operative treatment. Methods This is a prospective, single-blinded, randomised, controlled, multicentre and multinational trial comparing RTSA with non-operative treatment in displaced proximal humeral fractures in patients 65–85 years. The primary outcome in this study is the QuickDASH score measured at 2 years. Secondary outcomes include visual analogue scale for pain, grip strength, Oxford shoulder score, Constant score and the number of reoperations and complications. The hypothesis of the trial is that operative treatment with RTSA produces a better outcome after 2 and 5 years measured with QuickDASH. Ethics and dissemination In this protocol, we describe the design, method and management of the Nordic DeltaCon trial. The ethical approval for the trial has been given by the Regional Committee for Medical and Health Research Ethics, Norway. There have been several examples in orthopaedics of innovations that result in failure after medium-term follow-ups. In order to prevent such failures and to increase our knowledge of RTSA, we feel a large-scale study of the effects of the surgery on the outcome that focuses on the complications and reoperations is warranted. After the trial 2-year follow-up, the results will be disseminated in a major orthopaedic publication. Trial registration number NCT03531463; Pre-Results.
Introduction
Strengths and limitations of this study
► The publication presents an efficacy randomised controlled trial (RCT) on proximal humerus fracture.
► The trial fills an urgently needed knowledge gap concerning a rapidly increasing method used in proximal humerus fracture, comparing reversed prosthesis with non-operative treatment.
► In order to improve the generalisability of results, the trial will be conducted in several trauma centres in Nordic countries with similar healthcare systems.
► The strength of our study setting is the experience of the researchers and personnel gained from previous large RCTs.
► The limitation is the usual issue of external validity.

In the ageing population, the proximal humerus fracture (PHF) is one of the most common fractures. In addition to the significant disability caused by PHF among older individuals, such fractures are also associated with a high economic impact. 1 2 In general, the operative interventions and rehabilitation after a shoulder fracture are resource consuming. Furthermore, it has been suggested that a significant proportion
of common medical interventions-including orthopaedics-are not based on solid high-quality scientific evidence. 3 Despite this, many of them are still widely used. 4 5 There have been alarming reports showing that operative treatment of some common fractures, such as distal radius and proximal humerus, is increasing without any evidence to support the operative treatment of these fractures. 4 5 The sex-specific fracture incidence for proximal humeral fractures for women in Sweden was 135 per 100 000 person-years 6; in total, almost 10 000 fractures were diagnosed in 2012. 6 The majority of PHFs are treated non-operatively and approximately 15%-33% of patients are treated surgically. 7 Recent randomised controlled trials (RCTs) and meta-analyses have shown that there is no difference in functional outcome between non-operative treatment and locking plate or hemi-arthroplasty (HA) in the treatment of PHF. However, operative treatment has a significantly higher risk of complications and reoperations of up to 30%. [8][9][10][11] Originally, reverse total shoulder arthroplasty (RTSA) was used in osteoarthrosis in patients without cuff function to gain better functional outcomes. During the past decade, however, RTSA has gained popularity in the treatment of PHF. In a recent Medicare population analysis carried out between 2005 and 2012, the proportion of surgical procedures for PHF that were total shoulder arthroplasties (TSA) (of which RTSAs constituted 89% in 2011) increased from 3% to 17%, while the proportion of HA decreased from 42% to 24% during the same time period. 7 12 There have been some systematic reviews based on case series and patient cohorts including one RCT that compared HA and RTSA. The results are equivocal: RTSA resulted in better functional outcomes compared with HA in some studies, 13 14 with no difference seen in others. 15 In the RCT, the complication rate in the HA group was significantly higher than in the RTSA group (24% vs 10%, respectively). 16 However, an arthroplasty registry analysis including 10 844 operations (6658 TSA and 4186 RTSA) showed the RTSA postoperative complication rate to be 22% at 2 years. 17 The results from a cost analysis concluded that RTSA treatment is significantly more expensive than HA treatment ($57 000 vs $33 480, respectively). 18 Currently, there are no RCTs comparing RTSA to locking plate or non-operative treatment after PHF. The current literature seems to discourage the operative treatment of PHF with locking plate or HA, and there is no evidence that favours surgery over non-operative treatment. 9 11 In spite of the substantial costs and lack of evidence supporting the effectiveness of RTSA for PHF, it has become the accepted standard of care in the USA. 19 Therefore, there is an urgent need for high-quality RCTs that compare RTSA with non-operative treatment.
The Nordic Innovative Trials to Evaluate osteoporotic fractures-NITEP collaboration began with a trial of proximal humerus fractures (R10127, NCT01246167). The aim of the present trial is to compare RTSA and non-operative treatment in the treatment of proximal humerus fracture in the elderly. When conducting a randomised controlled multicentre trial, the critical points are the patient recruitment rate and the stability of key personnel. Therefore, a multicentre Nordic collaboration is warranted. In our previous RCT, our collaboration was found to be both reliable and effective in the recruitment of patients. Furthermore, we are confident that the planned Nordic DeltaCon trial is feasible and that it will have an impact on the daily management of these difficult and controversial fractures.
Methods and analysis
This is a prospective, superiority, single-blinded, randomised, controlled, multicentre and multinational trial that will compare RTSA and non-operative treatment in proximal humerus fractures in patients aged 65-85 years with displaced three-part and four-part fractures (B and C types) according to the recent AO/OTA 2018 revision. 20 The trial setting has been drafted in accordance with the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) statements.
The hypothesis of the trial is that RTSA produces a better functional outcome and less pain compared with non-operative treatment at 2 years.
The primary outcome in this study will be the QuickDASH (the short form of Disabilities of the Arm, Shoulder and Hand) score measured at 2 years. Secondary outcomes will be the QuickDASH score after 1, 2 (short term) and 5 years (medium term), general visual analogue scale (VAS) for pain, grip strength, the Oxford shoulder score (OSS), the Constant score (CS) and the number of reoperations and complications. Quality of life will be assessed with 15-D. Cost-effectiveness analysis will be performed after completion of the trial. The questionnaires used in the trial will be repeated after 10 years with those patients who are still reachable. At all time points, the complications and reoperations will be recorded.
Patient selection
The eligible study population will comprise all consecutive patients aged 65-85 years with a proximal humerus fracture diagnosed within 7 days and operated within 14 days of the trauma. The upper age limit was chosen to limit loss to follow-up due to unrelated cause mortality and to exclude those patients with a very high surgical risk. The lower age limit was chosen according to a recent publication by Sebastian-Forcada et al. 16 They found that RTSA had better outcomes in patients >70 years of age compared with HA. Additionally, the complication rate with RTSA was found to be lower than for HA, and thus we decide that it would be safe to set the lower age limit at 65 years. In another RTSA trial that included patients aged 65 years or over, 21 the number of adverse events (AEs) after interim analysis was less than reported in the literature, which also supports the lower age limit.
The research nurse will be notified of any patients screened for the trial who decline to take part or are excluded from the randomisation. The nurse will then complete a patient information form in order to collect the total number of patients screened. 16 Patients who decline to take part in the trial will be asked to join the follow-up cohort, allowing us to evaluate external validity.
The following criteria will be used throughout the study for patient selection.
Inclusion criteria:
► Low-energy AO/OTA group 11-B1.1, 11-B1.2 and 11-C1.1, 11-C3.1. Both B and C types include the subgroups displaced, 2 impacted 3 or non-impacted 4 from the universal modifiers list.

Exclusion criteria:
Radiographic
► Mal-inclination less than varus 30° or valgus 45°.
► Less than 50% contact between head fragment and metaphysis/diaphysis.
► Head split fractures (group 11-C3.2 and 11-C3.3) with >10% of the articular surface in the main head fragment.
► Dislocation or fracture dislocation of the gleno-humeral joint.
► Pathological fracture.
► Glenoid abnormality (retroversion >15°; glenoid fracture; cuff arthropathy).
General
► Refuses to participate in the study.
► Aged <65 years or >85 years.
► Serious poly-trauma or additional surgery.
► Non-independent, drug/alcohol abuse or institutionalised (low co-operation).
► Contraindications for surgery (severe cardiovascular, pulmonary or neurological comorbidity).
► Does not understand written and spoken guidance in local languages.
► Previous fracture with symptomatic sequelae in either shoulder.
► Patients living outside the hospital's catchment area.

Randomisation
Patients will be randomised using a random number matrix in block allocation fashion. The blocks will be stratified by age (65-75 years and >75 years), since age has been shown to be associated with the main outcome measure. The treatment allocations from the matrix will be acquired from an online randomisation system (website http://randomize.net), where the researcher logs in after written consent and receives the correct intervention. The physician responsible for the intervention or treatment will not participate in any part of the collection of patient outcomes during the follow-ups. The research coordinator will monitor the study flow. An independent monitoring committee has been established with our previous RCT.
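For illustration, stratified permuted-block allocation of the kind described above can be sketched as follows; the block size of 4 is our assumption, and in the trial itself the allocations are served by the online system (randomize.net) rather than generated locally.

import random

def permuted_blocks(n_blocks, block_size=4):
    # 1:1 allocation of RTSA vs non-operative treatment in shuffled blocks;
    # one such sequence is generated per age stratum.
    seq = []
    for _ in range(n_blocks):
        block = ["RTSA"] * (block_size // 2) + ["non-operative"] * (block_size // 2)
        random.shuffle(block)
        seq.extend(block)
    return seq

allocation = {"65-75": permuted_blocks(20), ">75": permuted_blocks(20)}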
Surgical treatment
Operative treatment will be performed as a daytime procedure by trained and experienced upper extremity surgeons. The surgeons' skills and the number of procedures from each centre will be reported according to the criteria given by the CONSORT Group. 22 The aim of surgical treatment is to restore proper biomechanics, to achieve an optimal range of motion and to minimise patient discomfort. The standardised approach will be the delto-pectoral to minimise any damage to the deltoid muscle. Supraspinatus excision and biceps tenotomy will be performed. A cemented monoblock humeral stem will then be implanted in a neutral version. An important point that will be addressed is to ensure proper tension and stability of the prosthesis. Fixation of the greater and lesser tubercles is important in optimising the ability and strength of internal and external rotation. 23 When necessary, braided polyester suture-cerclages engaging the insertion of the subscapular and infraspinatus tendons, enforced by a bone graft or a 'horseshoe-graft' 24 from the humeral head, will be used. If the surgical neck fracture extends further distally than the humerus metaphysis, a diaphyseal cerclage will be applied to prevent further diaphyseal fracturing. Finally, to reduce the risk of radiographic 'scapular notching', the largest glenosphere will be used to secure an inferior prosthetic overhang with reference to the scapular neck and to reduce the risk of any instability of the prosthesis. 25

Non-operative treatment
Patients in the non-operative group will be immobilised in a sling for 2 weeks before starting self-exercises and instructed physiotherapy. Postoperative treatment differs with respect to the timeline between the surgical treatment group and the non-operative group due to the different degree of stability of a reversed prosthesis and a non-operatively treated displaced fracture (table 1). The elements of physiotherapy will, however, be the same. 21 26

Rehabilitation
In order to achieve as good functional outcomes as possible, the rehabilitation protocols will be standardised in both treatment groups and the patients will be given a written protocol. Patients in both groups will be guided by in-ward physiotherapists and will be given written physiotherapy guidelines for both instructed physiotherapy and self-exercises. After discharge from the hospital, patients will be referred to physiotherapy for further guidance. Patients in the operative group will start exercises from the first postoperative day to reduce haematoma in the 'dead space' created by resection of the supraspinatus tendon and the design of the reverse prosthesis. For a detailed rehabilitation programme, please see table 1.
Follow-up
Patients will visit the orthopaedic outpatient clinic at the hospital for a follow-up visit with the orthopaedic surgeons at 3 months. We recommend a 2-week follow-up for the non-operative group to confirm that the rehabilitation programme can start: an additional radiographic examination to exclude secondary fracture displacement, and instruction by a physiotherapist after removal of the sling. Research visits will take place at 1, 2, 5 and 10 years. During these visits, the QuickDASH, OSS, CS, 15D and plain X-ray examinations will be performed. At the research visit, the patients will be asked to wear a shirt and instructed not to provide information about their treatment group to ensure that the researcher or physiotherapist is blind to the initial treatment.
Should any AE occur at any point during the follow-up, an AE report will be sent to the Tampere research nurse. Patients initially allocated to the non-operative group but later operated on during the trial will be analysed based on the intention-to-treat principle.
At 1-year control in selected sites, patients will be asked for their consent to take part in an additional study. Should patients agree to take part, they will have accelerometer sensors attached by plasters to both upper arms for a week. The sensors will measure 24/7 activity and degree of movement. With these data, we will be able to compare the activity levels of both treatment groups at 1 year after the fracture treatment and to compare the acquired data with the patients' healthy side. (Table 1 details the rehabilitation protocols; all weeks are counted from treatment start. *Gradually increasing mobility, external rotation to neutral position in the first six weeks after surgery. ASA, American Society of Anesthesiologists Physical Status classification system; ROM, range of motion.) A full list of trial assessments and procedures is presented in table 2.
Patients who decline to attend the intervention trial will be asked to join the external follow-up group. This group will be used as external validation; the group content and outcomes will be compared with the allocated intervention and control groups. The treatment will be carried out in line with normal clinical practice, but the patients will have the same follow-up and be asked to fill out the same questionnaires as the allocated patients.
Complications
Complications will be categorised as follows: ► Infection.
The definitions of infection are the following: a. Less serious infection: superficial wound infection with signs of skin inflammation and/or a positive bacterial culture, without need for resurgery. 27 b. Deep infection: any postoperative wound infection or sign of deep infection that calls for resurgery with positive perioperative bacterial cultures, or as defined in the consensus criteria for periprosthetic joint infection (Musculoskeletal Infection Society).
► Non-union.
► Implant failure, including dislocation.
► Painful capsulitis after 6 months.
► Nerve damage.
► Complex regional pain syndrome.
Power analysis
Assuming an effect size of a 14-point difference in the QuickDASH score and an SD of 26.8 points (from the previous Olerud and MacDermid trials), the estimated sample size will be 59 patients per group (delta=14, SD=26.8, alpha=0.05, power=0.8). [28][29][30] With this age group, the anticipated loss to follow-up is set to 30%, resulting in a total of 154 patients in the trial. 31
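The sample-size arithmetic above can be reproduced with a standard two-sample t-test power calculation; this sketch uses statsmodels and follows the protocol's own inflation step (multiplying the total by 1.3 for the anticipated 30% loss to follow-up).

from math import ceil
from statsmodels.stats.power import TTestIndPower

delta, sd = 14.0, 26.8                      # assumed difference and SD of QuickDASH
d = delta / sd                              # Cohen's d, about 0.52
n_group = ceil(TTestIndPower().solve_power(effect_size=d, alpha=0.05, power=0.8))
n_total = 2 * n_group                       # 59 per group -> 118
n_enrolled = ceil(n_total * 1.3)            # with 30% loss to follow-up -> 154
print(n_group, n_total, n_enrolled)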
Statistical analysis
The differences between groups in the main outcome variables will be analysed by t-test when variables are unskewed, and by the Mann-Whitney U test if continuous and skewed. Results will be presented with 95% CIs. Two-way tables with the χ2 test will be used for dichotomous variables. In subgroup analysis, the effect of age, sex, fracture group, smoking, ASA class and premorbidity will be evaluated against the scores and overall quality of life after fracture.
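The group comparisons specified above could be run along these lines; the variable names are illustrative, and the confidence-interval call requires SciPy 1.10 or later.

from scipy import stats

def compare_groups(rtsa, nonop, skewed=False):
    # t-test for unskewed continuous outcomes, Mann-Whitney U otherwise.
    if skewed:
        return stats.mannwhitneyu(rtsa, nonop, alternative="two-sided")
    res = stats.ttest_ind(rtsa, nonop)
    ci = res.confidence_interval(confidence_level=0.95)  # 95% CI, SciPy >= 1.10
    return res.statistic, res.pvalue, (ci.low, ci.high)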
The effect of the treatment using the QuickDASH will be investigated in a multivariate manner. Multivariable analysis will be performed with linear regression analysis, since the outcome variable QuickDASH is normally distributed due to adequately sized groups. The main variable of interest will be the intervention, and age, sex, fracture group, smoking and premorbidity will be used as confounding variables.

Data management plan
Each patient will be assigned a unique trial identification number (TIN), which is matched with the patient's identification. The matching key will be stored in a locked partition on the hospital research server at Tampere University Hospital, Finland, and will only be available to two study nurses. The identification of each patient will only be possible after retrieving the matching key. Throughout the trial, the research data will only be handled with a TIN. The research data will be saved to a database with a NITEP-tailored online patient management programme (Berta) located on a secure research server provided by Tampere University Hospital and approved by the Security Committee of Tampere University Hospital. Only users with a registered account will be able to log in to the system, and registered accounts will be provided by the administrator. The research data saved to the server will contain only anonymous TINs with a set of numbers acquired from the questionnaires; that is, each question is answered with a number. This will ensure the anonymity of each individual patient, and that the identity of the patient will remain secret, even if the server data are revealed to third parties.
All primary and secondary data will be acquired and stored on the study trial server. Data will be entered either by the patient during the control visits (via tablets) or by a researcher or research nurse when the questionnaires are returned by post. The researchers from each hospital will have access to the secure study server where the trial research data are stored. Each researcher will gain access to the data at the end of the trial for further analyses. All variables in the data set will be described, and suitable metadata standards will be used, when available.
The copyright of the trial research data will be owned and created by the collaboration parties. The data will be shared freely among collaboration parties, and all participating researchers will receive access after the trial is completed. Due to confidentiality and legal agreements, public data sharing will be restricted because we only have permission to hold the data in the specific research server, not to transfer data. Under certain circumstances, for example, when a new member joins the collaboration, we will grant access to the data. All data will be saved for 15 years after the end of the trial.
Patient and public involvement in trial
In order to improve patient involvement in the trial, we will interview patients with proximal humerus fracture before the onset of the trial. The aim of the interviews will be to move towards patient-centred medicine by taking into account the goals, preferences and values of patients. We will further involve patients by asking questions at the beginning of the treatment (self-assessment) in order to identify the questions to ask and the outcomes to measure. The interviews will be repeated after 1 and 2 years, and the results (difference or indifference between the primary and follow-up responses) will be reported.
Interim analysis
The external trial board will execute the interim analysis after half of the patients have been recruited. The report will be focused on the number of AEs and will give a recommendation as to whether or not the trial should continue. AEs and serious adverse events (SAEs) will be reported according to the recommendations given by the Consort Group.
AE is defined as follows: 1. For the RTSA intervention group, the primary focus will be postoperative deep infections 27 requiring revision surgery, instability, periprosthetic fracture, radiographic early signs of loosening (within 2 years of the surgery) of the humeral stem or notching of the scapula neck. 2. For the non-operative group, the primary focus will be on the rate of secondary surgery for any reason (eg, non-union, symptomatic avascular head necrosis, osteoarthrosis). Both groups will be monitored for SAEs during the first four weeks after discharge for embolism (cardiac or brain) or death. The research nurse will fill out AE and SAE forms at the 3-month control, if needed.
At the halfway point of the trial (50% of patients recruited), an independent steering committee will evaluate the complication rates and correlate them to the expected rates published in the available literature.
An unexpected high rate of complications in either group will be reported to the project group, who will then decide whether to end the randomisation.
Contingency plan
All participating hospitals have a significant volume of trauma patients, and they provide continuous emergency care and trauma surgery. In addition, all collaboration sites are academic teaching hospitals that are familiar with good clinical research practices. Proximal humerus fractures are common in these hospitals, and thus the infrastructure of the hospitals is highly standardised. For example, all participating hospitals have upper-extremity treatment units. Moreover, all hospitals have the appropriate equipment available, such as an operating environment and facilities for postoperative hospitalisation. All participating institutes have agreed to provide all the equipment and facilities necessary to conduct the trial. Local science centres will provide support in maintaining Good Clinical Practice principles, in assisting in the administration and invoicing for the trials, and in executing trial monitoring.
Recruitment policy
The centres will be encouraged to recruit as many patients as possible. However, if one centre is unable to continuously recruit enough patients annually, it would be impracticable to include the centre's findings in the statistical model, and therefore the centre would be excluded from the trial. The minimum number of patients recruited per year will be five.
Trial schedule
The initial piloting of the trial will begin in Norway in early October 2018, with six sites altogether. The other collaborators will join after the trial protocol has been shown to work flawlessly. During 2019, all sites will begin recruiting, and we estimate that the inclusion process will be completed with full groups after 2 years (end of 2020). Analysis and results of the trial will be published (disseminated) during 2022.
Ethics and dissemination
The ethical approval for the trial has been given by the Regional Committee for Medical and Health Research Ethics, South-East Authority, Norway (2018/476 REK sør-øst D, https://helseforskning.etikkom.no/). All patients included in the trial, and those who declined but were asked to take part in ordinary follow-up, will be asked to give written informed consent. All patient data will be handled anonymously; the results will be published at the group level only, and individual patients will not be identifiable.
In this protocol, we describe the set-up and management of the Nordic Deltacon trial. The efforts to develop new operative techniques during the past years seem to have resulted in no improvement for patients suffering from PHF. With all the excitement around the newest technique, RTSA, we feel there is an urgent need for a large-scale study of the effects of the technique on the outcome, with special attention to complications and reoperations. This will ensure the safe and ethical usage of RTSA in the future.
The absolute strength of the study will be the experience of the study group in handling a large-scale RCT. In this kind of efficacy study, the paramount aspect is the uniformity of patient handling, and it may become a limitation of the study if not taken care of properly. However, based on our previous experience, we have learnt to overcome these pitfalls through regular biannual meetings among the researchers, personnel education sessions and written aftercare and follow-up protocols. Third-party monitoring is essential for checking the trial management and for flagging any missing parts in the data handling. External validity is always a matter of debate in RCTs. Good documentation, pre-trial workshops and continuous discussion among the team will help characterise the patients recruited to the trial.
The previous data show moderate to good outcomes with non-operative treatment for patients with PHF in cases where the fracture parts are in continuity with the stem. Therefore, the usage of RTSA should be limited to only the most severe and displaced cases of PHF until the primary results of this trial are available. | 2019-02-01T14:02:49.641Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "969ddc6d14c10a914558a4efc1b9602281e15779",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/9/1/e024916.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b27ba39277d658809265a91e8095e2ac284c565",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257664634 | pes2o/s2orc | v3-fos-license | A Bioinspired Gelatin–Amorphous Calcium Phosphate Coating on Titanium Implant for Bone Regeneration
Biocompatible and bio‐active coatings can enhance and accelerate osseointegration via chemical binding onto substrates. Amorphous calcium phosphate (ACP) has been shown as a precursor to achieve mineralization in vertebrates and invertebrates under the control of biological macromolecules. This work presents a simple bioinspired Gelatin‐CaPO4 (Gel‐CaP) composite coating on titanium surfaces to improve osseointegration. The covalently bound Gel‐CaP composite is characterized as an ACP‐Gel compound via SEM, FT‐IR, XRD, and HR‐TEM. The amorphous compound coating exhibits a nanometer range thickness and improved elastic modulus, good wettability, and nanometric roughness. The amount of grafted carboxyl groups and theoretical thickness of the coatings are also investigated. More importantly, MC3T3 cells, an osteoblast cell line, show excellent cell proliferation and adhesion on the Gel‐CaP coating. The level of osteogenic genes is considerably upregulated on Ti with Gel‐CaP coatings compared to uncoated Ti, demonstrating that Gel‐CaP coatings possess a unique osteogenic ability. To conclude, this work offers a new perspective on functional, bioactive titanium coatings, and Gel‐CaP composites can be a low‐cost and promising candidate in bone regeneration.
Introduction
Dental titanium implants have been in use in clinical applications for more than 40 years due to their remarkable mechanical properties and bioinert behavior compared with other artificial materials. Despite the many advantageous properties, initial osseointegration and fast healing of bone implants are still important issues in the clinical field. Numerous studies showed that surface composition and topography could play important roles in the ingrowth of such implants and could improve the cell-implant interaction. [1] Generally, depending on the bio-response of the body, implants can be categorized into several groups: bio-tolerant, bio-inert, and bio-reactive. [2] Since the surface properties of the implant primarily govern the biological response to implants, it is critical to engineer the surface of the implants appropriately to achieve the desired surface interaction with the surrounding cells and proteins. [3] Since titanium metal is bio-inert, Ti implants have been designed with bioactive oxide surfaces to promote osteoblast adhesion and proliferation, leading to active bone formation. [4] In general, the methods for titanium surface modification can be divided into different strategies: physical treatments, chemical techniques, or combinations thereof. Commercial treatments like sandblasting, grit blasting, and ion beam-assisted deposition have been carried out to increase the surface roughness to enhance positive cell behavior. [5] Abundant studies showed that surface roughness and surface wettability could influence biomechanical fixation and osteogenic cell adhesion, differentiation, proliferation, and calcification. [6] However, physically treated implants, e.g., sandblasted materials, could pose potential risks of surface corrosion or surface contamination due to the presence of blasting particle remnants. [7] Such particles could induce inflammation at the bone-implant interface and induce osteoclast activity. Unfortunately, these modification techniques demand expensive and bulky instruments to operate. Chemical modifications, such as alkali heat treatment or electrochemical anodization, can activate the titanium surface and thereby attract osteoblast cells. [8] In particular, oxygen plasma is one of the simplest procedures to alter titanium surface chemistry and wettability. [9] Chemical modifications can be feasible for surface modification owing to their low cost and simple procedures.
An ideal coating should not only mimic the bone structure but also establish fast bonding to the host bone, enabling rapid healing and integration. In recent decades, calcium phosphate (Ca-P) coatings have become a promising alternative to enhance metallic implants' osteointegration and improve stress shielding. [10] Hydroxyapatite (HAP), which exhibits physiological compatibility and osteoconductive properties, became the first choice for titanium coatings. Besides, it was assumed that nano-dimensional HAP particles have a higher surface area and play an essential role in facilitating favorable osteogenic cell adhesion and proliferation. [11] Nanometric calcium phosphates like needle-shaped calcium-deficient hydroxyapatite (CDHA) could generate a favorable osteoimmune environment to regulate osteoblast differentiation and osteogenesis. [12] However, the problem with physically bound coatings is their partial or complete detachment by particle abrasion or coating delamination, which could lead to inflammation. [13] Amorphous calcium phosphate (ACP) is another phase in the group of calcium orthophosphates and was reported to be more beneficial for promoting early bone formation and remineralization than highly crystalline HAP. [14] From the "bio-reactivity" point of view, ACP could facilitate bone-like apatite development more efficiently, thus inducing faster bone regeneration. [15] It is reported that ACP and HAP coatings would not dissolve in SBF solution, thus neither weakening the bonding strength nor decreasing the adhesive strength. [16] From the viewpoint of bone formation, many studies supported the theory of ACP precursors in bone mineralization; they showed a disordered ACP as a major component in the newly formed parts of the zebrafish fin bone. Observations in invertebrates also illustrated that the initially deposited ACP transforms into a crystalline mineral phase over time, via deposition inside the gap regions of collagen fibrils directly or delivery as extrafibrillar nanoparticles. [17] However, this is still disputed for vertebrates, since an amorphous phase is hard to detect or observe in the formation of bone minerals by conventional analytical techniques. Nevertheless, recent studies established a successful biomimetic mineralization model in vitro. [18] For example, Andersson et al. offered a possible detailed transformation mechanism of amorphous calcium phosphate spherical particles to apatite platelet-like crystals. [19] All these studies point out the significance of ACP in organism mineralization.
In nature, macromolecules are known to serve as a template to control the growth of calcium phosphate crystals. Imai et al. reported that collagen-derived gelatin molecules could govern the construction of the lattice architecture of dicalcium phosphate. [20] Sommerdijk et al. also demonstrated that collagen promotes the infiltration of ACP into fibers to actively control mineralization, by exploiting polyaspartic acid (pAsp) as an inhibitor. [21] Tay et al. showed that the same is possible using a polycation and pointed out the role of a Gibbs-Donnan equilibrium for collagen intrafibrillar mineralization. [22] A "brick-and-mortar" model of nacre explained the relationship of ACP, HAP, and biological molecules during the aggregation of nanometric apatite: ACP and macromolecules could act as "mortar" to cement the crystallized "bricks" of HAP. [23] Panzavolta et al. showed that the addition of gelatin to calcium phosphate cement could improve its compressive strength to 10.7-14 MPa, compared with 2-4 MPa without gelatin. [24] Thus, gelatin-CaP composites appear to be a promising material with excellent mechanical properties as a coating for a bone implant.
A noncovalent gelatin coating of arterial implants was already reported, [25] as was noncovalent collagen coating of titanium implants, [26] also in the CaP-mineralized form of collagen and gelatin. [27] However, to prevent potential detachment, which is problematic for physically bound coatings, forming covalent bonds between implant and coating is an advantageous strategy. [13] This has indeed already been realized by covalently binding hyaluronan to titanium [28] as well as collagen type 1 via silane chemistry. [29] However, as gelatin is much more water-soluble than collagen and cheaper, we adapt similar chemistry to gelatin towards a low-cost strategy to endow titanium implants with a bioactive property to achieve stable initial osteointegration and rapid bone formation. We fabricated the titanium implant surface coatings by grafting two different silane coupling agents through vapor and liquid methods. Then, in vitro biomimetic mineralization was introduced in synergy with a biocompatible HAP nucleation inhibitor (i.e., pAsp). The underlying mechanism of mineralization has also been investigated to confirm whether the compound is similar to a bone apatite precursor. More importantly, the osteoblast response stimulated by the Gel-CaP coating in vitro was evaluated, showing high expression of osteogenic genes for both Gel-CaP coatings.
Morphologies and Elemental Distribution within the Coatings
In this work, a feasible, simple, and low-cost route has been established to fabricate Gel-CaP coatings on a titanium surface to endow titanium with osteoconductive ability (Figure 1). Briefly, titanium plates are cleaned with piranha solution and pretreated with subsequent oxygen (O2) plasma to alter the surface chemistry towards hydroxyl termination, labeled as Ti-OH. The hydroxyl groups were then chemically reacted with triethoxysilylpropylmaleamic acid or 3-(triethoxysilyl)propyl succinic anhydride, yielding covalent silane attachment; the products are called Ti-TESPMA or Ti-TPSA. Subsequently, gelatin was bound to the carboxyl groups via NHS/EDC activation chemistry, labeled as Ti-TESPMA-Gel or Ti-TPSA-Gel. At last, these plates were immersed in a mineralizing solution to mimic the process of biomineralization for 7 days. The final product is called Ti-TESPMA-Gel-CaP or Ti-TPSA-Gel-CaP. The morphology of the titanium surface was analyzed by SEM after every procedure (Figure 2A). The analysis revealed that Ti displays a porous surface after the surface treatments. The generation of pores is attributed to sub-surface corrosion by the piranha solution treatment. In comparison, the surfaces of the corresponding Ti-TESPMA-Gel or Ti-TPSA-Gel appear to be smoother after the gelatin grafting. In addition, the Gel-CaP coatings were observed to have a porous, rough surface. Rough surfaces have been demonstrated to provide excellent mechanical properties for the interface and a three-dimensional structure for cell adhesion. [30] The insets in Figure 2A and Table S1, Supporting Information, show the EDS results of each compound. The EDS results demonstrated the appearance of silicon signals in Ti-TESPMA or Ti-TPSA and nitrogen signals in Ti-TESPMA-Gel or Ti-TPSA-Gel, confirming the success of the chemical coupling of the silanes and gelatin, respectively. The appearance of calcium and phosphorus in Ti-TESPMA-Gel-CaP or Ti-TPSA-Gel-CaP demonstrated a successful mineralization with calcium phosphate, but the Ca and P mass content was found to be low (Table S1, Supporting Information). Also, calcium phosphate did not form any aggregates on the surface, which indicates that it is incorporated within the gelatin layer. This finding is important since big calcium phosphate particles like HAP on the implant surface have the potential risk of being detached and can lead to inflammation. [31] Moreover, the low solubility product of HAP is considered the bottleneck, and it is reported that HAP cannot dissolve even after nine months' implantation. [32] On the contrary, calcium ions could easily be released from our titanium surface and then participate in the entire life cycle of bone formation. [33] In order to further explore the chemical composition of the coatings, XPS measurements were performed. Figure 2B and Figure S1, Supporting Information, show the XPS survey spectra of the different compounds. The surfaces of Ti-TESPMA-Gel-CaP and Ti-TPSA-Gel-CaP showed the presence of the elements Ti, O, C, N, Si, Ca, and P, in accordance with the EDS mapping results. As expected, Ca and P cannot be observed on Ti-TESPMA-Gel or Ti-TPSA-Gel. Si 2p peaks could be observed at a binding energy of 101.58 eV, indicating Si-O bonds on the surface. Figure S2, Supporting Information, depicts the high-resolution XPS spectra of oxygen (O 1s), calcium (Ca 2p), and phosphorus (P 2p) in the corresponding CaP coating compounds.
As for Ti-TESPMA-Gel-CaP, the binding energy of O 1s is located at 531.2 eV, 531.8 eV, and 532.1 eV, which could be attributed to Ti-O bonds on the surface. P 2p peaks can be observed at 132.8 eV and can be deconvoluted into two separate peaks, 133.3 eV and 132.5 eV, indicating the existence of HPO4 2- and PO4 3-. Furthermore, Ca 2p peaks can be observed at 351 eV and 347.4 eV, which can be associated with calcium present in Ca3(PO4)2. As for Ti-TPSA-Gel-CaP, the O 1s peak is located at 530.2 eV, and the measured P 2p showed a peak at 132.8 eV, which could be attributed to Ti-O and P-O bonds. The high-resolution XPS spectrum of Ca 2p showed two peaks of Ca 2p1/2 (349.8 eV) and Ca 2p3/2 (346.1 eV), which could be assigned to bivalent calcium. [34] The values measured here represent the composition of the first few nanometers of the coating on the surface and might also reflect the situation deeper in the coatings. The detailed surface elemental composition is shown in Table S2, Supporting Information. The content of the different elements (at.%) was determined by fitting the areas under the curves. The data clearly demonstrated that the atomic ratios of Ca/P in Ti-TESPMA-Gel-CaP and Ti-TPSA-Gel-CaP are 1.48 and 1.65, respectively. According to previous research, calcium-deficient hydroxyapatite (CDHA) has Ca/P ratios ranging from 1.5 to 1.667, and the ratio of amorphous calcium phosphate (ACP) lies within 1.2-2.2, which fits the calcium phosphate compound on the surface. [35] In conclusion, the coatings consisted of calcium phosphate.
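As a quick sanity check on the phase assignment, the quoted literature Ca/P ranges can be applied directly to the measured ratios. This is a minimal sketch using only the numbers stated above; the range boundaries are taken verbatim from the cited values.

```python
def candidate_phases(ca_p_ratio: float) -> list[str]:
    """Return all phase labels whose literature Ca/P range contains the ratio."""
    ranges = {"CDHA": (1.5, 1.667), "ACP": (1.2, 2.2)}  # ranges quoted in the text
    return [name for name, (lo, hi) in ranges.items() if lo <= ca_p_ratio <= hi]

# Measured ratios from the XPS fits reported above:
for sample, ratio in {"Ti-TESPMA-Gel-CaP": 1.48, "Ti-TPSA-Gel-CaP": 1.65}.items():
    print(f"{sample}: Ca/P = {ratio} -> consistent with {candidate_phases(ratio)}")
```

As expected from the text, 1.48 falls only in the ACP range, while 1.65 is consistent with both CDHA and ACP.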
Phase Determination of the Mineral Component of the Coatings
FTIR was first used to investigate the calcium phosphate on the surface of the different compounds (Figure 2C; Figure S3B, Supporting Information). Ti-TESPMA and Ti-TPSA each show a signal in the range between 1005 and 894 cm−1, which can be attributed to the Si-O group, while the sharp signal at 1704 cm−1 and the broad signals at ≈1500-1700 cm−1 can be assigned to the carboxyl group. Also, the amount of bound carboxyl groups on Ti-TESPMA can be estimated by titration with toluidine blue O (TBO), which is known to bind quantitatively to carboxyl groups (Figure S4, Supporting Information). First, we used titanium plates with six different concentrations of silane coupling moieties and made a calibration between TBO concentration and absorption at 630 nm. Then, the concentration of the carboxyl groups from the grafted silane coupling agent of each sample on the modified Ti surface was determined via the bound TBO after the TBO had been released by acetic acid. With increasing concentration of the silane coupler on the titanium surface, the concentration of surface-grafted carboxyl groups initially increased (Figure S5, Supporting Information). At a certain point, the concentration of COOH groups reached a maximum of 5.73 μmol cm−2 (6 wt.% silane coupling agent). Theoretically, this corresponds to a maximum of 35 COOH groups per nm² on the Ti-TESPMA surface. However, the concentration of bound carboxyl groups slightly decreased again at higher coupling agent concentrations. These results showed that the silane bound to the surface reaches a maximum concentration despite the increasing TESPMA concentration, indicating that the silane grafting reaction was successful.
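The conversion from an areal concentration to groups per nm² is a one-line use of Avogadro's number; a minimal sketch follows. As a consistency check, ~35 groups per nm² is obtained if the quoted 5.73 value is read in nmol cm−2; taken literally as μmol cm−2 it would give a figure 1000 times larger, so the printed units appear inconsistent.

```python
AVOGADRO = 6.02214076e23  # mol^-1
NM2_PER_CM2 = 1e14        # nm^2 in one cm^2

def groups_per_nm2(surface_conc_mol_per_cm2: float) -> float:
    """Convert an areal concentration (mol cm^-2) to groups per nm^2."""
    return surface_conc_mol_per_cm2 * AVOGADRO / NM2_PER_CM2

# 5.73 nmol cm^-2 reproduces the ~35 groups/nm^2 quoted in the text;
# 5.73 umol cm^-2 would instead give ~34,500 groups/nm^2.
print(f"{groups_per_nm2(5.73e-9):.1f} groups/nm^2")  # ~34.5
```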
Meanwhile, the signal at 1563 cm−1 in Ti-TESPMA-Gel and Ti-TPSA-Gel suggested the presence of a secondary amide, showing that the gelatin molecule was connected to the titanium substrate via a covalent amide bond with the silane coupling agents (triethoxysilylpropylmaleamic acid and 3-(triethoxysilyl)propyl succinic anhydride). As for the mineralized coatings, signals at 1015 cm−1 and 947 cm−1 were observed, which were attributed to the P-O stretching band. [36] Nancy et al. also observed that the second derivative of the ν3 PO4 band changed from 992 to 1015 cm−1 during the progression of ACP in the developing matrix preceding the formation of apatite. [37] Overall, these results indicate that the phase of the mineral component of the coating may be ACP or poorly crystallized HAP.
In addition, giXRD was applied to gain a more detailed insight into the phase composition of the CaP coatings. Figure 2D shows the XRD patterns of the different compounds. Signals at 35, 38.4, 40.2, 52.9, and 62.9° were measured for every compound. These signals correspond to the (100), (002), (101), (102), and (110) planes of Ti, respectively. The XRD patterns of Ti-TESPMA-Gel and Ti-TESPMA-Gel-CaP show similar Ti signals. The lack of additional reflections indicates that the CaP on the surface is either amorphous or present in an amount too low to be detectable in the surface layer. All XRD patterns of the Ti-TPSA compounds (Figure S3A, Supporting Information) were similar to those of the Ti-TESPMA compounds. That means the CaP on the surface of Ti-TPSA-Gel-CaP could also be inferred to be amorphous calcium phosphate rather than poorly crystallized hydroxyapatite, or its amount was too low for detection.
To further study the mechanism underlying the combination of calcium phosphate and gelatin, a 1 wt.% gelatin solution was selected to react with CaCl2 and Na2HPO4 under mineralization conditions similar to those for the surface-bound gelatin layer. TEM analysis showed that the CaP-gelatin composites formed nanoparticle clusters with a hollow structure. Selected area electron diffraction (SAED) confirmed that the CaP-gelatin compound was in amorphous form (Figure 2E). Furthermore, the EDS data in Figure S6, Supporting Information, clearly show the elemental composition of the nanoparticles. Strictly speaking, the nucleation of free calcium phosphate-gelatin composites and the nucleation of calcium phosphate on surface-bound gelatin do not work in exactly the same way. First, some important functional groups of the surface-bound gelatin are not available during nucleation, in contrast to free gelatin. Furthermore, gelatin does not have exactly the same structural features as functional collagen, the organic matrix in bone. However, gelatin, as a denatured form of collagen, still retains some of the functional properties, enabling a relative comparison of the mineralization process. Some researchers suggested that pAsp could inhibit calcium phosphate nucleation in synergy with collagen fibers to control mineralization. [21,38] In that experimental setup, ACP could actively penetrate into the discrete spaces of collagen fibrils and grow in a specific crystalline orientation under the interaction of amino acids. [21] Hence, it appears plausible that the calcium phosphate in the coating of Ti-TESPMA-Gel-CaP or Ti-TPSA-Gel-CaP is in the amorphous form.
Thickness, Surface Roughness, Wettability, and Mechanical Properties of the Surface Layers
To determine the thickness of the covalently attached surface layer, it was first sputtered with gold and then observed by FIB-SEM. Figure 3A shows the SEM images of the FIB cuts, where the approximate thickness of Ti-TESPMA-Gel-CaP was ≈600-800 nm while that of Ti-TPSA-Gel-CaP was ≈200-400 nm. The varying thickness may be attributed to the different concentrations of silane coupling agents and the different methods for grafting them.
In order to measure the approximate amount of gelatin on Ti-TESPMA-Gel, we chose fluorescein isothiocyanate (FITC) to attach to the amine groups of Ti-TESPMA-Gel and then measured the fluorescence intensity. [39] Briefly, gelatin solutions of different concentrations were used to react with EDC-NHS-activated Ti-TESPMA. A calibration was made between different concentrations of FITC solution and absorbance. Thus, bound gelatin concentrations were obtained from the corresponding absorbance of the unbound FITC left in solution. The results showed that the thickness of Ti-TESPMA-Gel was not strongly related to the concentration of the gelatin solutions before binding (Table S3, Supporting Information). The thickness of the gelatin coatings can be calculated from the amount of bound gelatin, yielding approximately 500-1000 nm depending on the concentration of the gelatin solution in the reaction (Table S3, Supporting Information). For the 1% gelatin solution applied in the reaction to prepare Ti-TESPMA-Gel-CaP, the thickness of the swollen gelatin layer is 680 nm. This confirms the experimental result from FIB-SEM, which showed that the thickness of the dry Ti-TESPMA-Gel-CaP layer was in the range of 700-800 nm. Nanometric apatite has been confirmed to offer calcium and orthophosphate ions for the bone "remodeling" process, [35] and our FIB-SEM shows that the coatings have a sub-micron-scale thickness. The unaggregated nano-ACP in the gelatin layer could facilitate new bone formation at the interface of bone and implant.
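Assuming the volume-to-thickness reading formalized in Equations (1)-(3) of the Experimental Section (dry volume V = W/ρ, scaled by the swelling factor and divided by the coated area), the thickness estimate can be scripted as follows. The input mass here is hypothetical, chosen so the output matches the ~680 nm quoted above.

```python
def gelatin_layer_thickness_nm(bound_mass_ug, area_cm2, q_m, rho_g_per_mL=1.35):
    """Swollen-layer thickness via Eqs. (1)-(3): V = W/rho, T = V(1 + Q_m)/S.

    q_m is the equilibrium mass swelling ratio of Eq. (3) expressed as a
    fraction (e.g., 9.0 for 900%).
    """
    volume_cm3 = bound_mass_ug * 1e-6 / rho_g_per_mL  # ug -> g; 1 mL = 1 cm^3
    thickness_cm = volume_cm3 * (1.0 + q_m) / area_cm2
    return thickness_cm * 1e7  # cm -> nm

# Hypothetical: ~9.2 ug of gelatin on 1 cm^2 gives ~68 nm dry; a swelling
# ratio of 9 (900%) brings it to roughly the 680 nm quoted above.
print(f"{gelatin_layer_thickness_nm(9.2, 1.0, 9.0):.0f} nm")
```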
Furthermore, high-resolution AFM images revealed the roughness of the CaP coatings. In Figure 3B, the roughness of the CaP coatings does not seem to change dramatically compared with untreated Ti (Ti-TESPMA-Gel-CaP, 188 ± 91 nm; Ti-TPSA-Gel-CaP, 171 ± 40 nm; compared to Ti, 154 ± 42 nm). It was reported that a micro-nanoscale modification of an implant surface would be advantageous for the binding of a variety of proteins, which can combine with selective receptors of osteoblasts to influence osteoblast proliferation and maturation, eventually leading to new bone formation. [30] In terms of altering the interaction of cells with the surface, surface energy or wettability is reported to be an important factor. [40] The static water contact angle (WCA) was measured for the two Gel-CaP coatings and the control samples. As shown in Figure 3C, the WCA of untreated Ti was nearly 77.06° ± 4.26°. The WCA of Ti decreased significantly due to the piranha solution treatment. After grafting the coatings, the mean WCA of Ti-TESPMA-Gel-CaP and Ti-TPSA-Gel-CaP was 52.47° ± 10.36° and 58.72° ± 17.58°, respectively. This indicates that the surface of the CaP coatings is more hydrophilic and therefore more suitable for osseointegration, attracting more proteins to promote the adhesion of osteoblasts. [40][41] The mechanical properties of Ti-TESPMA-Gel-CaP were investigated by nanoindentation in the dry state (Figure 3D). The values of nanohardness and Young's modulus were calculated and are displayed in Table S4, Supporting Information. Ti-TESPMA-Gel-CaP showed a drop in elastic modulus compared to Ti (i.e., Ti-TESPMA-Gel-CaP 18.20 ± 1.78 GPa and Ti 125.15 ± 15.56 GPa), which is much closer to bone's elastic modulus (7-30 GPa). [42] Stress shielding is a major problem for titanium implants and can easily lead to new bone injuries. [43] In this respect, the Gel-CaP coatings have desirable mechanical properties for osseointegration. Moreover, the ratio of nanohardness (H) to Young's modulus (elastic modulus, E) was calculated to evaluate the elastic strain to failure of the coating. [44] The results showed a higher H/E ratio for Ti-TESPMA-Gel-CaP than for Ti (0.034 compared to 0.022, respectively), which may indicate better wear resistance. Some researchers proposed that HAP or poorly crystalline HAP is responsible for brittleness in load-bearing applications. [45] In this regard, our CaP coatings' mechanical properties were improved by the addition of gelatin. From this viewpoint, surface modification of titanium with covalently bound Gel-CaP could be a promising addition to titanium implants.
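The H/E figure of merit is simple arithmetic on the tabulated values. In the sketch below, the E values are those quoted above, while the H values are back-calculated from the reported H/E ratios and are therefore illustrative rather than measured.

```python
# H values are derived from the reported H/E ratios, not measured numbers.
samples = {
    "Ti": {"E_GPa": 125.15, "H_GPa": 0.022 * 125.15},
    "Ti-TESPMA-Gel-CaP": {"E_GPa": 18.20, "H_GPa": 0.034 * 18.20},
}
for name, v in samples.items():
    # Higher H/E is commonly read as a proxy for elastic strain to failure.
    print(f"{name}: H/E = {v['H_GPa'] / v['E_GPa']:.3f}")
```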
Release of Calcium Ions from the Coatings
Despite a solubility product of only 10−25, amorphous calcium phosphate was reported [46] to release calcium ions in water or body fluid, which bind to acidic proteins and create supersaturation conditions in the surrounding biological fluids for bone mineral nucleation, [35,47] since the solubility product of biological apatite is much lower, at 10−50 for calcium-deficient carbonated HAP. [46] Therefore, the calcium potential was measured in solution to estimate the amount of calcium released from the Gel-CaP surface layers. Figure 4 shows that calcium ions were rapidly released from the Gel-CaP surfaces. The calcium concentration reached 3.37 μmol L−1 after 12 h for Ti-TPSA-Gel-CaP and 5.56 μmol L−1 after 48 h for Ti-TESPMA-Gel-CaP. Later, the Ca concentration for Ti-TESPMA-Gel-CaP reached nearly 16 μmol L−1 after 140 h, while that for Ti-TPSA-Gel-CaP reached 5.5 μmol L−1 after 45 h. This is interesting to note since the Gel-CaP surface layers should, in principle, be the same, yet they exhibit a difference in Ca2+ release after 45 h. This can be explained by the roughly one-third lower surface Ca2+ concentration in Ti-TPSA-Gel-CaP compared to Ti-TESPMA-Gel-CaP (Table S2, Supporting Information). Also, from XPS and EDS, the Ca2+ concentrations in the entire Ti-TPSA-Gel-CaP layer are considerably lower than in Ti-TESPMA-Gel-CaP (Tables S1 and S2, Supporting Information), which could explain the higher Ca2+ release from Ti-TESPMA-Gel-CaP. Nevertheless, it needs to be considered that in both cases Ca2+ was still being released at the end of the measurement, with an even increasing release rate after 100 h for Ti-TESPMA-Gel-CaP. Thus, it can be inferred that both surface layers release Ca2+ even after days, which is advantageous for new bone generation. Interestingly, Xue et al. found that the expression of BSP, a later-phase marker of osteogenic differentiation, was higher in low-Ca2+ media than in high-Ca2+ media. Besides, in vivo results showed that a higher Ca2+ content delays the maturation of cells and allows greater proliferation before maturation is reached, leading to increased bone volume while reaching the same terminal fate. [48] These results were consistent with our subsequent biological experiments: MC3T3 cells cocultured on Ti-TESPMA-Gel-CaP showed better adhesion than on Ti-TPSA-Gel-CaP, while osteogenic markers like BSP were expressed more highly on Ti-TPSA-Gel-CaP than on Ti-TESPMA-Gel-CaP. In this regard, the release of Ca ions is suitable and indeed necessary to maintain the extracellular concentration of calcium and promote bone formation. [49]
Biocompatibility of the Modified Surfaces
In this study, Gel-CaP surface layers of submicron thickness were designed on titanium implants in order to increase osseointegration. Thus, the cell responses and cellular behavior toward the Gel-CaP surface layers were investigated. At first, the viability of MC3T3 osteoblast cells cultured on Ti, Ti-TESPMA-Gel, Ti-TPSA-Gel, Ti-TESPMA-Gel-CaP, and Ti-TPSA-Gel-CaP was quantified with Cell Counting Kit-8 (CCK-8) (Figure 5A). The numbers of cells in the five groups showed an upward trend over seven days. After one day, Ti-TESPMA-Gel-CaP did not show apparent cell growth compared to Ti. From day 3 to day 7 after cell seeding, however, Ti-TPSA-Gel-CaP and Ti-TESPMA-Gel-CaP exhibited higher OD values of cell density than Ti. Concurrently, all Gel-CaP coatings showed a higher OD value than the corresponding gelatin coatings at each observation time.
Moreover, the morphology and adhesion of MC3T3 cells were examined by immunofluorescence (IF) staining and SEM. In Figure 5B, all MC3T3 cells showed good adhesion on Ti, Ti-TESPMA-Gel-CaP, and Ti-TPSA-Gel-CaP. However, the MC3T3 cells that adhered to the Gel-CaP coatings extended many more filopodia and lamellipodia than those on the Ti surface, which indicated better attachment. The number of MC3T3 cells was significantly higher on the Gel-CaP coatings than in the Ti group after three days of cell spreading. Figure 6A,B shows the SEM images of MC3T3 cells cultured on Ti, Ti-TPSA-Gel-CaP, and Ti-TESPMA-Gel-CaP one day after cell seeding. The adhesive area proportion on Ti-TESPMA-Gel-CaP was significantly higher than that on Ti. The SEM images exhibited results similar to the immunofluorescence (IF) results shown above. It is evident that MC3T3 cells on the Gel-CaP coatings were more elongated and spindle-shaped than those on Ti. Vinculin is a ubiquitously expressed actin-binding protein and is used as a marker for cell-extracellular matrix junctions. [50] We also tested the expression of vinculin on Ti, Ti-TESPMA-Gel, and Ti-TESPMA-Gel-CaP. The Gel-CaP surfaces showed a higher expression of vinculin than the other surfaces (Figure S7, Supporting Information). These results further indicated that the Gel-CaP coatings have an excellent capacity for promoting initial cell adhesion and cell proliferation.
Osteogenic Ability of the Coatings
Furthermore, the osteogenic ability of the Gel-CaP coatings was also studied by polymerase chain reaction (PCR) and alkaline phosphatase (ALP) staining. The osteogenic genes ALP, Runt-related transcription factor 2 (RUNX2), Osterix (OSX), bone morphogenetic protein 2 (BMP-2), bone sialoprotein (BSP), and osteocalcin (OCN) were investigated by reverse transcription-polymerase chain reaction (RT-PCR) at days 4 and 7 (the primer sequences used are shown in Table S5). The Gel-CaP surfaces showed remarkably increased expression over time compared with Ti. Furthermore, the expression of RUNX2, ALP, and OSX increased significantly on Ti-TESPMA-Gel-CaP compared with Ti on day 4. On day 7, the expression of RUNX2, ALP, OSX, BSP, and BMP-2 rose more on Ti-TESPMA-Gel-CaP than on Ti (Figure 6C). Most genes expressed on Ti-TPSA-Gel-CaP increased remarkably more than on Ti on days 4 and 7 (Figure S8, Supporting Information). Besides, BMSCs have been considered the progenitor cells for skeletal tissues. [51] We utilized BMSCs to complement the conclusion that Gel-CaP coatings could accelerate MSC differentiation at the mRNA level. The results showed that the expression of ALP, RUNX2, OCN, BMP-2, BSP, and OSX increased significantly in the Ti-TESPMA-Gel-CaP group compared with the Ti group on day 7 (Figure S9, Supporting Information).
To some extent, ALP activity is an early marker of osteoblast differentiation, since ALP can provide phosphate at the early stage of mineralization. RUNX2 is an osteoblast-specific transcription factor that plays a central role in osteoblast differentiation, and the expression of RUNX2 is upregulated in immature osteoblasts. [52] OSX is a zinc-finger-containing transcription factor located downstream of RUNX2, responsible for osteoblast differentiation and bone mineralization. [53] BMP-2 occupies an essential position in stimulating the differentiation of mesenchymal cells into osteoblasts and can be detected throughout osteoblast differentiation. [54] BSP, as a matricellular protein, can further increase hydroxyapatite nucleation during bone mineralization and can be regarded as a marker of the later stage of osteogenesis. OCN can likewise be regarded as a final-stage osteoblastic differentiation marker. Mahamid et al. found that ACP is a major component of the forming fin bones of zebrafish, [17] and Lotsari demonstrated a detailed transformation mechanism of ACP to apatite platelet-like crystals. [19] Besides, several hypotheses have been proposed to explain the mechanism by which ACP infiltrates collagen fibrils, such as electrostatic interaction [21] and the balance between electroneutrality and osmotic equilibrium. [38] Accumulating evidence indicates a transformation from ACP to HAP in the nucleation and growth of mineral crystals in bone. Our results showed that the Gel-ACP coatings could release Ca ions, which means ACP is more soluble and flexible for reorganization and fusion. Moreover, ALP, BSP, and OCN showed higher expression on the Gel-ACP coatings than on Ti. This may be attributed to ACP's efficiency in mineralization and ease of delivery. Taken together, this means that the Gel-CaP coatings can improve the osteogenic ability of osteoblasts throughout the entire lifecycle of the cells. [55] ALP staining also showed the difference between the Gel-CaP coatings and Ti directly. Figure 6D demonstrates that the Gel-CaP coatings showed deeper staining and a larger stained area than Ti. Figure S10, Supporting Information, shows the upregulation of RUNX2 and OSX by the Gel-CaP coatings. These results are analogous to those of the PCR analysis. To conclude, our Gel-CaP coatings could greatly impact the osteogenic activity of osteoblasts through the nanometric CaP, surface wettability, roughness, and the release of calcium ions.
Conclusions
This work demonstrated a facile method to fabricate Gel-CaP coatings on titanium implants to mimic the initial biological bone apatite. We adapted a literature-reported strategy [29] for silane-mediated collagen coupling to titanium surfaces to gelatin and mineralized the covalently bound, submicron-thick surface layers with ACP, which is an advantageous precursor for remodeling to bone due to its much higher solubility (a factor of 10^25 in the solubility product) compared to HAP.
This should lead to rapid bone formation at the interface of bone and implant. To the authors' knowledge, this is the first time a Gel-ACP composite coating has been successfully fabricated on titanium implants. Titration measurements showed that the amorphous phase in the coatings facilitates calcium release. The excellent MC3T3 cell viability and the higher expression of osteogenic genes in osteoblasts demonstrated a "bio-active" response at the interface, which can be attributed to the calcium release. In addition, the coating proved to exhibit advantageous nano-topography, good surface roughness, and high wettability, further promoting cell adhesion. On the other hand, the mechanical properties of the surface layer are rather close to those of natural bone, so that the covalently bound surface layer might prevent the "stress shielding" observed for conventional titanium implants. Therefore, the reported covalently bound Gel-CaP surface layer is a significant improvement for commonly used titanium implants, with much better bioactivity towards osseointegration.
Experimental Section
Preparation of Substrates: Commercially pure titanium discs (10 × 10 × 2 mm, L × W × H) were used as substrates. The titanium discs were first ground and polished with grinding paper from 800 to 4000 grit and then ultrasonically cleaned in acetone, ethanol, and distilled water. After drying with nitrogen, the titanium discs and silicon wafers were immersed in piranha solution, a 2:1 (v/v) mixture of concentrated H2SO4 (Sigma-Aldrich, Germany) and 30% H2O2 (Sigma-Aldrich, Germany), at 25 °C for 1 h. The samples were then rinsed with distilled water three times and dried in an N2 atmosphere at room temperature. These samples were labeled as Ti.
Biomimetic Surface Preparation: In this study, triethoxysilylpropylmaleamic acid (TESPMA; Gelest, Germany) and 3-(triethoxysilyl)propyl succinic anhydride (TPSA; Sigma-Aldrich, Germany) were grafted onto the titanium oxide or silicon wafer surface in different ways. The silane solution was prepared with 0.1 g TESPMA in 10 mL of absolute ethanol (TCI, Germany) containing 1% acetic acid. Titanium plates were then soaked in this solution for 1 h at room temperature. The samples were subsequently heated at 120 °C for 2 h and labeled as Ti-TESPMA.
As for Ti-TPSA, surface silanization was carried out using the vapor method. The Ti, N,N-diisopropylethylamine (20 μL), and TPSA (60 μL) were placed inside a desiccator and left for 2 h under vacuum. The samples were then heated in an oven at 120 °C for 2 h. Afterward, they were immersed in a 93.75 wt.% aqueous ethanol solution, followed by heat treatment at 90 °C for 1 h (labeled as Ti-TPSA).
Subsequently, Ti-TESPMA or Ti-TPSA was immersed in a solution of 70 mM EDC and 28 mM NHS in 50 mM MES buffer (pH = 5.0) at room temperature for 2 h according to the literature. [56] After the titanium samples were rinsed with distilled water three times, 10 mM HEPES buffer containing 1 wt.% gelatin (pH = 7.4) was used to soak the titanium samples at room temperature for 1 h. Afterward, the samples were washed with distilled water and dried under N2 flow (labeled as Ti-TESPMA-Gel or Ti-TPSA-Gel, respectively). For each titanium sample, a CaCl2 (1.35 mM in water) solution in HEPES buffer was prepared. The Ti-TESPMA-Gel or Ti-TPSA-Gel was immersed into the solution, followed by the addition of pAsp (10 μg mL−1 in water) and Na2HPO4 (1.35 mM in water) via a peristaltic pump (3 mL min−1). All procedures were maintained at a pH of 9 and finished after ≈15 min. Finally, the sample solution was placed inside an oven set to 37 °C for 7 days. The samples were labeled as Ti-TESPMA-Gel-CaP or Ti-TPSA-Gel-CaP.
Surface Characterization of Gel-CaP Coatings: The morphology of the coatings was observed by scanning electron microscopy (SEM; Zeiss Gemini500, Germany), and energy dispersive X-ray spectroscopy (EDS) was performed with an Oxford Instruments X-max detector. The X-ray photoelectron spectroscopy (XPS) spectra were obtained with a PHI Quantera SXM equipped with an aluminum anode (15 kV, 1486.6 eV) and a quartz monochromator to analyze the surface composition and chemical state. During the process, the pressure was kept below 2 × 10−7 Pa. Spectral analysis was performed with CasaXPS software, including Shirley background subtraction and peak separation adopting mixed Gaussian-Lorentzian functions in a least-squares curve fitting program. Contact angle measurements were performed with a surface contact angle instrument (DSA25, Krüss, Germany). Three different samples were analyzed (two water drops per sample). Ten measurement points were taken for each sample, and the reported WCA is the average of all values obtained. The surface hardness was measured with a nanomechanical test instrument (Hysitron Ti980 TriboIndenter, Germany) with a 2 N load on six random locations of the sample surface, and the average value was calculated. Fourier transform infrared (FT-IR) spectroscopy (Perkin Elmer Spectrum 100, Germany) was applied to study the vibrational modes of the coatings in the infrared region of 4000-550 cm−1. The phase composition was recorded by X-ray diffraction (XRD; Bruker D8 Discover, Germany). Data were collected at room temperature over a 2θ range of 0-65° with an incident angle of 1.5° and a step time of 10 800 s. For STEM-EDS analysis, a 10 mM HEPES solution containing 1 wt.% gelatin (pH = 7.4) was prepared. CaCl2 (1.35 mM in water) and pAsp (10 μg mL−1 in water) were added to the gelatin solution, followed by the slow addition of Na2HPO4 (1.35 mM in water) while maintaining a pH of 9. Stirring at 37 °C was maintained for 7 days. Afterward, the solution was diluted 100 times. 100 μL of the suspension was drop-cast onto a carbon-coated copper grid and dried in air for high-resolution transmission electron microscopy characterization (TEM; JEM-2200FS, JEOL, Japan). The morphologies of the gelatin-CaP compound were examined using scanning transmission electron microscopy (STEM; JEOL-2200FS, Japan) at an accelerating voltage of 200 kV. TEM-energy dispersive X-ray analysis (TEM-EDX) was performed to measure the calcium (Ca) and phosphorus (P) contents of the CaP-gelatin composite.
Thickness and Roughness of Gel-CaP Coatings: Atomic force microscopy (AFM; JPK, Japan) was applied to measure the three-dimensional morphology and roughness of the Gel-CaP coating. The thickness of the Gel-CaP coatings was examined by focused ion beam-scanning electron microscopy (FIB-SEM; Zeiss, Germany). The acceleration voltage and current were set to 30 kV and 50 pA at a normal incidence angle, respectively. Finally, the etched section was polished with a 100 pA current to achieve a clearer cross-section, and the thickness was measured.
Quantification of the Concentration of Surface-Grafted Silane on Ti-TESPMA: Toluidine blue O (TBO) staining was applied to determine the grafting concentration of the silane coupling agent. First, UV spectroscopy (Varian Cary 50 spectrometer, Germany) was used to measure the optical density of TBO solutions of different known concentrations at 630 nm, and a calibration curve was generated. Samples prepared with several known concentrations of silane coupling agent were used. An aqueous solution of TBO (0.5 mM) was prepared and adjusted to a pH of 10 by adding 0.1 mM NaOH. Then, 1 mL of TBO solution was added to each 1 cm² of Ti-TESPMA. After 5 h at room temperature, the COOH groups of Ti-TESPMA had coordinated with TBO. Residual compound was removed by rinsing with 0.1 mM NaOH solution. 1 mL of 50% (v/v) acetic acid was used to desorb the coordinated TBO from the surface. The amount of COOH groups was calculated from the optical density of the TBO dye at 630 nm.
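The TBO quantification boils down to a linear calibration fit, its inversion, and a volume/area normalization. A minimal sketch follows, with hypothetical calibration points standing in for the measured absorbances.

```python
import numpy as np

# Hypothetical calibration points (TBO concentration in mM vs. A630);
# replace with the measured values. Beer-Lambert predicts linearity.
conc_mM = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.50])
a630 = np.array([0.09, 0.18, 0.37, 0.55, 0.73, 0.92])
slope, intercept = np.polyfit(conc_mM, a630, 1)  # least-squares line

def tbo_conc_mM(absorbance: float) -> float:
    """Invert the calibration line: absorbance -> TBO concentration (mM)."""
    return (absorbance - intercept) / slope

# One desorbed TBO molecule corresponds to one COOH group, so the areal
# density follows from concentration x desorption volume / stained area
# (mM * mL = umol).
a_sample, volume_mL, area_cm2 = 0.41, 1.0, 1.0  # hypothetical reading
print(f"COOH density ~ {tbo_conc_mM(a_sample) * volume_mL / area_cm2:.2f} umol/cm^2")
```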
Quantification of the Theoretical Thickness of Ti-TESPMA-Gel: The optical density of various concentrations of fluorescein isothiocyanate (FITC) was first measured with UV spectroscopy (Varian Cary 50 spectrometer, Germany) at 490 nm to generate a calibration curve. Then, FITC was applied to bind to the amino groups of gelatin. The optical density was measured to obtain the difference before and after immersion of EDC-NHS-activated Ti-TESPMA in different concentrations of FITC-labeled gelatin (0.1%, 1%, 5%, 10% by weight). The bound amount of gelatin on the surface of Ti-TESPMA-Gel could subsequently be inferred, and the specific volume of gelatin could be calculated using Equation (1):

V = W/ρ (1)

where ρ is the relative density of gelatin (ρ = 1.35 g mL−1), V is the volume of bound gelatin, and W is the weight of bound gelatin. The theoretical thickness of the swollen Ti-TESPMA-Gel layer can then be calculated by Equation (2):

T_bound gelatin = V (1 + Q_m)/S_bound gelatin (2)

where T_bound gelatin is the thickness corresponding to the specific volume of gelatin, S_bound gelatin is the surface area of the coating, and Q_m is the equilibrium mass swelling ratio, which was calculated using Equation (3):

Q_m = (m_s − m_p)/m_p × 100% (3)

where m_s and m_p are the swollen and unswollen gelatin masses, respectively.

Release of Calcium Ions by Ti-TESPMA-Gel-CaP or Ti-TPSA-Gel-CaP: pH and calcium concentration were measured using a glass electrode (Metrohm Unitrode flat membrane 6.0256.100) with internal reference and a calcium-selective electrode based on polymer-membrane ion-selective electrodes (ISE, Metrohm 6.0508.110), respectively.
The calcium ISE electrode was calibrated by titration of a 10 mM calcium chloride solution into 10 mL of ultrapure water set at the desired pH (previously adjusted by addition of NaOH), while a gentle stream of nitrogen was flushed over the calibration sample to limit CO2 uptake during the measurement and exclude any unwanted calcium ion binding. Calcium was added via an automated titration setup (a Metrohm 906 Titrando and a Metrohm 800 Dosino dosing unit) operated via the software Tiamo 2.5.
Then, the titanium sample was placed into 10 mL of distilled water, and the calcium potential was monitored for a certain time (2 days or 7 days).
Cell Viability Test and Cell Adhesion Test: The MC3T3-E1 cell line was purchased from ATCC (USA). MC3T3 cells were seeded onto five groups of samples (Ti, Ti-TESPMA-Gel, Ti-TESPMA-Gel-CaP, Ti-TPSA-Gel, and Ti-TPSA-Gel-CaP) at a density of 5000 cells per well. After 1 day, 3 days, 7 days, and 10 days of culture, cell viability was assayed using a Cell Counting Kit-8 (CCK-8; Beyotime, China) according to the manufacturer's instructions. For SEM observation, 5000 MC3T3 cells were cocultured with the different groups. After 24 h, the cells were washed with PBS and fixed with 4% PFA for 30 min. Finally, the samples were washed with tert-butyl alcohol to replace the ethanol and were dried with a vacuum freeze-dryer.
For the adhesion test, 5000 MC3T3 cells were plated onto Ti and Ti-TESPMA-Gel-CaP, respectively. After 1 day and 3 days of culture, the cells were washed with PBS and fixed with 4% PFA for 30 min. They were then stained with phalloidin (Sigma) for 40 min and DAPI for 5 min at 37 °C. After washing with PBS, the cells and materials were observed under confocal laser scanning microscopy (Nikon Ni-U/Fl/Ri2/Elements-D, Nikon, Japan). The images were reconstructed with Imaris software.
Osteogenic Cell Differentiation with Gel-CaP Coating: MC3T3 cells were plated on Ti, Ti-TESPMA-Gel, and Ti-TESPMA-Gel-CaP at a density of 5000 cells per well. After 4 days and 7 days of culture, total RNA was extracted using the RNAiso Plus reagent (Takara Bio, Japan), reverse transcribed using a PrimeScript RT reagent kit with gDNA Eraser (Perfect Real Time) (Takara Bio), and then subjected to qPCR analysis using SYBR Mix and a LightCycler System (Roche, Switzerland) according to the manufacturer's protocol. Fold changes of mRNA were calculated by the 2^−ΔΔCt method after normalization to the expression of a housekeeping gene (GAPDH).
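The 2^−ΔΔCt calculation itself is only a few lines. The sketch below uses hypothetical Ct values for one gene, with GAPDH as the reference as in the protocol above.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt (Livak) method."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one osteogenic gene, GAPDH as reference:
print(f"fold change = {fold_change_ddct(24.1, 18.0, 26.0, 18.1):.2f}")
# A value > 1 indicates upregulation relative to the control surface.
```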
MC3T3 cells were plated on Ti, Ti-TESPMA-Gel, and Ti-TESPMA-Gel-CaP at a density of 10 000 cells per well. After 7 days, an alkaline phosphatase (ALP) staining kit was applied to identify the activity of the osteogenic cells. The 24-well plates were fixed with PFA and washed with PBS. Incubating solution was added to the plates for 60 min and then washed off with PBS. The images were observed by microscopy (Olympus, Japan) and analyzed with ImageJ software.
Statistical Analysis: The data are displayed as mean values with the corresponding standard deviation (SD). For statistical analysis, a one-way ANOVA test was conducted, and values of p < 0.05 were deemed statistically significant.
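The one-way ANOVA described here maps directly onto scipy.stats.f_oneway; the sketch below uses invented OD replicates in place of the measured data.

```python
from scipy import stats

# Hypothetical CCK-8 OD replicates for three of the groups (n = 6 each);
# replace with the measured values.
ti      = [0.52, 0.55, 0.49, 0.53, 0.51, 0.54]
gel     = [0.58, 0.61, 0.57, 0.60, 0.59, 0.62]
gel_cap = [0.71, 0.69, 0.74, 0.72, 0.70, 0.73]

f_stat, p_value = stats.f_oneway(ti, gel, gel_cap)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("At least one group mean differs (p < 0.05).")
```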
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2023-03-23T06:17:30.258Z | 2023-03-21T00:00:00.000 | {
"year": 2023,
"sha1": "4c7765f77772fe76c96b364e394a181e18daccf5",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adhm.202203411",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "026083b2b17b81a7b8dc229d59e8668c9a8b26e5",
"s2fieldsofstudy": [
"Materials Science",
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16500124 | pes2o/s2orc | v3-fos-license | Hemispheric Asymmetry of Human Brain Anatomical Network Revealed by Diffusion Tensor Tractography
The topological architecture of the cerebral anatomical network reflects the structural organization of the human brain. Recently, topological measures based on graph theory have provided new approaches for quantifying large-scale anatomical networks. However, few studies have investigated the hemispheric asymmetries of the human brain from the perspective of the network model, and little is known about the asymmetries of the connection patterns of brain regions, which may reflect the functional integration and interaction between different regions. Here, we utilized diffusion tensor imaging to construct binary anatomical networks for 72 right-handed healthy adult subjects. We established the existence of structural connections between any pair of the 90 cortical and subcortical regions using deterministic tractography. To investigate the hemispheric asymmetries of the brain, statistical analyses were performed to reveal the brain regions with significant differences between bilateral topological properties, such as degree of connectivity, characteristic path length, and betweenness centrality. Furthermore, local structural connections were also investigated to examine the local asymmetries of some specific white matter tracts. From the perspective of both the global and local connection patterns, we identified the brain regions with hemispheric asymmetries. Combined with the previous studies, we suggested that the topological asymmetries in the anatomical network may reflect the functional lateralization of the human brain.
These previous studies examined the structural or functional asymmetries of some specific brain regions or the anatomical connections between them. The asymmetries of the gray matter or white matter were analyzed in terms of local regional attributes. Recently, the network model has been proposed as a useful tool for investigating the structural organization and functional mechanisms of the human brain [22][23][24][25][26][27][28][29][30][31]. Graph-theoretical approaches to the analysis of complex networks provide a powerful new way of quantifying the brain's structural and functional systems. With network models, more and more studies have revealed that the structural and functional networks of the human brain exhibit small-world attributes [23-26, 28, 30, 32-34] and modular structure [35][36][37]. Using diffusion MRI, several studies have proposed different methods to construct the brain anatomical network [25,32,33,36,38,39]. All these studies revealed that the cortical networks of the human brain have a "small-world" topology, which is characterized by large clustering coefficients and a short average path length [30,40]. However, few studies have examined the hemispheric asymmetries from the perspective of the cerebral anatomical network, and little is known about the asymmetry of the connection patterns of brain regions, which may reflect the functional integration and interaction between different regions.
In this study, we first constructed the anatomical network for each subject using the deterministic diffusion tensor tractography (DTT) technique, and then we applied graph theory approaches to examine the topological properties of the bilateral brain regions of the network. To investigate the hemispheric asymmetries of the brain, statistical analyses were performed to reveal the brain regions with significant differences between bilateral topological properties. Furthermore, local structural connections were also investigated to examine the local asymmetries of some specific white matter tracts. From the perspective of both the global and local connection patterns, we identified the brain regions with hemispheric asymmetries.
Subjects.
This study included 72 healthy adult subjects (42 males; mean age 23.4 ± 3.7 years; mean years of education 13.5 ± 4.7 years). All participants were right-handed according to the Edinburgh handedness inventory [41]. Each participant provided written informed consent before the MRI examinations, and this study was approved by the Medical Research Ethics Committee of Xuanwu Hospital of Capital Medical University.
Data Acquisition.
DTI was performed with a 3T Siemens Trio MR system using a standard head coil. Head motion was minimized with restraining foam pads provided by the manufacturer. Diffusion-weighted images were acquired employing a single-shot echo planar imaging (EPI) sequence in alignment with the anterior-posterior commissural plane. The integrated Parallel Acquisition Technique (iPAT) was used with an acceleration factor of 2; acquisition time and image distortion from susceptibility artifacts are reduced by the iPAT method. The diffusion sensitizing gradients were applied along 12 nonlinear directions (b = 1000 s/mm2), together with an acquisition without diffusion weighting (b = 0 s/mm2). The imaging parameters were 45 continuous axial slices with a slice thickness of 3 mm and no gap, field of view = 256 mm × 256 mm, repetition time/echo time = 6000/87 ms, and acquisition matrix = 128 × 128. The reconstruction matrix was 256 × 256, resulting in an in-plane resolution of 1 mm × 1 mm. For each participant, a sagittal T1-weighted 3D image was also collected using a magnetization prepared rapid gradient echo (MP-RAGE) sequence. The imaging parameters for this were a field of view of 22 cm, repetition time/echo time = 24/6 ms, flip angle = 35°, and voxel dimensions of 1 mm × 1 mm × 1 mm.
Data Preprocessing.
Eddy current distortions and motion artifacts in the DTI dataset were corrected by applying affine alignment of each diffusion-weighted image to the b = 0 image, using FMRIB's Diffusion Toolbox (FSL, version 3.3; http://www.fmrib.ox.ac.uk/fsl). After this process, the diffusion tensor elements were estimated by solving the Stejskal and Tanner equation [42,43], and then the reconstructed tensor matrix was diagonalized to obtain three eigenvalues (λ1, λ2, and λ3) and eigenvectors. The fractional anisotropy (FA) of each voxel was calculated according to the following formula:

FA = √(3/2) × √(((λ1 − λ̄)² + (λ2 − λ̄)² + (λ3 − λ̄)²)/(λ1² + λ2² + λ3²)), where λ̄ = (λ1 + λ2 + λ3)/3.

DTT was implemented with DTIStudio, Version 2.40, software (http://www.mristudio.org), by using the "fiber assignment by continuous tracking" method [44]. All tracts in the dataset were computed by seeding each voxel with FA greater than 0.2. Tractography was terminated if the tract turned by an angle greater than 50 degrees or reached a voxel with FA less than 0.2 [45].
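The FA formula above is the standard one and translates directly into code. This is a minimal sketch; the example eigenvalues are typical white-matter values, not taken from the study data.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA from the three diffusion-tensor eigenvalues."""
    lm = (l1 + l2 + l3) / 3.0  # mean diffusivity
    num = (l1 - lm) ** 2 + (l2 - lm) ** 2 + (l3 - lm) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den > 0 else 0.0

# Typical anisotropic white-matter eigenvalues (mm^2/s), not study data:
print(f"FA = {fractional_anisotropy(1.7e-3, 0.3e-3, 0.2e-3):.2f}")  # ~0.84
```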
Construction of Anatomical Network.
We constructed the anatomical network for each subject based on the fiber connectivity from deterministic DTT. The main procedures are as follows. First, the brain was automatically segmented into 90 cortical and subcortical regions (45 for each hemisphere; see Table 1) through the AAL template (for details, see [32]). Briefly, the individual T1-weighted images were coregistered to the b0 images in the DTI space. The transformed T1 images were then nonlinearly transformed to the ICBM152 T1 template in the Montreal Neurological Institute (MNI) space. Inverse transformations were used to warp the AAL atlas from the MNI space to the DTI native space. Using this procedure, we obtained 90 cortical and subcortical ROIs, each representing a node of the network. Second, all fibers in the brain were obtained by deterministic fiber tractography. Two nodes u and v were then connected by an edge if there existed at least three fibers with end points in regions u and v; the threshold of three fibers was chosen to ensure that the average size of the biggest connected component of the network remains 90 across all subjects. The number of fibers between regions was only used to indicate the existence/absence of an edge. Therefore, the binarized anatomical network for each subject was constructed and represented by a symmetric 90 × 90 matrix.
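The construction step reduces to thresholding a 90 × 90 fiber-count matrix at three streamlines and symmetrizing. A minimal sketch follows, with random counts standing in for real tractography output.

```python
import numpy as np

def binarize_connectome(fiber_counts: np.ndarray, threshold: int = 3) -> np.ndarray:
    """Build the 90x90 binary adjacency matrix from streamline end-point counts.

    fiber_counts[u, v] is the number of reconstructed fibers with one end
    point in region u and the other in region v; an edge exists if at least
    `threshold` fibers connect the two regions (threshold = 3 in the text).
    """
    adj = (fiber_counts >= threshold).astype(np.uint8)
    np.fill_diagonal(adj, 0)       # no self-connections
    return np.maximum(adj, adj.T)  # enforce symmetry (undirected network)

# Toy usage with random counts in place of real tractography output:
rng = np.random.default_rng(0)
counts = rng.poisson(2.0, size=(90, 90))
A = binarize_connectome(counts)
print(A.shape, A.sum() // 2, "edges")
```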
Network Analysis.
We investigated the topological properties of the anatomical network at the regional (nodal) level.
Regional properties were described in terms of the degree (k_i), shortest path length (L_i), and betweenness centrality (b_i) of node i. Here, we provide brief, formal definitions of each nodal property used in this study.
Degree.
The degree k_i of a node i is defined as the number of connections to that node; highly connected nodes have a large degree. The degree of a graph is the average of the degrees of all nodes in the graph,
$$K = \frac{1}{N}\sum_{i=1}^{N} k_i,$$
which is a measure of the sparsity of a network.
Shortest Path
Length. The mean shortest path length of a node i is
$$L_i = \frac{1}{N-1}\sum_{j \ne i} d_{ij},$$
in which d_{ij} is the smallest number of edges that must be traversed to make a connection between node i and node j. The characteristic path length of a network is the average of the shortest path lengths over all nodes,
$$L = \frac{1}{N}\sum_{i=1}^{N} L_i,$$
and quantifies the ability of parallel information propagation, or the global efficiency (in terms of 1/L), of a network [47].
Betweenness Centrality.
Betweenness centrality is widely used to identify the most central nodes in a network, those that act as bridges between the other nodes. The betweenness B_i of a node i is defined as the number of shortest paths between pairs of other nodes that pass through node i [48,49]. The normalized betweenness was then calculated as b_i = B_i / B̄, where B̄ is the average betweenness over all nodes of the network.
The nodes with the largest normalized betweenness values were considered as pivotal nodes (i.e., hubs) in the network.
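A compact way to compute the three nodal properties from a binary adjacency matrix is sketched below using NetworkX. Normalizing betweenness by the network average is our reading of the text, as noted above, and the sketch assumes a fully connected network (which the fiber threshold was chosen to guarantee).

```python
import numpy as np
import networkx as nx

def nodal_properties(adj):
    """Return degree k_i, mean shortest path length L_i, and
    normalized betweenness b_i for every node of a binary network."""
    g = nx.from_numpy_array(np.asarray(adj))
    n = g.number_of_nodes()
    k = dict(g.degree())
    # L_i: average topological distance from node i to all other nodes
    L = {i: sum(nx.single_source_shortest_path_length(g, i).values()) / (n - 1)
         for i in g.nodes()}
    B = nx.betweenness_centrality(g, normalized=False)
    mean_B = np.mean(list(B.values()))
    b = {i: (Bi / mean_B if mean_B > 0 else 0.0) for i, Bi in B.items()}
    return k, L, b
```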
Reconstruction of White Matter Tracts.
To further investigate the local asymmetries of the structural connections, we then reconstructed several major white matter tracts connecting different brain regions. Based on anatomical knowledge of fiber projections, several studies have suggested tracking protocols for the major white matter tracts [21,50,51]. According to the published tracking protocols, we reconstructed the bilateral cingulum bundles (CB), optic radiation (OR), inferior fronto-occipital fasciculus (IFO), inferior longitudinal fasciculus (ILF), arcuate fasciculus (AF), and uncinate fasciculus (UF) for each subject. Based on the reconstructed tracts, the mean FA of each fiber tract was calculated by averaging the FA values across the voxels forming the three-dimensional tracts derived from tractography.
Asymmetry Analysis.
To analyze hemispheric differences in topological properties for brain regions, we computed the laterality ratio LI = (P_L − P_R)/(P_L + P_R) for each property P (k_i, L_i, and b_i), where P_L and P_R denote the values in the left and right hemispheres. We tested whether this ratio differed from zero over the group using a nonparametric one-tailed sign test (p < 0.05 after Bonferroni correction for multiple comparisons, i.e., 90/2 = 45 pairs of regions). To analyze the hemispheric differences in structural properties for white matter tracts, we compared the mean FA values and fiber numbers of each tract between the left and right hemispheres by paired t-tests. For each tract, significant asymmetry of FA or fiber number was defined as p < 0.05. All statistical analyses were performed with Matlab.
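The laterality analysis reduces to a per-subject index and a group-level sign test. The sketch below uses SciPy's exact binomial test to implement the one-tailed sign test; the Bonferroni threshold of 0.05/45 follows the correction described above, and the Matlab original is not reproduced here.

```python
import numpy as np
from scipy.stats import binomtest

N_PAIRS = 45              # 90 regions -> 45 homologous pairs
ALPHA = 0.05 / N_PAIRS    # Bonferroni-corrected threshold

def laterality_index(left, right):
    """LI = (L - R) / (L + R) per subject for one nodal property."""
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    return (left - right) / (left + right)

def sign_test(li, alternative="greater"):
    """One-tailed sign test of median(LI) > 0 (use alternative='less'
    for rightward asymmetry). Ties (LI == 0) are discarded, as is
    conventional for the sign test."""
    li = li[li != 0]
    n_pos = int((li > 0).sum())
    return binomtest(n_pos, n=len(li), p=0.5, alternative=alternative).pvalue
```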
Brain Regions with Hemispheric Asymmetries in Node
Properties. Based on the binary anatomical network constructed for each subject, we calculated the topological properties (k_i, L_i, and b_i) of each node for each subject. Through statistical analyses across all subjects, we revealed some regions with hemispheric asymmetries in nodal properties (Figure 1). We defined leftward asymmetries as better topological properties in the left hemisphere than in the right, that is, larger k_i and b_i and smaller L_i in the left. Similarly, rightward asymmetries were defined as regions with larger k_i and b_i and smaller L_i in the right hemisphere. From the results, we revealed some regions with hemispheric asymmetries in all three topological properties, such as leftward asymmetries in the triangular part of the inferior frontal gyrus, insula, inferior parietal gyrus, and posterior medial cortex (paracentral lobule, precuneus, and posterior cingulate gyrus) and rightward asymmetries in the superior frontal gyrus, hippocampus, superior parietal gyrus, supramarginal gyrus, angular gyrus, and middle temporal pole (p < 0.05 after Bonferroni correction).
Structural Asymmetries of the White Matter Tracts.
For each subject, we successfully reconstructed most of the bilateral white matter tracts (Figure 2(a)). However, the right AF could not be tracked for some subjects (15 out of 72). From Figure 2(b), we can see that several white matter tracts exhibit hemispheric asymmetries in both microstructural (FA value) and macrostructural (fiber number) properties, such as the CB, ILF, and AF (p < 0.05, uncorrected).
Discussion
In this study, we investigated the hemispheric asymmetries of the human brain from the perspective of the cerebral anatomical network constructed from DTI data. By comparing bilateral topological properties, we revealed some brain regions with significant leftward or rightward asymmetries, indicating asymmetric connection patterns of these regions. Moreover, the structural properties of some local white matter tracts also exhibited hemispheric asymmetries, indicating asymmetries of local connections. These findings suggest that the structural organization of the human brain is asymmetric in both its global and local connection patterns. We discuss the functional implications of these structural asymmetries below.
Hemispheric Asymmetries in Node Properties of the Anatomical Network.
Previous studies of hemispheric asymmetries in the human brain have focused on the structures or functions of some local regions [1,2,20,52,53]. In this study, we explored hemispheric asymmetries through the connection patterns between different brain regions, by comparing the topological properties of nodes between the bilateral hemispheres in the anatomical network.
Different nodal properties reflect different aspects of a node's role in the network. In this study, we chose three topological properties, degree, normalized betweenness centrality, and shortest path length, to analyze the hemispheric asymmetries of the anatomical network. Degree is the number of direct connections to a node; a larger degree means more structural connections to other brain regions in the binary anatomical network. Betweenness centrality reflects the importance of a node, and a node with high centrality is thus crucial to efficient communication [48,49]. Shortest path length quantifies the parallel information propagation, or global efficiency (in terms of 1/L_i), of a node, and a smaller L_i means higher global efficiency of parallel information transfer [47]. Although these three properties are interrelated, they reflect different aspects of a node's role in the network. Therefore, a node with a larger degree, a higher centrality, and a smaller shortest path length plays a more important role in the network. We therefore suggest that the asymmetric properties of regions in the bilateral hemispheres indicate the lateralization of these regions in the anatomical network.
Having identified the regions with hemispheric asymmetries in nodal properties, we categorized these regions by their functions as follows.
From the results, we can see that regions with leftward asymmetries are mainly related to language, visual processing, and sensory functions. Regions with rightward asymmetries are mainly related to the functions of spatial attention, face recognition, emotion, and memory. Some regions in the association cortex with multiple functions also exhibit leftward or rightward asymmetries in the nodal properties.
Combined with the findings of previous studies, we speculate that the topological asymmetries in the anatomical network are likely to form the structural substrate of the different functional principles of information processing in the two hemispheres. Since Paul Broca's discovery in 1861, the notion of left hemisphere specialization for language has been established [11]. More recently, a growing number of studies have revealed leftward asymmetries in the size of regions involved in language and auditory processing, such as the planum temporale [4][5][6][7], sylvian fissure [8], and Heschl's gyrus [4,9,10], which may supply the anatomic basis of functional lateralization in the human brain. In this study, the most obvious finding is the leftward asymmetry of several language-related regions, such as the triangular and orbital areas of the inferior frontal gyrus, the middle and inferior temporal gyrus, and Heschl's gyrus. These results dovetail with the prevailing notion that the left hemisphere is the dominant hemisphere for language [75,76]. They may also suggest that not only the structural asymmetries of these regions but also their asymmetric connection patterns support the functional lateralization of the human brain.
Another finding is the rightward asymmetry of the angular and supramarginal gyri, which are located in the temporoparietal junction. The right angular and supramarginal gyri have been widely implicated in spatial attention [67][68][69]. Previous functional MRI studies using functional connectivity analysis have also revealed that the right temporoparietal junction plays a dominant role in the actual implementation of spatial attention [77,78]. Therefore, this result corresponds with the right hemispheric specialization for visuospatial functions [75,[79][80][81].
Some subcortical structures in the limbic system, such as the hippocampus and amygdala, also showed rightward asymmetries in topological properties. The hippocampus plays an important role in memory and spatial navigation [68,74], and the amygdala performs a primary role in the processing of memory and emotion [72,73]. Thus, the rightward asymmetries in the topological properties of the hippocampus and amygdala may suggest that the right hemisphere is more prominent in the functions of emotion and memory. This is also consistent with the findings of structural MRI studies indicating that the hippocampus and amygdala are rightward asymmetric based on volume measurements [82].
Besides the above results, we revealed some regions with hemispheric asymmetries in all three topological properties, such as leftward asymmetries in three regions of the posterior medial cortex (paracentral lobule, precuneus, and posterior cingulate gyrus), the inferior parietal gyrus, and the insula, and rightward asymmetries in the middle temporal pole, superior parietal gyrus, and superior frontal gyrus. Most of these regions are located in the association cortex, which plays a central role in receiving convergent inputs from multiple cortical regions [68]. Notably, three contiguous regions in the posterior medial cortex have been identified as the structural core of the cerebral cortex by a diffusion MRI study [36]. However, no study has yet examined the functional or structural asymmetries of these core regions. This study is therefore the first to reveal the leftward asymmetries of the core regions in the anatomical network.
Of note, abnormal asymmetric patterns of brain structure or function have been implicated in some psychiatric disorders, such as schizophrenia, and the extent of altered asymmetry is related to the symptoms of patients [83]. Therefore, we speculate that the asymmetric topology of brain networks would also change under various mental disease conditions and may serve as a sensitive biomarker for early disease detection, which should be further investigated.
Structural Asymmetries of the White Matter Tracts.
Based on the tractography results of six major white matter tracts, we analyzed the structural asymmetries of these tracts in mean FA values and fiber numbers. Previous DTI studies have identified anatomical asymmetries of some fiber tracts, such as leftward asymmetries of the arcuate fasciculus [2,3,[19][20][21]. As one of the most important language pathways, the arcuate fasciculus starts from Broca's area in the inferior frontal gyrus and projects into the middle and inferior temporal gyrus [84]. In this study, we revealed leftward asymmetries of the AF in both micro- and macrostructural properties. The leftward asymmetry of the AF corresponds with the leftward asymmetries of the triangular area of the inferior frontal gyrus and the middle and inferior temporal gyrus revealed by the topological analysis of the anatomical network. These results suggest that the language-related regions exhibit leftward asymmetries in both global and local anatomical connection patterns. We speculate that these structural asymmetries may provide the anatomical substrate of language-related functional lateralization of the human brain. Besides the leftward asymmetries of the AF, the CB and ILF are also leftward asymmetric in both mean FA values and fiber numbers. The cingulum bundles have been investigated in several previous DTI studies, and leftward asymmetries in fiber integrity were identified by different methods [15,16,85]. The inferior longitudinal fasciculus, which connects the temporal and occipital lobes, plays an important role in visual memory [86,87] and is considered an indirect pathway of language semantic processing [88]. To our knowledge, this is the first time a leftward asymmetry of the ILF has been revealed, and it may provide new information for future studies. The inferior fronto-occipital fasciculus, which connects the posterior occipital areas and the orbitofrontal region, is a direct pathway of language semantic processing [88]. A previous DTI study reported leftward asymmetries in the fiber integrity of the IFO and suggested that the structural asymmetries of this tract correspond with the hemispheric dominance for language [21]. Additionally, we found that the optic radiation and uncinate fasciculus are rightward asymmetric in fiber numbers. However, the functional meaning of these asymmetries should be examined in future studies.
As both global (structural connectome) and local (FA) measures were investigated in the present study, some results can be cross-validated across measures. For example, we found that both the language-related regions and WM tracts exhibited significant leftward asymmetries. However, global and local measures may carry different physiological meanings. A local measure such as FA reflects white matter integrity or the consistency of fiber orientation at the microstructural level, while the nodal properties of the brain connectome, such as nodal efficiency, are integrated metrics of global information flow capacity, related to all of a node's connections, which may consist of a specific tract or several tracts together. Therefore, the findings from network analysis can supply more comprehensive, system-level information than traditional regional and local investigations.
Methodological Issues.
The most essential elements of a network are its nodes and edges. The definition of the nodes and edges has a great effect on the constructed network and the analysis results. Therefore, we need to address some methodological issues concerning how we carried out the network construction.
First, we applied the AAL template to define the nodes for each subject's network. The AAL template was derived from an MNI single-subject brain [46]. The biggest limitation of this template is its failure to capture the anatomical lateralization of some regions, such as the leftward asymmetry of the planum temporale in the vast majority of right-handers [46]. This limitation will affect the results on the topological asymmetries of the anatomical network in this study. In future studies, a finer parcellation, defined at a voxel-population level rather than a regional level so as to partition the cerebral cortex into thousands of regions [36], should be employed to investigate the asymmetries of the brain network, in order to localize the asymmetric topological organization more accurately.
Second, we employed deterministic DTT to define the edges of the anatomical network. However, the "fiber crossing" problem is a limitation of deterministic tractography algorithms, because tracking always stops when it reaches fiber-crossing regions with low fractional anisotropy values [89]. This results in the loss of some existing fibers and hence some edges of the network. Another limitation of deterministic tractography, especially for long-distance fiber bundles, is erroneous tracking due to noise and resolution limitations [89]. To address this issue, several researchers have used probabilistic fiber tracking algorithms [90][91][92]. By modeling a probability distribution of the fiber orientations within a voxel, these statistical methods can identify fiber connections missed by deterministic tracking approaches. However, the number of gradient directions in our diffusion dataset is not sufficient to accurately estimate the probability density function of the fiber orientations. Therefore, future studies with more advanced diffusion imaging techniques or tractography methods could yield a more complete and accurate anatomical network for each subject.
Another issue that needs addressing is the choice between a binary and a weighted network. For a weighted network, a challenge is to decide on the most representative measure of structural connectivity. Several candidate measures, such as fiber number, mean fiber length, fiber density, and mean fractional anisotropy, can be selected as the connectivity measure [25,38,39,93], but the physiological meaning of these measures is unclear. It is also hard to validate which measure describes the information transfer of neural signals most accurately. In this work, we constructed the binary network by considering only the existence or absence of regional connections. However, a weighted network with a proper connectivity measure may better reflect the topological asymmetries of the network.
Besides the above methodological limitations, some other important issues should be investigated in the future. First, as sex effects on the topological organization of brain networks have been suggested [94], sex differences in network asymmetry should be examined. Second, due to the relatively small sample size in the present study, other independent datasets with high-quality MRI acquisition and larger samples, such as the Human Connectome Project (HCP) datasets [95], should be employed to validate the current results.
Conclusion
In this study, we analyzed hemispheric asymmetries from the perspective of the whole-brain anatomical network and revealed topological asymmetries of some brain regions, indicating asymmetric connection patterns of these regions at the global level. Moreover, we found structural asymmetries of some local anatomical connections between regions, and the structural asymmetries of the white matter tracts are interrelated with the topological asymmetries of the brain regions. We speculate that the asymmetric connection patterns of brain regions might reflect the functional lateralization of the human brain. | 2016-05-16T14:03:57.559Z | 2015-10-11T00:00:00.000 | {
"year": 2015,
"sha1": "c90205b08979ca148feef4ebcfcadea3d826719b",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2015/908917.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "de736c9a74614152502da875ee4e257455376dde",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
222177476 | pes2o/s2orc | v3-fos-license | Small data blow-up for the weakly coupled system of the generalized Tricomi equations with multiple propagation speed
In the present paper, we study the Cauchy problem for the weakly coupled system of the generalized Tricomi equations with multiple propagation speeds. Our aim in this paper is to prove a small data blow-up result and an upper estimate of the lifespan of the problem for suitable compactly supported initial data in the subcritical and critical cases of the Strauss type. The proof is based on the test function method developed in [16]. One of our new contributions is to construct two families of special solutions to the free equation (see (2.16) or (2.18)) and prove several of their properties. We also give an improvement of the previous results [10, Theorem 1.1], [12, Theorem 1.1] and [25, Theorems 1.2 and 1.3] on the single equation, that is, we remove the point-wise positivity of the initial data assumed previously.
Setting and backgrounds
In the present paper, we study the Cauchy problem of the weakly coupled system of the generalized Tricomi equations with multiple propagation speeds on the Euclidean space $\mathbb{R}^N$ (1.1), where $N \in \mathbb{N}$ denotes the spatial dimension, $\partial_t := \partial/\partial t$ is the time derivative, $\Delta := \sum_{j=1}^{N} \partial^2/\partial x_j^2$ is the Laplace operator on $\mathbb{R}^N$, $m_1, m_2 \ge 0$ stand for the strength of the diffusion, and $G_1 : \mathbb{R} \to \mathbb{R}_{\ge 0}$ and $G_2 : \mathbb{R} \to \mathbb{R}_{\ge 0}$ are nonlinear functions with $G_1, G_2 \in C^1(\mathbb{R})$ satisfying the assumption that there exist positive constants $a, b > 0$ and $p, q > 1$ such that the estimates
$$G_1(0) = 0, \quad G_2(0) = 0, \quad G_1(\sigma) \ge a|\sigma|^p, \quad G_2(\sigma) \ge b|\sigma|^q \qquad (1.2)$$
hold for any $\sigma \in \mathbb{R}$. The pair of functions $(u, v) : \mathbb{R}^N \times [0, T) \to \mathbb{R}^2$ is the unknown, $f_1, f_2, g_1, g_2 \in C_0^\infty(\mathbb{R}^N)$ are prescribed functions describing the shape of the initial data, $\varepsilon > 0$ is a small parameter measuring the size of the initial data, and $T = T(\varepsilon) > 0$ denotes the maximal existence time of $(u, v)$, which is called the lifespan (see Definition 1.2).
The significance of studying the problem (1.1) mathematically comes from physical problems of transonic gas dynamics (see [4, Chapter 4] for more detail).
Our aim in the present paper is to prove a small data blow-up result and an upper estimate of the lifespan of the problem (1.1) for suitable compactly supported initial data in the subcritical and critical cases of the Strauss type, that is, the case $\Omega_{SS}(N, m_1, m_2, p, q) \ge 0$ for $m_1 \ne m_2$. We also treat the single equation (1.3), where $w = w(x, t) : \mathbb{R}^N \times [0, T) \to \mathbb{R}$ is an unknown function, $m \ge 0$, $\rho > 1$ is the exponent of the nonlinear term, and $f, g \in C_0^\infty(\mathbb{R}^N)$ are given functions. More precisely, we relax the assumption of the positivity of the initial data $f$ and $g$ in a point-wise sense assumed in [10, 12, 25].
Known results
We recall related results on the problem (1.1). There are many mathematical results on the existence or non-existence of solutions and on properties of solutions, such as regularity and time decay, for the problem (1.1) or (1.3) (see [2,3,10,11,12,28,29,30,34,35] and their references). The classical Tricomi equation, first investigated by Tricomi [33], is known as a typical second order partial differential equation of mixed type, which means that the cases $t < 0$, $t = 0$ and $t > 0$ are elliptic, parabolic and hyperbolic, respectively. The fundamental solutions to (1.4) were computed explicitly in [2,3]. Yagdjian [34] studied the linear generalized Tricomi equation of hyperbolic type with an external force $e : \mathbb{R}^N \times [0, \infty) \to \mathbb{R}$ (1.5), derived an explicit formula for the fundamental solution to this problem, and proved $L^p$-$L^q$ time decay estimates for the solution. Ruan, Witt and Yin [28] proved local existence and singularity structures of solutions with low regularity to the Cauchy problem of (1.5) with a smooth inhomogeneous term $e = e(t, x, w)$ and discontinuous initial data in the case $N \ge 2$.
Next we recall results on the Cauchy problem for a semilinear equation with a $\rho$-th order power nonlinearity (1.6), where $N : \mathbb{R} \to \mathbb{R}$ is a $\rho$-th order nonlinear function; the typical examples are $N(z) := \pm|z|^{\rho-1}z$ and $\pm|z|^\rho$. Yagdjian [35] proved large data local well-posedness in the Lebesgue space $L^q(\mathbb{R}^N)$ for some $q > 1$, and global existence or non-existence of $L^q$-solutions to the equation (1.6) for some small initial data under some restrictions on the exponent $\rho$. Ruan, Witt and Yin [29] studied the Cauchy problem (1.6) with $m = 2l - 1$ and $l \in \mathbb{N}$ on a mixed type domain $\mathbb{R}^N \times (-T, T)$ with $N \ge 2$ and proved existence of a local solution to (1.6) for low regular initial data. They [30] showed local existence and uniqueness of a minimal regular solution to the Cauchy problem (1.6) with $m \in \mathbb{N}$ and some $\rho \in \mathbb{N}$ on the hyperbolic domain $\mathbb{R}^N \times [0, T)$ with $N \ge 2$. He, Witt and Yin [10] studied existence of a global solution to the problem (1.3) with $m \in \mathbb{N}$ and $N \ge 2$ and showed a small data blow-up result for suitable non-negative compactly supported smooth data $f, g \ge 0$ in the subcritical case $\rho \in (1, \rho_S)$, where $\rho_S = \rho_S(N, m)$ is called the critical exponent, defined as the positive root of a quadratic equation in $\rho$. We note that in the case $m = 0$ the quadratic polynomial $\gamma_S(N, 0, \rho)$ is $\gamma_S(N, 0, \rho) := -(N-1)\rho^2 + (N+1)\rho + 2$, and the exponent $\rho_S(N, 0)$ is
$$\rho_S(N, 0) = \frac{N+1+\sqrt{N^2+10N-7}}{2(N-1)},$$
which is known as the Strauss exponent and divides small data blow-up and small data global existence for the Cauchy problem (1.3) with $m = 0$. They [10, Theorem 1.2] also proved small data global existence for the problem (1.3) in the supercritical case.
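As a quick sanity check on this formula, one can solve the stated quadratic $\gamma_S(N, 0, \rho) = 0$ for its positive root directly:

```latex
-(N-1)\rho^2 + (N+1)\rho + 2 = 0
\iff (N-1)\rho^2 - (N+1)\rho - 2 = 0,
```

so the quadratic formula yields

```latex
\rho_S(N,0)
= \frac{(N+1) + \sqrt{(N+1)^2 + 8(N-1)}}{2(N-1)}
= \frac{N+1+\sqrt{N^2+10N-7}}{2(N-1)},
```

since $(N+1)^2 + 8(N-1) = N^2 + 10N - 7$.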
The exponent $\rho_{conf}$ is called the conformal exponent, given by $\rho_{conf} = \rho_{conf}(N, m) := \frac{(m+2)N+6}{(m+2)N-2}$. They [11, Theorem 1.1] proved small data global existence for the problem (1.3) with $N \ge 3$ and $m \in \mathbb{N}$ and with smooth compactly supported data $f, g \in C_0^\infty(\mathbb{R}^N)$ in the supercritical case $\rho \in (\rho_S, \rho_{conf})$. They [12, Theorem 1.1] showed small data blow-up for the problem (1.3) with $N \ge 2$ and $m \in \mathbb{N}$ and with non-negative smooth compactly supported data $f, g \in C_0^\infty(\mathbb{R}^N)$ in the critical case $\rho = \rho_S$, and they also showed small data global existence for (1.3) with $N = 2$ and $m \in \mathbb{N}$ and with low regular and weighted data in the supercritical case $\rho \in (\rho_S, \rho_{conf})$ (see [12, Theorem 1.1]). The second and third authors [25, Theorems 1.2 and 1.3] proved small data blow-up and an upper estimate of the lifespan for the problem (1.3) with $m \ge 0$ and $N \in \mathbb{N}$ and with suitable non-negative smooth compactly supported data $f, g \in C_0^\infty(\mathbb{R}^N)$ in the subcritical and critical cases $\rho \in (1, \rho_S]$; that is, a lifespan estimate holds for any $\varepsilon \in (0, \varepsilon_0]$, where $\varepsilon_0$ and $C$ are positive constants independent of $\varepsilon$. From the above results, we see that the exponent $\rho_S(N, m)$, at least in the case $m \in \mathbb{Z}_{\ge 0}$, is a critical exponent dividing global existence and blow-up for (1.3) with small data; that is, the following statement holds: if $\rho > \rho_S(N, m)$, the solution exists globally in time for small data; if $\rho < \rho_S(N, m)$, the solution blows up in finite time even for small data. (1.8) The problem of determining the critical exponent for the semilinear classical ($m = 0$) wave equation is well known as the Strauss conjecture and has been solved (see [36,37] and their references). Also, roughly speaking, sharp lower and upper bounds of the lifespan for (1.9) in the case $N \ge 2$ were proved, where $C > 0$ is a positive constant independent of $\varepsilon$ and $a = a(\varepsilon)$ satisfies $\varepsilon^2 a^2 \log(1 + a) = 1$ (see [32,38,16] and their references). Here, for $N \in \mathbb{N}$ and $m \ge 0$, $\gamma_G(N, m, \rho)$ is defined as a monomial in $\rho \ge 1$, and the root of the equation $\gamma_G(N, 0, \rho) = 0$, that is, $\rho_G = \rho_G(N, 0) := 1 + \frac{2}{N-1}$, is called the Glassey exponent in the research field of wave equations (see [13]). Our approach in this paper to the proof of our main result (Theorem 1.1) is applicable to the single equation (1.3) and extends the upper-estimate result to general $m \ge 0$ except for the case $N = \rho = 2$ (see Remark 1.3). Very recently, small data blow-up and upper estimates of the lifespan for the generalized Tricomi equation (1.9) with $|u|^\rho$ replaced by the derivative nonlinearity $|\partial_t u|^p$ were studied in [26,24].
Determining the critical $(p, q)$-curve for the weakly coupled system of the classical semilinear wave equations (1.10) has also been studied in the case $N \ge 2$, where $c_1 > 0$ and $c_2 > 0$ are positive constants denoting the propagation speeds. In the case $c_1 = c_2$, the following statement is proved in [1,9,18,21,22,16]: if $\Gamma_{SS}(N, 0, p, q) > 0$, the solution exists globally in time for small data; if $\Gamma_{SS}(N, 0, p, q) < 0$, the solution blows up in finite time even for small data.
Here the closed set $\{(p, q) \in (1, \infty)^2 \mid \Gamma_{SS}(N, m, p, q) = 0\}$ is called the critical $(p, q)$-curve. Also, sharp lower and upper estimates of the lifespan are known to hold for any $\varepsilon \in (0, \varepsilon_0]$, where $\varepsilon_0$ and $C$ are positive constants independent of $\varepsilon$. We emphasize that our main result extends this upper estimate to the more general case $m \ge 0$ and also treats the one-dimensional case $N = 1$. For other nonlinearities or different propagation speeds $c_1 \ne c_2$ in (1.10), see [19,20].
Main result
In this subsection, we state our main small data blow-up results in the present paper. To do so, we introduce several terminologies and notation. First we give the definition of the weak solution to the Cauchy problem (1.1).
Next we give the definition of the lifespan $T(\varepsilon)$ of the solution to the problem (1.1). Definition 1.2 (Lifespan). We call the maximal existence time of the weak solution $(u, v)$ to the Cauchy problem (1.1) the lifespan, denoted by $T(\varepsilon) = T(\varepsilon, (f_1, g_1), (f_2, g_2))$. For $N \in \mathbb{N}$ and $m \ge 0$, we introduce three rational functions $F_{SS} = F_{SS}(N, m, p, q)$, $\Gamma_{SS} = \Gamma_{SS}(N, m, p, q)$ and $F_{GG} = F_{GG}(N, m, p, q)$ of $p, q \ge 1$. For $m_1, m_2 \ge 0$ with $m_1 \ne m_2$, we also introduce rational functions $\Omega_{SS} = \Omega_{SS}(N, m_1, m_2, p, q)$ and $\Omega_{GG} = \Omega_{GG}(N, m_1, m_2, p, q)$ of $p, q \ge 1$ given by (1.14). Now we state our first main result, which gives small data blow-up and an upper bound of the lifespan for the problem (1.1) with different propagation speeds $m_1 \ne m_2$ in the critical and subcritical cases $\Omega_{SS}(N, m_1, m_2, p, q) \ge 0$: assume that $(f_1, g_1)$ and $(f_2, g_2)$ satisfy the stated estimates, and let $\varepsilon > 0$. We assume that a pair of functions $(u, v)$ is a weak solution of the problem (1.1) on $[0, T)$ satisfying the finite propagation property, where $\Omega_{SS}$ is given by (1.14). Then the lifespan $T(\varepsilon) < \infty$. Moreover there exist positive constants $A = A(N, m_1, m_2, p, q, g_1, g_2) > 0$ and $\varepsilon_0 = \varepsilon_0(N, m_1, m_2, p, q, g_1, g_2) \in (0, 1]$ independent of $\varepsilon$ such that for any $\varepsilon \in (0, \varepsilon_0]$, the lifespan estimate holds. Next we study the problem (1.1) with the same propagation speed $m_1 = m_2 =: m > 0$. In this case, we can prove a better upper estimate of the lifespan than in Theorem 1.1 in the critical and subcritical cases $\Gamma_{SS}(N, m, p, q) \ge 0$.
Our results (Theorems 1.1 and 1.2) improve the previous results. In particular, we do not need to assume that the initial data $f$ and $g$ are positive in the pointwise sense assumed in the previous results; we only need the assumption $I[g] = \int_{\mathbb{R}^N} g(x)\,dx > 0$ (see also [16,31] for the classical case $m = 0$).
Strategy of the proof
We explain the strategy of the proof of Theorem 1.1. The proof is based on a test function method developed in [16], which is effective for proving small data blow-up results and sharp upper estimates of the lifespan for several classical semilinear wave equations ($m_1 = m_2 = 0$) in the critical and subcritical cases. To apply the method to our general case ($m_1, m_2 \ge 0$), we need to construct two families of special solutions to the free generalized Tricomi equation; this construction is carried out in Section 2.
Organization of the present paper
The rest of this paper is organized as follows. In Section 2, we construct two families of special solutions $\{w_{\lambda,m}(x,t)\}_{\lambda>0}$ and $\{W_{\beta,m}(x,t)\}_{\beta>0}$ to the free generalized Tricomi equation with $m \ge 0$, and prove several of their properties. In Section 3, we give the proofs of Theorem 1.1 and Theorem 1.2 by applying the test function method developed in [16] with appropriate test functions built from the special solutions constructed in Section 2.
Construction of two families of special solutions and their properties
In this section, we introduce the two families of special solutions $\{w_{\lambda,m}(x,t)\}_{\lambda>0}$ and $\{W_{\beta,m}(x,t)\}_{\beta>0}$ given by (2.16). For $m \ge 0$ and $\lambda > 0$, we consider the second order ordinary differential equation (2.1), where $y = y(t)$ is an unknown function. As shown in [27, Section 2.1.2], one of the solutions to (2.1) can be expressed in terms of the modified Bessel function of the second kind with an arbitrary constant $C_0 \in \mathbb{R}$ (see (2.2)). Here, for $\nu > 0$, $K_\nu : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is the modified Bessel function of the second kind, given by
$$K_\nu(t) = \int_0^\infty e^{-t\cosh z}\cosh(\nu z)\,dz,$$
from which the decay of $y_\lambda(t)$ can be read off, with a constant $C_0'$ depending only on $m$. We now state several estimates for the solution $y_\lambda(t)$ of (2.1).
Next we introduce a family of special solutions {w λ,m } λ>0 to the free equation (1.18).
Definition 2.1 (A family of special solutions I).
Let $m \ge 0$ and $\lambda > 0$. Let $y_\lambda = y_{\lambda,m} : \mathbb{R}_{\ge 0} \to \mathbb{R}_{>0}$ and $\varphi_\lambda : \mathbb{R}^N \to \mathbb{R}_{>0}$ be defined by (2.2) and (2.12), respectively. We define a function $w_\lambda = w_{\lambda,m} : \mathbb{R}^N \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{>0}$ by $w_\lambda(x, t) := y_\lambda(t)\varphi_\lambda(x)$. We remark that for any $\lambda > 0$, the equation (2.1) and Lemma 2.2 imply that the function $w_\lambda$ satisfies the free equation (1.18). From Lemma 2.1 for the function $y_\lambda$, Lemma 2.2 for the function $\varphi_\lambda$, and the condition $\beta > 0$, we see that the function $W_\beta$ is well defined, i.e. $|W_\beta(x, t)| < \infty$ for any $(x, t) \in \mathbb{R}^N \times \mathbb{R}_{\ge 0}$. We remark that since $w_\lambda$ satisfies (1.18) on $\mathbb{R}^N \times \mathbb{R}_{\ge 0}$, the function $W_\beta$ also satisfies (1.18). Moreover we state several estimates for the solution $W_{\beta,m}$ in the following proposition.
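To see why a solution of (2.1) produces a solution of the free equation (1.18), note the following separation-of-variables identity. Here we assume, as is standard in the test function method and as the construction suggests, that (2.1) reads $y''(t) - \lambda^2 t^m y(t) = 0$ and that $\varphi_\lambda$ is an eigenfunction satisfying $\Delta \varphi_\lambda = \lambda^2 \varphi_\lambda$:

```latex
w_\lambda(x,t) = y_\lambda(t)\,\varphi_\lambda(x)
\;\Longrightarrow\;
\partial_t^2 w_\lambda - t^m \Delta w_\lambda
= \bigl(y_\lambda''(t) - \lambda^2 t^m y_\lambda(t)\bigr)\,\varphi_\lambda(x) = 0 .
```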
Proof of Theorem 1.1
In this subsection, we complete the proof of Theorem 1.1. By symmetry, we may assume that $m_1 > m_2$, which implies the identity $\Omega_{GG}(N, m_1, m_2, p, q) = \Gamma_{GG}(N, m_1, p, q)$, and the corresponding estimate holds for any $R \in (T_2, T)$. By the assumption $F_{GG}(N, m_1, q, p) > 0$, the estimate $T(\varepsilon) \le A\varepsilon^{-F_{GG}(N, m_1, q, p)^{-1}}$ holds for some $A > 0$ independent of $\varepsilon$. Next we consider the opposite case $p < q$. Then the identity $\Gamma_{GG}(N, m_1, p, q) = F_{GG}(N, m_1, p, q)$ holds. By the estimate (3.9), the bound $\varepsilon I[g_1] \le R^{-F_{GG}(N, m_1, p, q)}$ holds for any $R \in (T_2, T)$. By the assumption $F_{GG}(N, m_1, p, q) > 0$, the estimate $T(\varepsilon) \le A\varepsilon^{-F_{GG}(N, m_1, p, q)^{-1}}$ holds. Therefore the estimate $T(\varepsilon) \le A\varepsilon^{-\Gamma_{GG}(N, m_1, p, q)^{-1}}$ holds, which completes the proof of the theorem in the case $\Gamma_{GG}(N, m_1, p, q) > 0$. In the remaining case, the corresponding estimate holds for any $R \in (T_2, T)$. Noting the relevant identity and combining the estimates (3.7) and (3.15), the resulting inequality holds for any $R \in (T_2, T)$. By the assumption $F_{SS}(N, m_1, p, q) > 0$, the estimate holds for any $R \in (T_2, T)$, which implies that $T(\varepsilon) \le A\varepsilon^{-F_{SS}(N, m_1, p, q)^{-1}}$ holds for some $A > 0$ independent of $\varepsilon$. This completes the proof of the theorem in the case $F_{SS}(N, m_1, p, q) > 0$. The case $F_{SS}(N, m_1, p, q) = 0$ and $N \ge 2$ | 2020-10-08T02:05:56.770Z | 2020-10-07T00:00:00.000 | {
"year": 2021,
"sha1": "542c7f0caceda403976b574907070c1ddaea241f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2010.03156",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "542c7f0caceda403976b574907070c1ddaea241f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
198953059 | pes2o/s2orc | v3-fos-license | Annotation-Free Cardiac Vessel Segmentation via Knowledge Transfer from Retinal Images
Segmenting coronary arteries is challenging, as classic unsupervised methods fail to produce satisfactory results and modern supervised learning (deep learning) requires manual annotation, which is often time-consuming and can sometimes be infeasible. To solve this problem, we propose a knowledge transfer based shape-consistent generative adversarial network (SC-GAN), an annotation-free approach that uses knowledge from a publicly available annotated fundus dataset to segment coronary arteries. The proposed network is trained in an end-to-end fashion, generating and segmenting synthetic images that maintain the background of coronary angiography and preserve the vascular structures of retinal vessels and coronary arteries. We train and evaluate the proposed model on a dataset of 1092 digital subtraction angiography images, and experiments demonstrate the superior accuracy of the proposed method on coronary artery segmentation.
Introduction
Quantitative measurement of coronary arteries in medical images is important for the diagnosis, prevention and therapeutic evaluation of related diseases including hypertension, myocardial infarction, and coronary atherosclerotic disease. In the diagnosis of coronary diseases, digital subtraction angiography (DSA) has been widely used and is considered the "gold standard". To quantitatively segment blood vessels in DSA, researchers have developed automated methods including region growing, level sets, and Hessian analysis [1,11]. However, due to the complexity of vascular morphology, it is difficult for these unsupervised methods to obtain a clinically satisfactory segmentation of coronary arteries. On the other hand, supervised learning such as deep neural networks (DNNs) can produce better segmentation results but relies heavily on pixel-level image annotation, which is often expensive, time-consuming, and sometimes impossible to access, especially for coronary artery segmentation.
To solve the problem of the lack of DSA vessel annotation, we apply a strategy that transfers knowledge of retinal vessel segmentation to coronary artery segmentation. Researchers have established and validated a number of public retinal vessel segmentation datasets, including DRIVE [12], STARE [4], and RITE [5]. Due to the significant differences between the anatomical regions, traditional transfer methods are not suitable. Therefore, we present a novel knowledge transfer based adversarial model containing three parts, called the generator, discriminator, and segmentor. Training the model involves three major steps: 1) Frangi vessel analysis [2] is used to roughly segment the coronary artery in the DSA images. 2) Adversarial training between the generator and discriminator allows the model to fuse the fundus image and the DSA image. 3) A synthetic label is then created by computing the union of the rough coronary artery segmentation and the retinal vessel annotation. Moreover, a shape-consistency scheme is used to ensure the shape consistency of synthetic images and synthetic annotations. The fused image and corresponding synthetic label are used to train the segmentor. The superior accuracy demonstrates the effectiveness of our method, which improves coronary artery segmentation by using knowledge from fundus segmentation without additional manual annotation of the DSA images. The ideas of knowledge transfer and data fusion in this paper have many other application scenarios, including cell segmentation, neural segmentation, and airway segmentation. As long as the object structures in the two datasets are similar, we can use knowledge from the annotated dataset to guide the analysis of the unannotated one.
Our work relates closely to the recent rise of knowledge transfer techniques. In the field of natural image analysis, the Domain-Adversarial Neural Network (DANN) transfers the feature distribution to solve the domain-shift problem [3]. Cycle-GAN introduces a cycle consistency loss and achieves unpaired image-to-image translation [17]. AdaptSegNet adopts adversarial learning in the output space and achieves favorable accuracy and visual quality [13]. In the field of medical image analysis, many studies have explored cross-modality translation with GANs [10]. Using synthetic data to overcome insufficient labeled data is also an active research area; for example, using synthetic data as augmented training data can help lesion segmentation [7] and cardiovascular volume segmentation [16]. These methods show that knowledge transfer is effective within the same anatomical region, and our work contributes by accomplishing knowledge transfer between two different anatomical regions.
Methods
In this section, we present two models based on knowledge transfer. The first is the GAN model with a shape-consistency constraint (SC-GAN) proposed in this work. Then, to verify the necessity of fusion, we present a simpler model adapted from Mixup [15], in which a U-Net is trained on an average of the fundus image and the DSA image (Add U-Net). Figure 1 shows the training and test processes of the proposed model. During training, the generator, discriminator, and segmentor are trained simultaneously. At test time, only the trained segmentor is needed to segment coronary arteries, which requires much less memory and inference time than training. In previously reported knowledge transfer between different modalities within the same anatomy, the foreground and the background can be reasonably registered. In our task, however, the foreground and background of the images are completely mismatched, so we have designed a shape-consistency scheme that allows the generator and discriminator to complete the knowledge transfer of the foreground and background, respectively. In the following subsections, we discuss the model architectures and the objective functions in detail.
SC-GAN
Generator: As shown in Figure 1, the generator uses U-Net as its network backbone. The input of the generator is an average of the fundus image (Real A) and the DSA image (Real B). The output is a synthetic image (Fake B) with the same dimensions as the input. To ensure that Fake B contains both retinal vessels and coronary arteries, we extract the two vascular regions (Part A and Part B) in Fake B using the manual annotation of retinal vessels and the Frangi segmentation of the DSA images. A shape-consistent loss (an L1 loss) is then used to regularize the content of the vessel regions in Fake B to be consistent with the corresponding regions in the original images (Eq. (1)), where A represents the fundus image, B represents the DSA image, and label A and label B are the retinal vessel annotation and the Frangi segmentation results, respectively.
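A plausible PyTorch rendering of this masked L1 penalty is sketched below. The exact algebraic form of Eq. (1) is not reproduced in the extracted text, so the masking-by-multiplication formulation here is our assumption based on the description.

```python
import torch.nn.functional as F

def shape_consistent_loss(fake_b, real_a, real_b, label_a, label_b):
    """Masked L1 loss keeping the vessel regions of Fake B consistent
    with the source images. label_a: binary retinal-vessel annotation
    mask; label_b: binary Frangi segmentation mask of the DSA image."""
    loss_retina = F.l1_loss(fake_b * label_a, real_a * label_a)    # Part A
    loss_coronary = F.l1_loss(fake_b * label_b, real_b * label_b)  # Part B
    return loss_retina + loss_coronary
```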
Discriminator: We expect the background of Fake B to be sufficiently similar to the background of a DSA image, so we first use the annotations and Frangi analysis results to mask out the vascular regions and extract the background in the generated and real DSA images (Eq. (2)), where B* (Real B* in Figure 1) represents a randomly chosen DSA image acquired before the injection of the contrast medium, which guarantees a vessel-free background.
For the structure of the discriminator, we adopt PatchGAN [17]; the adversarial loss between the generator and the discriminator is given in Eq. (3). Segmentor: The main structure of the segmentor is also a U-Net. We use MultiLabelSoftMarginLoss [9] as the objective function of the segmentor (Eq. (4)), where ŷ is the prediction and y the synthetic label. The final objective function of our proposed model combines these terms (Eq. (5)), where λ and µ control the relative importance of the objectives. During training, we set λ = 100 and µ = 50. The model uses instance normalization [14] instead of batch normalization [6]; the generator uses ReLU and the discriminator uses LeakyReLU as activations.
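Putting the pieces together, one generator-side training step might look like the sketch below, reusing the shape_consistent_loss sketch above. The background-mask construction, the adversarial criterion, and the way λ and µ enter the sum are assumptions on our part; only the loss components, their weights (λ = 100, µ = 50) and MultiLabelSoftMarginLoss come from the text.

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()          # PatchGAN criterion (assumed)
seg_criterion = nn.MultiLabelSoftMarginLoss()   # segmentor loss from the paper
LAMBDA, MU = 100.0, 50.0                        # weights reported in the paper

def generator_step(G, D, S, real_a, real_b, label_a, label_b, y_synth):
    """One forward pass computing the combined generator objective."""
    fake_b = G((real_a + real_b) / 2)           # input: average of the two images
    # Adversarial term evaluated on the vessel-free background only.
    bg_mask = 1 - torch.clamp(label_a + label_b, max=1)
    pred = D(fake_b * bg_mask)
    loss_adv = adv_criterion(pred, torch.ones_like(pred))
    loss_shape = shape_consistent_loss(fake_b, real_a, real_b, label_a, label_b)
    loss_seg = seg_criterion(S(fake_b), y_synth)
    return loss_adv + LAMBDA * loss_shape + MU * loss_seg
```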
Experiments
In this section, to evaluate the effectiveness of our proposed SC-GAN, we compare the segmentation results of four methods: 1) Frangi algorithm: multi-scale Frangi vessel analysis is used to segment coronary arteries. 2) Classic U-Net: a U-Net model trained using the Frangi algorithm results as the learning targets. 3) Add U-Net: a U-Net model trained using an average of fundus photographs and DSA images. 4) SC-GAN: the proposed shape-consistent GAN. We also compare the synthetic results of SC-GAN and Cycle-GAN [17]. The data and experiment details are presented below.
Data: We use the DRIVE [12] dataset as the source domain of knowledge transfer. The DRIVE dataset includes 40 fundus images with manually annotated retinal vessels. We also collected 1092 coronary angiographies (DSA) with no annotations as the target domain of knowledge transfer. Several preprocessing steps were performed on the fundus images, including color-to-grayscale transform, median filtering and contrast-limited adaptive histogram equalization [18]. Finally, we resize all images to the same size of 512×512 and randomly choose 256×256 patches as inputs to the models.
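The preprocessing pipeline maps directly onto standard OpenCV calls. The median-filter kernel size and CLAHE parameters are not specified in the paper, so the values below are illustrative defaults.

```python
import cv2
import numpy as np

def preprocess_fundus(img_bgr, size=512, patch=256):
    """Grayscale -> median filter -> CLAHE -> resize -> random patch."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                                # kernel assumed
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # params assumed
    gray = clahe.apply(gray)
    gray = cv2.resize(gray, (size, size))
    y, x = np.random.randint(0, size - patch + 1, size=2)
    return gray[y:y + patch, x:x + patch]                         # 256 x 256 input
```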
Results
In this section, we briefly report the evaluation results in two aspects: 1) Images synthesis and 2) Images segmentation.
Images synthesis: Figure 3 shows some examples of the fundus, DSA, and synthetic image patches. Compared to the results of Cycle-GAN [17], the synthetic images from our proposed SC-GAN have a more realistic DSA background and also preserve the vascular structures corresponding to the labels (see columns (c) and (d) in Figure 3).
Images segmentation: We annotate 30% of the DSA dataset (328 out of 1092 images) and evaluate our proposed model on it. Table 4 compares the performance of the different methods. As the baseline method of this article, the Frangi algorithm has a Dice score of 0.636±0.046. If the result of the Frangi algorithm is used as an annotation to train a U-Net (Classic U-Net), the Dice score drops to 0.589±0.049. Both Add U-Net and SC-GAN have higher Dice scores (0.742±0.048 and 0.824±0.026, respectively), and SC-GAN also outperforms the other methods in terms of accuracy and recall. Figure 4 shows some typical examples from the test set. Columns (d-e) show better results than columns (b-c), indicating that knowledge transfer effectively enhances the identification of small blood vessels. By comparing the results of Add U-Net and SC-GAN, we also find that the GAN fusion is better than a simple average in terms of the quality of knowledge transfer.
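For reference, the Dice score reported above can be computed for a pair of binary masks as follows; this is the standard definition, which the paper does not spell out.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```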
Discussion
In this paper, we proposed a shape-consistent GAN model (SC-GAN) for coronary artery segmentation, which is able to transfer knowledge of segmentation on a public fundus dataset to an unlabeled DSA dataset. Experimental results demonstrated that SC-GAN obtains clearly superior performance on coronary artery segmentation. Despite the promising results, our method has several limitations and requires further investigation: 1. How well does the method perform on other datasets? 2. Although the segmentor is light-weight at test time, training SC-GAN is much more complex than training a classic supervised deep model. In future work, we will further test the proposed SC-GAN in other application scenarios and simplify the training process. | 2019-07-26T11:08:32.000Z | 2019-07-26T00:00:00.000 | {
"year": 2019,
"sha1": "2366777b8e655070838b93786b5bebdd61622316",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1907.11483",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2366777b8e655070838b93786b5bebdd61622316",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
86680423 | pes2o/s2orc | v3-fos-license | Service-Oriented Context-Aware Messaging System
In service oriented computing, location or spatial models are required to model the domain environment whenever location or spatial relationships are utilised by users and/or services. This research presents an ontology-based methodology for context-aware messaging services. There are five main contributions of this research. First, the research provides a service oriented methodology for modelling and building context-aware messaging systems based on ontological principles. Second, it describes a method that assists in understanding the domain's spatial environment. Third, it proposes the generic Mona-ServOnt core service ontology, which offers context-aware reasoning for the capture and use of context. Mona-ServOnt is able to support the deployment of context-aware messaging services in both indoor and outdoor environments. Fourth, a novel generic architecture that captures the requirements for context-aware messaging services is given. Fifth, the generic messaging protocols that describe the exchange of messages within context-aware messaging services are modelled. A few experiments were completed to measure the performance of the peer-to-peer services using an actual smartphone with Bluetooth capability. In addition, the methodology's main steps have been validated individually in various context-aware messaging domains. It has been evaluated using competency questions that gauge the scope of the proposed ontology. Furthermore, the generic architecture and messaging protocols have been verified by constructing them for each domain. Keywords—Context-Awareness; messaging service; service ontology; semantic web service
I. INTRODUCTION
Service-oriented technology is moving beyond the personal computer to everyday mobile devices. It has become increasingly common for people to interact with service-oriented technology in many aspects of their daily lives, and continual miniaturisation, increases in processing power and connectivity have amplified this trend. We shall refer to this trend as 'pervasive' service-oriented computing, as it can be observed in most facets of modern life.
Pervasive computing was first mentioned in [1], which showed that computation resources can be used in many environments. Since then, many pervasive services have been developed. Pervasive computing is currently powered by service oriented computing [2], which aims to develop intelligent applications that understand the available context information and respond with the best services. These applications are known as context-aware services. Aljawarneh et al. [3] stated that pervasive services have several common features. For example, pervasive services use distributed sensors and a context source to collect information about the environment. Pervasive services also have reasoning functions to recognise the semantic significance of the collected information and perform the appropriate action. In addition, they possess several types of procedures to handle simple and complex activities. Finally, the main feature of pervasive services is the application of their services in multiple environments.
Research on context-aware services began two decades ago with [4], which gives examples of context such as location, nearby people and objects, and changes to service objects over time. Context acquisition, interpretation, understanding and context response are the primary concepts pertinent to context-aware systems. Location awareness and activity recognition are also paramount, as the user's location and activity are necessary to many services [5]. Context can play a major role in communication services, especially in messaging services. Context information can be used effectively for addressing or describing targets when sending messages. There are various other services supported by context, such as travel services and commercial services.
In order to provide modelling, an ontology that offers sharing and usage of the available information about the domain is required [6]. Ontologies have been extensively used to represent various real-world service domains and are significantly employed as a tool to assist information sharing between domains. An ontology's goal is to achieve a shared understanding of a given discipline.
Every context-aware service has its own characteristics, which depend on the service's requirements and the goal to be achieved. However, context-aware services share common methods of using context information. The definition of context pertaining to actors in the domain is necessary when proposing and constructing context-aware messaging services and comprises a fundamental comprehension of domain features. Constructing context-aware services depends on understanding several factors within pervasive computing. This paper identifies and addresses a gap in current research, that is, the need for a general methodology for context-aware messaging services as well as the description of a generic ontology for such context-aware messaging services. This notion of what we call the Context-Aware Messaging Service Methodology Based on Ontology (CAMSMBO) will be examined as a solution to achieve this. Based on the notion of CAMSMBO, the Mona-ServOnt core service ontology has been built to represent context-aware messaging domains. Accordingly, this paper aims to address the following research questions: 3) How can we address the identified requirements to model a context-aware messaging domain? 4) What is the importance of the ontology and how can it be developed for context-aware services? 5) How can we evaluate a generic ontology for context-aware messaging? 6) How can such a methodology be described?
II. SERVICE ORIENTED METHODOLOGY
We propose a service-oriented methodology for building context-aware messaging services called the Context-Aware Messaging Service Methodology Based on Ontology (CAMSMBO). The methodology starts with understanding the domain spatial environment, followed by modelling the context-aware messaging environment based on a core service ontology, called the Mona-ServOnt core service ontology, that can be applied and adapted to all types of context-aware messaging services. Then, the service can be designed based on our previous language service architecture [7] and agent architecture [8] for context-aware messaging with a generic messaging service protocol. The CAMSMBO methodology acts as a guide, providing the steps for building context-aware messaging services. Fig. 1 illustrates the steps in the CAMSMBO methodology. We elaborate on the steps of the CAMSMBO methodology in the following subsections.
A. Understanding the Spatial Environment for Context-Aware Messaging Domains
In order for a messaging service to be effective, it should be noted that the word spatial can represent numerous concepts within the available space, for example an area or any interval of space. Spatial information retrieval and mobile information systems are key elements behind the majority of mobile services. The major processes required in mobile services to perform a task are recognising the available context information, such as actor location information, as well as the service's available context information. Spatial awareness demands acquiring spatial contexts from sensors, representing and interpreting context information and sharing it with other services. In order to propose a spatial environment for a context-aware messaging service, three steps need to be undertaken. Fig. 2 describes the procedure for defining the spatial environment in the following steps. First, the spatial environment for context-aware messaging that matches the requested domain is determined. Second, the spatial environment is categorised into sub-divisions. Finally, the relations between the spatial concepts and entities involved within the domain are identified. Describing the context information for a context-aware messaging domain is essential in identifying the spatial environment that matches the domain. Most context-aware messaging services use common types of context information in the process of achieving a task. In addition, the respective context information assists in defining the domain spatial environment. For example, emergency, guidance and notification, social media, medical and learning domains use different types of context information that meet the requirements of each respective domain. However, context-aware messaging domains will often share common context information.
Spatial information such as location information is considered an essential part of context-aware messaging services. Guidance and notification services, such as Community Reminder [9], depend on location in order to execute tasks. Location is commonly used in the area of social network services to facilitate greater interaction between agents, such as groups of nearby friends. However, these context-aware messaging services employ different types of context information to describe the spatial environment for the service, depending on the domain.
The spatially separated parts contribute to the description of the domain environment. For example, the spatial environment of a building contains apartments, halls, a roof and stairs. This division assists in dealing with the separate parts of the spatial environment independently. Also, dividing the spatial environment into sub-divisions enhances the description of the domain context information [10]. In addition, spatial environment sub-divisions can be anything related to a certain area or space, and not necessarily physically geographic, for example human activities such as going to the farm or walking in the park. Spatial relations can be used to spatially link instances in ontology knowledge.
B. Constructing the Services Ontology
Until the mid-1990s, constructing an ontology was considered an art more than an engineering activity. An ontology enables the sharing and use of available information about the domain [6]. According to [11], there are five component types used to distinguish information in an ontology: taxonomy, relations, functions, axioms and instances. In addition, an ontology that describes a targeted domain requires domain expertise and comprehensive knowledge of the ontology elements and relationships.
An ontology assists in the distribution of information regarding certain events. However, every development team generally defines its own set of principles, design criteria and phases for constructing an ontology to meet its requirements. Furthermore, the absence of universal, structured guidelines slows the development of ontologies within and between teams.
We develop the Mona-ServOnt core service ontology, a general service ontology that can be applied in several context-aware messaging domains. The process starts with distinguishing the uses of Mona-ServOnt and determining the service domain that wishes to use it. The second step is to establish the concepts within Mona-ServOnt that describe the messaging domain. The following step is identifying the spatial relations that connect the Mona-ServOnt concepts and assist in defining the context-aware messaging scenario. Finally, Mona-ServOnt is evaluated, using different techniques, to verify its validity and usability.
There are many research projects that apply ontologies for context modelling and reasoning in context-aware messaging services [12], [13]. However, these ontologies are developed and defined to meet the requirements of a particular service within particular domains. Mona-ServOnt can be used within several context-aware messaging service domains including emergency services, guidance and notification services, social media services, health and medical management services, and education and learning services.
Generally, an ontology is designed to meet the requirements of a certain service. As a result, there is a need to build an ontology that supports context-aware messaging services, both for the clear specification and understanding between actors in the domain and for facilitating the capturing, filtering, sharing and reasoning of contexts within a spatial environment for messaging purposes. Based on the Mona-ServOnt core service ontology, a Mona-ServOnt domain ontology can be applied in many types of service domains. Mona-ServOnt is built according to several motivating scenarios for the requested domain and based on the concepts represented in one of the domain articles. This method is inspired by the Cyc method, an ontology based on everyday common-sense knowledge that allows reasoning [14]. Mona-ServOnt is defined in the Web Ontology Language for Services (OWL-S) [15] for several types of context-aware messaging services, such as emergency, guidance and notification, social, health and medical management, and education and learning, as shown in Fig. 3. The figure describes the Mona-ServOnt classes and the properties that connect them.
Mona-ServOnt assists in describing common scenarios that occur within context-aware messaging domains. It uses ordinary-language concepts, attributes and relations that are easily understandable by people. The domain management unit contains information about the user, who has a position relation and needs to be directed to a POI, which represents points that assist in performing the domain tasks. In addition, according to the information requested by the domain, the domain management unit may categorise the spatial environment into sub-areas, depending on the information for the requested task of the domain.
Fig. 4 gives an overview of the main concepts of Mona-ServOnt and the spatial relations that connect these concepts. The Mona-ServOnt key concepts can be generalised to meet the requirements of five types of context-aware services as follows (a minimal code sketch after this list illustrates them):
• Domain management unit represents the management and service of the actors. It exchanges information with the actors depending on the scenario.
• Domain context represents the context information of the domain that assists in performing the requested tasks during domain events.
• Spatial environment represents the spatial area in which the messaging service is operational. It might be divided into several divisions or sub-areas depending on the task to be performed by the domain.
• Actor refers to the people that use the context-aware messaging service such as the user, flag-bearer and administrator, as explained later.
• POI represents the features that assist in performing a task using context information. The POI is a task-related role. Also, it helps in positioning and filtering information as well as performing a required task. It is usually part of the spatial environment.
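To make the five concepts above concrete, the following is a minimal, illustrative sketch (not the authors' actual OWL-S file) of how the core Mona-ServOnt classes and a qualitative position relation could be declared with the Python rdflib library. The namespace URI and the instance names are assumptions introduced only for this example.

from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

MONA = Namespace("http://example.org/mona-servont#")  # hypothetical URI

g = Graph()
g.bind("mona", MONA)

# The five core concepts named above, declared as OWL classes.
for concept in ("DomainManagementUnit", "DomainContext",
                "SpatialEnvironment", "Actor", "POI"):
    g.add((MONA[concept], RDF.type, OWL.Class))

# A qualitative position relation linking an actor to a POI.
g.add((MONA.near, RDF.type, OWL.ObjectProperty))
g.add((MONA.near, RDFS.domain, MONA.Actor))
g.add((MONA.near, RDFS.range, MONA.POI))

# Example instances: a visitor who is near a point of interest.
g.add((MONA.visitor1, RDF.type, MONA.Actor))
g.add((MONA.entranceHall, RDF.type, MONA.POI))
g.add((MONA.visitor1, MONA.near, MONA.entranceHall))

print(g.serialize(format="turtle"))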
Spatial relations link the ontology concepts using common English expressions that are easily understood by people. They are applied in Mona-ServOnt to connect the service's main concepts as well as to describe domain events. The ontology is used to define the knowledge that can be shared between actors using context-aware approaches.
The spatial relationships are applied in order to connect the domain's main concepts as well as to describe an event during a situation. Mona-ServOnt allows the domain service to employ spatial relations both qualitatively and quantitatively. The quantitative spatial relation information is converted automatically by the domain management unit into qualitative relation information in the form of domain and position relations.
The qualitative spatial relation is described using common language that the user can easily understand. In addition, the quantitative spatial relation is utilised to provide data that assist in defining the range or distance between objects within the context-aware messaging domain that uses Mona-ServOnt.
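A hedged sketch of this quantitative-to-qualitative conversion is given below: the domain management unit maps a measured distance to one of the qualitative position relations used later in the paper. The distance thresholds are illustrative assumptions, not values taken from the paper.

def qualitative_position(distance_m: float) -> str:
    """Map a metric distance to a qualitative position relation."""
    if distance_m < 10:
        return "next to"
    if distance_m < 50:
        return "near"
    if distance_m < 200:
        return "in the neighbourhood of"
    return "far away from"

# e.g. an actor 35 m from a POI is reported as being "near" it
print(qualitative_position(35.0))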
We illustrate the Mona-ServOnt concepts and the sub-concepts that may represent the context-aware messaging domain. For example, the concept actor is described using type, location, age, actor ID, status and actor activity, as seen in Fig. 5. The figure shows the sub-concepts that describe actors within the context-aware messaging domain. These concepts are common within the area of context-aware messaging domains; for example, actor ID is common to many domains and assists in identifying the registered actor in emergency and social media domains. In addition, actor location determines the actor's positional information, and actor type describes the actor's role within the context-aware messaging service, such as administrator, user or flag-bearer.
The 'administrator' represents the service provider or service supervisor side, the 'user' represents the persons who benefit from the services, and the 'flag-bearer' is an actor that has more responsibility, such as forwarding messages using peer-to-peer techniques. The flag-bearer can act as an independent server and provide the service to other actors in case the connection with the main server is lost. Moreover, the flag-bearer can register new actors with the server. For example, in emergency domains, the flag-bearer is responsible for assisting other actors within his range to ensure that all actors follow the server's instructions. Also, the flag-bearer can provide alert messages to a survivor who has lost communication with the disaster management unit. Moreover, the actor status defines the actor's situation or condition information. Additionally, the 'actor activity' describes the actor's current action according to the domain.
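The flag-bearer behaviour described above can be sketched as follows: when the main server is unreachable, the flag-bearer re-sends an alert to the actors within its own communication range. The class names, the radio range threshold and the coordinate convention are assumptions for illustration only.

from dataclasses import dataclass
from math import dist

@dataclass
class Peer:
    actor_id: str
    location: tuple  # (x, y) in metres, local coordinates

def forward_alert(flag_bearer: Peer, peers: list, message: str,
                  radio_range_m: float = 10.0) -> list:
    """Return the peers the flag-bearer forwards the alert to."""
    reached = [p for p in peers
               if dist(flag_bearer.location, p.location) <= radio_range_m]
    for peer in reached:
        print(f"forwarding to {peer.actor_id}: {message}")
    return reached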
The domain's context information assists in representing the domain where the Mona-ServOnt ontology is applied. It can be identified with context information such as 'task', which clarifies the context-aware messaging domain's list of tasks that can be offered and accomplished by Mona-ServOnt. In addition, the domain status describes the domain condition, while the domain event presents the list of occasions, as shown in Fig. 6.
The spatial environment represents the area where the context-aware messaging domain service runs. It has a division, static location and status (see Fig. 7). The divisions of the spatial environment include sub-areas that assist in simplifying and clarifying the domain's positional context information. Also, the spatial environment status describes the condition of the context-aware messaging service area, while the spatial environment division represents a context-aware messaging service sub-area.
The POI is a fixed point within the spatial environment that assists in positioning or capturing information. It has a type that describes the POI according to the context-aware messaging service objective. Also, static location locates the POI within the spatial environment and the status defines the condition of the POI (see Fig. 8).
The concepts and their sub-concepts give an overview of Mona-ServOnt for context-aware messaging services, as illustrated in Fig. 9. It shows that in context-aware messaging service domains there are several common concepts that need to be addressed, such as location, actor ID and status. The purpose of the Mona-ServOnt core service ontology is to serve as the basis for creating an ontology for a specific domain (this answers research question 4).
C. The Evaluation of the Mona-ServOnt
Intuitively, an ontology can be evaluated in different ways, because the main goal of an ontology is to provide an explicit specification and understanding within a particular domain. Ontology content evaluation began in 1994 [16]. Ontology evaluation is a technological judgment of the ontology. The Mona-ServOnt core service ontology is designed to meet the requirements of several types of context-aware messaging services. The purpose of Mona-ServOnt is to model and capture the context of entities within a domain, with the purpose of context-aware messaging.
Mona-ServOnt is evaluated using three different methods. First, we employ a set of natural-language questions used to measure the capability of the ontology in the real world, called competency questions [17]. The competency questions are used to validate the extent of the ontology. These questions and their answers are applied to extract the main concepts and their properties, relations and axioms of the ontology. We defined the following key questions to verify the scope of Mona-ServOnt.
1) What type of service domains can use the Mona-ServOnt core service ontology? It can be used in many domains. This is kept in mind when designing the ontology, so as to be general.
2) How does the Mona-ServOnt support the representation of different areas within the spatial environment? It contains the concept of the division of the spatial environment, which defines a sub-area within the main area.
3) What type of information is used to define a POI? It describes domain POIs using different properties such as POI status, POI type and its static location. For example, POI status can be used to label the POI as negative or positive, restricted or open, and then inform the administrator about the status and location of that POI.
4) What useful information about actors' conditions needs to be included to describe actors in the domain? It uses the concept actor status that allows the actor to define their situation, such as "enjoying" or "having a heart attack".
Once Mona-ServOnt is designed and evaluated, a generic architecture for context-aware messaging services can be structured and applied in building a context-aware messaging service for a domain.
D. Designing the Service Architecture and Protocol
Inspired by [4], we employ numerous types of context-aware modifications to both the management and actor sides of the ontology in order to structure a context-aware architecture for context-aware messaging services. This helps characterise the physical requirements for the context-aware messaging service. The proposed architecture combines two types of techniques: the Mona-ServOnt architecture utilises a centralised client-server architecture at the top level and a multiple-actor peer-to-peer architecture at the lower levels. The context-aware messaging service architecture includes three main components: the actor, the domain management unit and the database, as illustrated in Fig. 10. The figure presents the context-aware general architecture and the flow of information between the service entities and its components.
We propose a message content protocol for the messages exchanged between the domain server and the actors within the context-aware messaging approaches. The messaging approach defines the context depending on the task required by the domain as well as the situation and the time of the event. This allows the service to define the target and the content of the message. All context information is stored in the domain database. Most context-aware messaging approaches use location information, in addition to particular types of context, in order to complete the required tasks.
The messaging protocol supports several types of services and can be implemented within different types of context-aware service domains. For example, it supports automatic messaging services that generate several types of messages sent automatically and repeatedly by the server to the actor according to the actor type, time and event within the domain. The messaging protocol can be used to define the content and the target of the message within context-aware messaging domains.
Fig. 11 illustrates the multiplicity of context used with the messaging protocol in the exchange process. First of all, the actor is required to register with the domain server using his context information, such as actor ID, name and location. The context information differs slightly depending on the actor's role, such as user or administrator, as well as being dependent on the domain task. The following scenario may explain the use of the messaging protocol in social media domains. For example, during the New Year festival, the social media service wants to direct people to the most suitable area that would meet their interests. In this case, the user's context information can be location as well as some personal information such as age, type, interests, skills, educational level and activity.
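As an illustration of the registration step, the sketch below encodes a registration message as JSON. The field names follow the context attributes mentioned in the text (actor ID, name, location, type, interests); the wire format itself is an assumption, not the paper's specification.

import json
from dataclasses import dataclass, asdict

@dataclass
class Registration:
    actor_id: str
    name: str
    location: tuple   # (latitude, longitude)
    actor_type: str   # "user", "administrator" or "flag-bearer"
    interests: list

reg = Registration("u042", "Alice", (-37.8136, 144.9631), "user",
                   ["music", "food"])
print(json.dumps(asdict(reg)))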
We assume that the server has information about several events being held in different places within the city, such as the function type, location and the number of people the venue can hold. The server compares the user's context information with the event context information and starts to automatically message the people within the city about the POI representing the most suitable function to meet their desires, such as a music party, using spatial relations where the suburb is represented by the division of the spatial environment. In addition, Melbourne city represents the domain spatial area. Furthermore, the messaging protocol offers manual messaging services where messages can be transferred manually by the service administrator to a group or to particular users using custom messages.
The protocol can provide information to other institutions that may be involved with people during the New Year festival, such as the police, and inform them about the people's context information depending on their specialties. Additionally, the approach can share information with people inside the spatial area using their context information.
III. APPLYING TO MULTIPLE SERVICE DOMAINS
Ontologies have been extensively used to represent various real-world service domains and are employed significantly as a tool to assist in knowledge sharing within domains. The use of ontology within context-aware services offers a wider knowledge base that can be incorporated into context information to describe events and activities.
We provide the Mona-ServOnt core service ontology, which offers context reasoning and sharing, and allows the capture of context information in various domains. Due to the page limitation, only two service domains are shown in this paper: the guidance and notification domain and the health management domain. The core service ontology serves as a useful beginning point for designing domain ontologies for context-aware messaging systems. Corresponding competency questions relevant to each domain are used to evaluate the particular domain ontology built (based on the Mona-ServOnt core service ontology).
A. The Guidance and Notification Service Ontology
Location information, apart from service context information, is useful for guidance and notification services. This section discusses the use of Mona-ServOnt in a context-aware messaging approach for guidance and notification purposes. We illustrate the Mona-ServOnt guidance and notification service ontology using context-aware services for guidance and notification in a museum environment. The existing museum visitors' guidance service uses context information for messaging. The design supports small groups of visitors in a museum, with context-aware communication services. The framework includes context-aware communication services integrated with facilities such as data projectors to display presentations. The museum visitors' guidance service contains two services to target the sharing of the museum experiences and service-to-visitors communication.
The service ontology enhances and supports knowledge-sharing capabilities as well as messaging functionalities for guidance and notification purposes, such as the work done in Community Reminder. Community Reminder [9] provides the user with reminders and situations that can be applied in many ways. Mona-ServOnt contains similar context information that can be applied to offer guidance and notification services. For example, Mona-ServOnt uses location information, in addition to other context information, of actors as well as of points of interest (POI), to determine spatial relations between concepts within Mona-ServOnt; for example, the POI can be a milk bar whose location information can be compared with the actor's location information.
We design the service ontology to serve Melbourne Museum visitors. For example, Mr. Smith and his family are visiting the museum for the first time. The family includes six people: Mr. & Mrs. Smith, their two sons, James and Mark, and their grandparents, Mr. & Mrs. Ray. Everyone has different interests and attitudes about the tour plan. Mr. & Mrs. Smith intend to visit the Mind and Body Gallery, whereas the boys are attracted to the Science and Life Gallery, in particular the Tarbosaurus (giant meat eater, Tyrannosauridae) section. In addition, the grandparents are interested in the Bunjilaka Aboriginal Cultural Centre. Furthermore, the museum contains more permanent exhibits that the whole family wants to see, such as the Melbourne Gallery and the large skeleton of a Pygmy Blue Whale.
Furthermore, the family want to share their experiences and message each other while touring inside the museum. For example, Mr. & Mrs. Smith would like to message their parents saying "the Mind and Body Gallery is closed, and it will open in two hours". Also, James and Mark want to make notes saying "Science and Life Gallery is impressive". Additionally, the grandparents need information about the direction from the Bunjilaka Aboriginal Cultural Centre to the Forest Gallery. These scenarios and more can be addressed using messaging with concepts in the service ontology. For example, the family members are directed to a particular gallery inside the museum according to the available context information. Also, the museum management unit will continue tracking the locations of Mr. Smith's family members in order to redirect them in case they lose the right path. In addition, other guidance and notification tasks can be performed, such as playing a presentation once a user reaches a specific gallery. Also, a visitor can report back to the museum management unit her experiences about a particular section in textual form. The text can be shared through the management unit with other users or family members.
We propose an approach that offers the administrator of the museum management unit the ability to contact a group of visitors or an individual visitor using context information. For example, the administrator may want to address all the visitors with a message saying "The Mind and Body Gallery is closed for maintenance, and will re-open in two hours". The administrator may want to send a message to all the actors who "have been to the Science and Life Gallery" or who "are currently in the Science and Life Gallery". In addition, an individual may need messages sent out in the case of an emergency, such as Mrs. Ray having a heart attack, requiring all family members to attend a specific location. The administrator messages the family members about Mrs. Ray's situation and asks them to move towards that location. In addition, the administrator might want to send visitors within the Bunjilaka Aboriginal Cultural Centre a message saying "a short presentation will be playing shortly". Moreover, the museum management unit allows the users of the guidance and notification service to share and learn from each other's experiences. For example, Mr. Smith can get an idea of his father's experiences in the museum and possibly prepare him the ideal birthday gift. In addition, the scheme offers a notification capability to the end user. For example, the grandparents (Mr. & Mrs. Ray) first met each other in China in 1955 while they were travelling along the Great Wall of China, and Mr. Ray wants to surprise Mrs. Ray once they reach that area within the museum and give her a gift as a reminder of the time when they first met. The museum management unit will notify Mr. Ray once they reach the Great Wall of China exhibit inside the Touring Hall section so he can give Mrs. Ray his present.
The Mona-ServOnt guidance and notification service ontology is an ontological service in OWL-S providing guidance and messaging as sketched in the scenarios above. Fig. 12 shows the concepts in the service ontology, apart from the properties that connect these concepts. Moreover, these basic concepts can be elaborated on, i.e. new concepts can be added and linked to them, extending this basic ontology in order to describe different scenarios. It supports the description of relevant entities for guidance and notification functions and uses ordinary-language concepts, attributes and relations. It can be used to describe the targets and the contents of guidance and notification messages.
The service ontology captures context relating to situations of entities that occur over a region. A unit of administration is associated with an area being covered for messaging purposes, and is represented by the concept of the guidance and notification management unit. Fig. 13 shows an overview of the service ontology, expressing the relations between the ontology's fundamental concepts. When a museum is the unit of administration, there are important concepts in the service ontology as follows:
• Museum management unit is a conceptual unit of administration associated with a region where the actors are tracked and context information, including guidance and notification context information, is collected and managed.
• Museum context refers to context information about the museum, including museum tasks, status and events. Museum tasks refer to the list of tasks that can be done within the museum, such as send a notification message, get directions, leave a note, play a presentation when actors arrive, and so on. For example, we want to direct Mr. Smith inside the museum towards a particular section and also inform Mr. Ray once he is near the Great Wall of China exhibit. The museum context status describes the museum context condition, such as available, postponed or not available. The museum event describes the list of events that may happen within the museum, such as an event for children or a scientific demonstration.
• Region refers to the area where the service is running, such as the museum environment. The region, in this example, includes several divisions such as the Science and Life Gallery, the Mind and Body Gallery, and the Bunjilaka Aboriginal Cultural Centre. Also, the region has its own static location and status, which describes the condition of the museum: 'open', 'closed' or 'under maintenance'.
• Actor refers to the people within the museum that use the guidance and notification messaging services.
Actors have the following context information: actor location, actor type, ID, experience, status, activity and age. The actor type refers to either administrator or visitor. It is categorised to define the actor's different roles, and the actor may have multiple roles. For example, the actor can be assigned as a flag-bearer who has more responsibility within the museum, such as a guide. Moreover, the administrator manages the available context information to administer the museum tours, such as 'play presentation' or 'notify visitor'. In addition, the actor's relatives can be defined using the social aspect within the service ontology, especially in the case of urgent calls. The actor experience refers to the actor's opinion regarding a particular section or the whole tour. The actor status describes the actor's current condition, such as 'busy', 'happy' or 'enjoying', and the activity describes the actor's current action, such as 'watching presentation'.
• POI represents the geographical (which can be indoor) points that the actors are interested in and where the actor may want to perform a certain task, such as giving a present, in the case of Mr. Ray's scenario.
Examples of POIs are "Skeletons of Dinosaurs" and "Hatching the Past: Dinosaur Eggs and Babies". A POI has a static location. Also, the POI has a type attribute that assists in categorisation, and a status to describe the POI condition, such as positive or negative. A positive POI refers to the sections the visitor is allowed to visit, whereas a negative POI refers to the sections that are prohibited.
The relations between concepts connect the ontology's main concepts and can be used to describe a situation in guidance and notification scenarios, as well as to describe the information shared between actors using the guidance and notification service, such as finding a certain actor's location within the museum. They are categorised as follows:
• Guidance and notification relation: Used to explain the guidance and notification circumstances such as "start", "finish", "end" and "belong" within the museum. For example, a presentation will start in the Forest Gallery section in 10 minutes.
• Position relation: Used to capture practical position relation concepts such as "near", "far", "next to", "close to", "far away from" and "in the neighbourhood of".
We apply the service ontology in the Melbourne Museum visitor scenario (see Fig. 14). The figure shows the classes and subclasses, as well as the properties that connect them, in the museum scenario. The service ontology for the museum represents the available context information to describe the situations of actors and objects (e.g., for guidance and notification tasks) in the visitor scenario within the museum. The perspective taken is that messages are provided by (and via) the museum management unit to the actors within the museum. Fig. 15 (a specialisation of Fig. 13) gives an overview of the main concepts in the service ontology adapted for the museum environment.
The museum represents the area, which itself has many sections. The requested messaging tasks are like those mentioned in the scenario earlier. Furthermore, the museum has POIs relevant to the actors. These POIs are in position relations within the museum and with respect to the actors.
Note that the service ontology might include more concepts, as explained in the previous scenario. For example, Fig. 16 illustrates a detailed elaboration of the guidance and notification ontology with more concepts for museum visitors and the museum administrator.
The figure illustrates the relations between the service ontology concepts as described before (in blue) and new concepts added (in white). Note that the "has" relation is short for "has X" where X is the property; for example, the actor has a type and location: actor "has type" actor type and actor "has location" location.
The service ontology clarifies the information about visitors' situations. It uses qualitative spatial relations that can be mapped from quantitative spatial relations. The spatial relations assist in connecting information about visitors' activities within the museum. The ontology thereby clarifies the information that can be used by the visitor and the guidance and notification service for context-aware messaging in the museum.
We evaluate the service ontology by suggesting a range of key competency questions. These questions are answered by our version of the service ontology applied to the museum, which shows the expressiveness of the ontology as it stands, although further elaboration of the ontology can certainly be done.
Competency questions are used to show that the service ontology is able to capture and manage information useful for messaging in the guidance and notification tasks, as well as to illustrate various issues addressed by the service ontology presented earlier:
• What information about actors and their context can be used if one wants to send messages to actors in the museum?
The service ontology classifies actors into different types, e.g. 'visitor' and 'administrator', and has information about actors such as actor type and other context information such as 'location', 'status' and 'interests' that assists in defining the requested messaging tasks. For example, the ontology may be used to describe a group of actors according to their position relative to a certain POI in order to provide messaging services such as 'play presentation', or notifications for the group. Or, the museum management unit (administrator) may want to trigger messages and a presentation once Mr. & Mrs. Smith are within the Mind and Body Gallery and near the sample of the brain cells.
• What information about actors is needed to facilitate actors touring the museum in a way that they see exhibits most relevant to them?
The museum management unit serves the actors according to their interests. An actor's interest (as represented in the ontology) defines the requested messaging tasks of the museum management unit. For example, the museum management unit will explain different tours to Mr. & Mrs. Smith's family members according to their respective interests, such as directing the boys James and Mark to the Science and Life Gallery.
• What context can support the tasks of actors sharing information with each other?
The visitors can follow each other's locations and message each other to share experiences. Also, visitors can update their experiences at any time to be shared with others. Moreover, we assume that visitors can use the guidance and notification messaging service to access the museum guidance and notification tasks, which include leaving notes that display their opinion, getting directions and triggering presentations. For example, James and Mark update their experiences about the Science and Life Gallery, in particular the Tarbosaurus section, expressed as a note.
• What knowledge does the museum administration need to direct actors through the museum during an event such as a family member being in an emergency situation? For example, the administrator wants to inform Mr. & Mrs. Smith's family about Mrs. Ray's situation and tell them to go to the main gate in the case of an emergency.
• What is the knowledge that the service ontology offers to guide actors through the museum and to help actors know where they are? The service ontology offers a wide range of context information to the museum visitors, which allows them to compare their current location with the POI location so that they can check and adjust their tour plan. Also, it allows them to discover the location of other sections and define a new tour plan.
• What kind of notification can the service ontology generate?
The service ontology allows the museum management unit to keep tracking the visitors' locations in order to provide suitable location-based notifications when required. For example, the museum management unit will remind Mr. Ray about the surprise gift that he prepared for his wife once Mr. & Mrs. Ray reach the Great Wall of China exhibit inside the Touring Hall part of the museum.
• How can different actors and roles be distinguished?
The actor type helps distinguish different actors.
• How can an actor, whose role is to forward messages to other actor(s), be identified? She can be identified using the flag-bearer concept.
• What is the required knowledge for a flag-bearer to forward messages to other actors?
The flag-bearer is one of the actor types which has more responsibilities towards other actors near her position. The service ontology provides position relations to support communication among the actors, and the flag-bearer forwards the messages to a cluster via ad-hoc communication (for instance, using Bluetooth communication) filtered via position relations.
• What types of relations can be used to describe situations within the museum? The spatial relations described in the ontology earlier relate to situations illustrated in the museum scenario, such as the position relation. In addition, the service ontology uses social relations to describe the social aspects between the actors. The spatial relations can be used together with social relations to help describe a situation.
• What is the information that actors can share with one another inside the museum?
The service ontology supports actors who wish to share their experiences, and, combined with context, such experiences are "geo-tagged" or located. Also, actors can share their interests as well as a range of context information. For example, an actor can leave a note that presents her experience about certain sections or update her profile information to display her interests and experiences during the tour.
• What are the messaging tasks that can be performed within the museum?
The service ontology describes tasks such as 'leave notes', 'play presentation', 'give direction' and 'reminder messaging'. These tasks are examples of the range of tasks that might be included in a real implementation.
• How can messaging related to different sections of the museum be performed?
The service ontology offers information that relates to the museum's different sections using the concept "LE Division". The ontology represents the museum's different sections and divisions, and each division has a static location and type.
• How can an actor's status be described within the museum?
The service ontology uses the concept "actor status" to determine the actor's situation, such as "enjoying" or "having a heart attack". In addition, the ontology has the actor's location, age and type.
• What sort of events and activities can be found in the museum?
The service ontology defines events through the use of the concept museum context, which includes the museum's available events. On the other hand, the activity concept is defined within the actor's context information, which describes the actor's actions at a certain time, such as (Mr. & Mrs. Ray) celebrating their anniversary.
The competency questions reflect the kind of queries that the service ontology is designed to answer, in particular in relation to context-aware messaging for a museum. The competency questions reveal the extent of the ontology's information content for messaging purposes.
B. The Health Service Ontology
Context-awareness in healthcare allows for adaptation to a changing environment and patient preferences in order to provide adapted health-related services. In medical services, context is commonly used to accomplish two objectives: medical condition assessment and personalised healthcare services. A healthcare service can link the target patient's context information with the existing health agency in order to provide medical assistance. These services help manage medical tasks such as providing medical advice and assistance.
Health context refers to the information which describes the state of an actor who needs to make urgent health-related decisions. An actor can be a patient, family member, health agent, health manager, and others. The patient's context information can be provided via a smartphone to medical services. Furthermore, using context-aware information in healthcare monitoring systems assists in providing medical services which may save more lives by being rapidly responsive to health problems. To provide such context-aware health or medical messaging services, designing an ontology of useful concepts is helpful. In this section, we focus on applying the Mona-ServOnt service ontology for healthcare purposes. It supports context-aware messaging for health and medical care services.
We consider the following scenario, where Mr. Bill and Mr. Don are elderly gentlemen who live alone in Melbourne city after their respective spouses passed away. Both had heart surgery in the previous month. Mr. Bill's case needs to be followed up and monitored 24 hours a day for the rest of his life. However, he has two sons and one daughter, and they all have their own families and work, so they cannot stay with him continuously. As a result, attaching sensors to Mr. Bill's body to monitor his pressure and temperature may help solve the problem. These sensors can be connected via a wireless network to a smartphone, although they might slow down the platform, as identified in [18]. Then, the smartphone can send the information to health or medical control units, which can examine the incoming sensor data and produce a report about his current medical condition. After that, this information can be forwarded to a family member, health manager, health agent or an expert to interpret and act on. For example, Dr. Davie wants a report on Mr. Bill's and Mr. Don's statuses for the last three weeks, including their blood pressure, temperature and situational information. Some of this information can be measured by the sensors, whereas some has to be supplied by Mr. Bill manually by completing a document or making a voice recording.
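A possible shape for the sensor-to-control-unit report in this scenario is sketched below. The vitals fields mirror the patient context attributes listed later in this section (temperature, blood pressure, heart rate); the schema itself is an assumption, not the paper's wire format.

import json
import time

def vitals_report(actor_id: str, temp_c: float, bp_mmhg: tuple,
                  heart_rate_bpm: int) -> str:
    """Serialise one sensor reading for the health management unit."""
    return json.dumps({
        "actor_id": actor_id,
        "timestamp": int(time.time()),
        "temperature_c": temp_c,
        "blood_pressure_mmhg": list(bp_mmhg),  # (systolic, diastolic)
        "heart_rate_bpm": heart_rate_bpm,
    })

print(vitals_report("patient-bill", 37.1, (135, 85), 78))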
Another case is where Mr. Bill wants advice from his doctor, Dr. Davie, or a health agency about certain activities such as going for a run at the nearby park. In addition, Smith, who is Mr. Bill's son, wants to monitor his father's situation because he knows that Mr. Bill is at the bar with his friends, and he is concerned about his alcohol consumption on that night. Other alternatives are also set up for that night, to prepare for the case where Mr. Bill is extremely unfortunate and direct communication with the health management team and his son is lost. His phone then finds and connects to another device and uses it as a proxy to send reporting messages back. Moreover, Mr. Don went to his friend's beach house for the weekend, and Dr. Davie wants to message Mr. Don with advice suited to his condition while he is away. All these scenarios need to be addressed by a health management service. In order to support such messaging, we develop the Mona-ServOnt health service ontology for context-aware messaging in such health-related scenarios. It builds on the Mona-ServOnt core service ontology, and is used for medical scenarios. It aims to support messaging using context in health-related services. For example, it assists the health agent and the family members with information about patients anywhere and at any time. It is built and developed in OWL-S using Protege (see Fig. 17).
The service ontology concepts can be expanded using concepts to capture a particular scenario. Another view of the main concepts in the service ontology is illustrated in Fig. 18. The ontology assists in defining the targets and the contents of the exchanged messages between actors.
The important aspects of the service ontology are described as follows:
• Health management unit refers to the administrative unit that is responsible for managing and tracking the actor's context information, including health information, reporting medical situations as well as providing messaging services to the actors. The services are defined according to the actor type. It may represent any health agency.
• Medical situation context represents the context that assists in defining the available medical context information and includes medical status, medical events and medical tasks. The medical status describes the measurement of medical conditions, which contains three levels of medical situation: high, medium and low, according to the patient's context information.
The medical event describes the available medical procedures, such as "check-up", "medical examination" and "having surgery". The medical task includes the list of medical actions that can be generated by the health management unit, such as make an emergency call, generate a report and contact the patient.
• Region signifies the area where the actors involved in messaging are located (that is, the relevant area over which context-aware messaging would be supported). It has a static location, division and status. The region status describes the region condition depending on the region type, such as 'crowded', 'busy' or 'raining'.
• Actor refers to the people involved in messaging; for this domain it refers to the people involved in the health monitoring and management process. We use the following context information to describe an actor in the medical situation: actor type, which can be "patient" and/or their relatives, "health manager" and "health agent". Actors may have multiple roles depending on the requested task. We also have actor ID, age, status, activity and location. Moreover, if the actor is a patient, her medical information includes temperature, blood pressure and heart rate. In addition, 'actor status' is necessary to define the actor's condition, such as "healthy", "sick" or "under supervision", whereas 'actor activity' describes the actor's action or movement, as we will elaborate on later.
• POI refers to the geographical points where the actor is available during the requested task. It has two types: 'private space', such as an office, a friend's house or home, and 'public space', such as a park, shopping centre, bar or hospital. It has a status to describe the situation of the POI, such as "open", "closed" or "not available". Also, it has a static location within the region.
Fig. 19 describes the service ontology concepts and relations in more detail. The concepts and their relations help describe medical situations. It uses similar relations to those described for the previous domain. For example, the social relations express people's societal relations as described in previous service ontologies. It describes context information in a context-aware health management and monitoring service. We assume there are mechanisms to sense and obtain the patient's information and to interpret the data in order to issue the appropriate level of medical action. The ontology provides a way to capture shareable information about medical situations. In addition, the service ontology can be enlarged by adding more concepts to express a certain situation. The figure shows the service ontology using more concepts and sub-concepts, which supports describing the knowledge of the previous medical scenarios for Mr. Bill.
According to the medical status level and the available medical tasks, the health management unit performs the appropriate action. For example, if Mr. Bill's current medical attention level is "high", the health management unit will inform his health manager, the closest health assistance within his range and his relatives to obtain immediate support for Mr. Bill. Furthermore, if Mr. Bill's medical situation is "low", it might only require Mr. Bill to do some easy activities, such as drinking a lot of water or staying away from the sun; that is, the health management unit will message Mr. Bill about the right procedure or inform his relatives about his status. The spatial relations support linking information about the medical condition and actors who are available in different places near certain POIs.
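This level-dependent behaviour can be sketched as a simple rule table, as below. The recipients for the 'high' level and the task names for the other levels follow the description above; the exact strings are assumptions.

def medical_actions(status_level: str) -> list:
    """Actions the health management unit takes per medical status level."""
    actions = {
        "high": ["make emergency call",
                 "notify health manager",
                 "notify nearest health assistance",
                 "notify relatives"],
        "medium": ["generate report", "contact patient"],
        "low": ["message patient with self-care advice",
                "inform relatives of status"],
    }
    return actions.get(status_level, ["log and re-check"])

print(medical_actions("high"))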
We use the competency-question method to evaluate the ontology:
• How can a doctor or health agency refer to patients for messaging purposes? The service ontology organises actors according to their context information in varying ways. For example, it groups actors according to whether they share the same POI relations, groups actors within the region, or groups actors using their context information. For example, Dr. Davie can send a message to all his patients in Melbourne city to inform them that he will be away for a month.
• How does a medical agent or the hospital staff refer to a particular patient who is about to take inappropriate actions near certain POIs? For example, Dr. Davie can send a message to Mr. Don, who is near the beach, to stay away from the water, or message Mr. Bill to only walk for a short distance because of his medical situation.
• How is a medical health situation described?
The service ontology offers a rich knowledge base about a patient using her context information, such as her location, temperature, blood pressure and status, to be shared with other actors such as a family member, health agent or her doctor. For example, Mr. Smith can view Mr. Bill's medical information at any time.
• What happens if a patient loses the connection with the health management unit during an emergency?
The service ontology supports actors being able to communicate with each other using cluster services via ad-hoc communication (for instance, using Bluetooth communication). For example, Mr. Bill's phone can detect a heart condition and will forward his medical request to anyone at the bar so they can arrange an ambulance for Mr. Bill. The flag-bearer, which is an actor type, can always be responsible for a particular group.
• What relations are required to describe situations for health management? There are several types of relations to support describing medical situations. For example, there are relations to describe medical situations within the region, such as the medical relation. Also, a relation to describe the position of an actor with respect to medical situations is necessary, such as the position relation. Moreover, we require relations that define social aspects of a patient in case of high-emergency situations; e.g., contacting her family might use a social relation. These relations are available within the service ontology.
• What information is needed for a health management service that offers messaging services to support various medical tasks?
The service ontology supports describing medical tasks. Such task information serves as additional useful context information for messaging.
• Who is involved in the health management and monitoring process?
The service ontology represents different actors involved in the health monitoring and management process. For example, Mr. Don will be directed by the health management unit to a hospital nearest to the beach house, and his doctor and family members will all be notified.
• What can an actor know about another actor's condition?
The service ontology, through the use of the concept actor status and other context information, allows the actors to describe their situations and have them viewed by others.
• How can a doctor or health agency know about the current situation and action of a patient?
The doctor or health agency needs to find out the current activity of patients so that they can perform the right action.
IV. EVALUATION
The performance evaluation of the peer-to-peer services within the implementation is non-trivial and is needed to investigate the robustness of the system. We examine whether the peer-to-peer service performs well enough in terms of the time it takes to forward a message from a flag-bearer device (the tracker) to other devices (the tracked devices), and to receive an acknowledgment message from those devices. A set of smartphones was used to evaluate the peer-to-peer aspect. In particular, we conducted different sets of experiments to determine the average total send-receive time.
A. Experiment 1
In this experiment, we use one tracked device, and assign multiple values to the distance between the tracker and the tracked device, and to the query length. For this configuration, the results of the overall send-receive times, including both discovery times of roughly 10-12 seconds and transmission times for warning messages and acknowledgment messages, are summarised in Figs. 20 and 21. We can see that there is no significant difference in the total send-receive time after increasing the distance between the tracked device and the tracker (up to 12 metres, beyond which the performance degrades) or the query (i.e. message) length (we assume warning messages are succinct).
B. Experiment 2
We increased the number of devices arranged linearly from two (from the flag-bearer/tracker to another device) to three, where a device receives a warning message from the flag-bearer and then subsequently forwards it to a third device; the third device, on receiving the warning message, returns an acknowledgment to the second device, which then, in turn, forwards it to the flag-bearer. The results for this experimental setup are summarised in Fig. 22, showing the send-receive times including the discovery time (of roughly 12 s) for all devices. We can see that there is a significant difference between the total send-receive times after increasing the number of hops by only one. This is because, in the case of three devices, the middle device needs to maintain two Bluetooth connections, and this severely degrades the performance (worse than double the two-device case, a non-linear increase).
Overall, we conclude from the aforementioned limited experiments that two communicating devices need to be within 12 metres of each other (up to only six metres preferred), that Bluetooth discovery time is very large compared to message transmission time (since we are mainly dealing with small messages), with transmission time only around 0.5% of the discovery time, and that forwarding over only two hops can take substantially more time. However, the times are at the lower bound of what is increasingly possible, and improvements with Bluetooth (such as version 4.0) and newer, more capable devices could lead to improved performance.
C. Usability of Messages
To evaluate the usability of the messages, an eight-item questionnaire was devised and distributed to 50 participants, randomly chosen from the students and staff at La Trobe University, who willingly decided to take part in the survey. The participants were provided with a brief explanation (3 to 5 min) of a fire scenario. We assume that a real fire has started in the central cafe area of the university, where actors were available, and two alert messaging styles were generated. The alert message interface as well as the normal text alert message is given in Fig. 23.
The participants were required to answer the following eight questions on a scale from 1 to 5, where 1 is very low and 5 is very high. Table I shows the results, distributed amongst students and staff, for the set of 50 surveyed actors. For example, Q1 has a mean score of 0.72, showing that, overall, participants rated the normal text alert message as low. However, the standard deviation shows that there is a large difference between participants, because some ranked it as very high. Most of the participants gave a score of between 4 and 5 to Q2, Q3, Q4, Q5, Q6, Q7 and Q8, with a low standard deviation, meaning that there are only small differences between the participants' answers. The survey result shows that most of the participants would prefer receiving the alert message instead of the normal messaging style (Q1 & Q2). The users considered the message very easy to use (Q3) as well as useful when deployed (Q4 and Q5); the users were in favour of its use as a help through a smartphone when in a dangerous area (Q6 and Q7). Participants indicated that the alert message conveyed enough information (Q8); the lowest score given by the users was to question Q1, where a normal text message was provided. Fig. 24 shows the system usability according to the question loading/ranges. The results show that the participants favoured the concise short alert messaging style. They noted that the alert message is well organised, the danger information is clearly separated from the rescue information, and the information is easy to track, especially during updates.
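For reference, the Table I statistics can be reproduced from raw 1-5 Likert responses as in the short sketch below; the sample response list is hypothetical, not the authors' raw survey data.

from statistics import mean, stdev

responses_q3 = [5, 5, 4, 5, 4, 5, 4, 5, 5, 4]  # hypothetical sample
print(round(mean(responses_q3), 2), round(stdev(responses_q3), 2))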
V. CONCLUSION
The ontology-based CAMSMBO methodology has been presented. The Mona-ServOnt core service ontology has then been presented in the context of two service domains, and its functionality has been evaluated using competency questions for each respective messaging domain, focused primarily on context-aware messaging. Moreover, six research questions have been answered. We can envision a future of easier ways to develop context-aware services when developers use the entire CAMSMBO methodology, or some parts of it, in their constructions. CAMSMBO, with its Mona-ServOnt, offers an approach for the capturing, filtering and reasoning of information to yield a knowledge base for developers to attain a better understanding of the context-aware service domain.
Fig. 2. The process of understanding the spatial environment
Fig. 11. Exchange of messages in the messaging protocol
Fig. 16. Guidance and notification ontology elaborated with more concepts
Fig. 17. Concepts in the health management ontology in OWL-S
Fig. 18. Overview of the main concepts in the health management ontology
Fig. 19. Health service ontology using more concepts and sub-concepts
Fig. 22. Service time after increasing the number of devices
Fig. 24. Message scores using the eight questions
TABLE I. SUMMARY OF THE RESULTS OF THE CONDUCTED SURVEY

Questions | Mean | Std Dev.
Q1. Do you like Figure A? | 0.72 | 1.678678
Q2. Do you like Figure B? | 4.04 | 0.497826
Q3. The message is easy to understand. | 4.62 | 0.490314
Q4. The message is useful during a disaster. | 4.48 | 0.646498
Q5. I would use the service once it is deployed. | 4.60 | 0.534522
Q6. The message would help me to navigate through a hazardous situation. | 4.38 | 0.696639
Q7. It is a good idea to use a smartphone as an emergency guide.
"year": 2019,
"sha1": "81aac71f67d05caa5dc83da2887646d623e8d588",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume10No2/Paper_62-Service_Oriented_Context_Aware_Messaging_System.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "81aac71f67d05caa5dc83da2887646d623e8d588",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Thorium-232 fission induced by light charged particles up to 70 MeV
Studies have been devoted to the production of alpha emitters for medical application in collaboration with the GIP ARRONAX, which possesses a high-energy and high-intensity multi-particle cyclotron. The production of Ra-223, Ac-225 and U-230 has been investigated from the Th-232(p,x) and Th-232(d,x) reactions using the stacked-foils method and gamma spectrometry measurements. These reactions have led to the production of several fission products, including some with a medical interest like Mo-99, Cd-115g and I-131. This article presents cross section data of fission products obtained from these undedicated experiments. These data have also been compared with the TALYS code results.
Introduction
The irradiation of thorium by light charged particles like protons and deuterons leads to the production of several radionuclides, among which radium-223 [1], bismuth-213 [2] and thorium-226 [3] are alpha emitters having a great potential in oncologic therapy. The GIP ARRONAX is focused on the production of medically relevant radionuclides and possesses a multi-particle cyclotron [4]. This accelerator has been used for our study on the Th-232(p,x) and Th-232(d,x) reactions. The main motivation was to study the production cross section of Pa-230, which decays to Th-226 [5] via U-230. After the irradiations, the activity values were determined by gamma spectrometry, and the associated spectra give information on the production of several fission products inside the thorium-232 target. The activity of each detectable and quantifiable fission product (FP) has been determined and the associated production cross section has been extracted. From these data, we determined the mass distribution of the FP and the sum of the fission product cross section values. A systematic comparison with the results of the TALYS code has been done.
Experimental set-up
The cross section data are obtained using the stacked-foils method [5,6], which consists of the irradiation of a set of thin foils, grouped as patterns. Each pattern contains a target to produce the isotopes of interest. Each target is followed by a monitor foil to obtain information on the beam intensity thanks to the use of a reference reaction recommended by the International Atomic Energy Agency [7]. In our experiment, the monitor foil also acts as a catcher to stop the recoil nuclei produced in the target foil. A degrader foil is placed after each monitor foil to change the incident beam energy from one target foil to the next one. Each foil in the stack has been weighed before irradiation using an accurate scale (± 10^-5 g) and scanned to precisely determine its area. The thickness is deduced from these measurements, assuming that it is homogeneous over the whole surface. In this work, we used 10 and 40 µm thick thorium foils, 10 to 25 µm thick titanium, copper and nickel monitor foils depending on the incident particle and its energy, and 100 to 1000 µm thick aluminium or copper degrader foils. These foils were irradiated by the proton (up to 70 MeV) and deuteron (up to 33 MeV) beams provided by the ARRONAX cyclotron. Proton and deuteron beams have, respectively, an energy uncertainty of ± 0.50 MeV and ± 0.25 MeV, as specified by the cyclotron provider using simulations.
The beam line is under vacuum and closed with a 75 µm thick kapton foil. The stacks were located about 6.8 cm downstream, in air. The energy in each target and monitor foil has been determined in the middle of the thickness of the foil using the SRIM software [8]. Energy losses in the kapton foil and in air have been taken into account. Five stacks were irradiated with protons and five with deuterons, covering, respectively, the energy ranges from 70 MeV down to 11 MeV and from 33 MeV down to 8 MeV. The use of several stacks allows us to minimize the energy uncertainty in our experiments. All along the stack, depending on the number of foils, the energy uncertainty increases up to ± 1.8 MeV due to energy straggling. Irradiations were carried out for half an hour, with a mean intensity between 100 and 150 nA for proton beams and between 50 and 140 nA for deuteron beams. The recommended cross section values [7] of the Ti-nat(d,x)V-48 (all energies), Cu-nat(p,x)Co-56 and Zn-62 (> 50 MeV), Ti-nat(p,x)V-48 (< 20 MeV) and Ni-nat(p,x)Ni-57 (20-50 MeV) reactions were used to get information on the beam intensity.
The activity measurements in each foil were performed using a high purity germanium detector with low-background lead and copper shielding. Gamma spectra were recorded in a suitable geometry, calibrated in energy and efficiency with standard Co-57, Co-60 and Eu-152 gamma sources. The full widths at half maximum were 1.04 keV at 122 keV (Co-57 γ ray) and 1.97 keV at 1332 keV (Co-60 γ ray). The samples were placed at a distance of 19 cm from the detector, which is suitable to reduce the dead time and the effect of sum peaks. The dead time during the counting was always kept below 10%. The first measurements started the day after the irradiation (after a minimum of 15 hours cooling time) and lasted one hour, for all target and monitor/catcher foils. A second series of measurements was performed one week after the end of irradiation, during a minimum of 24 hours (one day) and up to 60 hours. A third series of measurements was done for long half-life radionuclides, after waiting for the decay of some radionuclides. Our data are thus limited to γ-emitting radionuclides with a half-life longer than a few hours. FP have been detected and quantified from Zn-72 to Pm-151. The majority of them have been measured after the decay of their parents (filiation).
Cross section calculation
The cross section values are calculated using the well-known activation formula, written as a relative equation in which knowledge of the beam current is no longer necessary thanks to the recommended reactions. The uncertainty is obtained by an error propagation calculation (see [9] for more details).
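The relative activation formula itself is not written out above; a schematic version, assuming the standard thin-target activation relation with the beam flux cancelling between the target and its monitor foil, reads

\sigma(E) = \sigma_{\mathrm{mon}}(E)\,
            \frac{A_{\mathrm{EOB}}}{A_{\mathrm{EOB}}^{\mathrm{mon}}}\,
            \frac{N_{t}^{\mathrm{mon}}}{N_{t}}\,
            \frac{1 - e^{-\lambda_{\mathrm{mon}} t_{\mathrm{irr}}}}
                 {1 - e^{-\lambda\, t_{\mathrm{irr}}}},

where A_EOB is the end-of-bombardment activity, N_t the areal density of target nuclei, λ the decay constant and t_irr the irradiation time, with 'mon' denoting the corresponding monitor-reaction quantities.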
Catcher correction
In our experiments, devoted to medical isotope production studies, only one catcher was placed, after the target, as needed to collect the recoil nuclei. However, in the case of fission, the FP are emitted from the target in both the forward and backward directions. A catcher before the target would then be needed to collect all the nuclei. To correct for this, the proportion of FP emitted backward of the target has been estimated by kinematic and Monte Carlo calculations.
In our calculation, the projectile is sent on a thorium target followed by a catcher. The FP emission is computed in the centre of mass of the compound nucleus (Th+p or Th+d): the total kinetic energy is calculated according to the Viola et al. formula [10]; the two fragments share this energy according to linear momentum conservation and are emitted isotropically in the centre of mass of the reaction. The FP energy is then calculated in the laboratory system and its emission point is randomly drawn within the depth of the thorium target. Thanks to a SRIM calculation [8], one can then determine whether the FP remains in the target or exits it, forward or backward. Experimentally, we know that FP from Zn-72 to Pm-151 have been detected. The proportions of FP remaining in the target, as well as exiting backward and forward of the target, are determined in the cases of Zn-72, I-131 and Pm-151. Fig. 1 shows these proportions as a function of the incident energy for proton beams.
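A toy Monte Carlo in the spirit of this procedure is sketched below; it is our illustration rather than the authors' code, and the SRIM range tables are replaced by a hypothetical linear range function (range_um), so only the qualitative forward/backward split is meaningful:

import math
import random

AMU_MEV = 931.494           # atomic mass unit in MeV/c^2
TARGET_UM = 40.0            # thorium foil thickness in micrometres

def viola_tke(z_cn, a_cn):
    # Total fragment kinetic energy from the Viola systematics (MeV).
    return 0.1189 * z_cn**2 / a_cn**(1.0 / 3.0) + 7.3

def range_um(energy_mev, a_frag):
    # Placeholder for a SRIM range lookup in thorium (micrometres);
    # hypothetical linear scaling, to be replaced by tabulated data.
    return 0.08 * energy_mev * 100.0 / a_frag

def simulate(e_beam, a_proj, z_proj, a_frag, n_events=100000):
    a_cn, z_cn = 232 + a_proj, 90 + z_proj      # compound nucleus
    e_frag_cm = viola_tke(z_cn, a_cn) * (a_cn - a_frag) / a_cn
    v_frag = math.sqrt(2.0 * e_frag_cm / (a_frag * AMU_MEV))   # units of c
    v_cm = math.sqrt(2.0 * e_beam / (a_proj * AMU_MEV)) * a_proj / a_cn
    stopped = backward = forward = 0
    for _ in range(n_events):
        cos_cm = random.uniform(-1.0, 1.0)      # isotropic in the CM
        vz = v_frag * cos_cm + v_cm             # boost along the beam axis
        vt = v_frag * math.sqrt(1.0 - cos_cm**2)
        e_lab = 0.5 * a_frag * AMU_MEV * (vz**2 + vt**2)
        cos_lab = vz / math.sqrt(vz**2 + vt**2)
        depth = random.uniform(0.0, TARGET_UM)  # emission point in the foil
        r = range_um(e_lab, a_frag)
        if cos_lab > 0 and r * cos_lab > TARGET_UM - depth:
            forward += 1
        elif cos_lab < 0 and -r * cos_lab > depth:
            backward += 1
        else:
            stopped += 1
    return stopped / n_events, backward / n_events, forward / n_events

for e in (20, 40, 60):      # proton energies in MeV
    print(e, simulate(e, a_proj=1, z_proj=1, a_frag=131))

The near-equality of the backward and forward fractions in such a calculation is what justifies the factor 2 applied below.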
Based on the results shown in Fig. 1, one can consider that approximately the same amount of FP is collected in both catchers. This has been validated with a dedicated experiment with two catchers, forward and backward (see Fig. 4). Thus, the amount of FP activity measured in the (forward) catcher is simply multiplied by a factor 2 and added to the activity obtained in the target to determine the cross section values discussed hereafter. The small kinematic difference between the forward and backward directions visible in Fig. 1 is neglected, being small compared to the uncertainty in the experimental determination of the activity in the catcher.
The TALYS code
In this work, all the experimental cross section values are compared with version 1.6 of the TALYS code, released in December 2013 [11]. TALYS is a nuclear reaction program which simulates reactions induced by light particles on target nuclei heavier than carbon. It incorporates theoretical models to predict observables, including cross section values, as a function of the incident particle energy (from 1 keV to 1 GeV). A combination of models that best describes the whole set of available data for all projectiles, targets and incident energies has been defined by the authors and set as the default in the code. In this way, a calculation can be performed with minimum information in the input file: the type of projectile and its incident energy, and the target type and its mass.
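A default-model TALYS calculation of this kind needs only a minimal input file; a sketch following the keyword syntax of the TALYS manual (the energy value is an arbitrary example) is:

projectile p
element th
mass 232
energy 30.

TALYS reads this file from standard input (talys < input > output) and then produces the default-model observables.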
Since there are some differences between the experimental data and the results of the TALYS code using default models, we have defined a new combination of models, drawn from those already included in the TALYS code. The description of the optical, preequilibrium and level density models has been found to have a great influence on the calculated production cross section values. Better results are, in general, obtained when protons and deuterons are used as projectiles with the optical models described in [12] and [13], respectively, and, for both projectiles but also with alpha particles [9], when a preequilibrium model based on the exciton model, including numerical transition rates with optical model collision probabilities [14], is used together with a model for the microscopic level density from Hilaire's combinatorial tables [15]. The results referenced as TALYS 1.6 Adj in Figs. 4 and 5 correspond to TALYS calculations performed with this new combination of models.
The mass distributions
The data depicted in Fig. 2 and Fig. 3 show the cumulative cross section values of the detected FP, respectively with protons and deuterons as projectiles, plotted as a function of the FP mass, A. The points correspond to our measurements and the dashed lines are spline fits to these experimental values. The full lines correspond to TALYS results with default models, selecting only FP that have been detected in our experimental conditions (presented at the end of part 2.1). The apparent depletion of the heavier FP peak is due to these experimental conditions (γ spectrometry detectability).
Proton results (Fig. 2) show that symmetric fission (highlighted around A = 110) becomes more probable than asymmetric fission as the incident energy increases (above 30 MeV). This has already been observed with protons [16] and neutrons [17]. Figure 2 shows a saturation, and even a decrease, of the amplitude of the symmetric fission above 57 MeV. This trend has to be confirmed and explained. The cross section values are similar in the cases of proton and deuteron irradiation (see Fig. 3) for the same incident beam energy. In all cases, the TALYS code with default models is not able to reproduce our experimental data, predicting too low a production of fission fragments.
The Th-232 total fission cross sections
Figure 4 and Fig. 5 show the sum of the FP production cross sections measured in our experiments, for protons and deuterons as projectiles, respectively. These plots also show the TALYS code results (limited to the detected FP) and the TALYS Th-232 total fission cross section (including all FP). For both projectiles, protons (Fig. 4) and deuterons (Fig. 5), our experimental total fission value reaches 800 mb at an incident energy of 30 MeV. Both curves show the same trend.
Figure 4 also presents the results from an experiment made with two catchers, one backward and one forward of the target. Two energy points (69.9 MeV and 57.1 MeV) are in agreement with the values obtained when only one catcher is placed forward relative to the beam direction and its contribution is doubled. This confirms our approach for the catcher correction.
In Fig. 4 and Fig. 5, TALYS results using the default models are referred to as 'Default' whereas those obtained using the combination of models described in part 2.4 are referred to as 'Adj.'. We found that TALYS Adj. is better able to reproduce the shape and the amplitude of the FP cross sections than TALYS with default models. In addition, the TALYS Adj. results allow estimating the proportion of FP that were produced during the irradiations but not detected in our experimental conditions (i.e. cooling time, γ spectrometry), by simply subtracting the TALYS result obtained with our experimental constraints from the one obtained without them.
Conclusion
Th-232(p,x) and Th-232(d,x) reactions have been studied using the stacked-foils method. These reactions have led to the production of several fission products. The absence of a catcher backward of the targets in our stacked-foils experiments has been corrected for using kinematic and Monte Carlo calculations. These corrections have been validated by a dedicated experiment using two catchers.
Even though our work comes from undedicated experiments and gives partial measurements, the increase of the symmetric fission of Th-232 with the proton and deuteron incident energy has been observed. Total fission cross sections have then been estimated thanks to TALYS calculations. A special effort has been made to determine which models included in the TALYS code reproduce the experimental data better than the default ones, considering our experimental conditions. Our new experimental results could contribute to the improvement of the theoretical models. Studies are still in progress on this point.
Figure 1. Fraction of fission products exiting the thorium target (40 µm thick), backward and forward, as a function of the incident proton energy.
Figure 2. Mass distribution of the fission products for protons as projectiles.
Figure 3. Mass distribution of the fission products for deuterons as projectiles.
Figure 4. Th-232 total fission cross section for protons as projectiles. | 2018-12-16T01:47:11.580Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "5ff11a6bbfc9fd989d1de122707d3b68a746f09e",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/15/epjconf-nd2016_04058.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5ff11a6bbfc9fd989d1de122707d3b68a746f09e",
"s2fieldsofstudy": [
"Physics",
"Medicine"
],
"extfieldsofstudy": [
"Physics"
]
} |
211171658 | pes2o/s2orc | v3-fos-license | On the immanants of blocks from random matrices in some unitary ensembles
The permanent of unitary matrices and their blocks has attracted increasing attention in quantum physics and quantum computation because of connections with the Hong-Ou-Mandel effect and the Boson Sampling problem. In that context, it would be useful to know the distribution of the permanent or other immanants for random matrices, but that seems a difficult problem. We advance this program by calculating the average of the squared modulus of a generic immanant for blocks from random matrices in the unitary group, in the orthogonal group and in the circular orthogonal ensemble. In the case of the permanent in the unitary group, we also compute the variance. Our approach is based on Weingarten functions and factorizations of permutations. In the course of our calculations we are led to a conjecture relating dimensions of irreducible representations of the orthogonal group to the value of zonal polynomials at the identity.
Introduction and Results
With an n × n matrix A we can associate some quantities called immanants, which are labelled by partitions of n. The most important and most studied of them is by far the determinant. This is because the determinant is invariant under similarity transformations and can be expressed solely in terms of the eigenvalues (this is not true of the other immanants). Given a partition γ ⊢ n, the corresponding immanant is given by

Imm_γ(A) = Σ_{π∈S_n} χ_γ(π) ∏_{i=1}^{n} A_{i,π(i)},

where the sum is over the permutation group S_n of n! elements, and χ_γ(π) is the character of π in the irreducible representation of S_n labelled by γ. The determinant corresponds to the partition with all parts equal to 1, which leads to the totally antisymmetric or alternating representation, for which χ_{(1^n)}(π) equals the sign of the permutation π.
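For concreteness, the definition above can be evaluated by brute force for small matrices; the Python sketch below (our illustration, not from the paper, and exponential in n) implements it for the determinant and the permanent, whose characters are just the permutation sign and the constant 1:

from itertools import permutations

def parity(perm):
    # Sign of a permutation given as a tuple of images of 0..n-1:
    # each cycle of even length flips the sign.
    sign, visited = 1, [False] * len(perm)
    for i in range(len(perm)):
        if not visited[i]:
            j, cycle_len = i, 0
            while not visited[j]:
                visited[j] = True
                j = perm[j]
                cycle_len += 1
            if cycle_len % 2 == 0:
                sign = -sign
    return sign

def prod(xs):
    result = 1
    for x in xs:
        result *= x
    return result

def immanant(a, char):
    # Sum over S_n of char(pi) * prod_i a[i][pi(i)].
    n = len(a)
    return sum(char(pi) * prod(a[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

def determinant(a):          # gamma = (1^n), character = sign
    return immanant(a, parity)

def permanent(a):            # gamma = (n), character = 1
    return immanant(a, lambda p: 1)

A general immanant would additionally need the character table of S_n, e.g. from a computer algebra system such as SageMath; that is kept out of this sketch.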
The second most famous immanant is the permanent, which seems to have been introduced by Cauchy and is sometimes called the unsigned determinant. It corresponds to the one-part partition γ = (n), associated with the totally symmetric or trivial representation, for which χ (n) (π) = 1. Other immanants, introduced by Littlewood and Richardson in the same paper where they defined their famous rule [1], do not have special names.
The permanent has attracted some attention recently within the area of quantum physics, because it is involved in the description of the output state of several bosons scattered by a linear interferometer [2]. This process may be used to implement a (non-universal) platform for quantum computation [3] and to solve problems that are intractable using classical computers. In particular, the calculation of the permanent for a generic matrix is supposed to be a #P-hard problem [4] (see also recent experimental results in [5] and the discussion in [6]). Permanents of random matrices have been attacked from different directions [7,8,9,11].
In the quantum mechanical setting, when the interferometer is modelled by a chaotic cavity, as in [10], it is natural to replace the scattering matrix by a random unitary matrix, uniformly distributed in the unitary group U(N) with respect to Haar measure. In the presence of time-reversal symmetry, the matrix is additionally taken to be symmetric, i.e. uniformly distributed in the symmetric space U(N)/O(N) (also called the Circular Orthogonal Ensemble, COE), where O(N) is the orthogonal group.
Motivated by this quantum physics context, in this work we address the problem of computing the average value of immanants of blocks inside random matrices in these three unitary ensembles (for simplicity, we do not discuss symplectic ensembles, but their analysis follows similar lines). When we write Imm_γ(U) with γ a partition of n, we understand that we are computing the immanant of the upper-left n × n block cut out from an N × N matrix U. The determinant and permanent of the block are denoted Det_n and Per_n, respectively. We may of course consider the whole matrix by taking n = N.
Some particular results have appeared before. Working in the context of the Hong-Ou-Mandel effect, Urbina et al. [10] found the average of the squared modulus of the permanent and of the determinant, for the unitary group and for the COE. However, their COE calculation is for blocks whose row indices and column indices are distinct, while ours is for a diagonal block. Also, Fyodorov [11] computed the average of a product of two permanent polynomials of the whole matrix in the unitary group, ⟨Per_N(U − z_1) Per_N(U† − z_2)⟩_{U(N)}. We rederive his result by a different method, and generalize it to sub-blocks of size n and to the orthogonal group.
We now briefly review some definitions related to representation theory in order to state our results. Detailed discussion of the concepts involved can be found in Section 2.
We need some families of monic polynomials, labelled by integer partitions. They are

[N]^{(α)}_λ = ∏_{(i,j)∈λ} (N + α(j − 1) − (i − 1))   and   {N}_λ,

where (i, j) refers to a box in the Young diagram representation of λ (and λ′ denotes the conjugate partition). The first family has a parameter α. When α = 1 they are given by

[N]^{(1)}_λ = (n!/d_λ) s_λ(1_N),

where d_λ = χ_λ(1) are dimensions of the irreducible representations of the permutation group and s_λ are Schur functions, irreducible characters of U(N). When α = 2 they are related to zonal polynomials, [N]^{(2)}_λ = Z_λ(1_N). The other family is also given by

{N}_λ = (n!/d_λ) o_λ(1_N),

where o_λ are irreducible characters of O(N). An important role will be played by the function

G_{λ,γ} = Σ_{µ⊢n} |C_µ| ω_λ(µ) χ_γ(µ),

where C_µ are the conjugacy classes of S_n and ω_λ(µ) are zonal spherical functions of the Gelfand pair (S_{2n}, H_n), the group H_n being the hyperoctahedral. The function G_{λ,γ} looks like an inner product between a permutation group character and a zonal spherical function, and hence looks somewhat unnatural, mixing up quantities that do not belong together. Nevertheless, it appears in our results, in particular in the form of the special case g_λ = G_{λ,(n)}. We show in Section 2 that the function g_λ is different from zero only if λ has at most 2 parts, in which case g_{(λ_1,λ_2)} = λ_1! (2λ_2)! / (4^{λ_2} λ_2!) (the calculation of g_λ was communicated to us by Sho Matsumoto). Our first result is

Proposition 1 Let Imm_γ(U) denote the γ-immanant of the n-block of the matrix U and U(N) be the unitary group. Then,

⟨|Imm_γ(U)|²⟩_{U(N)} = n! / [N]^{(1)}_γ.

This generalizes the special cases of the permanent, ⟨|Per_n(U)|²⟩_{U(N)} = n!(N − 1)!/(N + n − 1)!, and of the determinant, ⟨|Det_n(U)|²⟩_{U(N)} = n!(N − n)!/N!, which appeared in [10].
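The permanent case of Proposition 1 is easy to probe numerically; the sketch below (a Monte Carlo sanity check of ours, not part of the analytic proof) samples Haar-distributed unitaries with scipy.stats.unitary_group and compares the sample mean of |Per_n(U)|² with n!(N − 1)!/(N + n − 1)!:

import math
from itertools import permutations

import numpy as np
from scipy.stats import unitary_group   # Haar measure on U(N)

def permanent(a):
    n = a.shape[0]
    return sum(np.prod([a[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def mc_average(N, n, samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(samples):
        u = unitary_group.rvs(N, random_state=rng)
        acc += abs(permanent(u[:n, :n])) ** 2
    return acc / samples

N, n = 6, 3
exact = math.factorial(n) * math.factorial(N - 1) / math.factorial(N + n - 1)
print('MC:', mc_average(N, n), 'exact:', exact)

With a few thousand samples the two values agree to within a few per cent.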
For the unitary group, we also compute the next moment of |Per_n(U)|² in our

Proposition 2 Let Per_n(U) denote the permanent of the n-block of the matrix U and let g_λ be given in (9). Then,

⟨|Per_n(U)|⁴⟩_{U(N)} = ((2^n n!)² / (2n)!) Σ_{λ⊢n} g_λ² d_{2λ} / [N]^{(1)}_{2λ}.

We turn to unitary symmetric matrices in our

Proposition 3 Let Imm_γ(V) denote the γ-immanant of the n-block of the matrix V, which belongs to COE(N), the symmetric space U(N)/O(N). Then the average ⟨|Imm_γ(V)|²⟩_{COE(N)} is given by expression (13). The expression (13) looks very similar to the expression (12). If we remember that every element of COE(N) can be written as V = UU^T with U ∈ U(N), it will not be surprising that a quadratic result in COE(N) resembles a quartic result in U(N).
The determinant corresponds to γ = (1^n), in which case (11) gives ⟨|Det_n(U)|²⟩_{U(N)} = n!(N − n)!/N!. When n = N we get of course 1, as is to be expected. We also consider the orthogonal group, for which we show an analogous average formula. In the case of the determinant, we have the same result as for the unitary group, ⟨(Det_n(U))²⟩_{O(N)} = n!(N − n)!/N!. In the course of our calculations, we have come to consider the validity of a rather surprising identity, a relation between [N]^{(2)}_λ (associated with zonal polynomials) and {N}_γ (associated with irreducible representations of O(N)). This we formulate as our Conjecture 1. We have checked this conjecture for all γ ⊢ n with n ≤ 6. It is not clear to us how to interpret this relation within representation theory (it may be somehow related to the fact that zonal polynomials are orthogonal functions on the symmetric space U(N)/O(N)).
If Conjecture 1 is true, then it implies a result that looks similar to the analogous result for the unitary group, Eq. (11). Finally, we consider average permanent polynomials, Per_n(U − z). The average value of this quantity is given by (−z)^n for every random matrix ensemble considered above. For the unitary and orthogonal groups, the quadratic version can be computed from the previous results according to

⟨Per_n(U − z_1) Per_n(U† − z_2)⟩_G = Σ_{m=0}^{n} C(n, m) (z_1 z_2)^{n−m} ⟨|Per_m(U)|²⟩_G,

where C(n, m) is the binomial coefficient and G denotes the ensemble. For the unitary group, the above expression reduces to Σ_{m=0}^{n} (z_1 z_2)^{n−m} n!(N − 1)! / ((n − m)!(N + m − 1)!), which generalizes the n = N result of [11].
Results for U(N ), COE(N ) and O(N ) are proved in Sections 3, 4 and 5, respectively. Conjecture 1 is discussed in Section 5, along with a symplectic analogue. Permanent polynomials are discussed in Section 6. The main ingredients in our calculations are Weingarten functions and some character theoretic results related to enumerating some factorizations of permutations. These preliminaries are reviewed in Section 2.
Let S_n be the group of all permutations acting on the set {1, ..., n}. To a given permutation π ∈ S_n we associate its cycle type, the partition of n whose parts are the lengths of the cycles of π. The permutation (1 2 ⋯ n) has cycle type (n), while the identity permutation has cycle type (1^n). The conjugacy class C_λ contains all permutations with cycle type λ, and its size is

|C_λ| = n! / ∏_j j^{v_j(λ)} v_j(λ)!,

with v_j(λ) being the number of times the part j appears in λ. Irreducible representations of S_n are also labelled by partitions of n, and we denote by χ_λ(µ) the character of a permutation of cycle type µ in the representation labelled by λ. The quantity d_λ = χ_λ(1^n), the dimension of the representation, is given by the hook length formula, d_λ = n! / ∏_{(i,j)∈λ} h(i,j), where h(i,j) = λ_i − j + λ′_j − i + 1 is the hook length of the box (i,j). Characters satisfy two orthogonality relations,

Σ_{λ⊢n} χ_λ(µ) χ_λ(ν) = δ_{µν} n!/|C_µ|   and   Σ_{µ⊢n} |C_µ| χ_λ(µ) χ_ρ(µ) = n! δ_{λρ}.

The latter is generalized as a sum over permutations as

Σ_{π∈S_n} χ_λ(σπ) χ_ρ(π^{−1}τ) = δ_{λρ} (n!/d_λ) χ_λ(στ).

Figure 1. Two matchings on the set {1, ..., 10}, whose top and bottom rows are acted on by S_5^{(o)} and S_5^{(e)}, respectively. The matching on the left can be written as σ(t) with σ = (2 3)(6 8 9 7), while the matching on the right can be written as σ(t) with σ = (2 4 5 3)(8 10 9). The permutation σ = (2 4 6)(8 10) acts only on even numbers and also produces the matching on the right.
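The class-size formula above is easily verified by brute force for small n; a short, self-contained check (our illustration, not from the paper):

import math
from collections import Counter
from itertools import permutations

def cycle_type(perm):
    # Cycle type of a permutation of {0,...,n-1}, as a sorted tuple.
    seen, cycles = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            cycles.append(length)
    return tuple(sorted(cycles, reverse=True))

def class_size(lam):
    # |C_lambda| = n! / prod_j j^{v_j} v_j!
    n, v = sum(lam), Counter(lam)
    denom = 1
    for j, vj in v.items():
        denom *= j**vj * math.factorial(vj)
    return math.factorial(n) // denom

n = 5
counts = Counter(cycle_type(p) for p in permutations(range(n)))
assert all(counts[lam] == class_size(lam) for lam in counts)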
A matching on the set {1, ..., 2n} is a collection of n disjoint subsets with two elements each ('blocks'). For example, t = {{1, 2}, {3, 4}, ..., {2n − 1, 2n}}, which we call the trivial matching. We will consider the group S_2n acting on matchings as follows: if block {a, b} belongs to matching m, then block {π(a), π(b)} belongs to π(m).
The notion of coset type is important in this context. Given a matching m, let G m be a graph with 2n vertices labelled from 1 to 2n, two vertices being connected by an edge if they belong to the same block in either m or t. Since each vertex belongs to two edges, all connected components of G m are cycles of even length. The coset type of m is the partition of n whose parts are half the number of edges in the connected components of G m . We denote by ct(π) the coset type of π.
Coset type is clearly invariant under multiplication by hyperoctahedral elements; double cosets are thus labelled by partitions of n, so that π and σ belong to the same double coset if, and only if, they have the same coset type. The double coset associated with the partition λ is denoted by K_λ, and its size is

|K_λ| = (2^n n!)² / (2^{ℓ(λ)} ∏_j j^{v_j(λ)} v_j(λ)!),

where ℓ(λ) is the number of parts of λ. In Figure 1 we show the elements of {1, ..., 2n} arranged in two rows. The top row contains the odd numbers and the bottom row the even ones. The dashed lines indicate how they are paired in the trivial matching. The solid lines come from some other matching m, and together both sets of lines make up the graph G_m.
It will be important to consider two copies of S_n inside S_2n. The first one acts on the odd numbers in the top row of our diagram, while leaving the bottom row invariant; call it S_n^{(o)}. The second one acts analogously on the even numbers in the bottom row; call it S_n^{(e)}. Given π ∈ S_n, we denote by π_o and π_e the corresponding elements of these subgroups. It is easy to see that the coset types of π_o and π_e are equal to the cycle type of π: π ∈ C_λ ⇒ ct(π_o) = ct(π_e) = λ.
Another important group is composed of n copies of S_2, each generated by a transposition of the kind (2k − 1, 2k) with 1 ≤ k ≤ n. We denote this commutative group with 2^n elements by S_2^{⊗n}. Notice that every element of the hyperoctahedral group has a unique expression as h = π_e π_o ξ, with ξ ∈ S_2^{⊗n}. The average

ω_λ(π) = (1/|H_n|) Σ_{ζ∈H_n} χ_{2λ}(πζ)

is called a zonal spherical function. It is invariant under multiplication by elements of H_n and hence depends only on the coset type of its argument. The simplest cases are ω_λ(1^n) = 1 and ω_{(n)}(τ) = 1. They also satisfy orthogonality relations,

Σ_{λ⊢n} d_{2λ} ω_λ(α) ω_λ(β) = δ_{αβ} (2n)!/|K_α|   and   Σ_{α⊢n} |K_α| ω_λ(α) ω_µ(α) = δ_{λµ} (2n)!/d_{2λ}.
Factorizations
An equation of the kind π_1 ⋯ π_r = 1, in the permutation group, is called a factorization of the identity. If we specify the cycle type of each factor, π_i ∈ C_{α_i}, then it is a classical result that the number of solutions to the equation is given by

(|C_{α_1}| ⋯ |C_{α_r}| / n!) Σ_{λ⊢n} χ_λ(α_1) ⋯ χ_λ(α_r) / d_λ^{r−2}.

When we take the factors as elements of S_2n and control their coset types instead of their cycle types, π_i ∈ K_{α_i}, there is a similar kind of expression for the number of solutions to equation (32), in which zonal spherical functions take the role previously played by characters [12].
Unitary and orthogonal characters
The celebrated Schur functions s_λ are orthogonal characters of the unitary group U(N). Their value at the identity matrix is the dimension of the corresponding irreducible representation, s_λ(1_N) = (d_λ/n!) [N]^{(1)}_λ.
Zonal polynomials are produced from averages of Schur functions over the orthogonal subgroup. Their value at the identity matrix is known to be Z_λ(1_N) = [N]^{(2)}_λ.
The irreducible characters of the orthogonal group are denoted o_λ. Their value at the identity matrix is known to be o_λ(1_N) = (d_λ/n!) {N}_λ.
Weingarten functions
Given τ ∈ S_n and two sequences, j = (j_1, ..., j_n) and m = (m_1, ..., m_n), define the function

δ_τ(j, m) = ∏_{i=1}^{n} δ_{m_i, j_{τ(i)}},

which is equal to 1 if, and only if, m = τ(j), i.e. the sequences match up to the permutation τ. By using these functions, we have [13, 14]

⟨U_{i_1 j_1} ⋯ U_{i_n j_n} Ū_{p_1 q_1} ⋯ Ū_{p_n q_n}⟩_{U(N)} = Σ_{σ,τ∈S_n} δ_σ(i, p) δ_τ(j, q) W^N_U(στ^{−1}),

where W^N_U is the Weingarten function of the unitary group, and it is given by

W^N_U(σ) = (1/(n!)²) Σ_{λ⊢n} (d_λ² / s_λ(1_N)) χ_λ(σ).

On the other hand, given σ ∈ M_n and i = (i_1, i_2, ..., i_{2n}), define the function Δ_σ(i), which is equal to 1 if, and only if, the elements of the sequence i are pairwise equal according to the matching σ(t).
For the orthogonal group we have [14, 15]

⟨U_{i_1 j_1} ⋯ U_{i_{2n} j_{2n}}⟩_{O(N)} = Σ_{σ,τ∈M_n} Δ_σ(i) Δ_τ(j) W^N_O(σ^{−1}τ),

where W^N_O is the Weingarten function of the orthogonal group, given by

W^N_O(σ) = (2^n n! / (2n)!) Σ_{λ⊢n} (d_{2λ} / Z_λ(1_N)) ω_λ(σ).

Moreover, for the circular orthogonal ensemble, an analogous expansion holds in terms of a COE Weingarten function [16, 17]. For future reference, let us also define an interleaving operation on sequences: i ⊙ j = (i_1, j_1, i_2, j_2, ..., i_n, j_n).
The function g λ
The function g_λ = G_{λ,(n)} is defined as g_λ = Σ_{µ⊢n} |C_µ| ω_λ(µ). When λ has a single part, it follows from ω_{(n)}(α) = 1 that g_{(n)} = Σ_{µ⊢n} |C_µ| = n!.
In general, its value can be found explicitly if we recall the relation between zonal polynomials and power sum symmetric functions [18],

Z_λ = Σ_{µ⊢n} 2^{n−ℓ(µ)} |C_µ| ω_λ(µ) p_µ.

Taking x = (1, 1) we have p_µ(x) = 2^{ℓ(µ)} and therefore Z_λ(1, 1) = 2^n g_λ. On the other hand, from (2) we have Z_λ(1, 1) = ∏_{(i,j)∈λ} (2 + 2(j − 1) − (i − 1)). If λ has more than two parts, the above product vanishes. If λ = (λ_1, λ_2), then we get

g_{(λ_1,λ_2)} = λ_1! (2λ_2)! / (4^{λ_2} λ_2!).

3 Unitary group
[N ] If we restrict attention to the simplest immanant, the permanent, we are able to compute a higher moment. Let Per n (U ) denote the permanent of the n × n upper left block of the N × N unitary matrix U . For simplicity of notation, let Then, In terms of the Weingarten function, We can lift the action of a, b, c, d to S 2n in terms of the subgroups S we get where we just replaced some permutations by their inverses. There always exists some ρ ∈ S n such that a o = b o ρ o . Likewise we can write c o = d o θ o for some θ ∈ S n . This leads to As discussed in Section 2, σ 1 b e b o = h 1 and σ 2 d e d o = h 2 belong to the hyperoctahedral group, H n . Then, where we have used the explicit form of the unitary Weingarten function as defined in Eq.(43). We know that h 1 ∈Hn χ µ (h 1 ρ o h 2 θ o ) is different from zero only if µ = 2λ with λ n, in which case it is proportional to the zonal spherical function, by definition. Therefore, (1) 2λ .
The function ω_λ depends only on the coset type of η = ρ_o h_2 θ_o. Keeping in mind that the coset type of ρ_o equals the cycle type of ρ, as in Eq. (27), and the same holds for θ_o and θ, we thus ask: how many triples (ρ_o, θ_o, h_2) are there with ρ ∈ C_{α_1}, θ ∈ C_{α_2}, and h_2 ∈ H_n = K_{(1^n)}, such that their product has a given coset type, say α_3? This is the same as asking for the number of factorizations of the identity η^{−1} ρ_o h_2 θ_o = 1, and the answer is given by the zonal analogue of the Frobenius formula discussed in Section 2.2. Using that |K_{1^n}| = 2^n n! and ω_β(1^n) = 1, then the orthogonality relation (31), and finally the definition of g_λ, we obtain

P_n(N) = ((2^n n!)² / (2n)!) Σ_{λ⊢n} g_λ² d_{2λ} / [N]^{(1)}_{2λ}.

For N ≫ 1 we can use that [N]^{(1)}_{2λ} ∼ N^{2n} and the orthogonality relation for zonal spherical functions (30) to get P_n(N) ≈ n!(n + 1)!/N^{2n}. The last equation is obtained by setting x = 2 in the permutation group cycle index polynomial,

Σ_{π∈S_n} x^{ℓ(π)} = x(x + 1) ⋯ (x + n − 1).
Circular orthogonal ensemble
For simplicity, in this Section we consider ⟨|Imm_γ(V)|²⟩_{COE(N)}. We begin by expanding the immanant as a sum over permutations and writing the average in terms of Weingarten functions of the circular orthogonal ensemble. Again we may lift the action of a, b to S_2n in terms of the subgroups S_n^{(o)} and S_n^{(e)}. The Weingarten function depends only on the coset type of a_o σ b_e^{−1}. So, remembering that the coset type of a_o equals the cycle type of a, and the same holds for b_e and b, it is necessary to count the number of triples (a_o, b_e, σ) such that a ∈ C_{α_1}, b ∈ C_{α_2}, σ ∈ S_2^{⊗n} and a_o σ b_e^{−1} ∈ K_{α_3}. Using again the orthogonality relation of zonal spherical functions, this leads to an expression involving

G_{λ,γ} = Σ_{µ⊢n} |C_µ| ω_λ(µ) χ_γ(µ),

which is a generalization of Eq. (50) such that g_λ = G_{λ,(n)}.
The simplest examples are .
Following essentially the same calculation done at the end of Section 3.2, one can show that (100)
Orthogonal group
It is obvious that odd moments vanish for any immanant, ⟨(Imm_γ(U))^{2n+1}⟩_{O(N)} = 0, so the simplest non-trivial moment is the second. For simplicity, in this Section we consider ⟨(Imm_γ(U))²⟩_{O(N)}. We start by expanding the immanant as a sum over permutations; using Weingarten functions of the orthogonal group, this will lead to two Δ functions, one (Δ_σ) for the row indices and one (Δ_τ) for the column indices.
The first one requires that σ be the trivial matching. Defining π = ba, we arrive at an expression in which the zonal polynomials Z_λ(1_N) = [N]^{(2)}_λ appear. We put this result forth as a conjecture. If indeed true, it implies that the O(N) average takes a form similar to the unitary result. Although we do not discuss symplectic ensembles in this work, we have a similar conjecture in that case. Let G_{λ,γ} = Σ_{µ⊢n} |C_µ| ψ_λ(µ) χ_γ(µ), where ψ_λ(µ) are twisted spherical functions of the Gelfand pair (S_2n, H_n) (see [17]), denote by N_γ = (n!/d_γ) sp_γ(1_N) some polynomials in N that are proportional to dimensions of irreducible representations of Sp(2N), and let Z_λ(1_N) = 2^n [N]^{(1/2)}_λ be symplectic zonal polynomials; the conjecture then reads analogously.

6 Permanent polynomials

It is easy to see that Per_n(U − z) can be written as

Per_n(U − z) = Σ_{P⊂{1,...,n}} (−z)^{n−|P|} Σ_{π∈S_P} ∏_{i∈P} U_{i,π(i)},

where P is summed over all subsets of {1, ..., n}, the group S_P contains all bijections of P into itself and |P| is the cardinality of P. Using this we have

⟨Per_n(U − z_1) Per_n(U† − z_2)⟩_G = Σ_{P_1,P_2⊂{1,...,n}} (−z_1)^{n−|P_1|} (−z_2)^{n−|P_2|} Σ_{π_1∈S_{P_1}} Σ_{π_2∈S_{P_2}} E_G(π_1, π_2),

where E_G(π_1, π_2) = ⟨∏_{i∈P_1} U_{i,π_1(i)} ∏_{j∈P_2} Ū_{j,π_2(j)}⟩_G. Taking into account that the lists i and j do not contain any repeated indices, we see that for both the unitary and orthogonal groups the above average is different from zero only if P_1 = P_2 = P. In that case we have Σ_{π∈S_P} ∏_{i∈P} U_{i,π(i)} = Per_P(U), the permanent of the block from U containing the indices in P. But the average value does not depend on the particular indices, so ⟨|Per_P(U)|²⟩_G = ⟨|Per_m(U)|²⟩_G if |P| = m. Since there are C(n, m) subsets of size m, this finishes the proof for all cases.
"year": 2020,
"sha1": "0fcf332611e5d12afea58e2381f90a24868ce6ae",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2002.08112",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f69982ad4024a66860ef085cb027c80a362ebc51",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
3953685 | pes2o/s2orc | v3-fos-license | Epigenetic impacts of stress priming of the neuroinflammatory response to sarin surrogate in mice: a model of Gulf War illness
Background Gulf War illness (GWI) is an archetypal, medically unexplained, chronic condition characterised by persistent sickness behaviour and neuroimmune and neuroinflammatory components. An estimated 25–32% of the over 900,000 veterans of the 1991 Gulf War fulfil the requirements of a GWI diagnosis. It has been hypothesised that the high physical and psychological stress of combat may have increased vulnerability to irreversible acetylcholinesterase (AChE) inhibitors leading to a priming of the neuroimmune system. A number of studies have linked high levels of psychophysiological stress and toxicant exposures to epigenetic modifications that regulate gene expression. Recent research in a mouse model of GWI has shown that pre-exposure with the stress hormone corticosterone (CORT) causes an increase in expression of specific chemokines and cytokines in response to diisopropyl fluorophosphate (DFP), a sarin surrogate and irreversible AChE inhibitor. Methods C57BL/6J mice were exposed to CORT for 4 days, and exposed to DFP on day 5, before sacrifice 6 h later. The transcriptome was examined using RNA-seq, and the epigenome was examined using reduced representation bisulfite sequencing and H3K27ac ChIP-seq. Results We show transcriptional, histone modification (H3K27ac) and DNA methylation changes in genes related to the immune and neuronal system, potentially relevant to neuroinflammatory and cognitive symptoms of GWI. Further evidence suggests altered proportions of myelinating oligodendrocytes in the frontal cortex, perhaps connected to white matter deficits seen in GWI sufferers. Conclusions Our findings may reflect the early changes which occurred in GWI veterans, and we observe alterations in several pathways altered in GWI sufferers. These close links to changes seen in veterans with GWI indicates that this model reflects the environmental exposures related to GWI and may provide a model for biomarker development and testing future treatments. Electronic supplementary material The online version of this article (10.1186/s12974-018-1113-9) contains supplementary material, which is available to authorized users.
Background
A coalition of 34 countries deployed approximately 956,600 troops during the 1990-1991 Gulf War [1], with the majority,~700,000, from the USA. An estimated 25-32% of these veterans fulfil the requirements of a Gulf War illness (GWI) diagnosis [2]. GWI is an archetypal, medically unexplained, chronic condition characterised by persistent sickness behaviour, with neuroimmune and neuroinflammatory components.
Symptoms of GWI include fatigue, musculoskeletal pain, cognitive dysfunction, chemical sensitivities, loss of memory and sleep disruption, which can be characterised as 'sickness behaviour' [3,4]. 'Sickness behaviour' is normally a result of inflammatory response to illness or injury, which usually resolves itself over time after the initial insult is removed. Symptoms were reported within 6 months of the conflict [5][6][7].
Although the exact cause of GWI is still unknown, there is a consensus that exposure to environmental toxins is the most likely cause [8]. GWI symptoms are highly heterogeneous, and specific symptoms may be related to specific experiences: for example, Gulf War veterans exposed to nerve agents or oil well fires are at increased risk of brain cancer compared to other Gulf War veterans [9].
A leading hypothesis for the cause of GWI is that the high physical and psychological stress of combat interacted with exposure to acetylcholinesterase (AChE) inhibitors [1,4,[10][11][12][13][14][15][16][17][18][19]. Military personnel were exposed to a number of AChE inhibitors [1,20], including pyridostigmine bromide (PB), a reversible AChE inhibitor, as a prophylactic against nerve agents; sarin, soman, and related nerve agents, irreversible AChE inhibitors, which combatants were inadvertently exposed to after demolition of Iraqi supply depots, such as at Khamisiyah; organophosphate pesticides, irreversible AChE inhibitors, which were widely used to prevent pest-borne diseases and irritation [20]; permethrin, an insecticide which may inhibit AChE [10,21]; and DEET, an insect repellent and a weak AChE inhibitor which may enhance the activity of other AChE inhibitors [22]. For example, an estimated 95,000 deployed personnel were exposed to the plume from the Khamisiyah demolition, and approximately 250,000 may have been exposed to low levels of nerve agents during aerial bombardments earlier in the conflict [23]. Further, the number of nerve agent alarms heard is correlated with risk for GWI [24]. Accumulating research has indicated that deleterious health effects of exposures to psychophysiological stress [25,26] and environmental toxicants [27,28] involve epigenetic modifications that affect transcriptional regulation.
The overall objective of this study was to examine genome-wide epigenetic transcriptional modifications in the brain using an established mouse model of GWI [4,12,15,16]. Our previous research demonstrates that effects on neuroinflammatory pathways occur shortly after initial exposures. For example, pre-exposure with the stress hormone corticosterone (CORT) causes an increase in expression of specific chemokines and cytokines in response to diisopropyl fluorophosphate (DFP) [4], an irreversible acetylcholinesterase inhibitor [29] used here as a sarin surrogate. This corresponds well with the work in GWI study participants [30][31][32][33][34][35][36][37], which have shown immunological abnormalities, and a recent paper [38], which has shown specific immune-related biomarkers for GWI veterans. We hypothesized that epigenetic and transcriptomic changes upon initial exposures would identify gene pathways linked to poor health outcomes in GWI.
Animals
Adult male C57Bl/6J mice were purchased from Jackson Laboratory (Bar Harbor, ME, USA). A total of 79 animals were used for the analyses presented here. All procedures were performed under protocols approved by the Institutional Animal Care and Use Committee of the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health and the US Army Medical Research and Materiel Command Animal Care and Use Review Office. The animal facility was certified by AAALAC International. Upon receipt, the mice were housed individually in a temperature (21 ± 1°C) and humidity-controlled (50 ± 10%) colony room maintained under filtered positive-pressure ventilation on a 12-h light/ 12-h dark cycle beginning at 06:00 h. The plastic tub cages were 46 × 25 × 15 cm; cage bedding consisted of heattreated pine shavings spread at a depth of approximately 4 cm. Teklad 7913 irradiated NIH-31 modified 6% rodent chow, and water were available ad libitum.
Dosing
The dosing paradigm is presented in Fig. 1. Mice were given CORT in the drinking water (200 mg/L in 0.6% EtOH) for 4 days. This CORT regimen is known to be immunosuppressive as evidenced by decreased thymus weight [39]; thymus and spleen weights were confirmed to be decreased (> 20%) in similarly exposed animals [4,15]. On day 5, mice were given a single intraperitoneal injection of either DFP (4 mg/kg) or saline (0.9%).
Thus, there were four exposure groups: (1) saline: vehicle for 4 days, then saline injection on day 5; (2) CORT: CORT for 4 days with a saline injection on day 5; (3) DFP: vehicle for 4 days with DFP injection on day 5; and (4) CORT + DFP: CORT for 4 days with a DFP injection on day 5.
Brain dissection and tissue preparation
Mice were killed by decapitation and the brains rapidly removed. The frontal cortex, consisting of the anterior portion of the cortex [4], and total hippocampus were dissected free-hand on a thermoelectric cold platform (Model TCP-2; Aldrich Chemical Co., Milwaukee, WI, USA) and immediately frozen at − 80°C.
Differential gene expression
Frontal cortex mRNA-seq data was generated on the Illumina HiSeq 2000 by Q Squared Solutions Expression Analysis LLC (Morrisville, NC, USA), paired-end, with a read length of 100 bp (four samples each for saline, CORT and DFP, and five samples for CORT + DFP). Hippocampus mRNA-seq data was generated by SickKids (Toronto, Ontario). Sequencing was carried out on the HiSeq 2500, paired-end, with a read length of 125 bp (n = 4 for all groups).
Trimmed files were aligned with the STAR aligner [43] (version 2.5.2a) in a two-pass mode. The GENCODE GRCm38.p4 assembly (mm10) and annotations were obtained from the GENCODE website [44,45] and used throughout. For the frontal cortex, there was an average of 33,547,161 reads per sample and 98.2% average mapped reads. For the hippocampus, there was an average of 35,429,647 reads per sample and 98.2% average mapped reads.
The resultant BAM files were analysed using the GenomicAlignments [46] and DESeq2 [47] (version 1.10.1) R packages, using the DESeq2 standard pipeline. A recent comparison study has identified this as an appropriate tool to use with replicates and when relatively large biological effects are expected [48]. Briefly, DESeq2 first fits a generalized linear model for each gene, with read counts modelled as a negative binomial distribution. An empirical Bayes approach is used for shrinkage of the dispersion estimation, and the Wald test is used for significance testing, which is then adjusted for multiple comparisons using the Benjamini and Hochberg method [49].
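The Benjamini and Hochberg adjustment used here (and again in the ChIP-seq analysis below) is the standard step-up procedure; a minimal Python sketch of it (the R packages' own implementations were used in practice) is:

import numpy as np

def bh_adjust(pvals):
    # Benjamini-Hochberg adjusted p-values: p_(i) * m / i, made
    # monotone from the largest rank downwards and capped at 1.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.minimum(scaled, 1.0)
    return adjusted

For example, bh_adjust([0.01, 0.04, 0.03, 0.20]) returns the corresponding adjusted values (0.04, 0.053, 0.053, 0.20).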
Samples were checked for similarity using a Poisson dissimilarity matrix [50] with the R package PoiClaClu 1.0.2 and visualised with pheatmap 1.0.8.
Cell proportions
The R Bioconductor package DeconRNASeq [51] was used to estimate the proportion of different cell types within the sample from the RNA-seq data. Data enriched for specific CNS cell types were downloaded from the Gene Expression Omnibus (GEO) [52,53], Series GSE52564, which contains data from the Mus musculus cerebral cortex [54], to use as a reference of cell-typespecific gene expression. This RNA-seq data was trimmed and aligned and gene expression quantified as above. An expression signature for each of six cell types (astrocytes, neurons, oligodendrocyte precursor cells (OPC), myelinating oligodendrocytes (MO), microglia and endothelial cells) was obtained by finding those genes with a five-fold difference in expression in one cell type, compared to each of the others.
Using these expression signatures, the proportions of astrocytes, neurons, oligodendrocyte precursor cells (OPC), myelinating oligodendrocytes (MO), microglia and endothelial cells were estimated for each of the 17 cortex samples and 16 hippocampus samples using DeconRNASeq [51]. DeconRNASeq is based on a linear model in which the reads of a mixed sample are a sum of pure tissue or cell-type-specific reads of all cell types, weighted by the respective cell-type proportions. To estimate the proportions of known tissue types in a sample, DeconRNASeq solves a non-negative least-squares constraint problem with quadratic programming to obtain the globally optimal solution for the estimated fractions. It is accurate down to cell types making up only ~2% of the total cell population [51].
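The deconvolution step can be illustrated outside of R; the sketch below shows the same non-negative least-squares idea (a simplified stand-in for DeconRNASeq, which additionally enforces the sum-to-one constraint inside the quadratic programme), assuming a genes-by-cell-types signature matrix S and a mixed expression vector x:

import numpy as np
from scipy.optimize import nnls

def estimate_proportions(S, x):
    # Find f >= 0 minimising ||S f - x||, then renormalise to sum to 1.
    f, _residual = nnls(S, x)
    return f / f.sum()

# Toy example: 4 marker genes, 2 cell types.
S = np.array([[10.0, 0.5],
              [ 8.0, 1.0],
              [ 0.5, 9.0],
              [ 1.0, 7.0]])
x = S @ np.array([0.7, 0.3])          # mixture of 70% / 30%
print(estimate_proportions(S, x))     # recovers ~[0.7, 0.3]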
DNA methylation modifications
DNA was extracted using the E.Z.N.A Tissue DNA Kit (VWR-Omega Bio-Tek). Bisulfite conversion was carried out using the Qiagen Epitect Fast Bisulfite Conversion Kit, and library preparation was performed using the Ovation NuGen RRBS Kit. Reduced representation bisulfite sequencing (RRBS) was carried out by the Princess Margaret Genomics Centre, part of the University Health Network, Toronto, on a NextSeq 500, using a single end, 70 base read length and multiplexed at 9-10 samples per flowcell. Samples for the cortex were 6 saline, 6 CORT, 6 DFP and 8 CORT + DFP, and for the hippocampus, 4 saline, 5 CORT, 3 DFP and 8 CORT + DFP.
RRBS fastq files were trimmed to remove adaptors and low-quality reads (q < 30) using TrimGalore version 0.4.1 [40] wrapped around Cutadapt [41] (version 1.9.1). Trimmed files were then aligned to the GENCODE GRCm38.p4 (mm10) assembly, using Bismark (v0.16.0) [55] wrapped around bowtie2 (version 2.2.6) [56]. For the frontal cortex, the average reads per sample was 34,962,881 with 57% mapping efficiency, and for the hippocampus, the average reads per sample was 37,512,807, with 63.6% mapping efficiency.

Fig. 1 Overview of exposure timeline. CORT + DFP exposed animals were given CORT in the drinking water for 4 days and injected with DFP on the 5th day, before being culled 6 h later
The resultant bam files were analysed with MethPipe (3.4.2) [57,58], using the suggested methods. The bisulfite conversion rate was > 98.9 for all samples. The methylation level for every cytosine site at single-base resolution is estimated as a probability based on the ratio of methylated to total reads mapped to that loci. Differential methylation was calculated by beta-binomial regression, with all batches and exposures included as factors, and CORT + DFP exposure set as the test factor. We examined both differentially methylated cytosines (DMCs) and differentially methylated regions (DMRs).
Chromatin accessibility
H3K27ac is a mark of active enhancers, strongly suggesting that genes with differential enrichment of H3K27ac will be differentially expressed [59,60]. Native chromatin immunoprecipitation sequencing (ChIP-seq) using an MNase digestion and alignment to the mm10 genome using Burrows-Wheeler Alignment was carried out by the Genome Sciences Centre, BC Cancer Agency. Samples were sequenced single end, 75 base read length on a Hiseq 2500 platform. The average read per sample was 131,727,189 with 98.9% average mapped reads. There were four samples per group, with each sample having immunoprecipitated and input DNA sequenced.
PePr (Python) and diffReps (Perl) packages were used for ChIP-seq analysis, as a recent paper by Steinhauser et al. [61] suggested that both are good tools when biological replicates are available. The results from each were compared and analysed to provide a conservative list of sites showing differential enrichment.
PePr (1.1.14) [62] was run according to the authors' suggested pipeline. PePr used a sliding window approach, shifting all reads toward their 3′ direction by half of the empirically estimated DNA fragment length and estimating the window width based on the average width of the top pre-candidate peaks. The genome was then divided into consecutive windows that overlap by 50%, and the number of reads within each window was counted. This read count was then normalised based on the total read count among ChIP and control samples and the relative average peak heights among ChIP samples. Read counts were modelled across replicates and between groups with a local negative binomial model. Genomic regions with less variable read counts across replicates were ranked more favourably than regions with greater variability, thus prioritizing consistently enriched regions [62]. Narrow peaks were assumed, in line with previous literature on H3K27ac (e.g. [63]). diffReps (1.55.6) [64] was run according to the authors' suggested pipeline. BAM files first had to be converted to BED files, using Bedtools (v2.26.0) [65]. Unlike PePr, diffReps used a set window size of 1000 bp for narrow peaks, with a step size of 100 bp. The genome was pre-screened to remove regions with low read counts to improve power and decrease computational time. Normalisation was carried out using the read count for a particular window over the read count across all samples. An exact negative binomial test was used for differential analysis, which used biological replicates. p values were adjusted by the Benjamini-Hochberg method [49]. Peaks were annotated to genes using region_analysis [64].
Identifying genes unique to the CORT + DFP exposure
For several computational tools, e.g. DESeq2, PePr and dif-fReps, only direct comparisons between any two exposure groups (1v1) could be made, rather than the multifactorial comparisons available for differential DNA methylation modifications with MethPipe (RRBS). Therefore, in these cases, a series of 1v1 comparisons were made to conservatively estimate which genes were differentially expressed (RNA-seq) or enriched for H3K27ac (ChIP-seq). To use the RNA-seq data as an example, the 1v1 comparisons were carried out as:
1. Genes differentially expressed between CORT and CORT + DFP, keeping only the subset of genes which were not differentially expressed between saline and DFP.
(a) Changes between CORT and CORT + DFP may be due to DFP or to the combination of CORT + DFP; removing those differentially expressed between saline and DFP removes those which are due to DFP alone.
2. Genes differentially expressed between DFP and CORT + DFP, keeping only the subset of genes which were not differentially expressed between saline and CORT.
(a) Changes between DFP and CORT + DFP may be due to CORT or to the combination of CORT + DFP; removing those differentially expressed between saline and CORT removes those which are due to CORT alone.
3. The intersection of the genes which appear in both list 1 and list 2.
(a) Both should be genes differentially expressed due to the combined CORT + DFP exposure.
4. Those genes which appear in list 3 and are differentially expressed between saline and CORT + DFP.
(a) A final check, as they should differ between the CORT + DFP exposure and saline.
This provides a list of changes unique to the combined CORT + DFP exposure, which are not seen in either the CORT or the DFP exposure alone (the set operations involved are sketched below). Note that this is conservative, as genes with a low but significant differential expression in either CORT or DFP alone, but a larger change in the combined CORT + DFP exposure, will be lost.
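As a sketch, the four steps reduce to set operations on the per-contrast gene lists; the gene names and sets below are toy stand-ins, while in practice each set is the adjusted-p-filtered output of one DESeq2 1v1 contrast:

# Toy stand-ins for the significant-gene sets of each 1v1 contrast.
saline_vs_cort    = {'GeneA'}
saline_vs_dfp     = {'GeneB'}
cort_vs_cortdfp   = {'GeneB', 'GeneC', 'GeneD'}
dfp_vs_cortdfp    = {'GeneA', 'GeneC', 'GeneD'}
saline_vs_cortdfp = {'GeneC', 'GeneE'}

list1 = cort_vs_cortdfp - saline_vs_dfp        # step 1: drop DFP-alone changes
list2 = dfp_vs_cortdfp - saline_vs_cort        # step 2: drop CORT-alone changes
list3 = list1 & list2                          # step 3: intersection
unique_to_cort_dfp = list3 & saline_vs_cortdfp # step 4: must also differ vs saline
print(unique_to_cort_dfp)                      # {'GeneC'}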
Enrichment analysis
For the sets of significant genes identified by RNA-seq, RRBS and ChIP-seq, as well as those genes that were significant in two or more of these analyses, gene set enrichment analyses were carried out. The R package clusterProfiler 3.4.4 [66] was used for Gene Ontology Biological Process (GO BP) [67,68] and KEGG pathway [69,70] enrichment analysis, with p and q value cutoffs of ≤ 0.05. Reactome pathway analyses were carried out using the ReactomePA 1.20.2 R package [71], with a p value cutoff of ≤ 0.05. All three packages reference the latest versions of their respective databases. These significantly enriched annotations were then visualised with the 'enrich-Map' function of DOSE 3.2.0 R package [72], with parameters altered to aid legibility with different numbers of enriched annotations.
Overlap between gene sets
The overlap between gene sets was visualised using the UpSetR R package [73]. This provides similar information to a Venn diagram, but in a way which makes proportions clear. Significance of overlap was determined by permutation analysis: random gene sets of the same size as our observed gene sets were taken from the same annotations, and the overlap between the two random gene sets was recorded. This was repeated 1,000,000 times, and the number of occurrences of an overlap equal to or larger than our observed overlap was divided by 1,000,000 to give an empirical p value.
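A minimal sketch of this permutation analysis (our illustration; the parameters in the usage comment are arbitrary toy values):

import random

def overlap_pvalue(universe, size_a, size_b, observed,
                   n_perm=1_000_000, seed=1):
    # Empirical p-value: fraction of random same-size set pairs whose
    # overlap is at least as large as the observed overlap.
    rng = random.Random(seed)
    universe = list(universe)
    hits = 0
    for _ in range(n_perm):
        a = set(rng.sample(universe, size_a))
        b = set(rng.sample(universe, size_b))
        if len(a & b) >= observed:
            hits += 1
    return hits / n_perm

# e.g. overlap_pvalue(range(20000), 200, 600, 15, n_perm=10000)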
Differential gene expression
Samples were clustered using a Poisson dissimilarity matrix to determine if samples from the same exposure group showed similar expression profiles. As can be seen in Additional file 1: Figure S1 and Additional file 2: Figure S2, the samples largely clustered by exposure group. The only sample that appeared to be out of place was a CORT + DFP sample in the frontal cortex that looked intermediate between CORT and DFP alone.
In the frontal cortex, the RNA-seq analysis identified 206 GENCODE genes (204 with unique entrez IDs) that were uniquely differentially expressed in the CORT + DFP exposure group compared to all other groups (Additional file 3: Table S1). Enrichment analysis showed 12 enriched KEGG pathways (Additional file 4: Figure S3; Additional file 5: Table S2) and 24 enriched GO BP annotations (Fig. 2; Additional file 6: Table S3). These annotations formed several broad groups related to immune response, including chemokine production, oxidative stress and steroid biosynthesis.
In the hippocampus, 667 GENCODE genes (637 with unique entrez IDs) were uniquely differentially expressed in the CORT + DFP exposure group (Additional file 7: Table S4) compared to all other groups. Enrichment analysis showed 19 enriched KEGG pathways (Additional file 8: Figure S4, Additional file 9: Table S5) and 294 enriched GO BP annotations (Fig. 3, Additional file 10: Table S6). Similar to the frontal cortex, these annotations were grouped into several clusters (Fig. 3), including immune-related annotations (e.g. I-kappaB and NF-kappaB signalling), annotations related to nervous system differentiation, and development.
The two analyses revealed two overlapping KEGG annotations, cytokine-cytokine receptor interaction and rheumatoid arthritis, and nine overlapping GO BP annotations (Additional file 11: Table S7). There are 32 genes differentially expressed under CORT + DFP priming and exposure found in both the cortex and hippocampus RNA-seq data (Additional file 12: Figure S5; Additional file 13: Table S8). Enrichment analysis of these 32 genes showed 31 enriched KEGG pathways, including annotations such as rheumatoid arthritis and cytokine-cytokine receptor interaction (Additional file 14: Table S9), and 333 GO BP annotations, including positive regulation of steroid biosynthetic process, positive regulation of chemokine production and regulation of I-kappaB kinase/NF-kappaB signaling (Additional file 15: Table S10).
DNA methylation modifications
We next examined DNA methylation in the frontal cortex and hippocampus using RRBS to identify DNA methylation modifications associated with the exposures. The frontal cortex RRBS data showed 297 differentially methylated cytosines corresponding to 60 differentially methylated regions. Once these regions were annotated to genes (53 entrez IDs; 60 GENCODE; Additional file 16: Table S11), there was no significant enrichment for any KEGG or GO BP annotations. The hippocampus RRBS data showed 926 differentially methylated cytosines corresponding to 192 differentially methylated regions, annotated to 98 GENCODE genes (95 unique entrez IDs; Additional file 17: Table S12). Enrichment analysis was carried out for KEGG pathways and GO BP annotations, showing three significant GO BP enrichments: norepinephrine metabolic process (n = 3, p = 0.048, q = 0.045), cilium morphogenesis (n = 7, p = 0.048, q = 0.045) and cilium organization (n = 7, p = 0.049, q = 0.046). It is interesting to note, in relation to the acetylcholinesterase action of DFP, that a CpG site within the acetylcholinesterase gene (Ache) was significantly differentially methylated in the hippocampus (chr5: 137291317; adjp = 0.0296).
Given our hypothesis that DNA methylation modifications contribute to long-term changes in gene expression as a function of GWI exposures, this apparent lack of large, coordinated changes in DNA methylation was unexpected but could have at least two explanations. First, DNA methylation is thought to be relatively stable, and therefore, there may not have been an opportunity for substantial methylation changes to have occurred only 6 h after DFP exposure. A second possibility is that any changes were confounded by the number of different cell types within the brain. Consequently, methylation changes from any single cell type, especially cells making a small proportion of the tissue, may be lost in the 'noise'. To investigate this second possibility, we used RNA-seq data to estimate the proportions of cells in our two tissues.
Cell proportions
The estimated average proportion of each of the cell types of interest for each exposure group is shown in Fig. 4.
In the rat cortex, neurons make up ~40% of cells [74], and 44% of the whole mouse brain [75]. In the human and mouse cortex, microglia make up ~5% of cells [76,77]. These reports are in line with our estimates of ~40-50% of cells being neurons and ~4-6% of cells being microglia. This suggests that enriching for specific cell types, such as microglia, may enhance our ability to detect cell-type-specific methylation modifications due to these exposures. For example, currently only ~1 in 25 RRBS reads will come from microglia.
An interesting incidental finding in the cortex was that CORT exposure, with or without co-exposure with DFP, was associated with an increase in the proportion of neurons and a decrease in the proportion of myelinating oligodendrocytes (MOs) in the frontal cortex (Fig. 4b). As we would not expect neurogenesis to occur in the frontal cortex, this suggests that the increase in the proportion of neurons is driven by a decrease in the absolute number of myelinating oligodendrocytes. A reduced number of oligodendrocytes would be in line with previous work in rats where CORT was shown to reduce the proliferation of oligodendrocytes [78,79]. We emphasize that these are estimated cell proportions; however, the results indicate that stereology to confirm this will be important in future studies.

Fig. 2 Frontal cortex RNA-seq significantly enriched gene ontology biological process annotations. Gene ontology biological process annotations significantly enriched in genes which were differentially expressed in the frontal cortex of CORT + DFP exposed mice, with groups of similar annotations highlighted
Chromatin accessibility
H3K27ac is a mark of regions of the genome that are being actively transcribed [59,60]. Our RNA-seq data showed enrichment for genes involved in histone modification, suggesting that changes in chromatin accessibility may play a role in the response to the exposures. H3K27ac ChIP-seq provides an additional layer of epigenetic regulation, which may respond more quickly than methylation. It also allows an indirect examination of current transcription in the largest cell population, as H3K27ac is found at actively transcribed regions. In ChIP-seq (and RRBS), every locus gives a single signal: either it is enriched for H3K27ac or it is not (or it is methylated or it is not). However, in RNA-seq, every locus could produce none, one or hundreds of RNA molecules, meaning that a small cell population with large changes in gene expression could mask the signal from a large population with small changes in gene expression. Therefore, using ChIP-seq allows an indirect examination of potential gene expression changes in neurons.
PePr identified 3294 GENCODE genes (3023 Entrez IDs; Additional file 18: Table S13) with differential enrichment of H3K27ac, whereas diffReps identified 1518 GENCODE genes (1465 Entrez IDs; Additional file 19: Table S14). The overlap between these two analyses was 563 GENCODE genes (557 Entrez IDs; Additional file 20: Table S15), which were used for further analysis. However, gene annotation enrichment for each of the two gene sets (PePr and diffReps) demonstrated a large overlap in enriched annotations, suggesting that both tools detect changes in similar pathways even though the individual pathway members they find differ (85% of diffReps and 69% of PePr KEGG pathways (74) are found in both; 68% of diffReps and 54% of PePr GO BP annotations (521) are found in both).
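The overlap figures quoted above reduce to simple set intersections over each tool's gene calls. The sketch below illustrates the computation with hypothetical stand-in gene lists; the full PePr and diffReps calls are in the additional files.

```python
# Illustrative overlap computation between two peak callers' gene lists.
# The gene sets here are hypothetical stand-ins for the PePr (3294 genes)
# and diffReps (1518 genes) calls reported in the text.
pepr = {"Tlr2", "Il1b", "Chrm2", "Camk2b", "Ache"}
diffreps = {"Tlr2", "Il1b", "Slc15a3", "Tnf", "Ache"}

shared = pepr & diffreps
print(f"{len(shared)} genes called by both tools: {sorted(shared)}")
print(f"{len(shared) / len(diffreps):.0%} of diffReps calls also found by PePr")
```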
The enrichment data indicated a clear bias towards neuronal-linked annotations, including neuronal morphology and synapse-related annotations (Additional file 21: Figure S6 and Fig. 5; Additional file 22: Table S16 and Additional file 23: Table S17). Of particular interest is the observed enrichment of the GO BP annotations 'cognition' and 'learning or memory', both of which are observed to be disrupted in GWI study participants; 'Circadian entrainment', which may relate to observed sleep disruption; and 'response to steroid hormone', which likely relates to CORT (a steroid hormone involved in the response to stress).
These findings demonstrate that there are potential changes in neuronal-related gene expression in the frontal cortex, as was also seen in the hippocampus RNA-seq, highlighted by the fact that 33 genes were found in both the ChIP-seq frontal cortex analysis and the hippocampus RNA-seq analysis.

[Fig. 3: Hippocampus RNA-seq significantly enriched gene ontology biological process annotations. Top 50 gene ontology biological process annotations significantly enriched in genes that were differentially expressed in the hippocampus of CORT + DFP exposed mice, with groups of similar annotations highlighted.]
Overlap between genes found in different analyses
As shown in Fig. 6, there is not a large overlap in the genes found between any of our analyses. However, this disparity may be partly explained by the aforementioned difference between mRNA and DNA, whereby one locus can produce many mRNA molecules, but DNA either has a modification or does not. This is reflected by the fact that the largest percentage overlap is between the genes found with RRBS and ChIP-seq, as both examine DNA modifications: 12% of genes found in the frontal cortex RRBS, and 16% in the hippocampus RRBS, are also found in the frontal cortex ChIP-seq, whereas this is only 1% and 5% for the frontal cortex and hippocampus RNA-seq, respectively. Similarly, 15.5% of genes found in the frontal cortex RNA-seq are also found in the hippocampus RNA-seq.
Because very few annotations were enriched in genes differentially methylated in either of our tissues, we see very little overlap between annotations in the methylation and ChIP-seq data. However, we do see annotations and pathways enriched in both our ChIP-seq and RNA-seq data (Additional file 24: Figure S7 and Additional file 25: Figure S8).
Discussion
To our knowledge, this is the first transcriptome- and epigenome-wide study to examine evidence for transcriptional, chromatin and DNA methylation modifications in a model of GWI. A previous study [12] examined 84 microRNAs (miRNAs) and global, but not gene-specific, changes in DNA methylation and hydroxymethylation in rats subjected to restraint stress and a protocol of PB, DEET and permethrin to simulate troops' chemical exposures. That study found differential expression of two miRNAs and an increase in global methylation in the hippocampus. Although this was an important step forward, our study was able to examine the whole transcriptome, a histone modification associated with chromatin accessibility, and DNA methylation at a genome-wide, single-cytosine level.
Overall, our results represent several interesting findings. First, as expected, there was a large change in the expression of immune-related genes in both the frontal cortex and hippocampus, building upon previous findings in this model [4]. Second, many genes associated with synaptic function show altered activity, as indicated by our frontal cortex H3K27ac ChIP-seq data and hippocampus RNA-seq. These changes in gene expression appear subtler than those found for immune-related genes (lower expression), but differential expression of genes related to synaptic function in mice is associated with impaired memory and cognition, consistent with impairments reported by GWI sufferers. Interestingly, long-term potentiation- and depression-related genes are enriched in the ChIP-seq data. Finally, we see evidence not just of a change in gene activity but of a suggested change in cell proportions. This is in line with previous work with CORT [78,79].
It is possible that microglia are responsible for this large change in immune-related gene expression, but we acknowledge that other cells, such as astrocytes, also express cytokines and chemokines (e.g. Kim et al. [80]). However, this is usually at a much lower level than in microglia: among the genes significantly differentially expressed uniquely with CORT + DFP in both the frontal cortex and hippocampus, the six genes with the largest fold changes are expressed in microglia (Additional file 26: Table S18). Therefore, although other cell types may be contributing to cytokine and chemokine expression, microglia are very likely the cell type driving this change. Similarly, we attribute many of the transmembrane transporter-related annotations we see in the frontal cortex ChIP-seq data and hippocampus RNA-seq data to neurons; however, many of these transporters are also expressed in glial cells [54]. For this reason, future work should be carried out to isolate or enrich specific cell populations, allowing these predictions to be tested.
Our findings of altered cholinergic neurotransmitter expression after DFP exposure are in line with previous studies of low-dose sarin exposure in rats [81,82]. Specifically, changes in M1 and M3 acetylcholine receptor expression were seen in the frontal cortex in a dose-dependent manner when the animals were maintained under hyperthermic conditions (i.e. a stressor; Henderson et al. [81,82]). In relation to this, we see choline-related annotations at all levels we analysed: our H3K27ac ChIP-seq analyses show changes in cholinergic synapse-related genes and differential binding related to Chrm2, the gene coding for M2; the KEGG annotation 'choline metabolism in cancer' is enriched in our hippocampus RNA-seq genes and our H3K27ac ChIP-seq genes; finally, in our hippocampus RRBS, we see altered methylation of a CpG site within Ache, the gene coding for acetylcholinesterase.
These findings link not only to rat models but also to GWI subjects, where differences in prefrontal cortex working memory have previously been found and attributed to the cholinergic system [83]. This again connects with our findings from ChIP-seq, where both 'learning and memory' and 'cholinergic synapse' annotations are significantly enriched. Therefore, pre-exposure to CORT may not only be potentiating the immune response to AChE inhibitors but also causing longer-term changes to the cholinergic signalling system, in line with previous animal models. Human studies have been more varied, with evidence both for [84,85] and against [86] long-term changes in the cholinergic system, perhaps related to the heterogeneity of symptoms or to differences in the tissues being examined. Recent work by Locker et al. [15] has shown that the increased neuroinflammatory response in this model is not directly related to AChE inhibition but instead may result from changes in other aspects of signalling (e.g. epigenetic alterations in gene expression).
In relation to the potential reduction in myelinating oligodendrocytes in the cortex, this may have an effect on some of the phenotypes seen in GWI: reduced oligodendrocytes have been linked to major depressive disorder (MDD), functional consequences in neurons and mood-related symptoms in rats [87]. As this change in cell proportion would affect myelinating cells, it could also contribute to the reported alterations in white matter in GWI veterans [20, 88-90].
Our RNA-seq data suggest both immune dysfunction and oxidative stress. Previous reports have shown immune dysfunction and oxidative stress in other disorders with phenotypes similar to GWI, such as myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) [91]. However, whether the immune response is causing oxidative stress or oxidative stress is causing an immune response is less clear. This is directly relevant, as oxidative stress contributes to the toxicity of AChE inhibitors [29] and has been suggested as a cause of GWI [1,92].
One of the six genes (Additional file 26: Table S18) we find strongly differentially expressed in both the frontal cortex and hippocampus, Tlr2, has previously been linked to sickness behaviour when expressed in hypothalamic microglia [93] and was reported to show increased expression in a model of GWI [94]. Therefore, it is of great interest in terms of the sickness-like behaviour that defines GWI. TLR activation by immune insult appears to increase Tlr2 expression [95]. We can speculate that the change in Tlr2 expression we see is upstream of the changes in Il1b and Tnf, as specific activation of TLR2 increases their expression [93,96]. Further, TLR stimulation increases SLC15A3 expression in dendritic cells, which in turn regulates TNF and IL-1β expression [97]. At the very least, these six consistently and strongly upregulated genes make up a robust core of immune- and microglia-related genes for further investigation (Additional file 26: Table S18).
Our results correspond very well with those of Broderick et al. [98] in blood samples from GWI study participants, who found changes to NF-κB-related genes (which are enriched in our hippocampus RNA-seq data and in the genes significant in both the cortex and hippocampus RNA-seq) and in pathways under the broad theme of neuronal development and migration, which we see in our hippocampus RNA-seq and our cortex ChIP-seq. They also saw ligand-receptor interactions supporting neurotransmission, which again we see strongly in our cortex ChIP-seq data. Further, our KEGG pathway analysis of the transcriptomics data highlighted rheumatoid arthritis-related gene enrichment. Interestingly, a study in veterans with GWI also highlighted the possibility of medication used to treat rheumatoid arthritis being repurposed to treat GWI [99]. The same study identified the TNF-alpha pathway (which we see in our transcriptomics) and the estrogen pathway (which we saw in our ChIP-seq) as potential drug targets [99]. Therefore, data from our model correspond to data identified from veterans with GWI, and the model could be used to test these compounds as potential therapeutics.
A recent paper described eight potential blood biomarkers for GWI, the strongest being a 9.27-fold increase in CaMKII protein in the blood of these veterans [100]. In our study, Camk2b was found to be differentially methylated in the hippocampus and to show differential H3K27ac in the cortex.
Further linking our study with human data was the finding that plasma CRP levels are increased in the blood of GWI veterans [38]. CRP is often used as a biomarker of IL-6-mediated inflammation [38,101], and IL-6-related annotations were enriched in the genes differentially expressed in both the cortex and hippocampus (Additional file 15: Table S10). This annotation was due to four genes, Il1b, Tlr2, Il1a and Tnf, four of our six most strongly and consistently differentially expressed genes.
We see minimal overlap in genes with significant differential methylation between the hippocampus and frontal cortex (two genes, Bcar3 and Tmem242). This is likely due, at least partly, to the two factors outlined above: cellular heterogeneity and the short time point after treatment. Of the two consistent genes, Bcar3 is involved in cell proliferation and, when overexpressed in breast cancer, confers estrogen resistance, while Tmem242 is a transmembrane protein with very little current annotation. We cannot detect any coordinated methylation changes (denoted by enriched annotations in our significant genes) in the frontal cortex and very few in the hippocampus. As mentioned above, we detect differential methylation of one CpG site within Ache, the gene coding for acetylcholinesterase. A number of the genes found to be differentially methylated can be linked to GWI. Examples include Lims1, which is known to be regulated by TNF and is involved in cell growth and survival [102]; Sesn1, Aplnr, Pxn and Actn1, which have been linked to ME/CFS [103,104]; Col5a3, which was identified in the hippocampus in a rat model of GWI [12]; and Slc1a2, which has been linked to ALS, a disorder GW veterans are at increased risk of [105,106]. This may indicate that networks of genes linked to GWI are only just beginning to be methylated, or that different groups of genes are methylated in different cell types. However, until these coordinated networks are elucidated, investigation of individual genes could lead to spurious associations.
Conclusions
We have shown a range of changes in the transcriptome of this well-established mouse model, many of which reflect gene expression changes seen in veterans with GWI. Further, we see alterations in H3K27ac, indicating potential chromatin configuration changes, which could lead to epigenetic effects with long-lasting implications. We also find differences in DNA methylation, although these are less easily interpretable than the transcriptome and H3K27ac changes.
Additional research is needed to assess whether effects of these epigenetic and transcriptional modifications on long-term health outcomes are cumulative and/or are potentiated by later exposures (e.g. infection). Notably, however, our findings reveal gene pathways known to be involved in long-term adverse health effects in GWI veterans. These results suggest that epigenetic and transcriptional regulation during the initial exposure period likely contribute to pathological outcomes in GWI. It would be important to examine these modifications in peripheral tissues from GWI veterans to ascertain whether biomarkers could be developed to predict future health outcomes.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. The datasets generated and/or analysed during the current study are not publicly available as they are being used for follow-up studies. They will be made publicly available after future manuscripts are published.
Authors' contributions
POM, JPO, DBM and GB contributed to the conception and design of this study. JPO, DBM, LTM, KAK and JVM were involved in treatment of the mice and collection of tissue. BH, WCV, GB and DGA were involved in analysis of sequencing data and statistical analysis. DGA and POM drafted the manuscript, with all other authors critically revising the manuscript for publication. All authors read and approved the final manuscript.
Ethics approval and consent to participate
All procedures were performed under protocols approved by the Institutional Animal Care and Use Committee of the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health and the US Army Medical Research and Materiel Command Animal Care and Use Review Office. The animal facility was certified by AAALAC International.
Competing interests
The authors declare that they have no competing interests.
Nature-Based Early Childhood Education and Children’s Social, Emotional and Cognitive Development: A Mixed-Methods Systematic Review
This systematic review synthesised evidence on associations between nature-based early childhood education (ECE) and children’s social, emotional, and cognitive development. A search of nine databases was concluded in August 2020. Studies were eligible if: (a) children (2–7 years) attended ECE, (b) ECE integrated nature, and (c) assessed child-level outcomes. Two reviewers independently screened full-text articles and assessed study quality. Synthesis included effect direction, thematic analysis, and results-based convergent synthesis. One thousand three hundred and seventy full-text articles were screened, and 36 (26 quantitative; 9 qualitative; 1 mixed-methods) studies were eligible. Quantitative outcomes were cognitive (n = 11), social and emotional (n = 13), nature connectedness (n = 9), and play (n = 10). Studies included controlled (n = 6)/uncontrolled (n = 6) before-after, and cross-sectional (n = 15) designs. Based on very low certainty of the evidence, there were positive associations between nature-based ECE and self-regulation, social skills, social and emotional development, nature relatedness, awareness of nature, and play interaction. Inconsistent associations were found for attention, attachment, initiative, environmentally responsible behaviour, and play disruption/disconnection. Qualitative studies (n = 10) noted that nature-based ECE afforded opportunities for play, socialising, and creativity. Nature-based ECE may improve some childhood development outcomes, however, high-quality experimental designs describing the dose and quality of nature are needed to explore the hypothesised pathways connecting nature-based ECE to childhood development (Systematic Review Registration: CRD42019152582).
Introduction
The foundations of good cognitive, social and emotional health are established from an early age [1]. However, according to a global report by the World Health Organization (WHO), evidence suggests that cognitive, social, and emotional outcomes are often low in

This systematic review used a novel mixed-methods approach. The qualitative studies enabled a better understanding of the phenomenon of nature-based ECE, and the quantitative studies were used to understand the impact on children's development. This mixed-methods approach combines the strengths and limitations of different research enquiries [19].
Materials and Methods
This systematic review is part of a larger research project synthesising evidence on the association between nature-based ECE and children's overall health and development [19]. Findings for other outcomes (i.e., physical health outcomes) have been published in the Journal of Physical Activity and Health. This systematic review was registered to the International Prospective Register of Systematic Reviews (CRD42019152582) in October 2019, and the protocol was published to BMC Systematic Reviews in September 2020 [10]. It follows the reporting guidance provided in the Adapted PRISMA for reporting systematic reviews of qualitative and quantitative evidence [20].
Eligibility Criteria
The selection criteria followed the PI(E)COS (Population, Intervention or Exposure, Comparison, Outcomes and Study design) framework [21].
Population: Children aged 2-7 years attending ECE and who have not started primary or elementary school education were included. The typical age of children attending ECE, accounting for global differences, is 2-7 years. To assess eligibility, the mean age, range, or median reported in the study was used. Studies that only included children with a disease or condition, such as autism, a physical disability, attention deficit hyperactivity disorder, etc., were excluded.
Exposure/Intervention: To be eligible, ECE settings, such as nature-preschools, forest kindergartens, forest schools, etc. [9], had to integrate nature into their environment. This may have included a spectrum of nature exposure, including: (i) children spending most of the ECE day outdoors in highly naturalised areas; (ii) interventions that enhance the amount and diversity of nature in the playgrounds; (iii) associations of specific natural elements (e.g., hills, trees, grass, vegetation, etc.) in the ECE setting; and (iv) the introduction of a garden-based exposure. Studies were excluded if the exposure was traditional ECE, where children typically spend more time indoors and the environment is predominantly manmade structures such as slides, swings, or climbing frames.
Comparison: Traditional ECE was identified where children typically spent more time indoors and spent outdoor time in environments that were not predominantly nature-based. These outdoor areas tended to integrate a small amount of nature with little variety and incorporated manmade structures, such as swings, slides, and climbing frames.
Outcomes: Any child-level outcome related to children's social, emotional and cognitive health, wellbeing, and development was included. Studies were excluded if they assessed non-child-level outcomes, such as the impact on practitioners or changes (i.e., outcomes) to the ECE settings, or if they used unvalidated questionnaires (for both quantitative and qualitative designs).
Study designs: Quantitative and qualitative primary research designs were included. For qualitative studies to be eligible, they had to explore parent, practitioner, and/or child perceptions of children's social, emotional and cognitive health, wellbeing, and development when the child attended nature-based ECE. For quantitative studies to be eligible, outcomes had to be measured when children attended nature-based ECE. For example: cross-sectional and case-control studies that measured outcomes while children were attending nature-based ECE; longitudinal, quasi-experimental, and experimental studies with at least two time points (before and after); and retrospective studies if outcomes were assessed when the child was attending nature-based ECE. Qualitative studies were excluded if they did not have a comparator (i.e., exposure, control group, pre/post), did not address questions suitable for qualitative enquiry, and/or did not make a useful contribution to this review (see Quality appraisal of included studies).
The Dissertation and Theses Database (ProQuest) and Open Grey (www.opengrey.eu, accessed on 8 December 2019) were searched for grey literature, and the Directory of Open Access Journals (www.doaj.org, accessed on 8 December 2019) was searched to capture dissertations and reports. The first 10 pages of Google Scholar were searched and checked, and websites of related organisations and professional bodies involved in nature-based ECE were searched for relevant publications. Finally, in August 2020, citation lists of eligible studies published from 2019 onwards were screened to capture published evidence that may have been missed in the initial searches.
Two authors (AJ and AM) and an information scientist (VW) constructed the search strategies. A comprehensive search strategy was developed by reviewing keywords and related terms in relevant systematic reviews and publications. Co-authors who have expertise in fields related to nature, child health, wellbeing and development, education, and systematic review methodology reviewed and refined the draft searches. The strategy was tested and refined until a finalised search strategy was developed. Search strategies were adapted for each database and other web searches. The literature search was not restricted by year of publication or language. A draft search strategy for MEDLINE can be accessed in the published protocol [10] or Supplementary File S1. References were imported to Endnote and duplicates were removed by one reviewer (AJ).
Selection Procedure
Each title and abstract was screened by one reviewer (AJ, PM, RC, IF, SI, FL, BJ, VW), with a random 10% independently screened in duplicate (AM) to reduce bias. Using Covidence (www.covidence.org/, accessed on 14 January 2020) software, full-text articles were screened by two researchers independently. A third reviewer (AM) resolved any disagreements. Where multiple publications were reported for the same study, they were combined and reported as a single study.
Data Extraction
Data from eligible studies were extracted by one reviewer (AJ) and cross-checked by another reviewer (AM, PM or HT).
For quantitative studies, the following information was extracted:
• Outcomes and results (summary of key themes derived from data extractor and author).
Primary study authors were not contacted to obtain missing information due to constraints on time and the large volume of studies.
Quality Appraisal of Included Studies
The quality of all included studies was assessed at study level by two reviewers independently (AJ, AM, PM, HT), and disagreements were resolved through discussion. The Effective Public Health Practice Project (EPHPP) Quality Assessment Tool [22] was used to assess the quality of quantitative studies. The EPHPP tool is a commonly used quality appraisal tool in public health that assesses quality across a variety of quantitative study designs [22]. Minor modifications were made to the tool to ensure its relevancy for the present review, for example, defining the target population, specifying confounders of interest, and enhancing the overall rating of the paper (see Supplementary File S2).
The Dixon-Woods (2004) checklist [23] was used to assess the trustworthiness of eligible qualitative studies, which provides a set of prompts that were designed to appraise aspects of qualitative methodology. Studies were excluded from the review if the research questions were assessed to be unsuited to qualitative inquiry (question 2) or if the paper was assessed as not making a useful contribution to the review question (question 7) (see Supplementary File S2).
Data Synthesis
Data synthesis was done in three stages. Firstly, for quantitative studies, a meta-analysis and sensitivity analysis (where studies with a high risk of bias were removed) was originally planned; however, an overall effect size estimate could not be calculated because only a small number of studies could be combined, studies were heterogeneous (as interpreted by the I² statistic), and/or studies did not provide the appropriate data to support standardising. A sketch of the I² calculation is given below.
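This is a minimal sketch, not the authors' analysis code; the study-level effect estimates and variances below are hypothetical.

```python
# Minimal sketch of Cochran's Q and the I^2 heterogeneity statistic.
# Inputs (study effect estimates and their variances) are hypothetical.
import numpy as np

def i_squared(effects, variances):
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (y - pooled) ** 2)             # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

q, i2 = i_squared([0.30, -0.10, 0.55], [0.04, 0.06, 0.05])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")  # a high I^2 argues against pooling
```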
As a meta-analysis could not be conducted, a Synthesis Without Meta-analysis (SWiM) based on effect direction was performed [24]. The effect direction plot was used to summarise findings at both a study level and an outcome level. Study level effect direction plots are presented in instances where an outcome is reported in two or more studies with the same exposure category (explained below). Study level effect directions are then synthesised to provide a summary effect direction at an outcome level where study quality (as rated using the EPHPP tool), design, and sample size of studies are considered. For example, if three studies were synthesised for the summary effect direction plot and one study was rated as moderate quality (according to the EPHPP tool), was of a controlled before and after design, and/or had a larger sample size, then this study would hold a greater weighting compared to studies with lower quality, poorer study designs, and small sample sizes. The synthesis by effect direction addresses the question of whether there is evidence of a positive or negative association. In addition, a narrative summarising the effect direction at a study level was conducted where an outcome could not be grouped in the effect direction plot, i.e., because the outcome was only measured in one study within the same exposure category.
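For context, the simplest formal counterpart to this vote-counting step is an exact sign test on the study-level effect directions, as suggested in the SWiM guidance. The sketch below uses hypothetical counts and does not capture the quality- and design-based weighting described above, which remains a judgement step.

```python
# Minimal sketch of vote counting by effect direction with an exact sign
# test (hypothetical counts; the review's weighting by study quality,
# design, and sample size is a judgement step not modelled here).
from scipy.stats import binomtest

positive, total = 5, 6  # e.g. 5 of 6 studies favour nature-based ECE
result = binomtest(positive, total, p=0.5, alternative="greater")
print(f"{positive}/{total} positive; one-sided p = {result.pvalue:.3f}")
```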
Outcomes were grouped into three domains after study selection: (i) social and emotional development; (ii) cognitive development; and (iii) nature connectedness. Social and emotional development comprises two sub-domains: social and emotional, and play. Exposure categories were also generated after study selection based on exposure descriptions by study authors in the eligible studies. These were: (i) nature-based ECE, (ii) ECE natural playgrounds, (iii) natural elements within ECE, and (iv) garden-based interventions. Table 1 provides an overview of these exposure categories.
Nature-based ECE
This category represents studies with a higher exposure to nature. These ECE settings would integrate rich and diverse natural elements in their environment, and children would spend most of their ECE time outdoors. A typical nature-based ECE environment may include wooded areas, forests, trees, hills, etc. ECE practitioners would be present and may lead formal and informal educational activities that involve/incorporate nature.
ECE natural playgrounds
This category includes studies that utilised interventions to enhance the nature in the playground or that compared natural playgrounds to traditional playgrounds. Children would typically spend less time outdoors in nature in these studies.
Natural elements within ECE
This category represents a lower exposure to nature and includes studies (mostly cross-sectional in design) that looked at the association of specific natural elements, such as trees, vegetation, hills, grass, etc., or specific features or qualities of the playground, with specific health outcomes.
Garden-based interventions
This category represents studies that included an intervention with a garden component that was delivered within an ECE setting.
Sub-group analyses to investigate differential associations (by age, gender, duration spent in ECE, etc.) were initially planned; however, the eligible studies did not provide sufficient detail to enable us to conduct them.
Secondly, for qualitative studies, we conducted a thematic analysis of author-reported conclusions and participant quotes, grouping data into higher- and lower-order themes. One reviewer (AJ) analysed the data inductively, generating themes that were discussed with another two reviewers (AM, PM), who checked the themes and clustering against quotes (both authors' conclusions and participant quotes were reported).
Finally, using a conceptual matrix [19], we integrated the syntheses of the qualitative and quantitative studies. Findings from the synthesis of quantitative studies were mapped against the themes from qualitative studies, identifying confirmatory and contradictory findings. Findings from the qualitative synthesis were also used to hypothesise potential mechanisms for why or how the quantitative results might have occurred.
Certainty of Quantitative Evidence
To assess the certainty of the evidence across studies at an outcome level, the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework was used [25]. The risk of bias, precision, consistency, and directness were assessed where two or more studies reporting on the same outcome were grouped. Based on these assessments, the certainty of evidence was upgraded or downgraded to provide an overall rating of very low, low, moderate, or high [25] for each outcome. Given the absence of randomised controlled trials (RCTs), the start rating was always low; however, as per GRADE guidance, ratings could be upgraded. Publication bias could not be assessed as there were not enough studies grouped per outcome.

Results

Figure 1 presents the summary of results from the systematic literature search. After duplicates were removed, 31,098 articles remained, of which 29,729 irrelevant titles and abstracts were excluded, leaving 1370 full-text articles to be screened. Of these, 1224 full-text articles were excluded for reasons detailed in Figure 1. Seventy qualitative studies were removed because they did not have a comparator (i.e., exposure, control group, pre/post), and a further 11 studies were removed after their trustworthiness was assessed. A total of 59 unique studies (representing 65 individual papers) met the inclusion criteria (including the physical outcome domain, which is presented in another paper), of which 36 studies (representing 40 individual papers) included a social and emotional, cognitive, and/or nature connectedness outcome (26 quantitative; 9 qualitative; 1 mixed-methods).
Characteristics of the Eligible Studies
Characteristics of included studies are described in Supplementary File S4.
Study Designs
Of the quantitative studies (n = 27, including 1 mixed-methods study), most study designs were cross-sectional (n = 8) and controlled cross-sectional (n = 7), with fewer uncontrolled before and after (n = 6) and controlled before and after (n = 6) designs. Ten studies were included in the qualitative thematic analysis (including 1 mixed-methods study). Of the qualitative studies, most used observational methods only (n = 4) [40,44,45,51] or observation and interviews (structured or semi-structured) (n = 2) [39,52]. One study used interviews and teacher case studies [58], n = 1 used a combination of focus groups, interviews, observation, and artefact collection [49], n = 1 used a combination of interviews and surveys [61], and n = 1 used a combination of photo preference, drawings from children, and interviews [37].
Exposures
The preliminary synthesis of eligible studies indicated that the included quantitative and qualitative studies could be grouped into four different exposure categories (as described in Table 1): nature-based ECE (n = 20), ECE natural playgrounds (n = 10), natural elements within ECE (n = 4), and garden-based interventions (n = 2). When a comparison group was included in a study, the comparison tended to be traditional ECE or a traditional playground. In these conditions, the comparison group was typically characterised by children spending more time indoors, with limited time outdoors in a playground predominantly comprising manmade structures such as swings and slides. There were instances where the comparison group also received some nature-based exposure, but this exposure was still less than in the experimental group.
More detailed information on the various exposures of the included studies can be found in Supplementary File S4.
Sample Size and Participant Characteristics
The total sample size across the combined quantitative and qualitative studies was 3383. Sample sizes were small across the 36 eligible studies; only three studies had a sample size greater than 200 [35,57,64], of which two were uncontrolled before and after studies [35,64] and one was controlled cross-sectional [57]. The sample size in the qualitative studies ranged from 12 [44,61] to 75 participants [45]. The age range of participants was always 2-7 years, one study included female participants only [26], and socioeconomic status (SES) was generally moderate to high, although SES was infrequently reported.
Quality of Included Quantitative Studies
The quality of each quantitative study, as assessed by the EPHPP tool, can be found in Supplementary File S3. Of the eligible studies, only two were of moderate quality [46,48], with the remainder rated as weak. The two studies rated as moderate quality represented the nature-based ECE (n = 1) [46] and ECE natural playgrounds (n = 1) [48] exposures. Figures 2-4 present the quality rating across the quantitative studies by assessment item per outcome category. Typically, items were rated weak because of selection bias, study design, issues around confounding, and transparency over whether the researchers or outcome assessors were aware of the research questions (blinding). A weak rating for study attrition (withdrawals and dropouts) was given in 67% of before and after studies. Data collection methods tended to be valid and reliable. Figure 5 presents the findings on the trustworthiness of the included qualitative studies. On one occasion each, research questions (1) [58], sampling (3a) [49], data collection (3b) [44], and analysis (3c) [44] were not clearly described, analysis (4c) was not appropriate to the research question [44], and claims were not supported by sufficient evidence (5) [40]. It was also unclear on one occasion whether sampling (4a) [49] and data collection (4b) [44] were appropriate to the research question.
Findings per eligible study can be viewed in Supplementary File S5.
1. Social and emotional outcomes (n = 13 studies)
(1) Nature-based ECE (n = 7 studies, n = 388 children)
Table 2 presents the results of the effect direction plot for social skills, social and emotional development, attachment (child's ability to develop and maintain secure and positive connections with others), initiative (child's ability to use independent thought and action), and behavioural problems, where these outcomes were reported in more than one study. For social skills (including prosocial behaviour and social responsibility), two studies demonstrated a positive association between social skills and nature-based ECE; one was significant (p = 0.03, η² = 0.11) [47] and the other was non-significant (mean diff: 0.32; 95% CI: −0.95 to 1.59) [33]. The other study reported a positive, but non-significant, association between social skills and traditional ECE (η² = 0.08, p > 0.05) [26]. Similarly, social and emotional development was found to be positively, but not significantly, associated with nature-based ECE in two studies [33,59]. However, another study favoured traditional ECE (p = 0.013; i.e., social and emotional development scores were better in children who attended traditional ECE) [56]. Findings were inconsistent in the two studies that assessed attachment [29,30,56]. One study reported that attachment scores were better in the traditional group compared to children who attended nature-based ECE at post-test (p = 0.058) [56], and the other study found improvements in attachment scores from baseline to follow-up in children who attended nature-based ECE [29,30]. Similarly, findings were inconsistent for initiative. One study reported that initiative scores were better in the traditional group compared to children who attended nature-based ECE at post-test (p = 0.187) [56], and the other study found significant improvements (p = 0.01) in initiative scores from baseline to follow-up in children who attended nature-based ECE [29,30]. Finally, of the three studies examining behavioural problems, two reported fewer behaviour problems in children who attended traditional ECE compared to nature-based ECE at post-test; in one study these differences were significant (η² = 0.17, p < 0.05) [26] and in the other non-significant (p = 0.11, η² = 0.03) [47]. The remaining study reported fewer behavioural problems in children who attended nature-based ECE compared to traditional ECE (mean diff: −0.23; 95% CI: −1.49 to 1.03) [33].
In addition (not reported in the effect direction plot), one study found that total protective factors (three sub-scales: initiative, self-regulation, and attachment, measured using the Devereux Early Childhood Assessment for Pre-schoolers, Second Edition), as reported by the child's parent and teacher, significantly improved from baseline to follow-up in children who attended nature-based ECE [29,30]. Another study found no association between the frequency of nature-based experiences and belief in the importance of nature-based ECE for social development [28].
(2) ECE natural playgrounds (n = 3 studies, n = 868 children)
All three eligible studies assessed outcomes related to social skills and interactions; two examined the effect of incorporating nature in the playground [35,48] and the other compared free play in ECE playgrounds with green space to indoor free play [60]. One study reported a significant improvement (p = 0.03) in social behaviour from baseline to follow-up, and another reported a positive association between social interactions and free play in ECE playgrounds with greenspace [48,60]. However, another study reported significantly (p < 0.05) more negative interactions between teacher and child [35]. Findings also suggested that children's emotional and behavioural outcomes, measured using the Strengths and Difficulties Questionnaire (five sub-scales: emotional symptoms, conduct problems, hyperactivity, peer relationships, and prosocial behaviour), significantly improved (p = 0.036) from baseline to follow-up [48], and stress was lower in playgrounds with green space compared to indoor free play [60].
(3) Natural elements within ECE (n = 2 studies, n = 252 children)
One study found that nature present in the ECE playground was a statistically significant predictor (regression coefficient = 0.004, p < 0.05) of emotional wellbeing (measured using the Leuven Well-being Scale) [50]. The other study reported that children's stress levels were lower in higher-quality environments (i.e., large spaces, vegetation, trees, etc.) compared to low-quality environments [55].
(4) Garden-based interventions (n = 1 study, n = 336 children)
The one study utilising a garden-based intervention found a significant (all p < 0.001) positive effect on emotional intelligence and prosocial behaviour from baseline to follow-up [64].
2. Play (n = 10 studies)
(1) Nature-based ECE (n = 4 studies, n = 257 children)
Table 3 presents the results of the effect direction plot for play interaction, play disruption, and play disconnection in eligible studies where these outcomes were reported in more than one study. For play interaction, two studies demonstrated a positive association with nature-based ECE (i.e., play interaction was better in children who attended nature-based ECE). One study reported a mean difference of 0.86 (95% CI: −2.04 to 6.35) [41] and the other reported a significant difference (p < 0.001, η² = 0.12) at post-test between the nature-based and traditional ECE [27,30]. However, one study found that play interaction was better in children who attended traditional ECE, although these group differences were non-significant (η² = 0.11, p > 0.05) [26]. Findings for play disconnection and disruption were mixed, with one study showing a positive association (i.e., play disruption/disconnection was lower in nature-based ECE) [27,30] and the other study showing a negative association [26]. The study that demonstrated a positive association found that children at post-test in the nature-based ECE had significantly lower play disruption (p < 0.001, η² = 0.19) and disconnection (p < 0.001, η² = 0.12) scores compared to traditional ECE [27,30]. In contrast, the other study found that children at post-test in the traditional ECE had significantly lower play disruption (p < 0.001) and disconnection (p < 0.01) scores compared to nature-based ECE [26].
Additionally (not presented in the effect direction plot), overall play development (measured using the Kuno Beller Developmental Tables) and pretend play (consisting of imaginative play, use of make-believe, play enjoyment, amount of emotion expressed in play, and use of make-believe in dramatic play) were higher in children who attended nature-based ECE compared to traditional ECE [26,59].
(2) ECE natural playgrounds (n = 5 studies, n = 347 children)
One intervention study observed children's play behaviour at baseline and then again at follow-up after the playground was modified to include more natural elements, such as vegetation, boulders, rock, and loose parts [48]. Findings suggested improvements in playing with natural elements, risky play, solitary play, child-teacher interactions, and prosocial behaviour, with fewer antisocial behaviours and less lack of engagement observed in play from baseline to follow-up [48]. Of these findings, prosocial behaviours (OR: 2.81; 95% CI: 1.17-6.91, p < 0.05) and playing with natural elements (OR: 7.29; 95% CI: 1.53-38.09, p < 0.05) significantly improved from baseline to follow-up [48]. The other eligible studies compared play in natural versus traditional playgrounds, and findings across some studies indicated children engaged in more creative and imaginative play in natural ECE playgrounds [36,42,43,62]. One study reported that dramatic play was significantly higher in children who played in natural ECE playgrounds compared to manufactured ones [36]. Another study demonstrated that children in a natural ECE playground engaged in sociodramatic play for a longer duration compared to children from traditional ECE playgrounds [43]. They were also more likely to engage in object substitutions, explicit metacommunication (nonverbal cues such as tone of voice, body language, etc.), and imaginative transformations [43]. Functional and constructive play was also higher among children who played in natural ECE playgrounds compared to children who played in traditional playgrounds, but creative and imaginative play was lower [42]. However, another study noted functional and imaginative play was higher among children who played in traditional ECE playgrounds compared to children who played in natural ECE playgrounds [62].
(3) Natural elements within ECE (n = 1 study, n = 36 children)
One study measured cognitive play (consisting of functional, constructive, exploratory, dramatic, and games with rules) across three different playground types: natural, mixed, and manufactured [37]. The authors found that the natural area afforded greater dramatic, exploratory, and constructive play compared to the mixed and traditional zones [37].

[Table 2: Effect direction plot for social and emotional outcomes in nature-based versus traditional ECE (fragment); the summary effect direction favours the control. Abbreviations: E = experimental; C = comparison; ECE = early childhood education. GRADE certainty of evidence: very low. The summary effect direction considers study quality, design (i.e., controlled before and after weighted more than cross-sectional), and sample size.]

[Table 3: Effect direction plot for play outcomes (fragment); abbreviations and conventions as in Table 2. GRADE certainty of evidence: very low.]
Findings per eligible study can be viewed in Supplementary File S5.
3. Cognitive (n = 11 studies)
(1) Nature-based ECE (n = 7 studies, n = 438 children)
Table 4 presents the results of the effect direction plot for attention and self-regulation in eligible studies where these outcomes were reported in more than one study. For attention, two studies demonstrated a positive, but non-significant, association for children who attended nature-based ECE [27,30,33], and one study demonstrated a non-significant negative association (i.e., children in the traditional setting had better attention scores) [47]. Across three studies, there was a positive association between self-regulation and nature-based ECE. Two studies demonstrated a significant association [29,30,47] and one demonstrated a non-significant association [56].
There were a number of additional findings not reported in the effect direction plot because the outcomes could not be grouped together. At post-test, one study reported a small, non-significant effect on working memory (p = 0.19, η² = 0.02) and a non-significant effect on inhibition (p = 0.76, η² = 0.00) between the intervention and control group [47]. There were no significant differences between the nature-based ECE and control groups for overall executive function score (p = 0.60, η² < 0.01) [30,32]. In another study, cognitive development was lower and teacher perception of language development was higher in children who attended nature-based ECE; however, the differences between the nature-based ECE and control groups were non-significant [59]. One study reported that there were no significant differences in communication scores at post-test for children who attended nature-based ECE compared to the control group (p = 0.694) [56]. At post-test, total learning behaviours (consisting of attention, competence motivation and attitudes) were higher in children who attended nature-based ECE compared to traditional ECE, but this was non-significant (p = 0.12, η² = 0.02) [27,30]. Kindergarten readiness (counting, rhyming, recognition) was lower in children who attended nature-based ECE compared to the control, but these differences were non-significant (p > 0.05, η² = 0.16) [26]. There were non-significant differences in curiosity scores in children who attended nature-based ECE compared to the control group [30]. Finally, there were significant improvements (all p < 0.001) in creativity (consisting of fluency, originality, and imagination) from baseline to follow-up in children who attended nature-based ECE [31].
(2) ECE natural playgrounds (n = 1 study, n = 16 children)
The one eligible study in this exposure category assessed visual spatial scores (an indicator of children's directed attention) to determine whether there were differences between children who had engaged in free play in an ECE playground with green space and children who were indoors [60]. It was found that children who had been exposed to ECE playgrounds with green space had higher visual spatial accuracy scores compared to the control [60].
(3) Natural elements within ECE (n = 1 study, n = 198 children)
One eligible study in this exposure category assessed attention in children who attended ECE settings with high-quality environments (i.e., large spaces, vegetation, trees, etc.) versus those with a low-quality environment [53]. The authors found that both domains of attention, hyperactivity (p = 0.069) and inattention (p < 0.05), were better in settings with high-quality environments, significantly so for inattention [53].
(4) Garden-based interventions (n = 2 studies, n = 391 children)
One study reported significant improvements (p < 0.01) in all scientific sub-categories from baseline to follow-up [47]. The other study reported that delay of gratification (self-regulation) and visual motor integration did not significantly improve from baseline to follow-up [38].

[Table 4: Effect direction plot for cognitive outcomes (fragment); abbreviations and conventions as in Table 2. The summary effect direction considers study quality, design, and sample size.]
Findings per eligible study can be viewed in Supplementary File S5.
4. Children's connectedness to nature (n = 9 studies)
(1) Nature-based ECE (n = 9 studies, n = 792 children)
Table 5 presents the results of the effect direction plot for nature relatedness/biophilia, environmentally responsible behaviour, and awareness of nature/environment in eligible studies where these outcomes were reported in more than one study. For nature relatedness (or biophilia), five studies were positively associated with nature-based ECE [46,47,54,57,65]. Four of these studies demonstrated a significant association [46,54,57,65] and one demonstrated a non-significant association [47]. One study demonstrated no difference in nature relatedness scores between nature-based and traditional ECE [34]. For environmentally responsible behaviour, two studies demonstrated a positive, but non-significant, association with traditional ECE (i.e., environmentally responsible behaviour was better in children who attended traditional ECE) [46,47]. However, another study reported that environmentally responsible behaviour was significantly better in children who attended nature-based ECE compared to traditional ECE [57]. Finally, in two studies, awareness of nature was positively associated with nature-based ECE (i.e., awareness was higher in children who attended nature-based ECE than in children who attended traditional ECE) [57,59].
Additionally (findings not reported in the effect direction plot), there were improvements in knowledge and skills of nature in children who attended nature-based ECE [63], and awareness of the surrounding environment was also higher in children who attended nature-based ECE [59].

[Table 5: Effect direction plot for nature connectedness outcomes (fragment); abbreviations and conventions as in Table 2. The summary effect direction considers study quality, design, and sample size.]
Main Findings-Qualitative Studies (n = 10 Studies)
Ten studies were included in the thematic analysis, of which six involved nature-based ECE, three were ECE natural playgrounds, and one was natural elements within ECE (study characteristics of the qualitative studies can be found in Supplementary File S4). Studies tended to use direct observation and interviews (predominantly with educators) to collect qualitative data. Findings from the thematic analysis are presented in Figure 6 and show four higher-order themes.

Theme 1. Natural settings provide more affordances compared to traditional settings.
Theme 1 indicated the importance of the natural environment for affording opportunities to enhance a range of outcomes. This theme included seven subthemes relating to the different affordances that nature provides compared to traditional settings. Seven of the included studies noted that the natural environment enabled children to diversify their play (subtheme 1.1), such as imaginative, risky, exploratory, and active play [37,39,40,44,45,52,61]. Diversifying play is not only important for children's total movement but also their social interactions, creativity, and learning. The importance of play can be described in the following quote.
"The children also invent themselves; when they have stimulus for their eyes, children invent it [activity] without your help. And it should be like this; some part should be like this. But you need to have stimulus. It's not enough to have a brown yard and a climbing frame. So, it [green yard] added somehow; they definitely had good games. They pretended that they had a campfire, they got the stones as sand pretended that they were on a trip. And their imagination was in use there, and when children use their brains, natural tiredness arises, and it did them good, a lot of good. Then rest comes naturally, and you have a good appetite and we're in the positive cycle. So they could use their imagination, and we encouraged them. We didn't prohibit them, we just advised them not to rip anything." [61] The next subtheme (1.2) highlighted that in two studies natural settings afforded children with higher levels of risk compared to traditional settings [49,52]. This relates to the above subtheme as risky play is a type of play important for children's development. Risky play is characterised by play that is "thrilling and exciting and where there is a risk of physical injury" [52]. Although this may seem potentially dangerous to children, this type of play is important for children's development [66]. "I like playing in the fallen logs and trees on the playground; it is so much fun, but a bit scary too! I like the big pile of sticks and logs that we made-it is for another fort that is going to be really high off the ground." [49] Similarly, related to subtheme 1.1, three studies also noted that natural settings afforded more variation (the space and elements) to support children to use and increase their imagination and creativity (subtheme 1.3) [37,39,49].
"I like being outside with my friends. We make shelters and we make up different games, like getting trapped on an island, or being on a boat and making our escape! I like doing science outside too-like different experiments, especially when the sun is out." [49] The fourth subtheme (1.4) relates to the importance of social interactions. Four studies demonstrated that natural settings enabled peers and teachers to have prosocial interactions in relation to encouraging play [39,44,49,51].
"The children are shouting 'X . . . can't you catch us? Please catch us, try to catch us . . . '. The staffs join the situation and run after the children. The children are shouting 'Catch me . . . can't catch me' . . . There is excitement and the staff are running after the children, catching them and holding them before releasing them. The staff have high energy, the children focus on the adults, avoiding being caught. The adults show empathy, holding and hugging the child when it is caught. The game is exciting and creates enthusiasm. A high level of physical activity is created, by climbing up, sliding down, running around and hiding in the tower to escape capture by the adults. They run at high speed and the children's body language shows that they are very much engaged in the game." [51] Three studies highlighted that natural settings increased child-initiated learning and students perceiving themselves as capable learners compared to traditional settings (subtheme 1.5 and 1.6) [37,44,58].
"[CogG] has poor concentration, sees herself as the baby, finds it difficult to sit and listen to story. She is extremely lacking in confidence . . . shy . . . she won't look at you indoors. With child-led learning she is totally engrossed and remains on task. Outside is the best learning environment for her . . . she remains on task. When outside she will come over and say 'I like this' and 'I like doing that', 'this is my favourite place'." [58] Finally, three studies highlighted that children increased contact with nature enabling them to increase their knowledge of nature (subtheme 1.6) [39,44,61].
"Especially about the forest floor mat, I remember that our children kept asking, 'what is it' and 'what's growing there' and explored it very carefully; they were almost lying on their stomachs there. Especially the older ones, and they had a lot of questions about it." [61] Theme 2. Natural and traditional settings provide similar affordances.
Despite Theme 1 indicating the importance of natural affordances for a range of outcomes, some studies also reported that natural and traditional settings provided similar affordances. The one subtheme noted that the opportunity for and frequency of risky play was similar in both natural and traditional ECE settings [52]. This subtheme relates to the findings reported on risky play in subtheme 1.2. Taken together, children will seek risk irrespective of playground type; however, the natural environment affords greater risk (Theme 1, subtheme 1.2) [52].
"Comparing the two play environments, they both seem to include an extensive number of affordances for risky play. At both preschool playgrounds, there are opportunities for play in great heights such as climbing, jumping down, and balancing and as well as opportunities for play with high speed such as swinging, sliding/sledding, running, and bicycling." Taken from authors conclusions [52] Theme 3. Children's preferences of setting types.
Two studies reported that the natural environment is more diverse and engaging and preferred by children compared to traditional settings (subtheme 3.1) [49,51]. It appears when children were outdoors in nature it afforded them the opportunity to play in a diverse environment with their friends and this combination provided enjoyment.
"I like going outside and playing! I like playing with my friends, Sydney and Megan. We play hide and seek on the playground and hide in the forest in the logs and trees. I like outside because it's so fun and I really like to play. Sometimes I play with my sister too; I like all the colours outside and all the space." [49] However, another study suggested that mixed areas (combining both natural with traditional elements) were preferred by children [37]. Two studies indicated the importance of the natural playground in helping children invigorate and/or restore their energy for the diverse types of play children engage in [39,61]. For example, the natural environment for some children provided them with more energy to continue playing, however, other children may feel the requirement to nap, thus restoring their energy to engage in more play [39,61].
"Now it's become very difficult to finish playing. They would rather continue, and those who need to take a nap, they've had a nice, long time outdoors and nice games, so they fall asleep more easily, and it affects their energy in the afternoon. Some children have very long days here. They come in the morning and stay until five o'clock; they seem to be somehow energetic and lively in the yard. This is new for us. The contrast to the previous yard is so great that the effects can be seen here very quickly." [61]
Synthesis of Quantitative and Qualitative Findings
Of the outcomes assessed in quantitative studies, initiative, behavioural problems, play disruption and play disconnection did not emerge as themes from the qualitative studies. Supplementary File S6 shows the matrix relating themes from the qualitative evidence synthesis with the findings from the quantitative evidence synthesis. The matrix indicates where findings from the two data sources were confirmatory or conflicting. Themes not presented in the matrix could not be directly linked to the results of the quantitative synthesis. However, these themes were considered for generating hypotheses on how or why observed quantitative results occurred.
Social, Emotional and Environmental Development
Social and emotional. From the quantitative studies, findings suggested that children who attended nature-based ECE had improved social skills and social and emotional development. From the qualitative synthesis, this might be achieved through three possible mechanisms: children diversify their play, have increased creativity and imagination, and engage in prosocial interactions with peers and teachers. When children diversify their play, those who attended nature-based ECE have more play interactions in comparison to traditional ECE, which could facilitate the acquisition of greater prosocial skills. Through sociodramatic and symbolic play, children create and share common narratives with their peers, acting out imaginary situations, which may play an important role in fostering imagination and creativity, understanding complex social structures, and improving social skills. The quantitative analysis suggested improvements in children's social skills, interaction, and development. However, it was unclear whether children had improved attachment (a child's ability to promote and maintain positive connections with others), and one study noted negative child-teacher interactions in children who engaged in natural playgrounds. See Figure 7 for an illustration of the pathway on how nature-based ECE could influence social skills.
Play. From the quantitative synthesis, children who attended nature-based ECE had more play interactions in comparison to traditional ECE. This is supported in the qualitative synthesis, as play interaction might be achieved through children diversifying their play, increased risk, increased creativity and imagination, and prosocial interactions with peers and teachers. It was noted that natural settings enable children to engage in a diverse range of play types, including risky play, solitary play, dramatic play, sociodramatic play, functional and constructive play, etc. These diverse play types will provide continuous opportunities for children to enhance their play interactions. Similarly, it was noted that nature-based ECE provides children with higher levels of risk, which enables risky play, which in turn plays a pivotal role in managing health and wellbeing. As mentioned previously, sociodramatic and symbolic play, where children may act out imaginary situations with their peers, also improved imagination and creativity. Finally, peers and teachers may facilitate play through the wider range of opportunities (affordances) provided by natural settings. See Figure 8 for the pathway on how nature-based ECE could influence play interaction.
Cognitive Development
From the quantitative studies, findings suggested that self-regulation (ability to understand and manage behaviour) is better in children who attend nature-based ECE. The qualitative evidence raised potentially important factors and processes that may help explain the outcomes: children diversify their play, engage in higher levels of risk, see themselves as capable learners, and have prosocial interactions with peers and teachers. Through diverse types of play, including risky, sociodramatic, and symbolic play, children learn to negotiate different situations and experiment, individually and collectively, with different ways to solve problems, which in turn creates a playful disposition of self-regulation. Similarly, when children see themselves as capable learners by developing different cognitive skills through nature-based ECE, this might be linked to self-efficacy and self-confidence, which in turn may foster self-regulation. See Figure 9 for the pathway on how nature-based ECE could influence self-regulation. It is important to note that self-regulation is a complex construct, influenced by many different factors. The pathway presented in Figure 9 may explain some possible causal pathways of self-regulation, pertinent to nature-based ECE. However, the likelihood is that there will be a multitude of other factors influencing self-regulation that are beyond the scope of this review.
Nature Connectedness
Evidence from the quantitative studies suggested that children who attended nature-based ECE had higher levels of biophilia and awareness/knowledge of nature. Using evidence from the qualitative synthesis, this might be achieved through four possible mechanisms: children diversify their play, have increased contact with nature, engage in child-initiated learning and see themselves as capable learners, and have prosocial interactions with peers and teachers. Through engaging in different types of play in natural settings, children are exposed to different natural elements that they could use in their play, which in turn might increase connection with and awareness of nature. Children seeing themselves as capable learners, developing cognitive skills such as attention and taking the lead on what they want to learn, may also improve biophilic tendencies and awareness of nature by using and learning about the natural elements at their disposal. Finally, peers and teachers can help support awareness of nature and biophilia by enabling children to explore the natural environment. See Figure 10 for the hypothesised pathway on how nature-based ECE influences biophilia and awareness of nature.
Discussion
This systematic review aimed to understand whether nature-based ECE is associated with children's social, emotional, and cognitive development. Based on very low certainty of evidence, findings from the effect direction plot indicated positive associations between nature-based ECE and children's self-regulation, social skills, social and emotional development, nature relatedness (or biophilia), awareness of nature, and play interaction. There were inconsistent associations between nature-based ECE and children's attention, attachment, initiative, environmentally responsible behaviour, and play disruption and disconnection. Finally, there was a negative association between nature-based ECE and children's behavioural problems (i.e., there were higher behavioural problems in children who attended nature-based ECE). The qualitative synthesis noted that nature-based ECE afforded opportunities for children to engage in diverse types of play, use their imagination and creativity, have prosocial interactions, and have increased contact with nature in comparison to traditional ECE.
Health outcomes of attending nature-based ECE are likely to be impacted through a number of plausible mechanisms. Nature-based ECE settings are inherently unique and exposure to nature alone could impact certain child health outcomes directly, or indirectly through interconnecting mediators; for example, the rich and diverse natural environment may provide affordances [17] for children to engage in a range of play types that facilitate social interaction, creativity and imagination and/or physical activity which may impact a range of health outcomes. These possible pathways have been presented in the mixed synthesis of this systematic review and could provide researchers with future research questions in relation to nature-based ECE.
One such pathway, which is the common factor across all outcomes featured in this review, is play. It is suggested that nature-based ECE provides children with an immersive nature experience where children interact with different natural spaces and elements that afford opportunities to engage in different types of play. In a recent systematic review of nature play on children's health and wellbeing, the authors reported increases in six different types of play (functional, constructive, exploratory, dramatic, imaginative, and symbolic) in nature settings compared to a comparison condition [18]. It is generally agreed that engaging in different types of play has an impact on children's development across several outcome domains [67,68]. However, limited research exists on the plausible mechanisms by which different types of play improve different health and development outcomes in childhood [67]. For example, it might be that when children play, they are engaging in physical activity, and physical activity improves several childhood health outcomes. However, not all play is physically active, and even play which is sedentary in nature (e.g., sociodramatic and symbolic play) could yield improvements in other outcomes. Furthermore, other mediators could feature on the pathway, such as play facilitating social connections with other children. The potential added benefit of nature-based ECE is that exposure to nature could afford greater opportunities for engaging in diverse types of play. For example, in the same review noted above [18], the authors also highlighted that nature play was likely to improve cognitive, social, and emotional related outcomes. The findings are similar to those of the present study; however, the authors did not make any reference to the possible mechanisms by which outcomes may have occurred [18].
As previously noted, it is likely that nature-based ECE affords greater opportunities for play, which facilitates social interactions. This review highlighted that both social skills and social and emotional development were positively associated with nature-based ECE; however, inconsistent associations were found for attachment and initiative. Although few studies have explored the relationship between nature-based ECE and social and emotional outcomes, there are several conceptually similar systematic reviews that have synthesised the impact of nature on these outcomes. Dankiw and colleagues' review supported the position that nature encouraged a diversity of play, where three out of four included studies demonstrated improvements in aspects of social outcomes [18]. Similarly, both studies included in their narrative synthesis noted improvements in domains of emotional outcomes [18]. Another systematic review of the effect of exposure to nature (including non-ECE settings) and children's social and emotional development found less favourable findings for establishing and maintaining relationships [12]. From seven observational studies and four experimental studies, a total of 36 analyses were included in the synthesis, of which 19.4% found positive associations between nature and children's ability to maintain relationships [12]. Despite these efforts to understand the impacts of nature on children's social and emotional outcomes, it remains unclear what the plausible pathways are that impact children's social and emotional outcomes, and the interconnecting role of play and social development on other physical and cognitive outcomes. One hypothesis presented in the mixed synthesis is that nature-based ECE may provide a specific construct, composition, and quality that affords children opportunities to engage in diverse types of play. Through play, greater social connections are facilitated because, when children play, they must cooperate with other children, solve problems, and create rules for their play [69]. Problem solving during play may impact short-term social and emotional skills, such as empathy, attachment, and emotional flexibility [69,70]. Understanding the role of nature-based ECE in facilitating the unique dynamics in children's play may be important to understanding children's longer-term social and emotional development as well as other related outcomes.
One such outcome that could be impacted by the interaction between nature, play, and social connections is self-regulation. The mixed synthesis suggested that nature-based ECE settings could develop self-regulation through a number of plausible mechanisms, including diversifying play (particularly free play, active play, risky play, and sociodramatic play) [67] and social interactions. Self-regulation relates to the child's ability to understand and manage behaviour and predicts a child's social, emotional, and behavioural development [71]. During play, children have control over their own activity, set challenges, and regulate their own behaviour, and the key features of play (uncertainty, flexibility, novelty, and open-endedness) create the conditions for the development of self-regulation [67,72]. In the present study, based on very low certainty of evidence, self-regulation was positively associated with children who attended nature-based ECE. In other words, children who attended nature-based ECE demonstrated better self-regulation than those who attended traditional ECE. These findings have been echoed in conceptually similar fields. In a study that aimed to explore whether active play during recess was associated with children's (n = 51, 4.8 years) self-regulation and academic achievement [73], the authors noted a positive association between active play and self-regulation (β = 0.43, p = 0.001), and that self-regulation mediated the relationship between active play and academic achievement (math β = 0.18, p = 0.03; emergent literacy β = 0.20, p = 0.035) [73]. Although supportive of our hypothesised pathway, design issues (e.g., small sample size and cross-sectional design) mean we should be cautious when inferring causality. Children being exposed to nature could also have an impact on self-regulation development. In a quasi-experimental study, the authors aimed to explore whether the frequency and duration of "green" schoolyards impacted children's self-regulation development [74]. Findings indicated that higher frequency and duration in green schoolyards were associated with greater improvements in self-regulation [74]. However, the frequency was significant for girls only and duration was significant in autumn for females only [74]. The effect of nature-based ECE on children's cognition, through self-regulation, is an area that may need greater research emphasis to understand the pathways in which cognitive outcomes are likely to improve.
As mentioned, nature-based ECE is likely to be play-promoting in that the rich, diverse natural space and elements are likely to afford different types of play. Engaging in different types of play will also encourage more social interactions and physical activity; this might be one pathway to improving cognitive outcomes. Alternatively, simply being in nature, particularly of high frequency and duration [70,74], may in itself enhance self-regulation through its interdependencies with attention. Self-regulation is a complex construct that branches across numerous fields of psychology. It can relate to both the conscious and automatic processes in which individuals monitor, manage, and control their behaviours, thoughts, emotions, and interactions with the environment, including task performance as well as social interactions [75]. Importantly, underpinning a child's ability to self-regulate is their capacity to maintain focused attention [76]. More specifically, we can draw upon the work of Stephen and Rachel Kaplan and their insights into the restorative benefits of nature, including its capacity to restore directed attention and improve concentration [77,78]. The natural world provides young children with the ability to separate themselves from the other environments in their lives; it provides stimulating, absorbing, immersive, and enjoyable stimuli that can hold one's attention without any effort being expended (e.g., a fascination with insects, sticks, or water). With these factors being present to varying degrees in nature-based ECE, the restorative benefits of nature may afford young children the optimal conditions to develop their self-regulatory processes [74,79]. Despite these possible links between nature, self-regulation, and attention, the present study noted inconsistent associations between nature-based ECE and attention based on very low certainty of evidence. Similarly, a systematic review of exposure to nature and children's (conception-12 years) socioemotional development found that 17% of analyses from observational studies had positive associations with attention and 50% reported positive associations in experimental studies [12], which echoes the inconsistent findings of this study. This might also indicate the need for more robust experimental studies to optimise the chance of accurately detecting the impact, if one exists, of nature-based ECE on children's attention and other cognitive outcomes.
There were also a number of outcomes related to children's connectedness to nature, including nature relatedness and awareness of nature, that were positively associated with nature-based ECE. Inconsistent associations were found for environmentally responsible behaviour. Most of the eligible studies measured outcomes over a short period, so it might be that awareness and nature-relatedness outcomes are likely to change over a short period of time. Environmentally responsible behaviour, by contrast, perhaps relates more to knowledge, is a longer-term outcome, and requires longitudinal exploration using research designs such as cohort studies (prospective and retrospective).
Strengths and Limitations of the Review
This is the first comprehensive systematic review of both quantitative and qualitative evidence that aimed to understand the association between nature-based ECE and children's social, emotional, and cognitive development, and children's, parent's, and/or practitioner's perceptions of nature-based ECE on these outcomes. This systematic review was registered to PROSPERO in October 2019 and subsequently, a peer-reviewed protocol was published in BMC Systematic Reviews [10]. Throughout each step of the systematic review, a steering group comprised of experts from policy, research, and practice was consulted to ensure both rigour and relevancy of the review process and findings. We conducted a thorough search by searching nine databases and websites, contacting relevant national and international stakeholders, and including dissertations and theses, and there were no restrictions on publication year or language. Finally, we aimed to ensure the rigour of this review by following the recommended systematic review procedures; both full-text articles and study quality were assessed independently by two reviewers, and one reviewer completed data extraction, which was checked by a second reviewer.
Despite the numerous strengths of this review, there were limitations that were mitigated accordingly. For example, the large number of articles retrieved meant that title and abstract screening could not be completed in duplicate; however, a second reviewer checked 10% to minimise errors. We also excluded studies that recruited solely children with a chronic disease, mental health condition, and/or learning disability, as these populations were beyond the scope of this review. However, there is a theoretical case that such populations might benefit even more from nature-based ECE, and this should be synthesised in a future review. Minor modifications to the EPHPP tool to define the target population, specify confounders of interest, and enhance the overall rating of the paper were made to ensure it was relevant for the current review. Although unlikely, we cannot be sure whether these small modifications of words and terms impacted the validity and reliability of the EPHPP tool. In addition, despite these minor modifications, the EPHPP tool rated most of the eligible studies as weak, particularly across the selection bias, blinding, and attrition (before and after studies only) domains. As noted in a conceptually similar systematic review looking at immersive nature experiences on children's health, it is important to highlight that in this field these tools (EPHPP and others) which assess study quality may provide weak ratings in certain domains, e.g., blinding, where it is difficult to conduct research practices that are the gold standard [11]. Finally, we assessed the certainty of evidence using GRADE as recommended; however, the authors recognise the limitation of using this tool, as it is unlikely that randomised controlled trials (RCT) could be used in this field to evaluate the effect of nature-based ECE on child health outcomes [80]. GRADE attaches more weighting to RCT designs, with all other study designs starting at a rating of low, meaning that there is no variation in ratings between the eligible studies despite some of these studies reflecting the "best available evidence" in this field.
Strengths and Limitations of the Evidence
Thirty-six studies reported social, emotional, and cognitive development, of which 12 used an experimental design (six uncontrolled before-and-after; six controlled before-and-after) and consisted of a good geographical spread across most continents. Studies also tended to use valid and reliable measures for assessing outcomes (data collection) and qualitative studies (n = 10) demonstrated trustworthiness. However, most studies were cross-sectional in design (n = 15) and tended to have small sample sizes, both of which limit our ability to draw conclusions on the findings. Of the 27 eligible quantitative studies (including one mixed methods), only two studies were rated moderate, and the remaining were rated weak. In most instances, weak ratings were provided because of study design (cross-sectional or controlled cross-sectional), selection bias, blinding, confounders and attrition. Finally, no studies were conducted in low-income countries.
Future Directions
To improve the evidence base and thus enhance our ability to draw stronger conclusions, research must improve study quality, the description of the exposure, and the assessment of longitudinal impacts.
All but two studies were rated weak. The reasons for weak ratings were described previously, but predominantly centred on eligible studies primarily being of cross-sectional design, which were rated weak for that domain according to the EPHPP tool. Research efforts must move from cross-sectional designs to more robustly designed experimental studies with control groups that are adequately powered. This would have significant implications for the field and help researchers understand the complex mechanisms in which nature-based ECE improves childhood health and development.
Simultaneously, if efforts are made to support robust study designs as outlined above, a thorough description of the nature exposure, such as time, frequency of visits, and quantity of nature, needs to be included [81]. Nature potentially impacts several health outcomes through a number of possible mechanisms; describing the extent of nature exposure within nature-based ECE will support the identification of the specific pathways by which nature-based ECE is likely to impact social, emotional, and cognitive development [81]. This will enable the field to understand how much nature is required within nature-based ECE settings to produce an impact on developmental outcomes beyond children's normative developmental trajectories.
None of the eligible studies assessed the longitudinal impacts of attending nature-based ECE despite the likelihood that nature-based ECE would have longitudinal impacts on social, emotional, and cognitive outcomes beyond the developmental norm for children 2-7 years. Longitudinal studies are critical as they enable the field to understand the possible mechanisms in which outcomes would improve, the timing and degree of improvements and/or harms, and whether improvements are sustained as children transition into primary/elementary education where they are likely to see a reduction of exposure to nature.
Conclusions
Findings from this systematic review suggested, based on low certainty of evidence, that there were positive associations between nature-based ECE and children's self-regulation, social skills, social and emotional development, nature relatedness (or biophilia), awareness of nature, and play interaction. There were inconsistent associations between nature-based ECE and children's attention, attachment, initiative, environmentally responsible behaviour, and play disruption and disconnection. A negative association between nature-based ECE and children's behavioural problems was found.
This systematic review hypothesised that nature-based ECE, with its rich and diverse environment, affords children the opportunities to engage in diverse types of play, which are likely to impact social, emotional, and cognitive developmental outcomes. To test this hypothesis and understand more about the mechanisms in which nature-based ECE impacts children's social, emotional, and cognitive development, the evidence base must move to higher-quality study designs that describe the nature exposure more precisely, use robust methods, and span longer durations. This will elevate the current evidence base and inform research, policy, and practice on the complex pathways in which nature-based ECE impacts children's social, emotional, and cognitive development.
Conflicts of Interest:
The authors declare no conflict of interest. | 2022-05-16T15:01:36.768Z | 2022-05-01T00:00:00.000 | {
"year": 2022,
"sha1": "930a58b0f190f43519b23ea9c61f6f4572cc2233",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/10/5967/pdf?version=1652601834",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8a87d0781c55f995c127b8f586be8a31cb42dc2",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
117940659 | pes2o/s2orc | v3-fos-license | Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics
We present the redshift-space generalization of the equal-time angular-averaged consistency relations between $(\ell+n)$- and $n$-point polyspectra of the cosmological matter density field. Focusing on the case of $\ell=1$ large-scale mode and $n$ small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of $N$-body simulations, we find that our squeezed bispectrum relation is valid to better than $20\%$ up to $1h$Mpc$^{-1}$, for both the monopole and quadrupole at $z=0.35$, in a $\Lambda$CDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds worse after using this ansatz, we can still recover it within $20\%$ up to $1h$Mpc$^{-1}$, at $z=0.35$ for the monopole. On larger scales, $k = 0.2 h\mathrm{Mpc}^{-1}$, it still holds within the statistical accuracy of idealized simulations of volume $\sim8h^{-3}\mathrm{Gpc}^3$ without shot-noise error.
I. INTRODUCTION
Accurate understanding of the nonlinear gravitational dynamics is a key for observational projects that measure the statistical properties of the cosmic structures on large scales. The typical scales of interest in these projects range from the weakly to strongly nonlinear regimes [1,2]. While perturbation theory is expected to be applicable as long as the nonlinear corrections are subdominant [3,4], a fully nonlinear description would be helpful to extract cosmological information out of the measured statistics over a wider dynamic range. The analytical description also becomes more complicated when one models higher-order statistics. Although an increasing number of analytical techniques to calculate the power spectrum or the two-point correlation function have been proposed, based for instance on resummations of perturbative series expansion or effective approaches [5][6][7][8][9][10][11][12][13][14][15][16], few of them have been applied to the bispectrum or even higher orders.
Consistency relations between different polyspectra are then very useful to have an accurate description of the higher-order statistics once one has a reliable model for the lowest-order one, the power spectrum. Alternatively, they can be used to test analytical models, numerical simulations, or the underlying cosmological scenario (e.g., the impact of modified gravity or complex dark energy models). Based on the assumption of Gaussian initial conditions and gravitational dynamics governed by general relativity, these relations hold at the nonperturbative level and provide a rare insight into the nonlinear regime of gravitational clustering.
The most generic consistency relations are "kinematic consistency relations" that relate the (ℓ + n)-density correlation, with ℓ large-scale wave numbers and n small-scale wave numbers, to the n-point small-scale density correlation, with ℓ prefactors that involve the linear power spectrum at the large-scale wave numbers [17-25]. These relations, obtained at the leading order over the large-scale wave numbers k′_j, arise from the equivalence principle, which ensures that small-scale structures respond to a large-scale perturbation (which at leading order corresponds to a constant gravitational force over the extent of the small-size object) by a uniform displacement. Therefore, these relations express a kinematic effect, due to the displacement of small-scale structures between different times. This also means that (at this order) they vanish for equal-time statistics, as a uniform displacement has no impact on the statistical properties of the density field observed at a given time. Because they derive from the equivalence principle these relations are very general and also apply to baryons and galaxies. However, in a standard cosmology they provide no information at equal times (apart from constraining possible deviations from Gaussian initial conditions and general relativity).
To obtain non-vanishing results for equal-time statistics, one must go beyond this kinematic effect. This implies studying the response of small-scale structures to non-uniform gravitational forces, which at leading order and after averaging over angles correspond to a large-scale gravitational curvature. As proposed in [26] and [27], this is possible by using an approximate symmetry of the gravitational dynamics (associated with the common approximation Ω_m/f² ≃ 1, where Ω_m is the matter density cosmological parameter and f = d ln D_+/d ln a is the linear growth rate), which allows one to absorb the change of cosmological parameters (hence of background curvature) by a change of variable. These relations again connect the (ℓ + n)-point polyspectra, with ℓ large-scale modes and n small-scale modes to the n-point polyspectrum, when an angular averaging operation is taken over the ℓ large-scale modes, which also removes the kinematic effect. These consistency relations no longer vanish at equal times but they are less general than the previous relations. Indeed, galaxy formation processes (cooling, star formation, ...) introduce new characteristic scales that would explicitly break this symmetry. The lowest-order relation, which applies to the matter bispectrum, has been explicitly tested in [28] using a large ensemble of cosmological N-body simulations, see also [29-31] for related discussions and comparisons with simulations or halo models.
The aim of this paper is to generalize this analysis, presented in [26] and [28] in real space, to redshift space, where actual observations take place. This is only a first step towards a comparison with measures from galaxy surveys, because we do not consider the important issue of galaxy bias in this paper (i.e., to translate our results in terms of the galaxy distribution one would need to add a model that relates the galaxy and matter density fields). However, this remains a useful task as redshift-space statistics are well-known to be difficult to model because small-scale nonperturbative effects have a non-negligible impact up to rather large scales [4,32,33], for instance through the fingers-of-god effect [34]. Therefore, it is even more important than for real-space statistics to build tools that hold beyond the perturbative regime. This paper is organized as follows. First, in Sec. II we introduce the statistics of the redshift-space density field and its response to the initial conditions. Then, in Sec. III we describe the dynamical equations of the system that we consider here and show their symmetry that is valid under the approximation Ω_m/f² ≃ 1. Using these results, we finally derive the angular-averaged consistency relations in Sec. IV. We focus on the lowest-order version of these relations, i.e. the bispectrum, in Sec. V, where we present our results in terms of the multipole moments of the spectra. We also introduce a simple ansatz to estimate new derivative terms in the relation from observables, to simplify its form and facilitate the connection with practical situations. In Sec. VI, the consistency relations are checked both perturbatively and nonperturbatively using analytical calculations. We then exploit numerical simulations to give a further test of the relations in Sec. VII. We finally summarize our findings in Sec. VIII.
II. MATTER DENSITY CORRELATIONS
In this paper, we assume that the nonlinear matter density contrast, $\delta(\mathbf{x},t) = [\rho(\mathbf{x},t)-\bar\rho]/\bar\rho$, is fully defined at any time by the initial linear density contrast δ_L0 (i.e., decaying modes have had time to vanish), and that the latter is Gaussian and fully described by the linear power spectrum P_L0(k), where we denote with a tilde Fourier-space fields. The matter density contrast can also be written in terms of the particle trajectories, x(q, t), where q is the Lagrangian coordinate of the particles, as
$$\tilde\delta(\mathbf{k},t) = \int \frac{\mathrm{d}\mathbf{q}}{(2\pi)^3}\, e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{x}(\mathbf{q},t)}, \tag{3}$$
where we discarded a Dirac term that does not contribute for k ≠ 0. This expression follows from the conservation of matter, $\rho\,\mathrm{d}\mathbf{x} = \bar\rho\,\mathrm{d}\mathbf{q}$, which yields $[1+\delta(\mathbf{x})]\,\mathrm{d}\mathbf{x} = \mathrm{d}\mathbf{q}$.
Using the Gaussianity of the linear density field δ_L0, integrations by parts allow us to write the correlation between ℓ linear fields and n nonlinear fields in terms of the response of the latter to changes of the initial conditions [23,26],
$$\Bigl\langle \tilde\delta_{L0}(\mathbf{k}'_1)\cdots\tilde\delta_{L0}(\mathbf{k}'_\ell)\;\tilde\delta(\mathbf{k}_1,t_1)\cdots\tilde\delta(\mathbf{k}_n,t_n)\Bigr\rangle = \Bigl\langle \prod_{j=1}^{\ell}\Bigl[P_{L0}(k'_j)\,\frac{\mathcal{D}}{\mathcal{D}\tilde\delta_{L0}(-\mathbf{k}'_j)}\Bigr]\;\tilde\delta(\mathbf{k}_1,t_1)\cdots\tilde\delta(\mathbf{k}_n,t_n)\Bigr\rangle. \tag{4}$$
This exact relation, which only relies on the Gaussianity of the initial condition δ̃_L0, holds for any nonlinear field δ̃, which is not necessarily identified with the nonlinear density contrast. It is also the basis of the consistency relations between (ℓ + n)- and n-point polyspectra, in the limit k′_j → 0, when one can write the right-hand side in terms of ⟨δ̃(k_1, t_1) ⋯ δ̃(k_n, t_n)⟩ multiplied by some deterministic prefactors or operators [17-23, 26, 27].
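The one-variable analogue of Eq. (4) is the Gaussian integration by parts (Stein's lemma): for x ~ N(0, σ²) and any smooth F, ⟨x F(x)⟩ = σ²⟨F′(x)⟩. The following Monte-Carlo snippet is our own illustration of this mechanism, not part of the paper; the choice of F is arbitrary.

```python
import numpy as np

# One-variable analogue of Eq. (4): for Gaussian x ~ N(0, sigma^2) and smooth F,
# integration by parts gives <x F(x)> = sigma^2 <F'(x)>.
rng = np.random.default_rng(0)
sigma = 1.3
x = rng.normal(0.0, sigma, size=10**7)

F = lambda x: np.sin(x) + x**3          # any smooth F with finite moments
dF = lambda x: np.cos(x) + 3.0 * x**2   # its derivative

lhs = np.mean(x * F(x))
rhs = sigma**2 * np.mean(dF(x))
print(lhs, rhs)  # agree to Monte-Carlo accuracy
```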
In this paper, we extend the analysis presented in [26] for the real-space density field to the redshift-space density field. Because of the Doppler effect associated with the peculiar velocities, the radial position of cosmological objects (e.g., galaxies) is not exactly given by their redshift, interpreted as a distance within the uniform background cosmology. For instance, receding objects appear to have a slightly higher redshift than the one associated with their actual location, and one is led to introduce the redshift-space coordinate s defined as [33,35,36]
$$\mathbf{s} = \mathbf{x} + \frac{v_r}{\dot a}\,\mathbf{e}_r, \tag{5}$$
where e_r is the radial unit vector along the line of sight (we use a plane-parallel approximation throughout this article), v_r the line-of-sight component of the peculiar velocity v, and ȧ = da/dt the time derivative of the scale factor a(t). Then, the redshift-space density contrast δ_s can be written in terms of the Lagrangian coordinate q of the particles as [33,36,37]
$$\tilde\delta_s(\mathbf{k},t) = \int \frac{\mathrm{d}\mathbf{q}}{(2\pi)^3}\, e^{-\mathrm{i}\mathbf{k}\cdot\mathbf{s}(\mathbf{q},t)}, \tag{6}$$
where we again discarded a Dirac term that does not contribute for k ≠ 0. This is the same expression as Eq. (3) for the real-space density contrast δ̃(k), except that x(q, t) in the exponential is replaced by s(q, t), and it again follows from the conservation of matter, [1 + δ_s(s)] ds = dq. Then, the redshift-space generalisation of Eq. (4) reads as
$$\Bigl\langle \prod_{j=1}^{\ell}\tilde\delta_{L0}(\mathbf{k}'_j)\,\prod_{i=1}^{n}\tilde\delta_s(\mathbf{k}_i,t_i)\Bigr\rangle = \Bigl\langle \prod_{j=1}^{\ell}\Bigl[P_{L0}(k'_j)\,\frac{\mathcal{D}}{\mathcal{D}\tilde\delta_{L0}(-\mathbf{k}'_j)}\Bigr]\,\prod_{i=1}^{n}\tilde\delta_s(\mathbf{k}_i,t_i)\Bigr\rangle. \tag{7}$$
As in [26,28], we focus on the relations obtained for ℓ = 1 by performing a spherical average over the angles of the large-scale wave number k′. This removes the leading-order contribution, associated with a uniform displacement of small-scale structures by larger-scale modes, that vanishes for equal-time statistics (t_1 = .. = t_n) [17-23]. One is left with the next-order contribution, which does not vanish at equal times [26-28], and is associated with the change to the growth of small-scale structures in a perturbed mean density background, modulated by the larger-scale modes. In configuration space, this means that we consider angular-averaged quantities of the form (8) of [26,28], which read in Fourier space as Eq. (9), where W(x′) [and its Fourier transform W̃(k′)] is a large-scale spherical window function. Using Eq. (7), we obtain Eq. (10), and a similar relation for C̃ⁿ_W, where ⟨..⟩_{ǫ_0} is the statistical average with respect to the Gaussian initial conditions δ_L0, when the linear density field is modified as in Eq. (11), where C_L0 is the linear density contrast real-space correlation function. In the large-scale limit for the window function W, which corresponds to the limit k′ → 0, the integral over x′ is independent of the position x in the small-scale region, at leading order in the ratio of scales, and the initial linear density contrast is merely shifted by a uniform amount ∆δ_L0, Eq. (12). This corresponds to a change of the background mean density ρ̄, which means that we must obtain the impact of a small change of ρ̄, hence of cosmological parameters, on the small-scale correlation ⟨δ_s(s_1, t_1) ⋯ δ_s(s_n, t_n)⟩.
III. APPROXIMATE SYMMETRY OF THE COSMOLOGICAL GRAVITATIONAL DYNAMICS
On scales much smaller than the horizon, where the Newtonian approximation is valid, the equations of motion read as [38]
$$\frac{\partial\delta}{\partial t} + \frac{1}{a}\nabla\cdot[(1+\delta)\mathbf{v}] = 0, \tag{13}$$
$$\frac{\partial\mathbf{v}}{\partial t} + H\mathbf{v} + \frac{1}{a}(\mathbf{v}\cdot\nabla)\mathbf{v} = -\frac{1}{a}\nabla\phi, \tag{14}$$
$$\nabla^2\phi = 4\pi\mathcal{G}\bar\rho\, a^2\,\delta. \tag{15}$$
Here, we use the single-stream approximation to simplify the presentation, but our results remain valid beyond shell crossing. Linearizing these equations over {δ, v}, one obtains the linear growth rates D_±(t), which are the independent solutions of [4,38]
$$\ddot D + 2H\dot D - 4\pi\mathcal{G}\bar\rho\, D = 0. \tag{16}$$
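Eq. (16) is straightforward to integrate numerically. The sketch below is our own (the fiducial Ω_m0 and the flat ΛCDM background are assumptions, not values taken from the paper); it solves the growth equation in N = ln a and returns D_+ and f = d ln D_+/d ln a.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega_m0 = 0.3  # assumed fiducial value

def E(a):
    """Dimensionless Hubble rate H/H0 for flat LCDM."""
    return np.sqrt(Omega_m0 / a**3 + 1.0 - Omega_m0)

def Omega_m(a):
    """Matter density parameter at scale factor a."""
    return Omega_m0 / (a**3 * E(a)**2)

def growth_rhs(lna, y):
    """Eq. (16) rewritten in N = ln a: D'' + (2 + dlnH/dN) D' = (3/2) Omega_m D,
    using 4 pi G rhobar = (3/2) Omega_m H^2 and dlnH/dN = -(3/2) Omega_m."""
    a = np.exp(lna)
    D, Dp = y
    dlnH = -1.5 * Omega_m(a)
    return [Dp, -(2.0 + dlnH) * Dp + 1.5 * Omega_m(a) * D]

# Start deep in matter domination, where D_+ ~ a and f ~ 1.
a_ini = 1e-3
sol = solve_ivp(growth_rhs, [np.log(a_ini), 0.0], [a_ini, a_ini],
                dense_output=True, rtol=1e-8)

lna = np.linspace(np.log(a_ini), 0.0, 200)
D, Dp = sol.sol(lna)
f = Dp / D                    # f = dln D_+ / dln a
print("f(z=0) =", f[-1])      # ~0.5 for Omega_m0 = 0.3
```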
Then, it is convenient to make the change of variables [5,26,39,40]
$$\eta = \ln D_+(t), \qquad \mathbf{u} = \frac{\mathbf{v}}{\dot a f}, \qquad \varphi = \frac{\phi}{4\pi\mathcal{G}\bar\rho\, a^2}, \tag{17}$$
with
$$f(t) = \frac{\mathrm{d}\ln D_+}{\mathrm{d}\ln a} = \frac{\dot D_+}{H D_+}, \tag{18}$$
and the equations of motion read as
$$\frac{\partial\delta}{\partial\eta} + \nabla\cdot[(1+\delta)\mathbf{u}] = 0, \tag{19}$$
$$\frac{\partial\mathbf{u}}{\partial\eta} + \Bigl(\frac{3\Omega_m}{2f^2}-1\Bigr)\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{3\Omega_m}{2f^2}\,\nabla\varphi, \tag{20}$$
$$\nabla^2\varphi = \delta. \tag{21}$$
Here, Ω_m(t) is the matter density cosmological parameter as a function of time, which obeys 4πG ρ̄ = (3/2) Ω_m H². As pointed out in [26], within the approximation Ω_m/f² ≃ 1 (which is used by most perturbative approaches [4]), all explicit dependence on cosmology disappears from the equations of motion (19)-(21). This means that the dependence of the density and velocity fields on cosmology is fully absorbed by the change of variable (17). Then, a change of the background density, as in Eq. (12), can be absorbed through a change of the time-dependent functions {a(t), D_+(t), f(t)}, which enter the change of variables (17) [26].
Here, we used the single-stream approximation to simplify the presentation, but the results remain valid beyond shell crossing, as the dynamics of particle trajectories, x(q, t), follow the equation
$$\frac{\partial^2\mathbf{x}}{\partial\eta^2} + \Bigl(\frac{3\Omega_m}{2f^2}-1\Bigr)\frac{\partial\mathbf{x}}{\partial\eta} = -\frac{3\Omega_m}{2f^2}\,\nabla\varphi, \tag{22}$$
where ϕ is the rescaled gravitational potential (21). This explicitly shows that they satisfy the same approximate symmetry. Therefore, our results are not restricted to the perturbative regime and also apply to small nonlinear scales governed by shell-crossing effects, as long as the approximation Ω_m/f² ≃ 1 is sufficiently accurate (but this also means that we are restricted to scales dominated by gravity).
IV. ANGULAR-AVERAGED CONSISTENCY RELATIONS
As described in [26], the impact of a small uniform change of the matter density background can be obtained by considering two universes with nearby background densities and scale factors, {ρ̄(t), a(t)} and {ρ̄′(t), a′(t)}, with
$$a'(t) = [1-\epsilon(t)]\,a(t), \qquad \bar\rho'(t) = [1+3\epsilon(t)]\,\bar\rho(t). \tag{23}$$
Here and in the following, we only keep terms up to linear order over ǫ. Then, writing the Friedmann equations for the two scale factors a(t) and a′(t) and linearizing over ǫ, we find that ǫ(t) must satisfy the same equation (16) as the linear growing mode [38]. Thus, we can write
$$\epsilon(t) = \epsilon_0\, D_+(t). \tag{24}$$
For our purposes, the universe {ρ̄(t), a(t)} is the actual universe, with the zero-mean initial condition δ_L0, to which is added the uniform density perturbation (12). To recover zero-mean density fluctuations, we must shift the background by the same amount. Thus, this new background {ρ̄′(t), a′(t)} is given by
$$\bar\rho'(t) = [1+3\epsilon_0 D_+(t)]\,\bar\rho(t), \qquad \epsilon_0 = \frac{\Delta\delta_{L0}}{3}, \tag{25}$$
where we used the last relation (23), which gives at linear order over δ and ǫ, δ_L = δ′_L + 3ǫ.
Because both frames refer to the same physical system, we have r′ = r, that is, a′x′ = ax and ρ̄′(1 + δ′) = ρ̄(1 + δ), where r = r′ is the physical coordinate. Thus, we have the relations
$$\mathbf{x}' = (1+\epsilon)\,\mathbf{x}, \qquad \delta' = \delta - 3\epsilon\,(1+\delta), \qquad \mathbf{v}' = \mathbf{v} + \dot\epsilon\, a\,\mathbf{x}, \tag{26}$$
where we used Eq. (23) and only kept terms up to linear order over ǫ. In particular, we can check that if the fields {δ′, v′, φ′} satisfy the equations of motion (13)-(15) in the primed frame, the fields {δ, v, φ} satisfy the equations of motion (13)-(15) in the unprimed frame, with the gravitational potential transforming as φ′ = φ − a²(ǫ̈ + 2Hǫ̇)x²/2. This remains valid beyond shell crossing: if the trajectories x′(q, t) satisfy the equation of motion in the primed frame, the trajectories x(q, t) = (1 − ǫ) x′(q, t) satisfy the equation of motion in the unprimed frame.
From the definition (5) and Eq. (26), we obtain the relation between the redshift-space coordinates,
$$\mathbf{s}' = (1+\epsilon)\,\mathbf{s} + \frac{\dot\epsilon}{H}\, s_r\,\mathbf{e}_r, \tag{27}$$
using a′ = (1 − ǫ) a and H′ = H − ǫ̇. Then, using for instance the expressions (3) and (6), the real-space and the redshift-space density contrasts in the actual unprimed frame, with the uniform overdensity ∆δ_L0 = 3ǫ_0, can be written as Eqs. (28)-(29) of [26], where we disregarded a Dirac factor δ_D(k) that does not contribute for k ≠ 0. In Eqs. (28)-(29), the subscript "ǫ_0" recalls that we consider the formation of large-scale structures in the actual universe to which is added the small uniform overdensity ∆δ_L0 = 3ǫ_0. The physical meaning of the expression (28) directly follows from the mapping (26) and the independence on cosmology of the equations of motion (19)-(21), within the approximation Ω_m/f² ≃ 1. It means that in the primed universe, with the slightly higher background density ρ̄′ = (1 + 3ǫ)ρ̄ (focusing for instance on the case ǫ > 0), comoving distances x′ show the small isotropic dilatation (26) [because the higher background density yields a higher gravitational force and a smaller scale factor a′(t)], whence an isotropic contraction of wave numbers k′, while the linear growth factor D_+(t) is also modified. Moreover, the approximate symmetry discussed in Sec. III implies that all time and cosmological dependence can be absorbed through the time coordinate D_+, if we work with the rescaled fields {δ, u, ϕ} of Eq. (17). This is denoted by the rescaled time coordinate D_{+ǫ_0} in the right-hand side of Eq. (28), where the subscript "ǫ_0" recalls that we must take into account the impact of the modified background onto the linear growth factor.
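As a consistency check of the coordinate mapping, the short symbolic computation below (our own verification, not part of the paper) expands s′ = x′ + v′_r/ȧ′ to linear order in the small quantities ǫ and ǫ̇ and recovers Eq. (27); the bookkeeping parameter lam multiplies both small quantities, and only radial components are kept.

```python
import sympy as sp

x, v, a, adot, eps, epsdot, lam = sp.symbols('x v a adot eps epsdot lam',
                                             positive=True)
H = adot / a
s = x + v / adot                               # Eq. (5), radial component

xp = (1 + lam * eps) * x                       # x' = (1 + eps) x, Eq. (26)
vp = v + lam * epsdot * a * x                  # v' = v + epsdot a x, Eq. (26)
adotp = adot * (1 - lam * eps) - lam * epsdot * a  # d a'/dt for a' = (1 - eps) a

s_prime = xp + vp / adotp
expected = (1 + lam * eps) * s + lam * (epsdot / H) * s   # Eq. (27)
diff = sp.series(s_prime - expected, lam, 0, 2).removeO()
print(sp.simplify(diff))                       # -> 0 at linear order in lam
```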
For the redshift-space density contrast (29) two new effects arise, as compared with the real-space density contrast (28). First, the mapping s ↔ s′ is no longer isotropic because of the peculiar velocity component along the line of sight, see Eq. (27), which also leads to an anisotropic relationship k ↔ k′. Second, in addition to the time coordinate D_+, the redshift-space density contrast involves the new quantity f(t). This follows from the definition (5), which can be written in terms of the rescaled velocity field u of Eq. (17) as
$$\mathbf{s} = \mathbf{x} + f\, u_r\,\mathbf{e}_r. \tag{30}$$
This shows that in addition to the rescaled term u_r, which only depends on time and cosmology through the time coordinate D_+, within the approximate symmetry of Sec. III, the line-of-sight component explicitly involves a time- and cosmology-dependent factor f(t), which must be taken into account in Eq. (29). Then, to derive the angular-averaged consistency relations through Eq. (10), we simply need to use Eq. (29) to obtain the derivative of the redshift-space density contrast with respect to ǫ_0, and next to use Eq. (25). This yields Eq. (31), where we disregarded a Dirac factor that does not contribute for wave numbers k ≠ 0. As found in [26,41], the derivative of the linear growth factor reads as
$$\frac{\partial D_{+\epsilon_0}}{\partial\epsilon_0} = \frac{13}{7}\, D_+^2. \tag{32}$$
This corresponds to D′_+ = D_+ + (13/7) D_+² ǫ_0 for the linear growing mode in the primed frame, while a′ = a − D_+ a ǫ_0 and H′ = H − Ḋ_+ ǫ_0. From the definition (18), we obtain
$$\frac{\partial f_{\epsilon_0}}{\partial\epsilon_0} = \Bigl(\frac{13}{7}+f\Bigr)\, f\, D_+. \tag{33}$$
Therefore, Eq. (31) gives Eq. (34). Of course, when we set f to zero, we recover the expression of the derivative with respect to ǫ_0 of the real-space density contrast δ̃ [26]. In configuration space, this reads as Eq. (35) for ∂δ_s(s)/∂ǫ_0. Next, from Eqs. (10) and (25), we obtain, as for the real-space correlations [26], Eq. (36). The counter terms of the form −(1/n) Σ_j s_j ensure that all expressions are invariant with respect to uniform translations [by explicitly setting the small-scale region at the center of the large-scale perturbation (11)]. They are irrelevant for equal-time statistics, t_1 = .. = t_n, where factors of the form Σ_i s_i · ∂/∂s_i ⟨δ_{s,1} .. δ_{s,n}⟩ are already invariant with respect to uniform translations.
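The coefficient 13/7 in Eq. (32) can be verified numerically by perturbing Eq. (16) around an Einstein-de Sitter background. The sketch below is our own check (units and initial conditions are arbitrary choices): it integrates the linear response δD = D′_+ − D_+ sourced by ǫ(t) = ǫ_0 D_+(t), with H → H − ǫ̇ and ρ̄ → (1 + 3ǫ)ρ̄, and recovers δD/(ǫ_0 D_+²) → 13/7.

```python
import numpy as np
from scipy.integrate import solve_ivp

# EdS background in units where a = t^(2/3), so D_+ = a, H = 2/(3t),
# and 4 pi G rhobar = 2/(3 t^2); we set eps0 = 1 by linearity.
def rhs(t, y):
    dD, dDdot = y
    D = t**(2.0 / 3.0)
    Ddot = (2.0 / 3.0) * t**(-1.0 / 3.0)
    H = 2.0 / (3.0 * t)
    epsdot = Ddot                      # eps = eps0 D_+ with eps0 = 1 (Eq. 24)
    fourpiGrho = 2.0 / (3.0 * t**2)
    # Linearized Eq. (16): ddot(dD) + 2H dot(dD) - 2 epsdot dot(D)
    #                      = 4 pi G rhobar (dD + 3 eps D)
    acc = (-2.0 * H * dDdot + 2.0 * epsdot * Ddot
           + fourpiGrho * (dD + 3.0 * D * D))
    return [dDdot, acc]

sol = solve_ivp(rhs, [1e-6, 1.0], [0.0, 0.0], rtol=1e-10, atol=1e-14)
dD = sol.y[0, -1]                      # response at t = 1, where D_+ = 1
print(dD, 13.0 / 7.0)                  # -> ~1.857, i.e. dD = (13/7) D_+^2 eps0
```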
The comparison with Eq. (8) gives, after writing the correlations in terms of Fourier-space polyspectra, Eq. (37), where Ω_{k′} is the unit vector along the direction of k′ and δ^K_{i,j} the Kronecker symbol. The subscript k′ → 0 recalls that this relation only gives the leading-order term in the large-scale limit k′ → 0, whereas the wave numbers {k_1, .., k_n} are fixed and may be within the nonlinear regime. Here we denoted with a prime the reduced polyspectra, defined as
$$\langle\tilde\delta_s(\mathbf{k}_1)\cdots\tilde\delta_s(\mathbf{k}_n)\rangle = \langle\tilde\delta_s(\mathbf{k}_1)\cdots\tilde\delta_s(\mathbf{k}_n)\rangle'\;\delta_D(\mathbf{k}_1+\cdots+\mathbf{k}_n), \tag{38}$$
where we explicitly factor out the Dirac factor associated with statistical homogeneity. In particular, this means that ⟨δ̃_s(k_1) ⋯ δ̃_s(k_n)⟩′ can be written as a function of the n − 1 wave numbers {k_1, .., k_{n−1}} only.
On large scales we recover the linear theory [4,35],
$$\tilde\delta_s(\mathbf{k}',t) \rightarrow D_+(t)\,[1+f(t)\,\mu'^2]\,\tilde\delta_{L0}(\mathbf{k}')\quad\mathrm{for}\quad k'\to 0, \tag{39}$$
where µ′ = k′_r/k′ is the cosine of the wave number k′ with the line of sight. Therefore, Eq. (37) also gives Eq. (40). When all times are equal, t′ = t_1 = .. = t_n ≡ t, this simplifies as Eq. (41). The lowest-order equal-time consistency relation obtained from Eq. (41) corresponds to n = 2, that is, the bispectrum built from the correlation between two small-scale modes and one large-scale mode. We define the bispectrum as in Eq. (38),
$$\langle\tilde\delta_s(\mathbf{k}_1)\,\tilde\delta_s(\mathbf{k}_2)\,\tilde\delta_s(\mathbf{k}_3)\rangle = B_s(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}_3)\;\delta_D(\mathbf{k}_1+\mathbf{k}_2+\mathbf{k}_3). \tag{42}$$
In contrast with the real-space bispectrum, B(k_1, k_2, k_3), which only depends on the lengths of the three wave numbers {k_1, k_2, k_3} thanks to statistical isotropy, the redshift-space bispectrum also depends on angles because the velocity component along the line of sight breaks the isotropy. Then, Eq. (41) yields Eq. (43). Here we used the symmetries of the redshift-space power spectrum to write P_s(k) as a function of k and µ². In Eq. (43), the power spectrum is written as a function of time through the functions D_+ and f, that is,
$$P_s(k,\mu^2;t) = P_s[k,\mu^2;\,D_+(t),\,f(t)]. \tag{44}$$
In particular, in the linear regime we have the well-known expression
$$P_{s,L}(k,\mu^2;t) = D_+(t)^2\,[1+f(t)\,\mu^2]^2\,P_{L0}(k), \tag{45}$$
where P_L0 is the linear real-space power spectrum today. When f = 0, the relation (43) recovers the real-space consistency relation, as it should.
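For reference, Eq. (45) is trivial to tabulate. The snippet below is our own helper, with a hypothetical power-law P_L0 as a stand-in for the linear spectrum (not the ΛCDM spectrum used in the paper); it is reused conceptually by the multipole example further down.

```python
import numpy as np

def P_s_linear(k, mu, D, f, P_L0):
    """Linear Kaiser spectrum of Eq. (45): P_s = D_+^2 (1 + f mu^2)^2 P_L0(k).
    Its exact multipoles are P0/P = 1 + 2f/3 + f^2/5, P2/P = 4f/3 + 4f^2/7,
    P4/P = 8f^2/35 (with P = D_+^2 P_L0)."""
    return D**2 * (1.0 + f * mu**2)**2 * P_L0(k)

P_L0 = lambda k: k**-1.5          # arbitrary power-law stand-in
print(P_s_linear(0.1, 0.5, 1.0, 0.5, P_L0))
```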
B. Multipole expansion
The consistency relation (43) is written for a given value of k and µ. In practice, rather than considering the redshift-space power spectrum over a grid of µ, one often expands the dependence on µ over Legendre polynomials. Thus, we write the nonlinear redshift-space power spectrum as
$$P_s(k,\mu^2;t) = \sum_{\ell=0}^{\infty} P_s^{2\ell}(k;t)\;\mathcal{L}_{2\ell}(\mu), \tag{46}$$
where L_ℓ(µ) is the Legendre polynomial of order ℓ. Only even orders contribute to this expansion because P_s is an even function of µ. Substituting into Eq. (43), we obtain Eq. (47). For the first two multipoles, 2ℓ = 0 and 2ℓ = 2, this yields Eqs. (48) and (49).

C. f-derivative

In practice, we cannot directly measure the derivative with respect to f of the redshift-space power spectrum, because the time derivative combines the derivatives with respect to D_+ and f. Therefore, the expression (43) can only be applied to analytical models, where the dependences on D_+ and f are explicitly known. To obtain an expression that can be applied to numerical or observed power spectra, we must write the derivative with respect to f in terms of observed time or space coordinates. Since the redshift-space power spectrum must coincide with the real-space power spectrum when either f or µ² vanishes, each factor f (resp. µ²) must appear in combination with a power of µ² (resp. f). Here we make the ansatz that the dependence on f and µ² only appears through the combination fµ², which is exact at the linear order (45) (but at higher orders terms of the form fµ⁴, fµ⁶, ..., might appear). This gives
$$f\,\frac{\partial P_s}{\partial f} = \mu^2\,\frac{\partial P_s}{\partial\mu^2}. \tag{50}$$
This allows us to write Eq. (43) as Eq. (51). In practice, we only measure the dependence of the power spectrum with respect to time t, or scale factor a(t), and wave number coordinates {k, µ}. Then, writing ∂/∂t = Ḋ_+ ∂/∂D_+ + ḟ ∂/∂f, and using Eq. (50), we obtain
$$\dot D_+\,\frac{\partial P_s}{\partial D_+} = \frac{\partial P_s}{\partial t} - \dot f\,\frac{\mu^2}{f}\,\frac{\partial P_s}{\partial\mu^2}, \tag{52}$$
which gives Eq. (53). Using the approximation Ω_m/f² ≃ 1, we might simplify Eq. (53) by writing ḟ ≃ (Ḋ_+/D_+)[−2 + f/2 + 3f²/2]. However, this introduces an additional source of error, and at redshift z = 0.35, this gives a 15% error on ḟ. We checked numerically that this can lead to violations of the consistency relations by factors as large as 3 or as small as 0.5. Therefore, we keep the expression (53) in the following. [The impact of the approximation Ω_m/f² ≃ 1 is greater on the explicit factor ḟ in Eq. (53) than on the consistency relation itself, which also relied on this approximation, because the factor ḟ is evaluated at the observed redshift, whereas the consistency relation involves the behavior of the growing modes over all previous redshifts, following the growth of density fluctuations, which damps the impact of late-time behaviors.]
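Eqs. (50) and (52) suggest a purely observational estimate of the derivative terms: a finite difference between two snapshots for ∂P_s/∂t and a numerical µ²-derivative for the f-term. The sketch below is our own illustration; the input arrays and names (P1, P2, etc.) are hypothetical and do not come from the paper.

```python
import numpy as np

# P1, P2: hypothetical measurements of P_s(k, mu) on a (k, mu) grid at two
# nearby times t1 < t2; D, Ddot, f, fdot are background quantities at the
# midpoint. The mu grid is assumed strictly increasing.

def dP_dmu2(P, mu):
    """Numerical derivative with respect to mu^2 (mu runs along axis 1)."""
    return np.gradient(P, mu**2, axis=1)

def f_dP_df(P, mu):
    """f dP_s/df via the f*mu^2 ansatz of Eq. (50): equals mu^2 dP_s/dmu^2."""
    return mu**2 * dP_dmu2(P, mu)

def Ddot_dP_dD(P1, P2, t1, t2, mu, f, fdot):
    """Ddot_+ dP_s/dD_+ from Eq. (52): the measurable time derivative minus
    the f-contribution, traded for a mu^2-derivative through Eq. (50)."""
    Pmid = 0.5 * (P1 + P2)
    dP_dt = (P2 - P1) / (t2 - t1)
    return dP_dt - (fdot / f) * mu**2 * dP_dmu2(Pmid, mu)
```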
Multipole expansions
We can again write the relations (51) and (53) in terms of the multipole expansion (46). For the first two multipoles, Eq. (51) leads to Eqs. (54) and (55), while Eq. (53) leads to Eqs. (56) and (57). As compared with Eqs. (48) and (49), these relations involve all multipoles P_s^{2ℓ} in the right-hand sides, because the substitution (50) gives rise to factors µ² ∂/∂µ² rather than the factor (1 − µ²) ∂/∂µ² that appeared in Eq. (43). In practice, it is not possible to measure or compute all multipoles and one must truncate these multipole series at some order ℓ_max. This implies an additional approximation to these relations (54)-(57).
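The multipole projection itself is a one-dimensional quadrature. The helper below is our own sketch using Gauss-Legendre nodes; truncating the returned set at some ℓ_max is exactly the approximation discussed above. As a check, it reproduces the analytic Kaiser multipoles of the linear spectrum (45).

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

def multipoles(P_of_mu, ells=(0, 2, 4), n_gauss=16):
    """P_s^ell(k) = (2 ell + 1)/2 * int_{-1}^{1} dmu L_ell(mu) P_s(k, mu),
    evaluated by Gauss-Legendre quadrature for a callable P_of_mu at fixed k."""
    mu, w = leggauss(n_gauss)
    P = P_of_mu(mu)
    return {l: 0.5 * (2 * l + 1) * np.sum(w * eval_legendre(l, mu) * P)
            for l in ells}

f = 0.5
out = multipoles(lambda mu: (1.0 + f * mu**2)**2)   # Eq. (45) with D_+^2 P_L0 = 1
# Expected: P0 = 1 + 2f/3 + f^2/5, P2 = 4f/3 + 4f^2/7, P4 = 8f^2/35
print(out, 1 + 2*f/3 + f**2/5, 4*f/3 + 4*f**2/7, 8*f**2/35)
```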
VI. EXPLICIT CHECKS
The angular-averaged consistency relations (37)-(41) are valid at all orders of perturbation theory and also beyond the perturbative regime, including shell-crossing effects, within the accuracy of the approximation Ω_m/f² ≃ 1 (and as long as gravity is the dominant process).
We now provide two explicit checks of the angular-averaged consistency relations (37)-(41). First, we check these relations for the lowest-order case n = 2, that is, for the bispectrum, at lowest order of perturbation theory. Second, we present a fully nonlinear and nonperturbative check, for arbitrary n-point polyspectra, in the one-dimensional case.
A. Perturbative check
Here we briefly check the consistency relations for the lowest-order case, n = 2, given by Eq.(43) at equal times, at lowest order of perturbation theory. At this order, the equal-time redshift-space matter density bispectrum reads as [4]

B_s(k₁, k₂, k₃) = 2 Z₁(k₁) Z₁(k₂) Z₂(k₁, k₂) P_L(k₁) P_L(k₂) + 2 perm.,   (58)

where "2 perm." stands for the two other terms obtained from permutations over the indices {1, 2, 3}, and the kernels Z₁ and Z₂ take the standard forms of [4], Z₁(k) = 1 + f µ² and

Z₂(k₁, k₂) = F₂(k₁, k₂) + f µ² G₂(k₁, k₂) + (f µ k/2) [ (µ₁/k₁)(1 + f µ₂²) + (µ₂/k₂)(1 + f µ₁²) ],

where k = k₁ + k₂, µ = (k·ẑ)/k, and µᵢ = (kᵢ·ẑ)/kᵢ. In the small-k′ limit we obtain Eq.(61), with k₁ = k − k′/2 and k₂ = −k − k′/2. Here we used the fact that Z₂(k₁, k₂) vanishes as |k₁ + k₂|² for |k₁ + k₂| → 0, whereas P_L0(k) ∼ k^{n_s} with n_s ≃ 1.
[If this is not the case, that is, if there is very little initial power on large scales, we must go back to the consistency relation in the form of Eq.(37) rather than Eq.(40). However, this is not necessary in realistic models.] Expanding the various terms over k′, substituting into Eq.(61), and integrating over the angles of k′, we obtain the squeezed-limit expression of the bispectrum. On the other hand, the right-hand side of Eq.(43) reads, at the same order over P_L, as Eq.(65). Collecting the various terms, we can check that we recover Eq.(65). Therefore, we have checked the angular-averaged redshift-space consistency relation (41) for the bispectrum, at leading order of perturbation theory, within the approximate symmetry Ω_m/f² ≃ 1 discussed in Sec. III. In this explicit check, the use of this approximate symmetry appears at the level of the expression (58) of the bispectrum, which only involves the linear growing mode D_+ and the factor f as functions of time and cosmology. An exact calculation would give factors that show new but weak dependencies on time and cosmology (and that are unity in the Einstein-de Sitter case) [4]. These deviations from Eq.(58) are usually neglected [for instance, when the cosmological constant is zero, they were shown to be well approximated by factors like (Ω_m^{−2/63} − 1), which are very small over the range of interest [42]].
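For reference, the sketch below transcribes the kernels quoted above (plane-parallel approximation, line of sight along ẑ); the vector conventions and helper names are assumptions of this illustration, not the paper's notation.

```python
import numpy as np

zhat = np.array([0.0, 0.0, 1.0])              # line of sight

def _cos_ratio(k1, k2):
    n1, n2 = np.linalg.norm(k1), np.linalg.norm(k2)
    return np.dot(k1, k2) / (n1 * n2), n1 / n2

def F2(k1, k2):                               # density kernel
    c, r = _cos_ratio(k1, k2)
    return 5.0/7.0 + 0.5 * c * (r + 1.0/r) + 2.0/7.0 * c**2

def G2(k1, k2):                               # velocity-divergence kernel
    c, r = _cos_ratio(k1, k2)
    return 3.0/7.0 + 0.5 * c * (r + 1.0/r) + 4.0/7.0 * c**2

def Z1(k, f):
    return 1.0 + f * (np.dot(k, zhat) / np.linalg.norm(k)) ** 2

def Z2(k1, k2, f):
    k = k1 + k2
    kn = np.linalg.norm(k)
    mu, mu1, mu2 = (np.dot(v, zhat) / np.linalg.norm(v) for v in (k, k1, k2))
    return (F2(k1, k2) + f * mu**2 * G2(k1, k2)
            + 0.5 * f * mu * kn * (mu1 / np.linalg.norm(k1) * (1 + f * mu2**2)
                                   + mu2 / np.linalg.norm(k2) * (1 + f * mu1**2)))

def B_s_tree(k1, k2, k3, f, PL):
    """Tree-level redshift-space matter bispectrum, Eq. (58)."""
    def term(a, b):
        return 2.0 * Z1(a, f) * Z1(b, f) * Z2(a, b, f) * PL(np.linalg.norm(a)) * PL(np.linalg.norm(b))
    return term(k1, k2) + term(k2, k3) + term(k3, k1)
```

One can verify numerically that Z2(k1, k2) → 0 as k1 + k2 → 0, the property used in the squeezed limit above.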
For future use in Sec. VII B 1 below, in terms of the angular monopole and quadrupole, Eq.(65) gives, at lowest order of perturbation theory, Eqs.(67) and (68).

B. Nonperturbative check in one dimension

The explicit check presented in Sec. VI A only applies up to the lowest order of perturbation theory. Because the goal of the consistency relations is precisely to go beyond low-order perturbation theory, it is useful to obtain a fully nonlinear check. This is possible in one dimension, where the Zel'dovich solution [43] becomes exact (before shell crossing) and all quantities can be explicitly computed. Because of the change of dimensionality, we also need to rederive the 1D form of the consistency relations. We present the details of our computations in App. A, and only give the main steps in this section.
In the 1D case, the redshift-space coordinate (5) now reads as s = x + f u, where u is the rescaled peculiar velocity defined in Eq.(A4), in a fashion similar to Eq.(17), and the redshift-space density contrast (6) now takes the form of Eq.(70), where we again discarded a Dirac term that does not contribute for k ≠ 0, and q is the Lagrangian coordinate of the particles. As in the 3D case, to derive the 1D consistency relations we consider two universes with close cosmological parameters and expansion rates, a′(t) = [1 − ε(t)]a(t). Again, from the "1D Friedmann equations" we find that ε(t) = ε₀ D_+(t). Next, a uniform overdensity ∆δ_L0 can be absorbed by a change of frame, with ε₀ = ∆δ_L0. Then, to obtain the consistency relations, we need the impact of the large-scale overdensity ∆δ_L0 on small-scale structures, which at lowest order is given by the dependence of the small-scale density contrast δ̃_s(k) on ε₀. As shown in App. A 2, this reads as Eq.(71). As expected, this takes the same form as the 3D result (34), up to some changes of numerical coefficients. This leads to the equal-time redshift-space consistency relations (72) (see App. A 3). Here we no longer need to average over the directions of the large-scale wave number k′, because at equal times the leading-order contribution associated with the uniform displacement of small-scale structures by large-scale modes vanishes [17-23]. Indeed, because of statistical homogeneity and isotropy, equal-time polyspectra are invariant under uniform translations and cannot probe uniform displacements. Therefore, 1D equal-time statistics directly probe the next-to-leading-order contribution (72), which truly measures the impact of large-scale modes on the growth of small-scale structures.
In the 1D case, the Zel'dovich approximation is exact until shell crossing [26,43], and it yields for the redshift-space nonlinear density contrast (70) the expression (73) (see App. A 4). The expression (73) is exact at all orders of perturbation theory, but it no longer holds after shell crossing (which is a nonperturbative effect). On the other hand, we can define a 1D toy model by setting the particle trajectories equal to Eq.(A17). This system is no longer identified with a 1D gravitational system, and it only coincides with the latter in the perturbative regime, but it remains well defined and given by Eqs.(A17) and (73) in the nonperturbative shell-crossing regime. Then, using the expression (73) we can explicitly check the 1D consistency relations (72). We present in App. A 5 two different checks. First, in App. A 5 a, we check Eq.(71) by explicitly computing the impact on the nonlinear density contrast (73) of a small change ∆δ_L0 of the initial conditions. Second, in App. A 5 b, we directly check the consistency relations (72) by explicitly computing the correlations ⟨δ̃_L(k′)δ̃_s(k₁)..δ̃_s(k_n)⟩′_{k′→0} and ⟨δ̃_s(k₁)..δ̃_s(k_n)⟩′ and verifying that they satisfy Eq.(72).
These two different checks allow us to validate both the reasoning that leads to the consistency relations, through the intermediate result (71), and the final expression of these relations. They also explicitly show that the relations are not restricted to the perturbative regime. In particular, they extend beyond shell crossing, as seen from the toy model defined by the explicit expression (73) (i.e., where one defines the system by the Zel'dovich dynamics, even beyond shell crossing, without further reference to gravity).
As for the real-space consistency relations [26], it happens that in this 1D model (73) the 1D consistency relations (72) are actually exact; that is, they do not rely on the approximation κ ≃ κ₀, where κ, defined in Eq.(A8), plays the role of the 3D factor Ω_m/f² encountered in Eqs.(19)-(21). This is because the redshift-space density contrast (73) truly depends on cosmology and time only through the two factors D_+ and f, even at nonlinear order. In contrast, in the 3D gravitational case, beyond linear order new functions of cosmology and time appear (for cosmologies that depart from the Einstein-de Sitter case) and they can only be reduced to powers of D_+ and f within the approximation Ω_m/f² ≃ 1. On the other hand, if we consider the actual 1D gravitational dynamics even beyond shell crossing, where it deviates from the expression (73), then the 1D consistency relations (72) are only approximate in the nonperturbative regime, as they rely on the approximation κ ≃ κ₀, while remaining exact at all perturbative orders.
Unfortunately, it is not easy to build 3D analytical models that can be explicitly solved and suit our purposes. The 3D Zel'dovich approximation again provides a simple model for the formation of large-scale structures and the cosmic web. However, it cannot suit our purposes because it does not apply to the dynamics of the 3D background universe itself. Indeed, as can be seen from their derivation in Sec. IV, the consistency relations precisely derive from the fact that a large-scale almost uniform density perturbation can be seen as a local change of the cosmological parameters (i.e., the background density). This is also apparent through the fact that the deviation ǫ(t) between the two nearby universes (23) obeys the same evolution equation (16) as the linear growing mode of local density perturbations. This is no longer possible for the 3D Zel'dovich approximation, which is not an exact solution and cannot be extended to the Hubble flow itself. In contrast, in the 1D universe the Zel'dovich approximation is actually exact (before shell crossing) and it applies both at the level of the background and of the density perturbations. An alternative dynamics, which is exact at the background level and provides analytical results on small nonlinear scales, is the spherical collapse model. However, this yields a very different density field than the actual one, as there is a single central density fluctuation that breaks statistical homogeneity and density correlations are no longer invariant through translations. Therefore, although it should be possible to obtain some consistency relations for this model, they would have a rather different form and this 1D spherical model would be even farther from the actual universe than the 1D statistically homogeneous model studied in this section.
VII. SIMULATIONS
The angular-averaged consistency relations (37)-(41) are valid at all orders of perturbation theory and also beyond the perturbative regime, including shell-crossing effects, within the accuracy of the approximation Ω_m/f² ≃ 1 (and as long as gravity is the dominant process). We have explicitly confirmed them either perturbatively or nonperturbatively, but the latter is limited to the one-dimensional case.
It would thus be of great importance to further check these relations in three dimensions nonperturbatively. We here exploit a series of N -body simulations for this purpose. As can be seen in the following, they are also useful to understand the possible breakdown of the relations and test the validity of the ansatz employed in the measurement in practical situations. We first summarize how we can evaluate the derivative terms in the consistency relations. We then present the numerical results for the bispectrum together with a brief description of the simulations themselves.
A. Derivatives from numerical simulations
The consistency relation (43) involves derivatives with respect to D_+ and f. They can be obtained at once within the framework of an explicit analytic model for the matter density polyspectra. However, in this paper we do not use these relations to check a specific analytical model. Instead, we wish to use numerical simulations to test these relations (which are only approximate because of the approximation Ω_m/f² ≃ 1). Nevertheless, we can also measure separately the derivatives with respect to D_+ and f from the simulations.
The redshift-space coordinate s can be written in terms of the comoving coordinate x and peculiar velocity v as in Eq.(30). As explained in Sec. III, within the approximation Ω_m/f² ≃ 1 that is used to derive the consistency relations, all the time dependence can be absorbed into the linear growing mode D_+(t) with the change of variables (17). This means that the fields {δ, u, ϕ} are functions of time only through D_+, as is the displacement field Ψ(q, t) = x − q, where q is the Lagrangian coordinate of the particles. Thus, for a given realization defined by the linear density field δ_L0(q) (normalized today or at the initial time of the simulation), the redshift-space coordinate s depends on the functions D_+(t) and f(t) as in Eq.(74). Therefore, a small change ∆f of the factor f corresponds to a change of the redshift-space coordinate s(q) of the particles given by ∆s = ∆f u_z(q) e_z (Eq. 75), where u_z is the line-of-sight component of the rescaled velocity. On the other hand, from the equations of motion (19)-(21), a change ∆ln D_+ of the linear growing mode leads to a change of the particle velocities and coordinates, ln D_+ → ln D_+ + ∆ln D_+: x → x + ∆ln D_+ u, whence the associated change (77) of the redshift-space coordinates. Thus, to obtain the partial derivative of the power spectrum with respect to f or ln D_+, we modify the particle redshift-space coordinates by Eqs.(75) or (77), for a small value of ∆f or ∆ln D_+, and we compute the associated power spectrum. Taking the difference from the initial power spectrum and dividing by ∆f or ∆ln D_+ gives a numerical estimate of ∂P_s/∂f or ∂P_s/∂ln D_+.
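In code, this procedure amounts to shifting the particle coordinates and re-measuring the spectrum. A sketch of the ∂P_s/∂f case (Eq. 75); here `power_spectrum` stands for any estimator of P_s from redshift-space positions (e.g., CIC assignment followed by an FFT) and, like the array layouts, is a placeholder of this illustration. The ∂/∂ln D_+ case proceeds identically but also shifts the velocities through the potential gradient (Eq. 77).

```python
import numpy as np

def redshift_space_positions(x, u, f, box):
    """Plane-parallel mapping s = x + f * u_z along the z axis."""
    s = x.copy()
    s[:, 2] = (x[:, 2] + f * u[:, 2]) % box   # periodic wrap
    return s

def dPs_df(x, u, f, box, power_spectrum, df=0.01):
    """Finite-difference estimate of dP_s/df from the coordinate shift of Eq. (75)."""
    P0 = power_spectrum(redshift_space_positions(x, u, f, box))
    P1 = power_spectrum(redshift_space_positions(x, u, f + df, box))
    return (P1 - P0) / df
```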
B. Numerical results
We are now in a position to present the consistency relations measured from simulations. Before that, let us briefly describe the simulations used here. They are the ones performed in [9]. Employing 1024³ dark matter particles in a periodic cube of (2048 h⁻¹ Mpc)³, the gravitational dynamics was solved with the public simulation code Gadget2 [44], starting from initial conditions set at z = 15 using second-order Lagrangian perturbation theory [39,45,46]. The cosmological model was a flat ΛCDM model consistent with the five-year observations of the WMAP satellite [47]: Ω_m = 0.279, Ω_b = 0.165 Ω_m, h = 0.701, A_s = 2.49 × 10⁻⁹ and n_s = 0.96 at k₀ = 0.002 Mpc⁻¹. This whole process was repeated 60 times, with the initial random phases varied, to obtain a large ensemble of random realizations.
The consistency relations have already been examined and presented in real space in [28]. There it was found that the relation was recovered within the numerical accuracy at z = 1, while a discrepancy at the level of several percent was found at z = 0.35. It was further argued that this is presumably due to the breakdown of the approximation Ω_m/f² ≃ 1; we could indeed confirm that the relations hold better in supplementary simulations performed with an EdS background but with exactly the same initial perturbations. We focus here on the lower redshift, z = 0.35, at which the consistency relations are the most nontrivial.
1. Full consistency relations
We first consider the redshift-space consistency relations in their full form (48)-(49), with both derivative operators ∂/∂D_+ and ∂/∂f.
The methods we employ to measure the derivative terms were presented in the previous subsection; otherwise, the post-processing of the simulation outputs is exactly the same as in [28], except that we now consider the particle positions in redshift space. The matter density field is constructed with the Cloud-in-Cells (CIC) interpolation on 1024³ mesh cells, and the subsequent computations are based on the fast Fourier transform. The change in the particle coordinates corresponding to a slight change in ln D_+ is also computed from ∂φ/∂r evaluated on the same mesh cells and then interpolated to the positions of the particles using the CIC kernel (see Eq. 77).
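For completeness, a one-dimensional sketch of the CIC assignment used to build the density field (the paper's analysis is three-dimensional, on 1024³ cells; the 1D reduction here is only for brevity):

```python
import numpy as np

def cic_density_contrast(pos, n_mesh, box):
    """Cloud-in-Cells assignment of particle positions to a periodic mesh (1D version)."""
    xg = pos / box * n_mesh                   # positions in grid units
    i = np.floor(xg).astype(int)
    w = xg - i                                # distance to the left grid point
    counts = np.zeros(n_mesh)
    np.add.at(counts, i % n_mesh, 1.0 - w)    # left-cell weight
    np.add.at(counts, (i + 1) % n_mesh, w)    # right-cell weight
    return counts / (len(pos) / n_mesh) - 1.0 # density contrast

# Subsequent computations are based on the FFT, e.g.:
# delta_k = np.fft.rfft(cic_density_contrast(pos, 1024, box))
```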
The monopole and quadrupole moments of the relation for the bispectra, Eqs.(48) and (49), are shown in the left and right panels of Fig. 1, respectively. In each panel, we fix the value of the hard wave mode k and plot the ratio of the two sides as a function of the soft mode k′. The error bars are estimated from the scatter among the 60 independent realizations. They thus correspond to the error level expected for an ideal survey with a volume of ∼ 8 h⁻³ Gpc³, when the shot-noise contamination can be ignored. Overall, the ratio is close to unity for both the monopole and the quadrupole. From this figure, we basically confirm the relations at the nonperturbative level in the three-dimensional dynamics.
The dashed lines in Fig. 1 show the ratio of the measured bispectrum to its tree-order predictions (67) and (68). For the monopole, this lowest-order perturbative prediction fares reasonably well, as it only underestimates the nonlinear results by 30% on these scales. However, it is already less accurate than our result (48), which takes into account higher-order and nonperturbative nonlinear corrections (at the price of the approximation Ω_m/f² ≃ 1). For the quadrupole, the lowest-order perturbative prediction does not appear in the panels at k ≥ 0.4 hMpc⁻¹ because in these cases it is out of range and actually gives the wrong sign. This change of sign is likely due to the fingers-of-god effect, which is not captured by perturbation theory. Indeed, it is well known that higher-order multipoles are increasingly sensitive to small-scale nonlinear contributions, as fingers-of-god effects impart a strong angular dependence to the bispectrum [32]. In contrast, our result (49) remains consistent with the simulation data within 20%. This shows that we test the consistency relations in a nontrivial regime, beyond the reach of standard perturbation theory. Thus, the trade-off between the error introduced by the approximate symmetry of Sec. III and the advantage of taking into account all nonlinear contributions, at both perturbative and nonperturbative levels, is beneficial. This is particularly true for complex statistics, such as the redshift-space quadrupole, that are very sensitive to small-scale, highly nonlinear effects, which are difficult to include in analytical modeling.
However, a closer look at each panel shows that the data points are slightly off from unity. For the monopole moment, the ratio tends to be larger than unity at k ≳ 0.4 hMpc⁻¹. On the other hand, unity is within the statistical error level for the quadrupole moment, though the central values are larger (smaller) than unity at k ≲ 0.4 hMpc⁻¹ (k ≳ 0.8 hMpc⁻¹). In most cases, the deviation from unity is at most 20%, and it is significant only when the ratio is measured very precisely; an ideal survey with a volume of ∼ 8 h⁻³ Gpc³ can detect the deviation from unity only for the monopole moment on small scales.
These deviations are somewhat greater than those found in [28] in real space, where they only reached 7% at k = 1 hMpc⁻¹. This is not surprising, because it is well known that redshift-space statistics are more sensitive to small nonlinear scales, for instance through the fingers-of-god effect, and low-order perturbation theory has a smaller range of validity. We can then expect a greater violation of the redshift-space consistency relations, because the breakdown of the approximation Ω_m/f² ≃ 1 has a stronger impact on higher perturbative orders. Indeed, absorbing the time and cosmological dependence into D_+(t) and f(t) is exact at linear order, whereas higher orders involve new functions of time and cosmology. We then turn to the supplementary simulations performed with an EdS background, to understand the cause of this small discrepancy, just as in our previous real-space paper [28]. Note that our consistency relations in an Einstein-de Sitter cosmology also involve the approximate symmetry described in Sec. III, even though Ω_m/f² = 1 in the EdS background. Indeed, what matters is not that Ω_m/f² be unity in the reference cosmology, but that Ω_m/f² remain (approximately) constant as we vary the background curvature around the reference cosmology. Nevertheless, the comparison between EdS and ΛCDM results provides a simple estimate of the impact of our approximation, because the difference between these two cosmologies arises from the change of reference point along the Ω_m/f² curve.
The results from four realizations of such simulations are shown in Fig. 2. Although the scatter of the data points is larger than in Fig. 1, the systematic departure from unity seen in the previous figure is clearly reduced. We thus conclude that the small violation of the consistency relations for the bispectrum can be explained by the breakdown of the approximation Ω_m/f² ≃ 1 (more precisely, of a constant Ω_m/f² for nearby background curvatures), in agreement with the discussion above.

2. f µ² ansatz

Now, we come back to the original ΛCDM simulations and apply the ansatz that all the f and µ dependences appear through the combination f µ². This allows us to replace the f-derivatives, ∂/∂f, by µ²-derivatives, ∂/∂µ², as in Eq.(50). This gives the approximate consistency relations for the bispectrum, Eqs.(54) and (55), for the monopole and quadrupole moments respectively, which we plot in Fig. 3. In contrast with the exact form of the consistency relations, given by Eqs.(48) and (49) and displayed in Fig. 1, the right-hand side now involves an infinite summation over all Legendre multipoles of the redshift-space power spectrum. Here, we truncate these series at order P^s_2 (crosses) or P^s_4 (pluses). The difference between the two symbols is negligible at k = 0.2 and 0.4 hMpc⁻¹ for the monopole and at k = 0.2 hMpc⁻¹ for the quadrupole moment, where the ratio itself is roughly consistent with unity. As we move to smaller scales, the two symbols become more distinct. In those cases, adding the higher-order term (i.e., P^s_4) does not help to restore the relations, suggesting that the ansatz is not a good approximation on the corresponding scales. The plot suggests that the quadrupole moment is more sensitive to the higher-order term, and thus the ansatz works less accurately than for the monopole moment. This is naturally expected, since the quadrupole moment is impacted more strongly by higher-order corrections [see, e.g., [48], which shows how much higher-order perturbative corrections affect the first two moments; these corrections contain terms µ^{2m} f^n, where m and n can differ].
3. f µ² ansatz and further reduction to ∂/∂a operator

The situation is basically the same after we further apply the ansatz to replace the derivative with respect to D_+ by a derivative with respect to time or the scale factor, as in Eq.(52). As compared with the form displayed in Fig. 3, this involves an additional approximation, which relies on the same f µ²-ansatz, because the full time derivative, or scale-factor derivative ∂/∂a, combines both theoretical derivatives ∂/∂D_+ and ∂/∂f. Therefore, to replace the operator ∂/∂D_+ by ∂/∂a we must once again use the f µ²-ansatz to remove the new ∂/∂f terms generated by the change of variable from D_+ to a. Figure 4 shows the results for Eqs.(56) and (57). The plus symbols are now out of the plotted range for the quadrupole moment on small scales. The relation for the monopole moment is more robust against this approximation ansatz on large scales, especially at k = 0.2 hMpc⁻¹, and we can safely apply the consistency relation here in the simplified form (56). Except for this case, the ratio is affected significantly by the ansatz and by the order at which we truncate the infinite summation on the right-hand side. Nevertheless, we note that by truncating at order P^s_2 we obtain a good agreement, better than 20% up to k = 1 hMpc⁻¹, for the monopole. For the quadrupole, the deviation can reach up to 40%.
Therefore, even with the current ansatz, we can still examine how the ratio behaves in observations and compare it with the simulation results. Since the data points obtained with the truncation at order P^s_2 (i.e., the cross symbols) are less noisy and, moreover, stay around unity after applying the ansatz, the easiest check of the true gravitational dynamics is to apply the same ansatz and truncate the moments at this order. We would need a more involved ansatz for the estimation of the derivative terms from observations to extend the applicable range of these consistency relations, and we leave this to a future study.
VIII. SUMMARY
In this paper, we have generalized the equal-time angular-averaged consistency relations for the cosmic density field, originally developed in real space in [26], to redshift space, where the actual observations take place. These relations express the squeezed limit of (n+1)-point correlation functions or polyspectra, with n small-scale modes (that can be in the nonlinear regime) and one large-scale mode (in the linear regime, at a much larger scale than all other n wave numbers), in terms of the n-point correlation of the small-scale modes. These relations can be generalized to (n+ℓ)-point correlations, with ℓ large-scale modes, as in [26], but we focused here on the case of one large-scale mode. The explicit forms that we have obtained rely on an approximate symmetry of the dynamics, Ω_m/f² ≃ 1. However, within this approximation they are valid at a fully nonlinear level. Thus, they hold at all orders of perturbation theory and also in the nonperturbative regime, beyond shell crossing. In particular, they include both the large-scale Kaiser effect [35], associated with the infall of matter into large-scale gravitational wells, and the fingers-of-god effect [34], associated with the virial motions inside collapsed halos.
We have found that, because the mapping from real to redshift space involves the velocity component along the radial direction, the form of these consistency relations is slightly more complex than in real space, as it involves two types of time derivatives. The first is a derivative with respect to the linear growing mode D_+(t), which also appeared in the real-space case. The second is a derivative with respect to the linear growth rate, f(t) = dln D_+/dln a. This differential operator, ∂/∂f, did not appear in the real-space case; it arises from the scaling of the peculiar velocity field (i.e., through the change of variable from v to u, where u is the rescaled velocity field that makes use of the approximate symmetry of the dynamics). This feature makes it more difficult to use these relations for observations, because at best we can only measure one time derivative, ∂/∂t, which combines both ∂/∂D_+ and ∂/∂f, and we cannot measure these two derivatives separately. However, these relations can still be used to check analytical models or numerical simulations, where we can explicitly compute these two derivatives.
Next, we have tested these consistency relations both analytically and numerically. First, at leading order of perturbation theory, we have checked the lowest-order consistency relation, which expresses the squeezed limit of the bispectrum in terms of the nonlinear power spectrum of the small-scale modes. Second, in a fully nonlinear and nonperturbative analysis, we have checked all these consistency relations at all orders, in the simpler one-dimensional case, where we can use the exact Zel'dovich solution of the dynamics.
We have also tested the lowest-order consistency relations, relating the nonlinear bispectrum and power spectrum, with numerical simulations. We find a reasonably good agreement at z = 0.35. Projecting the angular dependence of the redshift-space polyspectra onto Legendre polynomials, we find a good agreement for the monopole up to k ≲ 0.4 hMpc⁻¹ and we detect a small deviation, of at most 20%, for k ≤ 1 hMpc⁻¹. For the quadrupole, we do not detect significant deviations (but the statistical error bars are slightly larger). In the case of an Einstein-de Sitter cosmology, we find that these deviations are greatly reduced and our numerical data agree with the theoretical predictions. Therefore, the small deviations found in the ΛCDM cosmology can be explained by the finite accuracy of the approximation Ω_m/f² ≃ 1.
The typical magnitude of these deviations is larger, and extends over a wider wave number range, than for the real-space consistency relations [28]. This is consistent with the observation that the nonlinearity of the cosmic velocity field is more sensitive to the local nonlinear structure on small scales, as small-scale effects can easily propagate to larger scales through the nonlinear mapping from real to redshift space. Indeed, it is well known that the perturbation-theory prediction of the matter power spectrum is more difficult in redshift space [4]. Then, because nonlinear effects are likely to amplify the breakdown of the approximation Ω_m/f² ≃ 1, violations of the consistency relations due to the breakdown of this approximate symmetry are indeed expected to be greater in redshift space.
On the other hand, we find that our results for the bispectrum provide a significant improvement over lowest-order perturbation theory, especially for the quadrupole, where the perturbative prediction even gives the wrong sign for k ≥ 0.4 hMpc⁻¹. This is a signature of the strong impact of small-scale nonlinearities on redshift-space statistics, which are usually difficult to model analytically. It shows that we test the consistency relations in a nontrivial regime, beyond low-order perturbation theory. It also shows the interest of these nonlinear relations, as the inaccuracy introduced by the approximate symmetry Ω_m/f² ≃ 1 is more than compensated by taking into account higher-order and nonperturbative nonlinear contributions. This can be even more beneficial for statistics, such as the redshift-space quadrupole, that are sensitive to highly nonlinear effects that are difficult to model.
To make the connection with observations, or to simplify the form of these consistency relations, we also tested a simple ansatz that allows us to remove the new operator ∂/∂f. It relies on the approximation that f and µ² only enter the redshift-space power spectrum through the combination f µ² (this is exact at linear order, in the Kaiser effect). A first step allows us to remove the operator ∂/∂f, which leaves only the operators ∂/∂D_+ and ∂/∂k in multipole space. The drawback is that the right-hand side of each consistency relation now involves an infinite series over multipoles of all orders. We find that this approximation gives rise to an additional source of discrepancy between the numerical data and the analytic predictions, especially for the quadrupole. Moreover, the result depends on the order at which we truncate the multipole series on the right-hand side. It turns out that better results are obtained when we truncate at the lowest order, P^s_2. This suggests that the f µ²-ansatz does not faithfully describe higher perturbative or nonperturbative orders.
In a second step, we use the f µ²-ansatz once more to replace the operator ∂/∂D_+ by the full time derivative, or scale-factor derivative, ∂/∂a. As could be expected, we find that this further increases the deviation from the numerical simulation data and the dependence on the truncation order, especially for the quadrupole.
Nevertheless, we find that, using the f µ²-ansatz and truncating at order P^s_2, we obtain an agreement better than 20% up to k = 1 hMpc⁻¹ for the monopole. For the quadrupole the deviation can reach 40%. Although these limitations make the accessible range of these consistency relations rather narrow, we can still make use of them to predict higher-order polyspectra. There has been substantial recent progress on redshift-space clustering, but the calculations are mostly limited to the power spectrum (e.g., [33,48-52]). Using the relations developed here, one can compute, for instance, the angular-averaged bispectrum in redshift space by substituting these formulae for the power spectrum. Since the relations approximately hold down to very small scales, albeit not perfectly, they can be useful in estimating the covariance matrices of redshift-space observables [we need the trispectrum to compute the covariance matrix of the power spectrum]. Indeed, the accuracy required for the covariance matrices might not be as demanding as that for the spectra themselves. A study along this line is under way, and we plan to present the results elsewhere in the near future.
A more complex issue is the problem of biasing, when we wish to connect measurements from galaxy surveys with theoretical predictions. In principle, the approximate symmetry Ω_m/f² ≃ 1 that we used to obtain explicit expressions no longer applies once we take into account galaxy formation physics. Indeed, baryonic processes (cooling, star formation, ...) involve new characteristic scales that explicitly break the symmetry of the dynamics. Then, a priori, it is no longer possible to absorb the time dependence of the dynamics by a simple rescaling that only involves the linear growing mode. Therefore, the relations we have obtained are not guaranteed to apply to the galaxy density field itself through the naive replacement δ → δ_g. One should rather use these relations as constraints on the matter density field and, given a supplementary model that relates the galaxy field to the dark matter density field, derive the consequences for the galaxy density field. Of course, this would depend on the model used to describe galaxy formation and would introduce an additional approximation. We leave such a study for future works.
Appendix A

Here we generalize the 1D gravitational dynamics to the case of a time-dependent Newton's constant G(t). This allows us to obtain ever-expanding cosmologies, similar to the 3D Einstein-de Sitter cosmology, for the power-law cases G(t) ∝ t^α with −2 < α < −1 [and a(t) ∝ t^{α+2}]. Linearizing these equations, we obtain the evolution equation of the linear modes of the density contrast. It takes the same form as the usual 3D equation (16), D̈ + 2H(t)Ḋ − 4πG(t)ρ(t)D = 0, but with a time-dependent Newton's constant and the 1D scale factor a(t).
In a fashion similar to the change of variables (17), we make the change of variables (A4) and obtain the rescaled equations of motion (A5)-(A7), where we introduced the factor κ(t) defined by Eq.(A8). Thus, κ(t) plays the role of the ratio 3Ω_m/(2f²) encountered in the 3D case in Eqs.(19)-(21). Then, the 3D approximation Ω_m/f² ≃ 1 used in the main text corresponds in our 1D toy model to the approximation κ ≃ κ₀. That is, we neglect the dependence of κ on the cosmological parameters and time, and the dependence on the background is fully contained in the change of variables (A4). [The generalization to the case of a time-dependent Newton's constant is not important at a formal level, because it does not modify the form of the equations of motion. However, it is necessary for this approximate symmetry to make practical sense, so that we can find a regime where κ is approximately constant. This corresponds to cosmologies close to the Einstein-de Sitter-like expansion a(t) ∝ t^{α+2}, in the case G(t) ∝ t^α with −2 < α < −1.] The fluid equations (A5)-(A7) only apply to the single-stream regime, but we can again go beyond shell crossings by using the equation of motion of the trajectories, the 1D version of Eq.(22), where ϕ is the rescaled gravitational potential (A7); it explicitly shows that particle trajectories obey the same approximate symmetry, before and after shell crossings.
Next, the change of frame described in Eq.(26) becomes Eq.(A10), and at linear order over both δ and ε we have δ_L = δ′_L + ε. This means that the background density perturbation ε is again absorbed by the change of frame, with ε₀ = ∆δ_L0. The redshift-space coordinate s transforms accordingly. Then, as in Eq.(29), the redshift-space density contrast in the actual unprimed frame, with the uniform overdensity ∆δ_L0, can be written down, where we disregarded the Dirac factor that does not contribute for wave numbers k ≠ 0. This yields the derivative of the redshift-space density contrast with respect to ε₀. As shown in [26], the derivative of the linear growing mode is ∂D_+/∂ε₀ = D_+², which means that D′_+ = D_+ + ε₀ D_+². Then, using a′ = a − ε₀ D_+ a and the definition (A4) for f and f′, we obtain f′ = f + f(ε + ε̇/H), whence Eq.(71) follows.
4. Zel'dovich solution
In the 1D case, the Zel'dovich approximation is exact until shell crossing [26,43]. It amounts to taking for the particle trajectories the linear prediction (A17), x(q, t) = q + D_+(t) Ψ_L0(q), where Ψ_L0 is the linear displacement field normalized today. Therefore, the redshift-space coordinate (69) reads as s = q + (1 + f) D_+ Ψ_L0(q) (using v = aẋ), and the redshift-space nonlinear density contrast (70) takes the form of Eq.(73).
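A direct numerical evaluation of the 1D Zel'dovich redshift-space density contrast is sketched below; `psi0` is the linear displacement Ψ_L0 sampled on a Lagrangian grid, k must be a multiple of 2π/box, and the overall Fourier normalization convention is left aside.

```python
import numpy as np

def delta_s_zeldovich_1d(psi0, box, D, f, k):
    """1D Zel'dovich redshift-space density contrast (Eq. 73, up to normalization):
    s(q) = q + (1 + f) * D * psi0(q); delta_s(k) ~ int dq e^{-i k s(q)} for k != 0."""
    n = len(psi0)
    q = (np.arange(n) + 0.5) * box / n        # Lagrangian coordinates
    s = q + (1.0 + f) * D * psi0
    return np.mean(np.exp(-1j * k * s))       # Riemann sum over q
```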
5. Check of the 1D consistency relations
a. Impact of a large-scale perturbation on the nonlinear redshift-space density contrast

To check the validity of the 1D consistency relations from the exact solution (73), we simply need the change of the nonlinear redshift-space density contrast δ̃_s(k) when we make a small perturbation ∆δ_L0 to the initial conditions on much larger scales. Here we also restrict ourselves to even perturbations, ∆δ̃_L0(−k′) = ∆δ̃_L0(k′), as the consistency relations studied in this paper apply to spherically averaged statistics, which correspond to the ±k′ averages in the 1D relations (A15)-(A16). Then, expanding Eq.(73) up to first order over ∆δ_L0, and over powers of k′, we obtain Eq.(A20). Here the limit k′ → 0 means that we consider a perturbation of the initial conditions ∆δ̃_L0(k′) that is restricted to low wave numbers, k′ < Λ, with a cutoff Λ that goes to zero (i.e., that is much smaller than the wave numbers k and 2π/q of interest). On the other hand, from the expression (73) we obtain at once the exact result (A21). The comparison with Eq.(A20) gives, for k′ → 0, ∆δ̃_s(k) = D_+ ∫dk′ ∆δ̃_L0(k′) ∂δ̃_s(k)/∂ln D_+, that is, Eq.(A22). The consistency relations (A15)-(A16) and (72) only rely on the expression (71), which also reads (at linear order over ε₀) ∆δ̃_s(k) = ε₀ D_+ ∂δ̃_s(k)/∂ln D_+. Since we have ε₀ = ∆δ_L0 = ∫dk′ ∆δ̃_L0(k′), we recover Eq.(A22). This provides an explicit check of Eq.(71), hence of the 1D consistency relations.
b. Explicit check on the redshift-space density polyspectra

Instead of looking for the impact of a large-scale linear perturbation on the nonlinear density contrast, as in Sec. A 5 a, we can directly check the consistency relations in their forms (A15) or (72). Considering for simplicity the equal-time polyspectra (72), we define the mixed polyspectra, formed by one linear density contrast and n nonlinear redshift-space density contrasts, E^s_n(k′; k₁, .., k_n; t) ≡ ⟨δ̃_L(k′, t) δ̃_s(k₁, t) .. δ̃_s(k_n, t)⟩. Making the changes of variables q₁ = q′₁ + q_n, .., q_{n−1} = q′_{n−1} + q_n, the argument of the last exponential does not depend on q_n. Then, the integration over q_n yields a Dirac factor δ_D(k′ + k₁ + .. + k_n), which we factor out by defining E^s_n = E^{s′}_n δ_D(k′ + k₁ + .. + k_n), with a primed notation as in Eq.(38), and we replace k_n by −(k′ + k₁ + .. + k_{n−1}). Finally, in the limit k′ → 0, we expand the terms e^{−ik′ q_j} up to first order over k′ and obtain the k′ → 0 limit of E^{s′}_n, whose prefactor is P_L(k′)(1 + f). Then, comparison with Eq.(A26) explicitly shows that the expected relation holds, and we recover the consistency relation (72).
"year": 2015,
"sha1": "05e0b2aca7629d6f52bd22728d3ec1e43879ce71",
"oa_license": null,
"oa_url": "https://hal-insu.archives-ouvertes.fr/insu-03644665/file/PhysRevD.92.123510.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "05e0b2aca7629d6f52bd22728d3ec1e43879ce71",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Synthesis, Structure and Photochemistry of Dibenzylidenecyclobutanones
A series of symmetrical dibenzylidene derivatives of cyclobutanone were synthesized with the goal of studying the physicochemical properties of cross-conjugated dienones (ketocyanine dyes). The structures of the products were established and studied by X-ray diffraction and by NMR and electronic spectroscopy. All the products had E,E-geometry. The oxidation and reduction potentials of the dienones were determined by cyclic voltammetry. The potentials were shown to depend on the nature, position, and number of substituents in the benzene rings. A linear correlation was found between the difference of the electrochemical oxidation and reduction potentials and the energy of the long-wavelength absorption maximum. This correlation can be employed to analyze the properties of other compounds of this type. Quantum chemistry was used to explain the observed regularities in the electrochemistry, absorption, and fluorescence of the dyes. The results are in good agreement with the experimental redox potentials and spectroscopy data.
Introduction
The enormous synthetic potential of the carbon-carbon double bond conjugated to a carbonyl group has long been successfully exploited in organic chemistry. A typical example is the Michael reaction and its numerous varieties [1]. The introduction of two double bonds in conjugation with a keto group enables some additional reactions [2]. These compounds, called cross-conjugated dienones, ketocyanine dyes, or diarylidene ketone derivatives, attract researchers' attention for their versatile synthetic chemistry and extensive applicability, first of all in biology and medicine [3-5]. One more potential application of cross-conjugated dienones is the design of photoactive materials [6-9].
The known methods for the synthesis of these compounds have been developed primarily for dienones derived from cyclopentanone or cyclohexanone and various aromatic aldehydes. Their homologues with larger or smaller rings have rarely been addressed, which may be attributable to the poor availability of the corresponding cycloalkanones as well as to side reactions decreasing the yield of the target compound. The first attempts to synthesize a diarylidene derivative of cyclobutanone through aldol condensation, catalyzed by 60% KOH in ethanol, were described in [10]. However, it was shown later that the dienone obtained in that paper was a dimer of 2,4-dibenzylidenecyclobutanone: upon the action of the hydroxide, a fast dimerization of the resulting dienone takes place. It was found that the susceptibility to dimerization increases with the temperature and the basicity of the reaction mixture [11,12]. Another possible reason for the low reaction yield can be side processes arising from the high reactivity of cyclobutanone, which is susceptible to self-condensation in basic media [13,14]. Therefore, only single examples of 2,4-dibenzylidenecyclobutanones have been characterized in the literature [11-15]. Meanwhile, cross-conjugated dienones derived from cyclobutanone have recently attracted attention as sensitizers for the generation of singlet oxygen in the photodynamic therapy of cancer [16,17].
Apart from participating in additional reactions, the double bonds are also responsible for two important properties of dienones. First, a prominent feature of dienones is the existence of E- and Z-isomers, which can be interconverted under the action of various stimuli such as light, acids, or transition metals [18-22]. Most often, the E,E-isomer is the most stable in the series of cyclopentanone and cyclohexanone derivatives. The proportion of E,Z- and Z,Z-isomers increases with increasing ring size [3,20]. For cyclobutanone-derived dienones, information of this type is scarce.
One more feature of this class of compounds is their ability to undergo the [2 + 2]-photocycloaddition (PCA) reaction [20,23,24]. In the case of free dienones, this reaction can proceed both in the crystal and in solution. The possibility and the stereoselectivity of PCA can be controlled by the supramolecular preorganization of the double bonds, which may ensure the most appropriate geometry for the preceding dimer pair [25-27]. In the case of crystalline pyridine derivatives, such reactions can be accomplished using supramolecular templating (initiation) by metal complexes [28], resorcinol (pyridine-containing monoenones and acyclic dienones [29]), or silver ions (only pyridine-containing acyclic dienones were studied) [30,31]. The possibility of PCA for 2,4-dibenzylidenecyclobutanones has not been discussed in the literature.
With the purpose of developing photoswitchable supramolecular systems, we started a comprehensive study of cross-conjugated dienones containing crown-ether moieties as ionophore substituents [32,33]. This research is meant to design hybrid molecules combining two functional moieties: an ionophore able to bind metal and ammonium cations and guest molecules, and a photoswitchable moiety needed for controlled binding, using light as an energy source.
Gaining a more in-depth understanding of the involved photochemical transformations required a detailed investigation of the properties and physicochemical characteristics of the model compounds. In addition, it was necessary to elucidate the possibility of a PCA reaction in the crystal without supramolecular preorganization of the reacting double bonds.
Here, we synthesized and studied dienones 1a-f, differing in the nature and number of alkoxy, alkylthio, and dialkylamino substituents ( Figure 1). The structures in question are chromophore models of crown ethers and aza/thia-crown ethers. Cyclobutanone was chosen as the central moiety as an analog of cyclopentanone derivatives, such as 2, and cyclohexanone derivatives, addressed in our previous studies [34,35].
The structures of compounds were determined by X-ray diffraction analysis and by NMR and electronic spectroscopy. X-ray diffraction analysis was used to find out whether the arrangement of double bonds of neighboring molecules is favorable for the PCA to occur in the crystal without a supramolecular effect. Cyclic voltammetry was employed to determine the oxidation and reduction potentials in order to elucidate the dependences of the energy characteristics of the molecules on the position, nature, and number of substituents in the benzene rings. Quantum chemistry was used to find the preferred conformation of 1c, to relate the electrochemical and photochemical data, and to elucidate the mechanism of luminescence emission and quenching in 1a-f. In addition, we attempted to identify a correlation between the photophysical and electrochemical characteristics of dienones 1a-f.
Results and Discussion

Compounds 1a-f were isolated as bright-colored crystalline solids. X-ray diffraction data were obtained for all the compounds, except for 1a (for details, see below), and indicated that all the dienones were formed as E,E-isomers.

A similar conclusion can be drawn from the data of NMR spectroscopy. Indeed, the chemical shifts of the olefinic protons of dienones 1a-f in the 7.01-7.19 ppm range attest to the E,E-isomers [20].

X-ray Diffraction Analysis

The crystals of compounds 1b-f and 2 (syn,syn- and anti,anti-conformers) suitable for an X-ray diffraction study were grown from MeCN solutions. Each of these crystals was subjected to X-ray diffraction analysis. The solved structures of 1b-f supported the results of the NMR spectroscopic study, indicating that only E,E-isomers of the dibenzylidenecyclobutanone molecules formed. The structures of compounds 1b-f are depicted in Figure 2.
In the crystals of 1b-d, the molecule occupies a special position on a twofold axis, whereas in 1e, the molecule is in a general position. In the crystal of 1f, there are two crystallographically independent molecules [1f (A) and 1f (B)]. In all of the compounds, the molecular skeleton is nearly planar.
Selected bond lengths and bond angles for 1b-f are listed in Table S1, Supplementary Materials.
The geometric parameters of both the independent molecules of 1f are actually identical. Figure 3 shows a superposition of these molecules. Only small differences in the torsion angles associated with the benzyl-ring orientation can be observed.
X-ray Diffraction Analysis
The crystals of compounds 1b-f and 2 (syn,syn-and anti,anti-conformers) suitable for X-ray diffraction study were grown from MeCN solutions. Each of these crystals was subjected to X-ray diffraction analysis. The solved structures of 1b-f supported the results of the NMR spectroscopic study, indicating that only E,E-isomers formed for the dibenzylidenecyclobutanone molecules. The structures of compounds 1b-f are depicted in Figure 2. In the crystals of 1b-d, the molecule occupies a special position on a twofold axis, whereas in 1e, the molecule is in a general position. In the crystal of 1f, there are two crystallographically independent molecules [1f (A) and 1f (B)]. In all of the compounds, the molecular skeleton is nearly planar.
Selected bond lengths and bond angles for 1b-f are listed in Table S1, Supplementary Materials.
The geometric parameters of both the independent molecules of 1f are actually identical. Figure 3 shows a superposition of these molecules. Only small differences in the torsion angles associated with the benzyl-ring orientation can be observed. It is seen from the data of Table S1 (Supplementary Materials) that the most important corresponding geometric parameters are well reproduced for molecules 1b-f, containing the central four-membered ring. In particular, a significant deformation of the exocyclic angles at the C2 atoms of the four-membered ring is observed, with the C1-C2-C4 angle being ~10° smaller than the C3-C2-C4 angle. Some deformation of the cyclobutane ring angles is also noted: the C2-C3-C2 angle is smaller than the other angles of the ring. One would assume that the above-mentioned non-equivalence in the exocyclic bond an- In the crystals of 1b-d, the molecule occupies a special position on a twofold axis, whereas in 1e, the molecule is in a general position. In the crystal of 1f, there are two crystallographically independent molecules [1f (A) and 1f (B)]. In all of the compounds, the molecular skeleton is nearly planar.
Selected bond lengths and bond angles for 1b-f are listed in Table S1, Supplementary Materials.
The geometric parameters of both the independent molecules of 1f are actually identical. Figure 3 shows a superposition of these molecules. Only small differences in the torsion angles associated with the benzyl-ring orientation can be observed. It is seen from the data of Table S1 (Supplementary Materials) that the most important corresponding geometric parameters are well reproduced for molecules 1b-f, containing the central four-membered ring. In particular, a significant deformation of the exocyclic angles at the C2 atoms of the four-membered ring is observed, with the C1-C2-C4 angle being ~10° smaller than the C3-C2-C4 angle. Some deformation of the cyclobutane ring angles is also noted: the C2-C3-C2 angle is smaller than the other angles of the ring. One would assume that the above-mentioned non-equivalence in the exocyclic bond angles might be minimized via the rotation of the benzene rings around the C4-C5 bonds. However, this is not observed. The reason may be a significant conjugation over the whole Ph-C=C-C(O)-C=C-Ph moiety, although the bond length distribution does not display an elongation of the carbonyl bond and C=C double bonds. These bond lengths are close to It is seen from the data of Table S1 (Supplementary Materials) that the most important corresponding geometric parameters are well reproduced for molecules 1b-f, containing the central four-membered ring. In particular, a significant deformation of the exocyclic angles at the C2 atoms of the four-membered ring is observed, with the C1-C2-C4 angle being~10 • smaller than the C3-C2-C4 angle. Some deformation of the cyclobutane ring angles is also noted: the C2-C3-C2 angle is smaller than the other angles of the ring. One would assume that the above-mentioned non-equivalence in the exocyclic bond angles might be minimized via the rotation of the benzene rings around the C4-C5 bonds. However, this is not observed. The reason may be a significant conjugation over the whole Ph-C=C-C(O)-C=C-Ph moiety, although the bond length distribution does not display an elongation of the carbonyl bond and C=C double bonds. These bond lengths are close to the normal It is pertinent to compare the data of Table S1 (Supplementary Materials) with the corresponding data for compounds 2 based on a five-membered central ring (2,5-dibenzylidenecyclopentanones) (syn,syn-and anti,anti-conformers). Figure 4 shows the molecular structures of two such compounds (syn,syn-2 and anti,anti-2), while the normal values for localized bonds. The 1-2-4-5 moieties are nearly planar, the corresponding torsion angles are close to 180° or 0°. It is pertinent to compare the data of Table S1 (Supplementary Materials) with the corresponding data for compounds 2 based on a five-membered central ring (2,5-dibenzylidenecyclopentanones) (syn,syn-and anti,anti-conformers). Figure 4 shows the molecular structures of two such compounds (syn,syn-2 and anti,anti-2), while The geometric parameters from Table S2 (Supplementary Materials) demonstrate close similarity for both (syn,syn-and anti,anti-) conformers of 2. These data well agree with the corresponding values for other derivatives of dibenzylidenecyclopentanones [34,36]. Moreover, the most important features are reproduced for both five-and fourmembered ring systems. In particular, this is the planarity of the molecule, bond-length alternation within the Ph-C=C-C(O)-C=C-Ph moiety, and also the same type of exocyclic angle deformation.
The geometric parameters from Table S2 (Supplementary Materials) demonstrate a close similarity between the two (syn,syn- and anti,anti-) conformers of 2. These data agree well with the corresponding values for other derivatives of dibenzylidenecyclopentanones [34,36]. Moreover, the most important features are reproduced for both the five- and four-membered ring systems; in particular, the planarity of the molecule, the bond-length alternation within the Ph-C=C-C(O)-C=C-Ph moiety, and the same type of exocyclic angle deformation.
Earlier, it was established that the vast majority of conjugated planar molecules form six canonic types of crystal packings, two of which (stacking and parallel-dimeric packing) are favorable for the solid-state [2 + 2] photochemical reaction [27]. In spite of the planarity of these molecular systems (1b-f and 2), they do not form the canonic packing motifs in the crystalline state. Relatively close contacts between the ethylene bonds of adjacent molecules were observed only in the structures of 1b-d (Figure 5).
The distance between the ethylenic bonds in 1c is longer than 5 Å. In all the cases, the molecules in a pair are shifted with respect to one another in parallel planes. The distances between the ethylenes are much longer than 4.2 Å, which makes the PCA reaction in these molecules impossible [27]. Apparently, the PCA reaction of compounds 1b-f and 2 in the solid state requires the use of a supramolecular template.
The results of the quantum chemical calculations are in line with the X-ray diffraction data. The 5-4-3-2 torsion angles decrease from 25° for the unsubstituted dienone 1a to 16-17° for 1e,f. This trend is correlated with the variation of the electron-donating properties of the substituents in the benzene rings of the dienones.
NMR Spectroscopy
NMR spectroscopy can serve to elucidate the fine structure of organic molecules and molecular assemblies in solutions [37]. The NMR data are rarely compared with the structures of compounds known from a crystallographic analysis because it is impossible to obtain high-quality crystals for the whole series. In the case of dienones, the determination of their conformational behavior in solutions is especially important for the prediction and determination of the structures of supramolecular systems based on bis-crown-containing dienones [32]. Therefore, we studied the structural characteristics of (E,E)-dienones 1c and 2 using various NMR techniques.
In the crystalline state, (E,E)-tetramethoxydienone 1c occurs as a nearly planar symmetrical syn,syn-conformer, while (E,E)-dienone 2 occurs as syn,syn- and anti,anti-conformers, which is probably attributable to the requirement of close packing of the molecules. Upon dissolution, a fast conformational equilibrium may be established between the symmetrical syn,syn- and anti,anti- and the unsymmetrical syn,anti-conformers (Figure 4), by analogy with the previously studied bis-crown-containing stilbenes and distyrylbenzenes [38,39].
The NOESY spectrum of compound 1c, which is given in Figure S9, Supplementary Materials (the atom numbering, which differs from the IUPAC rules, is presented in Figure 1), and the spectrum of 2 in CD2Cl2 show averaged signals from the different conformers. The spectrum of dienone 1c has an intense cross-peak corresponding to the through-space intramolecular interaction of the H(2′) protons of the benzene ring with the H(3) methylene protons of the cyclobutanone moiety, and a less intense cross-peak between these protons and the H(6′) aromatic protons. It is noteworthy that the spectrum contains no cross-peaks between the H(2′) and H(6′) protons of the benzene ring and the H(α) protons of the ethylene bonds (Figure S9, Supplementary Materials). This overall spectral pattern can be interpreted by assuming that the syn,(syn/anti)-conformers predominate in the equilibrium.
The NOESY spectrum of dienone 2 also exhibits intense intramolecular cross-peaks, indicating the coupling of the H(2′) and H(6′) protons of the benzene ring and the H(3) methylene protons of the cyclopentanone moiety. Apart from the indicated cross-peaks, the H(6′) aromatic protons of dienone 2 give a small NOESY cross-peak with the H(α) ethylene protons [35].
Although the NOESY spectra do not enable an accurate estimate of the contribution of each conformer of 1c and 2 to the equilibria, due to the strong coupling of the methylene protons with both the H(2′)-type and H(6′)-type protons in all the conformers, the revealed spectral features lead to the conclusion that the syn,(syn/anti)-conformers predominate in solution in both cases.
Using the data on the energies of the stable structures of these compounds found with the FireFly program package, we calculated the theoretical ratio of the three possible conformers for dienones 1c and 2 (Figure 6).
The theoretical 1H NMR spectra of the conformers of (E,E)-dienones 1c and 2, simulated using their mole fractions, are in good agreement with the experimental data (Table 1); in particular, the calculations predict higher-field positions for the H(5′) meta-protons than for the H(6′) and H(2′) ortho-protons. The calculated distances between the H(2′), H(6′) protons and the H(α) methine protons, or between the H(2′), H(6′) protons and the H(3) methylene protons, are sufficient for the manifestation of NOE interactions (see Figure 7); the conformer models thus predict the possible appearance of NOESY cross-peaks between these groups of protons.
The predominance of the syn,(syn/anti)-conformers of dienone 1c can be explained by competition between the steric factor and stabilization via dipole-dipole interactions with the polar molecules of the medium. According to the calculations, these conformers have the highest theoretical dipole moments. Despite the sterically unfavorable conformation, additional stabilization is brought about by strong interactions with the molecules of the medium. On going to the anti,anti-conformation, the energy benefit caused by the decrease in the steric strain is counterbalanced by the trend towards a decrease in the dipole moment of the molecule, which leads to destabilization in polar dichloromethane. In the case of the anti,anti-conformer, the dipole moment decreases by a few units; as a result, destabilization starts to predominate.
Electrochemistry
A cyclic voltammetry (CV) study of compounds 1a-d,f was carried out on a cleaned surface of a glassy carbon (GC) electrode in MeCN in order to reveal the effect of substituents in the aromatic rings of cross-conjugated dibenzylidene cyclobutanones on the frontier orbital energies, in comparison with the same characteristics of the cyclopentanone- and cyclohexanone-based dienones studied previously [34,35]. The CV curves were recorded starting from 0 V and moving towards the cathodic and anodic potentials. Table 2 gives the first peak potentials determined in MeCN, allowing a comparison of the electrochemical characteristics with the data of the other physicochemical investigations in the same solvent. Table 2 also presents the differences (shifts) of the reduction and oxidation potentials of the substrates relative to those of the unsubstituted compound 1a. The cathodic and anodic processes of compounds 1a-d,f are irreversible (Figure 8a). The compounds 1b-d,f containing substituents in the aromatic rings are reduced with more difficulty than the unsubstituted 1a. The first cathodic peak potentials of dienones 1b-d,f are shifted by 140-310 mV to more negative values relative to that of 1a (Table 2, Figure 8a). The most pronounced electron-donating effect on the cathodic potential is observed for the diethylamino group (310 mV), while the least pronounced effect was found for the methylthio group (50 mV).
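For readers who want to reproduce such tabulated shifts, the sketch below computes the differences relative to 1a from a set of first peak potentials. All numeric values are placeholders chosen only to illustrate the arithmetic; they are not the measured data of Table 2.

```python
# Shifts of the first peak potentials relative to unsubstituted 1a.
# All potentials are hypothetical placeholders (V vs. Ag/AgCl), not the data of Table 2.
peaks = {
    "1a": {"E_red": -1.60, "E_ox": 1.90},
    "1b": {"E_red": -1.74, "E_ox": 1.47},
    "1f": {"E_red": -1.91, "E_ox": 0.71},
}

ref = peaks["1a"]
for cmpd, p in peaks.items():
    d_red = (p["E_red"] - ref["E_red"]) * 1000  # mV; more negative = harder to reduce
    d_ox = (p["E_ox"] - ref["E_ox"]) * 1000     # mV; more negative = easier to oxidize
    print(f"{cmpd}: dE_red = {d_red:+.0f} mV, dE_ox = {d_ox:+.0f} mV")
```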
The cathodic peak potentials of the dibenzylidene cyclobutanones shift to the anodic region relative to the potentials of similar cyclopentanone derivatives and, to a larger extent, relative to cyclohexanone derivatives (Table 3) [34,35]. This may be due to the more planar molecular geometry of the dibenzylidene cyclobutanone derivatives, resulting in a higher degree of conjugation in comparison with the similar cyclopentanone and cyclohexanone derivatives. The electron-donating ability of the para-substituents in the benzylidene moieties of the dienones has a more pronounced effect on the anodic potentials than on the cathodic potentials: the shifts of the oxidation peak potentials of 1b-d,f to lower anodic values relative to that of the unsubstituted dienone 1a are 430-1190 mV.
The frontier orbital energies corresponding to the ionization potentials (HOMO) and electron affinities (LUMO) calculated by quantum chemical techniques are summarized in Table 2. The theoretical values correlate with the oxidation and reduction potentials and qualitatively reproduce the pattern of variation of these values in the series of dienones found experimentally (the shift relative to 1a). The calculation confirms the conclusion about the effect of the electron-donating properties of the substituents on the shifts of the cathodic and anodic peaks and on the band gap ∆E. The ionization mechanism is similar to that found for the dienones based on cyclopentanone and cyclohexanone [34,35]: an electron is removed from the HOMO−1, which is quasi-degenerate with the HOMO, and is accepted by the LUMO.
Photophysics
Electronic absorption and fluorescence spectra were measured to determine the effect of substituents on the spectral properties of dienones 1a-f (Table 4, Figure 9, Figure S20, Supplementary Materials).
All the dienones exhibit a long-wavelength absorption band (LWAB) (λmax from 341 nm for 1a to 481 nm for 1f) and several more bands in the shorter-wavelength range. The LWABs can be assigned to the HOMO-LUMO transitions [40,41], while the shorter-wavelength bands can be attributed to local electron transitions in the aromatic rings. Attention is drawn to the qualitative dependence of the LWAB maxima on the electron-donating ability of the substituents in the para-position of the benzylidene moieties. The LWABs of the para-dialkylamino-substituted 1e,f are red-shifted relative to that of the unsubstituted 1a to the greatest extent, which is in good agreement with the high electron-donating ability of these substituents and correlates with the lowest oxidation potentials in the series considered (Table 4). In the case of the meta-methoxy-substituted compound 1c, the LWAB maximum shifts to 401 nm, and the oxidation potential increases relative to those of 1e,f. The further red shift of the LWAB of dienone 1d is also accompanied by an increasing Eox and is caused by the higher electron-donating ability of the SMe substituent.
The calculated absorption and fluorescence spectra are in qualitative agreement with the experiment and reproduce the trend in the series. The first electron transition for all the dyes, except for 1a, is the π-π* transition. In 1a, the first electron transition is the dark n-π* transition. This accounts for the absence of fluorescence in dye 1a. The corresponding orbitals are depicted in Figure 10. Generally, the red shifts of the LWABs of dienones 1a-f, relative to those of dienones based on cyclopentanone and cyclohexanone, are caused by the more planar geometry of the π-electron system of cyclobutanones.
The dependence of the fluorescence properties of compounds 1a-f on their oxidation potentials is generally similar to the λmax(LWAB)-Eox dependence. The brightest and longest-wavelength fluorescence is observed for the amino derivatives 1e,f (Table 4).
The trends in the fluorescence properties of cyclobutanones 1a-f differ from those described for the series of related compounds [35]. Unlike the cyclopentanone- and cyclohexanone-based dienones, most of the compounds with electron-donating substituents (1b-f) exhibit fluorescence, the exceptions being 1a and 1b.
According to the calculations, the E,E-isomer is dominant for each of the compounds in the S0 state. In the S1 state, a relatively fast E-Z isomerization can proceed, resulting in several isomers being present in comparable amounts (Table 5); dienone 1d contains significant amounts of all three possible isomers. However, unlike the other series, the dominant isomers of compounds 1b-f are characterized by short radiative lifetimes. Therefore, fluorescence is the preferable deactivation pathway for the excited states, as compared to nonradiative pathways such as internal conversion or intersystem crossing (Table 4). This is in line with the experimental data.
Table 5. Relative energies of the (E,E), (E,Z), and (Z,Z) isomers in kcal·mol−1 and mole fractions of the isomers of dienones 1a-f in the S1 state.
We have analyzed the factors affecting the fluorescence quantum yield in the cyclobutanone series. Our calculations showed that the dyes under study have two low-lying excited states, of the π-π* and n-π* types. The n-π* transition is forbidden, and the corresponding excited state is dark, while the π-π* transition corresponds to an intense absorption and emission band. In the unsubstituted dienone 1a, the dark n-π* state lies below the bright π-π* state, which results in the lack of emission. Previously, we observed a similar picture in a cyclohexanone analog of dienone 1a and attributed its lack of fluorescence to the same reason [35,42].
The main mechanism of deactivation in the studied dienones is structural relaxation resulting in a distortion of the conjugation. Our calculations showed that twisting of the formally double bond during E,E-to-E,Z isomerization is the most probable pathway (Figure 11a). The (E,E)-dienone is excited, and its relaxation on the S1 potential energy surface leads to the twisting of the formally double bond. After overcoming a relatively low barrier, the molecule reaches the S1-S0 conical intersection, which corresponds to the ~90-degree twist. This conical intersection is of the funnel type, and further relaxation may proceed on the S0 potential energy surface towards either the (E,Z) or the (E,E) form. We have built the potential energy profiles for the ground and two lowest excited states (Figure 11b shows the general scheme common to all the compounds in the series). Note that TDDFT poorly describes the behavior of potential energy surfaces near conical intersections; therefore, these profiles can only be used for qualitative conclusions.
Figure 11. (a) Scheme of the photoisomerization mechanism. (b) Typical potential energy profile (kcal/mol) of the ground S0 and the lowest excited π-π* and n-π* states; ϕ is the dihedral angle at the C=C bond.
All the profiles show a conical intersection (CI) between the S0 and S1 states and two barriers on the S1 surface: from the E,E form to the CI and from the E,Z form to the CI. The energy of the CI point is lower than the energy of either the E,E or the E,Z minimum on the S1 surface. This means that the relaxation should proceed nonradiatively, hindered only by a barrier. The depth of the CI relative to the deepest minimum (the E,E form) is the driving force of the relaxation; this depth decreases from 1a to 1f (Table 6). From the CI point, the molecule nonradiatively returns to the ground state. Transition states (TSs) separate the S1 minima from the CI point; a TDDFT calculation places the TS at ϕ ≈ 60-70°. The barriers hinder the rotation, and their height increases from 1a to 1f (Table 6). This means that increasing the donor capacity of the substituent makes nonradiative relaxation less probable, both thermodynamically and kinetically, and facilitates fluorescence.
Table 6. E,E-to-CI (1) and E,Z-to-CI (2) activation energies and CI depths for 1a-f in the S1 state.
The calculated activation energies were used to assess the isomerization rate constants (Table 7). A comparison of the characteristic isomerization times with the radiative lifetimes shows that the E,E-to-E,Z isomerization pathway can explain the partial fluorescence quenching in 1b and the lack of fluorescence in 1a. The left barrier on the S1 potential energy surface can be overcome via thermal vibrations, activating rotation around the double bond. Increasing the donor capacity of the substituents from 1a to 1f hinders the isomerization, with almost unchanged τr, which leads to the noticeable fluorescence of 1c-f, in excellent agreement with the experiment. In addition, we considered an alternative deactivation channel proposed for styryl dyes in [43], namely, twisting of the phenyl ring around the formally single bond. Our calculations showed that, in dienones, such twisting gives neither stable structures on the S1 surface nor S1-S0 conical intersections.
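The comparison of isomerization times with radiative lifetimes can be illustrated with a minimal Python sketch of the Arrhenius-type estimate described in the Methods section (k = c·ν·exp(−E_A/RT)). The barrier heights, torsional mode frequency, and radiative lifetime below are assumed placeholder values, not the computed ones from Tables 6 and 7.

```python
import math

R = 1.987e-3        # gas constant, kcal/(mol*K)
T = 298.0           # K
C_LIGHT = 2.998e10  # cm/s, converts a frequency in cm^-1 to s^-1

def isomerization_time(nu_cm, E_A_kcal):
    """t = 1/k with k = c * nu * exp(-E_A / RT): Arrhenius-type estimate;
    nu_cm is the torsional mode frequency (cm^-1), E_A_kcal the barrier (kcal/mol)."""
    k = C_LIGHT * nu_cm * math.exp(-E_A_kcal / (R * T))
    return 1.0 / k

tau_r = 5e-9       # assumed radiative lifetime, s (a few ns is typical here)
nu_torsion = 100   # assumed torsional mode frequency, cm^-1

# Placeholder barriers mimicking the 1a -> 1f trend (low barrier -> fast quenching).
for label, E_A in [("low barrier (1a-like)", 1.0), ("high barrier (1f-like)", 6.0)]:
    t_iso = isomerization_time(nu_torsion, E_A)
    verdict = "fluorescent" if t_iso > tau_r else "quenched by isomerization"
    print(f"{label}: t_iso = {t_iso:.1e} s vs tau_r = {tau_r:.0e} s -> {verdict}")
```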
The absorption and emission of compound 1f and its aza-crown ether analog are described in detail elsewhere [42].
Correlations
Previously, we studied the relationships between a number of calculated and experimental characteristics for a broad range of compounds, including dienones. In particular, we demonstrated the existence of a linear relationship between the electrochemical and spectrophotometric characteristics of dienone molecules [34,35,44,45].
In order to establish the relationship between the absorption spectra and the electrochemical measurements, and to elucidate the possible dependence of these results on the electronic properties of the substituents in the benzene rings of dienones 1a-f, we carried out a correlation analysis of the relationship between the long-wavelength absorption maximum and the oxidation/reduction potential difference (Figure 12).
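A correlation of this kind can be sketched as a simple linear least-squares fit of the optical transition energy against the redox potential difference. The arrays below contain hypothetical illustrative values, not the measured data behind Figure 12.

```python
import numpy as np

# Hypothetical illustrative values, not the measured data behind Figure 12:
lam_nm = np.array([341.0, 368.0, 401.0, 430.0, 465.0, 481.0])  # LWAB maxima, nm
dE_redox = np.array([3.60, 3.35, 3.10, 2.85, 2.55, 2.40])      # E_ox - E_red, V

E_opt = 1239.84 / lam_nm  # optical transition energy in eV (hc ~ 1239.84 eV*nm)

slope, intercept = np.polyfit(dE_redox, E_opt, 1)
r = np.corrcoef(dE_redox, E_opt)[0, 1]
print(f"E_opt = {slope:.2f} * (E_ox - E_red) + {intercept:+.2f} eV (r = {r:.3f})")
```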
The results presented here indicate that the energy characteristics of the frontier orbitals in a series of related compounds, such as cross-conjugated dienones, can be adequately described by both electrochemical and spectrophotometric results, despite the irreversibility of the electrochemical reduction step. The results obtained in this work can be used to analyze new dienones and to design the desired characteristics of new molecules.
Methods
The reactions were monitored by thin-layer chromatography using DC-Alufolien Aluminiumoxid 60 F254 neutral plates (Merck). The melting points (uncorrected) were determined on a Mel-Temp II instrument. The 1H and 13C NMR spectra were measured on a Bruker DRX-600 spectrometer (operating at 500.13, 600.22, and 125.76 MHz, respectively) in CD2Cl2 at 25-30 °C, using the solvent as an internal standard (δH 5.30 and δC 54.00 ppm, respectively). The proton and carbon signals were assigned using homonuclear 1H-1H COSY and heteronuclear 1H-13C (HSQC and HMBC) 2D spectra. The chemical shifts were determined with an accuracy of 0.01 ppm, and the spin-spin coupling constants were measured with an accuracy of 0.1 Hz.
IR spectra were recorded using a Nicolet iS5 Fourier-transform spectrometer (Thermo Fisher Scientific) with an attenuated total reflection (ATR, iD7) internal reflectance attachment with a diamond optical element and a 45° angle of incidence. The resolution was 4 cm−1, and the number of scans was 20.
The electronic absorption spectra were recorded on a Cary 4000 spectrophotometer in MeCN. The fluorescence spectra were obtained on a Cary Eclipse spectrofluorometer at room temperature. All manipulations with solutions of dyes 1a-f were performed in a darkroom under red light (daylight induces E-Z photoisomerization).
High-resolution mass spectra (HRMS) were recorded on a Bruker micrOTOF II instrument using electrospray ionization (ESI) [37]. The measurements were done in positive ion mode (interface capillary voltage of −4500 V) over a mass range of m/z 50 to 3000 Da; external or internal calibration was performed with Electrospray Calibrant Solution (Fluka). Syringe injection was used for the solutions in MeCN (flow rate of 3 mL min−1). Nitrogen was used as the drying gas, and the interface temperature was set at 180 °C.
The elemental analysis was carried out at the Microanalytical Laboratory of the A. N. Nesmeyanov Institute of Organoelement Compounds (Russian Academy of Sciences, Moscow, Russian Federation). The samples for the elemental analysis were dried at 80 • C in vacuo.
Cyclic Voltammetry
The electrochemical measurements were carried out using an IPC_Pro M potentiostat in a three-electrode system. A glassy carbon disk (d = 2 mm) served as the working electrode, a 0.1 M Bu 4 NClO 4 solution in MeCN was used as the supporting electrolyte, and an Ag/AgCl/KCl(aq., sat.) reference electrode and a platinum plate auxiliary electrode were used. The working electrode surface was polished by alumina powder with a particle size of less than 0.5 µm (Sigma-Aldrich). In the CV measurements, the potential sweep rate was 100 mV s −1 . The potentials are presented with iR-compensation. The number of transferred electrons was determined by comparing the peak current in the substrate and the current of single-electron oxidation of ferrocene taken in the same concentration. The concentration of compounds 1a-d,f was 1 × 10 −4 M.
X-ray Diffraction Experiments
A suitable single crystal of each of compounds 1b-f and 2 was mounted on a SMART APEX-II CCD diffractometer under a stream of cooled nitrogen, and the crystallographic parameters and X-ray reflection intensities were measured (MoKα radiation (λ = 0.71073 Å), graphite monochromator, ω-scan mode). The reduction of the experimental data was performed using the SAINT program [46]. The structures were solved by direct methods and refined by least squares on F2 in the anisotropic approximation for non-hydrogen atoms. The hydrogen atom positions were calculated geometrically and refined, at the final stage, using the riding model.
Density Functional Theory (DFT) Calculations
The structures and energies of the molecules were calculated using density functional theory (DFT) with the PBE0 functional and the 6-31+G(d,p) basis set by the FireFly program [49], partially based on the GAMESS code [50]. The solvent (MeCN) effects were taken into account using the dielectric polarizable continuum model (D-PCM) [51]. The vertical absorption and emission spectra and the E,E-to-E,Z isomerization energy profiles were calculated by time-dependent DFT (TDDFT) with the same functional, basis set, and solvent model. The vertical absorption spectra were calculated by TDDFT after DFT optimization of the ground-state geometry, while the vertical emission spectra were calculated in a similar way after geometry optimization of the π-π* excited state using TDDFT and D-PCM. The radiative lifetimes were calculated using the formula

k_r = (2/3) f_0i ν_i0^2, τ_r = 1/k_r,

where f_0i and ν_i0 are the oscillator strength and the frequency of the electronic transition of the i-th isomer, respectively, and k_r is the radiative rate constant. The isomerization times were calculated using the formula

k_tc = c ν_i exp(−E_Ai/RT), t_tc = 1/k_tc,

where ν_i is the vibrational mode frequency of the i-th isomer and E_Ai is the activation barrier for this isomer. We considered the (E,E), (E,Z), and (Z,Z) isomers of dyes 1a-f and 2. We found that the (E,E) isomers have the lowest energy and account for >99.9% of the isomer mixture; the spectral and ionization properties were therefore calculated only for the (E,E) isomer.
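As a minimal sketch of the radiative-lifetime recipe, the function below evaluates k_r = (2/3)·f·ν² with ν in cm⁻¹, which yields k_r in s⁻¹ (consistent with the standard Einstein-coefficient relation f ≈ 1.499·A/ν²); the oscillator strength and transition frequency are assumed example values.

```python
def radiative_rate(f_0i, nu_cm):
    """k_r = (2/3) * f * nu^2 with nu in cm^-1, giving k_r in s^-1
    (equivalent to the standard relation f = 1.499 * A / nu^2)."""
    return (2.0 / 3.0) * f_0i * nu_cm ** 2

# Assumed example values: an allowed pi-pi* transition near 455 nm.
f_0i, nu_cm = 0.8, 22000.0
k_r = radiative_rate(f_0i, nu_cm)
tau_r = 1.0 / k_r
print(f"k_r = {k_r:.2e} s^-1, tau_r = {tau_r:.2e} s")  # tau_r of a few ns
```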
In dyes 1c and 2, free rotation around the C4-C5 bond is possible. The mole fractions of the three possible rotamers of dienones 1c and 2 were calculated using the partition function

x_i = exp(−E_i/RT) / Σ_j exp(−E_j/RT),

where x_i is the mole fraction of the i-th conformer and E_i is the ground-state energy of this conformer. The calculated UV-Vis absorption and emission properties of the rotamers are almost the same (within 2 nm); therefore, we give only the values for the syn,syn-conformer.
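A minimal sketch of this Boltzmann averaging, using placeholder relative conformer energies rather than the computed ones:

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)
T = 298.0     # K

def mole_fractions(rel_energies):
    """x_i = exp(-E_i/RT) / sum_j exp(-E_j/RT); energies in kcal/mol,
    given relative to the most stable conformer."""
    weights = [math.exp(-E / (R * T)) for E in rel_energies]
    Z = sum(weights)  # partition function over the conformers
    return [w / Z for w in weights]

# Placeholder relative energies (kcal/mol) for the three rotamers, not the computed ones.
names = ["syn,syn", "syn,anti", "anti,anti"]
for name, x in zip(names, mole_fractions([0.0, 0.3, 1.0])):
    print(f"{name}: x = {x:.2f}")
```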
We simulated the structural relaxation only for the E,E isomers. It was found that, in the equilibrium mixture of the ground state, the fraction of this isomer is above 90%. It is the main source of photoinduced transformation products.
To construct the profiles of the E,E-to-E,Z isomerization, we used a simple (unrelaxed) scan of the potential energy surface along the dihedral angle corresponding to rotation around the formally double C=C bond, with 5-degree increments. The energy values correspond to the non-optimized structures obtained by twisting the initial isomer.
The left rotation barriers were estimated as the energy difference between the saddle point of the left transition state and the stable structure of the E,E isomer (the left minimum). The right ones were estimated as the difference between the top of the right peak of the S1 curve and the local minimum observed on the way to the E,Z geometry.
We understand that phototransformation, which proceeds via a conical intersection, requires multireference quantum chemistry for an adequate description of the potential energy profiles [52]. Nevertheless, our semi-quantitative description gives insights into the mechanism of phototransformations in organic dyes [53].
The vertical ionization potentials (IP) and electron affinities (EA) were calculated by restricted open-shell DFT (RO-DFT) for the corresponding monocation and monoanion of each dye; the functional, basis set, and solvation model were the same. The 1H NMR spectra were calculated using the Priroda program package [54,55] with the PBE functional and a triple-zeta-quality basis set. The optimized geometries were taken from the B3LYP/6-31+G(d,p)/D-PCM calculation. Previously [33], we have shown that, for dienones, solvent effects are important to properly reproduce the structures and conformation energies.
Conclusions
The effect of the structure on the photophysical and electrochemical properties of a series of symmetrical dibenzylidene derivatives of cyclobutanone containing electron-donating substituents in the benzene rings has been studied. It was shown that the products of the condensation of cyclobutanone with benzaldehyde derivatives tend to exist as E,E-isomers. The conformational analysis of the (E,E)-dienones using X-ray diffraction and NMR data revealed the structural features of these compounds in the crystal and in solution. Quantum chemical calculations confirmed the more favorable syn,(syn/anti)-conformations of the conjugated moieties for dibenzylidenecyclobutanones with four methoxy groups. It was found that the PCA reaction of 1a-f in the solid state requires the use of a supramolecular template. Using electronic spectroscopy, the spectral properties of the dienone derivatives were compared. Quantum chemical calculations explained the observed regularities of the luminescence of 1a-f as a function of the donor capacity of the substituent and elucidated the mechanism of luminescence emission and quenching. The dependence of the redox potentials on the position, nature, and number of substituents in the benzene ring, and their correlation with the photophysical and quantum chemical characteristics, are of considerable interest for the subsequent investigation of photoactive dienone derivatives. The studied structure and properties of the cyclobutanone-based dienones can also be used in the design of photoactive supramolecular systems.
"year": 2022,
"sha1": "5202bcd574347f239df0f626ad6671b1ef927dce",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/21/7602/pdf?version=1667653097",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2ba9fb9698ea2307bf42a77ffb98dab139a34ed0",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.